I have an array of floats that is raw audio data from a third-party source. I would like to pass this through to a speech recognition request via `appendAudioPCMBuffer:`, but that accepts an `AVAudioPCMBuffer`. How could I convert my `NSMutableArray` to an `AVAudioPCMBuffer`?
For reference, this is how the `buffer` variable gets created before it's passed to this function. It is written in C.
void CallNativePlugin( const float buffer[], int size ) {
    NSMutableArray *myArray = [[NSMutableArray alloc] init];
    for (int i = 0; i < size; i++) {
        NSNumber *number = [[NSNumber alloc] initWithFloat:buffer[i]];
        [myArray addObject:number];
    }
    // Hand off the full array once, after the loop has boxed every sample
    // (calling the delegate inside the loop would fire it per sample).
    [delegateObject recognizeSpeechFromBuffer:myArray];
}
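Incidentally, since the samples already arrive as a contiguous float array, the per-sample `NSNumber` boxing could be skipped by passing the raw pointer and length straight through. This is only a sketch; `recognizeSpeechFromSamples:count:` is a hypothetical delegate method, not part of the original code:

```objectivec
void CallNativePlugin( const float buffer[], int size ) {
    // Hypothetical variant: hand the raw samples to the delegate directly,
    // avoiding one NSNumber allocation per sample.
    [delegateObject recognizeSpeechFromSamples:buffer count:(NSUInteger)size];
}
```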
Then this is the current code I have to take that buffer and pass it to the speech recognizer (Objective-C):
-(void) recognizeSpeechFromBuffer: (NSMutableArray*) buffer {
    NSLog( @"Array length: %lu", (unsigned long) buffer.count );
    recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    recognitionRequest.shouldReportPartialResults = YES;
    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
            isFinal = result.isFinal;
        }
        if (error) {
            recognitionRequest = nil;
            recognitionTask = nil;
        }
    }];
    // Do something like [recognitionRequest appendAudioPCMBuffer:buffer];
}
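For what it's worth, the conversion itself can be sketched like this, inside `recognizeSpeechFromBuffer:`. The 44.1 kHz mono float format is an assumption here; the sample rate and channel count must match whatever the third-party source actually delivers:

```objectivec
// Sketch: build an AVAudioPCMBuffer from the boxed samples.
// Assumes 44.1 kHz, mono, non-interleaved Float32 -- adjust to the real format.
AVAudioFormat *format =
    [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                     sampleRate:44100.0
                                       channels:1
                                    interleaved:NO];
AVAudioPCMBuffer *pcmBuffer =
    [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                  frameCapacity:(AVAudioFrameCount)buffer.count];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;

// Unbox each NSNumber back into the buffer's first (only) channel.
float *channelData = pcmBuffer.floatChannelData[0];
for (NSUInteger i = 0; i < buffer.count; i++) {
    channelData[i] = [buffer[i] floatValue];
}

[recognitionRequest appendAudioPCMBuffer:pcmBuffer];
```

Note that if the format is wrong (e.g. the source is actually 16 kHz), recognition quality will suffer even though the code runs without error.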