RemoteIO Audio Problem - simulator = good - device = bad
O.K., so I'm using Core Audio to pull audio from 10 different sample sources and then mix them together in my callback function.
It works perfectly in the simulator and everything is fine. However, when I try to run it on a 4.2 iPhone device I run into trouble.
If I mix 2 audio files in the callback, everything works fine.
If I mix 5 or 6 audio files, the audio plays, but after a short time it degrades and eventually no audio makes it to the speakers. (The callback does not stop.)
If I try to mix 10 audio files, the callback runs but no audio comes out at all.
It's almost as if the callback is running out of time, which might explain the case where I mix 5 or 6, but it wouldn't explain the last case, where mixing 10 audio sources plays no audio at all.
I'm not sure whether the following has any bearing, but this message always appears on the console while I'm debugging. Could it be an indication of what the problem is?
```
mem 0x1000 0x3fffffff cache
mem 0x40000000 0xffffffff none
mem 0x00000000 0x0fff none
run
Running…
[Switching to thread 11523]
[Switching to thread 11523]
Re-enabling shared library breakpoint 1
continue
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/usr/lib/info/dns.so (file not found).
```
**Setting up my callback**
```objectivec
#pragma mark -
#pragma mark Callback setup & control

- (void) setupCallback
{
    OSStatus status;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    UInt32 flag = 1;
    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));

    // Apply format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &stereoStreamFormat,
                                  sizeof(stereoStreamFormat));

    // Set up the playback callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback; //!!****assignment from incompatible pointer warning here *****!!!!!!
    // set the reference to "self"; this becomes *inRefCon in the playback callback
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit); // error check this status
}
```
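Every call above assigns its result to `status`, but nothing ever checks it (the last line even says "error check this status"). A small helper would surface a failing setup call immediately; this is a sketch in plain C with `OSStatus` and `noErr` stubbed locally so it compiles outside Core Audio, and the helper name is made up:

```c
#include <stdio.h>
#include <stdint.h>

typedef int32_t OSStatus;   // stand-in for Core Audio's status type
enum { noErr = 0 };

// Hypothetical helper: report the first failing Core Audio call by name,
// so a bad stream format or bus number shows up at setup time instead of
// as silence on the device.
static int checkStatus(OSStatus status, const char *operation)
{
    if (status != noErr) {
        fprintf(stderr, "%s failed with OSStatus %d\n", operation, (int)status);
        return 0; // caller should bail out of setup
    }
    return 1;
}

// Usage in setupCallback:
//   status = AudioUnitSetProperty(...);
//   if (!checkStatus(status, "set stream format")) return;
```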
**The callback**
```objectivec
static OSStatus playbackCallback (
    void                        *inRefCon,      // A pointer to a struct containing the complete audio data
                                                // to play, as well as state information such as the
                                                // first sample to play on this invocation of the callback.
    AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence
                                                // between sounds; for silence, also memset the ioData buffers to 0.
    AudioTimeStamp              *inTimeStamp,   // Unused here.
    UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                                // frames of audio data to play.
    UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                                // pointed to by the ioData parameter.
    AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary
                                                // responsibility is to fill the buffer(s) in the
                                                // AudioBufferList.
)
{
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime = 0;
    thetime = remoteIOplayer.sampletime;

    for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        // get NextPacket returns a 32 bit value, one frame.
        AudioUnitSampleType *suml = 0;
        AudioUnitSampleType *sumr = 0;

        //NSLog (@"frame number - %i", frameNumber);
        for (int j = 0; j < 10; j++)
        {
            AudioUnitSampleType valuetoaddl = 0;
            AudioUnitSampleType valuetoaddr = 0;

            //valuetoadd = [remoteIOplayer getSample:j ];
            valuetoaddl = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:0 ];
            //valuetoaddl = [remoteIOplayer getSample:j];
            valuetoaddr = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:1 ];

            suml = suml + (valuetoaddl / 10);
            sumr = sumr + (valuetoaddr / 10);
        }

        outSamplesChannelLeft[frameNumber]  = (AudioUnitSampleType) suml;
        outSamplesChannelRight[frameNumber] = (AudioUnitSampleType) sumr;

        remoteIOplayer.sampletime += 1;
    }
    return noErr;
}
```
**My audio fetching function**
```objectivec
-(AudioUnitSampleType) getNonInterleavedSample:(int) index currenttime:(int)time channel:(int)ch;
{
    AudioUnitSampleType returnvalue = 0;

    soundStruct snd = soundStructArray[index];
    UInt64 sn = snd.frameCount;
    UInt64 st = sampletime;
    UInt64 read = (UInt64)(st % sn);

    if (ch == 0)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataLeft[read];
        } else {
            returnvalue = 0;
        }
    }
    else if (ch == 1)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataRight[read];
        } else {
            returnvalue = 0;
        }
        soundStructArray[index].sampleNumber = read;
    }

    if (soundStructArray[index].sampleNumber > soundStructArray[index].frameCount)
    {
        soundStructArray[index].sampleNumber = 0;
    }

    return returnvalue;
}
```
**Edit 1**
In response to @andre, I changed my callback to the following, but it still didn't help.
```objectivec
static OSStatus playbackCallback (
    void                        *inRefCon,      // A pointer to a struct containing the complete audio data
                                                // to play, as well as state information such as the
                                                // first sample to play on this invocation of the callback.
    AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence
                                                // between sounds; for silence, also memset the ioData buffers to 0.
    AudioTimeStamp              *inTimeStamp,   // Unused here.
    UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                                // frames of audio data to play.
    UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                                // pointed to by the ioData parameter.
    AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary
                                                // responsibility is to fill the buffer(s) in the
                                                // AudioBufferList.
)
{
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime = 0;
    thetime = remoteIOplayer.sampletime;

    for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        // get NextPacket returns a 32 bit value, one frame.
        AudioUnitSampleType suml = 0;
        AudioUnitSampleType sumr = 0;

        //NSLog (@"frame number - %i", frameNumber);
        for (int j = 0; j < 16; j++)
        {
            soundStruct snd = remoteIOplayer->soundStructArray[j];
            UInt64 sn = snd.frameCount;
            UInt64 st = remoteIOplayer.sampletime;
            UInt64 read = (UInt64)(st % sn);

            suml += snd.audioDataLeft[read];
            suml += snd.audioDataRight[read];
        }

        outSamplesChannelLeft[frameNumber]  = (AudioUnitSampleType) suml;
        outSamplesChannelRight[frameNumber] = (AudioUnitSampleType) sumr;

        remoteIOplayer.sampletime += 1;
    }
    return noErr;
}
```
Like Andre said, it's best not to make any Objective-C method calls in the callback. You should also change the inputProcRefCon to a C struct instead of an Objective-C object.
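A minimal sketch of what that struct-based refCon could look like. All names here are invented (the fields just mirror the question's `soundStruct`), and `AudioUnitSampleType` is typedef'd locally so the sketch compiles outside Core Audio:

```c
#include <stdint.h>

// Stand-in for Core Audio's 8.24 fixed-point sample type.
typedef int32_t AudioUnitSampleType;

#define kNumSources 10

// Hypothetical plain-C state passed via inputProcRefCon instead of "self".
// The callback casts inRefCon back to MixerState* and touches only these
// fields -- no objc_msgSend on the real-time audio thread.
typedef struct {
    AudioUnitSampleType *audioDataLeft[kNumSources];
    AudioUnitSampleType *audioDataRight[kNumSources];
    uint64_t             frameCount[kNumSources];
    uint64_t             sampleTime;   // playback position
} MixerState;

// Mix one frame across all sources in pure C.
static AudioUnitSampleType mixFrame(const MixerState *s, int channel)
{
    int64_t sum = 0;
    for (int j = 0; j < kNumSources; j++) {
        uint64_t read = s->sampleTime % s->frameCount[j];  // wrap each source
        const AudioUnitSampleType *buf =
            (channel == 0) ? s->audioDataLeft[j] : s->audioDataRight[j];
        sum += buf[read] / kNumSources;  // scale down to avoid clipping
    }
    return (AudioUnitSampleType)sum;
}

// In setupCallback you would then set:
//   callbackStruct.inputProcRefCon = &mixerState;  // a MixerState, not self
```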
Also, it looks like you may be copying data into the buffers "manually", frame by frame. Use memcpy to copy blocks of data in instead.
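A sketch of the block-copy idea (the helper name `copyWrapped` is made up, and it assumes `framesToCopy` never exceeds the source length, so the read position wraps at most once per render quantum):

```c
#include <stdint.h>
#include <string.h>

typedef int32_t AudioUnitSampleType;  // stand-in for the Core Audio typedef

// Copy a whole render quantum from a looping source buffer into the output
// buffer with memcpy, splitting the copy in two where the read position
// wraps past the end of the source.
static void copyWrapped(AudioUnitSampleType *dst,
                        const AudioUnitSampleType *src,
                        uint64_t srcFrames, uint64_t readPos,
                        uint32_t framesToCopy)
{
    uint64_t start = readPos % srcFrames;
    uint64_t firstRun = srcFrames - start;  // frames left before the wrap
    if (firstRun >= framesToCopy) {
        memcpy(dst, src + start, framesToCopy * sizeof(*dst));
    } else {
        memcpy(dst, src + start, firstRun * sizeof(*dst));
        memcpy(dst + firstRun, src, (framesToCopy - firstRun) * sizeof(*dst));
    }
}
```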
Also, I'm pretty sure you're not doing disk I/O in your callback, but if you are, you shouldn't do that either.
In my experience, try not to use Objective-C method calls inside the RemoteIO callback; they will slow it down. Try moving your "getNonInterleavedSample" function into the callback and accessing the audio data through a C struct.
I think your problem is CPU-bound; the simulator has far more processing power than the various devices.
The callback might not be able to keep up with how often it's being called.
Edit: Could you "precompute" the mix (do the mixing ahead of time, or in another thread), so that it's already mixed by the time the callback fires and the callback has less work to do?
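One way to sketch that premix-in-another-thread idea is a single-producer/single-consumer ring buffer: a worker thread fills it with already-mixed frames, and the render callback only drains it. This is illustrative only (names invented; a production version needs atomic or barrier-protected indices):

```c
#include <stdint.h>

typedef int32_t AudioUnitSampleType;  // stand-in for the Core Audio typedef

#define RING_FRAMES 4096  // power of two, so we can mask instead of mod

// Minimal SPSC ring buffer: the mixing thread calls ringWrite, the render
// callback calls ringAvailable/ringRead and does no mixing work itself.
typedef struct {
    AudioUnitSampleType data[RING_FRAMES];
    uint32_t writeIndex;  // advanced only by the mixing thread
    uint32_t readIndex;   // advanced only by the render callback
} Ring;

static uint32_t ringAvailable(const Ring *r)
{
    return r->writeIndex - r->readIndex;  // unsigned wraparound is well-defined
}

static void ringWrite(Ring *r, AudioUnitSampleType v)
{
    r->data[r->writeIndex & (RING_FRAMES - 1)] = v;
    r->writeIndex++;
}

static AudioUnitSampleType ringRead(Ring *r)
{
    AudioUnitSampleType v = r->data[r->readIndex & (RING_FRAMES - 1)];
    r->readIndex++;
    return v;
}
```

If the callback finds fewer than `inNumberFrames` available, it can output silence for that cycle instead of stalling the audio thread.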