3.2 Recording Path Control
3.2.1 AudioRecord Creation
Java Application Layer
FM uses a record-then-play mechanism, so we first cover the recording part.
The call flow on the Java side is:

    FMRadio::enableRadio
    FMRadioService::fmOn
    FMRadioService::startFM
    FMRadioService::Record::run
        new AudioRecord(MediaRecorder.AudioSource.FM_RX, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize)
        new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM)
        AudioRecord::startRecording
        AudioTrack::play
        AudioRecord::read → Buffer → AudioTrack::write

The FM application creates a RecordThread, inside which it creates the AudioRecord and AudioTrack instances. AudioRecord saves the FM input data into a shared buffer, and the buffered data is then fed into AudioTrack to be played out; this is the record-then-play approach. An alternative is to configure ADIE CONFIG directly on the ARM9 side so that the FM data is routed straight to the target device; that will be covered in detail later when the ARM9-side configuration is introduced. The parts marked in red in the figure form the main line of our routing setup - the device; the inputSource we pass in here is MediaRecorder.AudioSource.FM_RX.
JNI Layer:
new AudioRecord() on the Java side triggers native_setup, i.e. android_media_AudioRecord_setup, which does the following:

    AudioSystem::isInputChannel           — check the input channel
    format = AudioSystem::PCM_16/8_BIT    — set the format
    frameCount = buffSizeInBytes / frameSize
                                          — set the frame count
    lpRecorder = new AudioRecord()
    lpRecorder->set(inputSource, sampleRate, format, channels, frameCount, flags, cbf, user, notificationFrames, threadCanCallJava)

The JNI layer keeps a table of JNINativeMethod entries; android_media_AudioRecord.cpp defines it as follows:
static JNINativeMethod gMethods[] = {
    // name                          signature                      funcPtr
    {"native_start",                 "()I",                         (void*)android_media_AudioRecord_start},
    {"native_stop",                  "()V",                         (void*)android_media_AudioRecord_stop},
    {"native_setup",                 "(Ljava/lang/Object;IIIII)I",  (void*)android_media_AudioRecord_setup},
    ...
    {"native_read_in_byte_array",    "([BII)I",                     (void*)android_media_AudioRecord_readInByteArray},
    {"native_read_in_short_array",   "([SII)I",                     (void*)android_media_AudioRecord_readInShortArray},
    ...
};
Here we can see the native implementations corresponding to the setup, start, and read calls made on AudioRecord from the application.
For example:

static int android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint source, jint sampleRateInHertz, jint channels,
        jint audioFormat, jint buffSizeInBytes)
{
    ...
    AudioRecord *lpRecorder = NULL;
    lpRecorder = new AudioRecord();
    lpRecorder->set(source, sampleRateInHertz, format, channels, frameCount, 0,
                    recorderCallback, lpCallbackData, 0, true);
    ...
}
This is where the native AudioRecord comes in; it is ultimately implemented through AudioRecord::set. The inputSource here is MediaRecorder.AudioSource.FM_RX.
Framework Layer:
AudioRecord::set(inputSource, sampleRate, format, channels, frameCount, flags, cbf, user, notificationFrames, threadCanCallJava) does the following:

    apply defaults for inputSource, sampleRate and format
    AudioSystem::isValidFormat            — validate the format
    AudioSystem::isInputChannel           — verify this is an input channel
    input = AudioSystem::getInput(inputSource, sampleRate, format, channels, (AudioSystem::audio_in_acoustics)flags)
                                          — returns an audio_io_handle_t
    validate frameCount
    AudioRecord::openRecord(sampleRate, format, channelCount, frameCount, flags, input)

AudioSystem::getInput hands the request to the policy side:

    aps = AudioSystem::get_audio_policy_service()
    AudioPolicyService::getInput(inputSource, sampleRate, format, channels, acoustics)
        mpPolicyManager->getInput(...)    — handled by AudioPolicyManagerBase
            device = getDeviceForInputSource(inputSource)
                                          — maps the inputSource to a device; here AudioSystem::DEVICE_IN_FM
            input = mpClientInterface->openInput(device, sampleRate, format, channels, acoustics)
                                          — back in AudioPolicyService
                af = AudioSystem::get_audio_flinger()
                af->openInput(pDevices, pSamplingRate, pFormat, pChannels, acoustics)
                    input = mAudioHardware->openInputStream(*pDevice, &format, &channels, &sampleRate, &status, acoustics)
                                          — handled by the Audio HAL
                    thread = new RecordThread(this, input, reqSamplingRate, reqChannels, ++mNextThreadId)
                    return mNextThreadId  — this is the value our input actually receives

AudioRecord::openRecord then hands the input to AudioFlinger:

    af = AudioSystem::get_audio_flinger()
    af->openRecord(getpid(), input, sampleRate, format, channelCount, frameCount, flags<<16, &status)
        thread = checkRecordThread_l(input)
                                          — look up the RecordThread from input
        recordTrack = new RecordThread::RecordTrack(thread, client, sampleRate, format, channelCount, frameCount, flags)
        recordHandle = new RecordHandle(recordTrack)
                                          — the RecordHandle is returned to the client
After the AudioRecord is created with inputSource set to MediaRecorder.AudioSource.FM_RX, the audio_io_handle_t input is obtained through AudioSystem::getInput → AudioPolicyService::getInput → AudioPolicyManagerBase::getInput, where AudioPolicyManagerBase::getDeviceForInputSource maps MediaRecorder.AudioSource.FM_RX to the device AudioSystem::DEVICE_IN_FM. The rest of the analysis follows this device as the main line through the audio routing setup. AudioPolicyService::getInput then calls AudioFlinger::openInput to obtain input; what is actually returned is the index of the RecordThread.
With this input, AudioFlinger::openRecord is called. It creates a RecordTrack (which allocates the shared memory) and returns a RecordHandle to the client, recorded in AudioRecord::mAudioRecord.
AudioFlinger::openInput also calls the Audio HAL's openInputStream; that is covered in the Audio HAL section.
Audio HAL Layer:
AudioStreamIn* AudioHardware::openInputStream(devices, format, channels, sampleRate, status, acoustic_flags) proceeds as follows:

    AudioSystem::isInputDevice            — check that devices is an input device
    when mMode == AudioSystem::MODE_IN_CALL, check sampleRate and format
    in = new AudioStreamInMSM72xx         — create the stream-in instance
    in->set(this, devices, format, channels, sampleRate, acoustic_flags)
                                          — configure the stream-in:
        check format
        hw->getInputSampleRate            — check sampleRate
        check channels
        mFd = open("/dev/msm_pcm_in")
        ioctl(mFd, AUDIO_GET_CONFIG, &config)
        ioctl(mFd, AUDIO_SET_CONFIG, &config)
                                          — open the input device and configure it
    record AudioHardware::mDevices = devices
    fd = open("/dev/msm_preproc_ctl")
    ioctl(fd, AUDIO_SET_AGC/NS/IIR, &cfg) — applied according to enable_preproc_mask
AudioHardware::openInputStream validates the incoming parameters, creates an AudioStreamInMSM72xx instance, configures it via set, and returns the instance to AudioFlinger::RecordThread::mInput; all subsequent stream operations on mInput operate on this AudioStreamInMSM72xx instance.
The devices argument is recorded in AudioHardware::mDevices, and AGC/NS/IIR preprocessing is configured in advance.

Kernel Layer: