Table of Contents
Background:
Main text:
1. If audio starts playing first and the screen recorder is opened afterwards, how is it still captured correctly?
2. How is the output device for the secondary output chosen?
3. How does the data flow work?
Background:
In the app layer of Android's screen recorder, capturing the phone's internal (system) audio is implemented with AudioPlaybackCaptureConfiguration, as follows:
int size = AudioRecord.getMinBufferSize(
        mConfig.sampleRate, mConfig.channelInMask,
        mConfig.encoding) * 2;

Log.d(TAG, "audio buffer size: " + size);

AudioFormat format = new AudioFormat.Builder()
        .setEncoding(mConfig.encoding)
        .setSampleRate(mConfig.sampleRate)
        .setChannelMask(mConfig.channelOutMask)
        .build();

AudioPlaybackCaptureConfiguration playbackConfig =
        new AudioPlaybackCaptureConfiguration.Builder(mMediaProjection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
                .addMatchingUsage(AudioAttributes.USAGE_UNKNOWN)
                .addMatchingUsage(AudioAttributes.USAGE_GAME)
                .build();

// Because an AudioPlaybackCaptureConfiguration is supplied, an audio policy gets
// registered, and the record behavior is derived from that policy: the recorder's
// source becomes REMOTE_SUBMIX. See AudioRecord.Builder's
// buildAudioPlaybackCaptureRecord for the details.
mAudioRecord = new AudioRecord.Builder()
        .setAudioFormat(format)
        .setAudioPlaybackCaptureConfig(playbackConfig)
        .build();
This article does not dig into the app layer or the policy mix registration; it focuses on answering the questions listed above.
Main text:
1. If audio starts playing first and the screen recorder is opened afterwards, how is it still captured correctly?
Whether or not audio is already playing in the background, the entry point is always AudioRecord:
- When the AudioRecord is created, AudioFlinger::createRecord() creates the record handle. Before the record handle is created, the corresponding input handle is first obtained from AudioPolicyManager, based on the AudioRecord's attributes: the important ones are the source (REMOTE_SUBMIX) and the policy mix information registered via AudioManager, which together determine which input handle is created and returned.
- The other key point: AudioRecord::start() triggers AudioPolicyManager::startInput(). startInput() checks for REMOTE_SUBMIX and, on a match, adds the AUDIO_DEVICE_OUT_REMOTE_SUBMIX device to the audio system, as in the code below:
// automatically enable the remote submix output when input is started if not
// used by a policy mix of type MIX_TYPE_RECORDERS
// For remote submix (a virtual device), we open only one input per capture request.
if (audio_is_remote_submix_device(inputDesc->getDeviceType())) {
    String8 address = String8("");
    if (policyMix == nullptr) {
        address = String8("0");
    } else if (policyMix->mMixType == MIX_TYPE_PLAYERS) {
        address = policyMix->mDeviceAddress;
    }
    if (address != "") {
        setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
                AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                address, "remote-submix", AUDIO_FORMAT_DEFAULT);
    }
}
Below is the policy mix registration info inside AudioPolicyManager (from the dumpsys output):
Audio Policy Mix:
Audio Policy Mix 1 (0xb400007de1102a80):
- mix type: MIX_TYPE_PLAYERS
- Route Flags: MIX_ROUTE_FLAG_RENDER|MIX_ROUTE_FLAG_LOOP_BACK|MIX_ROUTE_FLAG_LOOP_BACK_AND_RENDER|MIX_ROUTE_FLAG_ALL
- device type: AUDIO_DEVICE_OUT_REMOTE_SUBMIX
- device address: -785045321:ap:1mixp:0
- output: 69
- Criterion 0: RULE_MATCH_ATTRIBUTE_USAGE AUDIO_USAGE_MEDIA
- Criterion 1: RULE_MATCH_ATTRIBUTE_USAGE AUDIO_USAGE_VOICE_COMMUNICATION
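To see how these criteria are applied, here is a tiny self-contained model, in the spirit of the usage-rule matching that mPolicyMixes.getOutputForAttr() performs against each client's attributes. It is a sketch, not the AOSP implementation, and every name in it is made up:

// Tiny model of policy-mix criterion matching (illustrative, not AOSP code).
#include <cstdio>
#include <vector>

enum Usage { USAGE_MEDIA, USAGE_GAME, USAGE_VOICE_COMMUNICATION, USAGE_ALARM };

struct Criterion { Usage usage; };          // models RULE_MATCH_ATTRIBUTE_USAGE

struct PolicyMix {
    std::vector<Criterion> criteria;        // built from addMatchingUsage() calls
    bool matches(Usage u) const {
        for (const auto& c : criteria) {
            if (c.usage == u) return true;  // any matching rule routes this player
        }
        return false;
    }
};

int main() {
    // A mix like the recorder's: matches USAGE_MEDIA and USAGE_GAME.
    PolicyMix mix{{{USAGE_MEDIA}, {USAGE_GAME}}};
    printf("media matches: %d\n", mix.matches(USAGE_MEDIA));  // 1 -> gets a secondary output
    printf("alarm matches: %d\n", mix.matches(USAGE_ALARM));  // 0 -> primary output only
    return 0;
}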
Now for the key part. The call flow is:
AudioRecord::start() -> startInput() -> setDeviceConnectionStateInt() -> checkForDeviceAndOutputChanges() -> checkSecondaryOutputs() -> invalidateStream() -> AudioTrack::restoreTrack_l() -> AudioFlinger::createTrack() -> TrackHandle::start() -> PatchTrack::start()
Below is a walkthrough of checkSecondaryOutputs(), an important AudioPolicyManager function.
Summary: iterate over the threads of all outputs, looking for clients that match a policy mix's rules; then check whether that policy mix has a secondary output. If it does, the stream type that client is playing is subsequently invalidated.
void AudioPolicyManager::checkSecondaryOutputs() {
    std::set<audio_stream_type_t> streamsToInvalidate;
    // Iterate over all playback threads (outputs)
    for (size_t i = 0; i < mOutputs.size(); i++) {
        const sp<SwAudioOutputDescriptor>& outputDescriptor = mOutputs[i];
        // Look at every playing client on this output thread
        for (const sp<TrackClientDescriptor>& client : outputDescriptor->getClientIterable()) {
            sp<AudioPolicyMix> primaryMix;
            std::vector<sp<AudioPolicyMix>> secondaryMixes;
            // Check whether this client may be captured and whether it matches the
            // policy mix's criteria; if so, the mix is appended to secondaryMixes,
            // meaning the audio this client plays can go through that policy mix.
            status_t status = mPolicyMixes.getOutputForAttr(client->attributes(), client->uid(),
                    client->flags(), primaryMix, &secondaryMixes);
            std::vector<sp<SwAudioOutputDescriptor>> secondaryDescs;
            // Iterate over the secondary mixes and collect those that already carry
            // output info into secondaryDescs. In checkOutputsForDevice (which runs
            // before checkSecondaryOutputs), the REMOTE_SUBMIX output is opened and
            // the secondary mix's setOutput() is pointed at that newly opened handle.
            for (auto &secondaryMix : secondaryMixes) {
                sp<SwAudioOutputDescriptor> outputDesc = secondaryMix->getOutput();
                if (outputDesc != nullptr &&
                        outputDesc->mIoHandle != AUDIO_IO_HANDLE_NONE) {
                    secondaryDescs.push_back(outputDesc);
                }
            }
            if (status != OK ||
                    !std::equal(client->getSecondaryOutputs().begin(),
                            client->getSecondaryOutputs().end(),
                            secondaryDescs.begin(), secondaryDescs.end())) {
                streamsToInvalidate.insert(client->stream());
            }
        }
    }
    for (audio_stream_type_t stream : streamsToInvalidate) {
        // Puts tracks of this stream type (e.g. music) into the invalidated state,
        // which triggers their recreation on the client side.
        mpClientInterface->invalidateStream(stream);
    }
}
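What invalidateStream() buys us: the invalidated flag reaches the matching AudioFlinger tracks, the client-side AudioTrack gets DEAD_OBJECT on its next buffer operation, and restoreTrack_l() transparently recreates the track through AudioFlinger::createTrack(), which this time attaches the secondary output. Below is a minimal self-contained model of that handshake; it is a sketch, not AOSP code, and every name in it is illustrative:

// Minimal model of the invalidate -> recreate handshake (illustrative, not AOSP code).
#include <atomic>
#include <cstdio>

static const int DEAD_OBJECT = -32;  // same numeric value as Android's status_t DEAD_OBJECT

struct ServerTrack {                 // stands in for the AudioFlinger-side Track
    std::atomic<bool> invalid{false};
    void invalidate() { invalid.store(true); }          // what invalidateStream() leads to
    int obtainBuffer() { return invalid.load() ? DEAD_OBJECT : 0; }
};

struct ClientTrack {                 // stands in for the client-side AudioTrack
    ServerTrack* server;
    void writeLoop() {
        if (server->obtainBuffer() == DEAD_OBJECT) {
            // restoreTrack_l(): recreate the server-side track. The re-run of
            // AudioFlinger::createTrack() now attaches the secondary output
            // (the tee patch), because the policy mix is registered by now.
            printf("DEAD_OBJECT -> restoreTrack_l() -> createTrack() with secondary outputs\n");
            server->invalid.store(false);
        }
    }
};

int main() {
    ServerTrack st;
    ClientTrack ct{&st};
    st.invalidate();   // mpClientInterface->invalidateStream(stream) ends up here
    ct.writeLoop();    // the client notices and transparently recreates its track
    return 0;
}

On the recreated track, createTrack() finds the secondaryOutputs and builds the tee patch, which is exactly the code shown next.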
Below is the part of AudioFlinger::createTrack() that deals with secondaryOutputs.
Summary: for each secondaryOutput a matching patchRecord/patchTrack pair is created; the patchTrack is added to the secondaryThread, the patchTrack and patchRecord register each other as peer proxies, and the pair also has to be attached to the track as a tee patch. This is the heart of the data flow and is picked up again in section 3:
if (lStatus == NO_ERROR) {
    // Connect secondary outputs. Failure on a secondary output must not impede the primary.
    // Any secondary output setup failure will lead to a desync between the AP and AF until
    // the track is destroyed.
    TeePatches teePatches;
    for (audio_io_handle_t secondaryOutput : secondaryOutputs) {
        PlaybackThread *secondaryThread = checkPlaybackThread_l(secondaryOutput);
        if (secondaryThread == NULL) {
            ALOGE("no playback thread found for secondary output %d", output.outputId);
            continue;
        }
        size_t sourceFrameCount = thread->frameCount() * output.sampleRate
                / thread->sampleRate();
        size_t sinkFrameCount = secondaryThread->frameCount() * output.sampleRate
                / secondaryThread->sampleRate();
        // If the secondary output has just been opened, the first secondaryThread write
        // will not block as it will fill the empty startup buffer of the HAL,
        // so a second sink buffer needs to be ready for the immediate next blocking write.
        // Additionally, have a margin of one main thread buffer as the scheduling jitter
        // can reorder the writes (eg if thread A&B have the same write interval,
        // the scheduler could schedule AB...BA)
        size_t frameCountToBeReady = 2 * sinkFrameCount + sourceFrameCount;
        // Total secondary output buffer must be at least as the read frames plus
        // the margin of a few buffers on both sides in case the
        // threads scheduling has some jitter.
        // That value should not impact latency as the secondary track is started before
        // its buffer is full, see frameCountToBeReady.
        size_t frameCount = frameCountToBeReady + 2 * (sourceFrameCount + sinkFrameCount);
        // The frameCount should also not be smaller than the secondary thread min frame
        // count
        size_t minFrameCount = AudioSystem::calculateMinFrameCount(
                [&] { Mutex::Autolock _l(secondaryThread->mLock);
                      return secondaryThread->latency_l(); }(),
                secondaryThread->mNormalFrameCount,
                secondaryThread->mSampleRate,
                output.sampleRate,
                input.speed);
        frameCount = std::max(frameCount, minFrameCount);

        using namespace std::chrono_literals;
        auto inChannelMask = audio_channel_mask_out_to_in(input.config.channel_mask);
        sp<RecordThread::PatchRecord> patchRecord = new RecordThread::PatchRecord(
                nullptr /* thread */,
                output.sampleRate,
                inChannelMask,
                input.config.format,
                frameCount,
                NULL /* buffer */,
                (size_t)0 /* bufferSize */,
                AUDIO_INPUT_FLAG_DIRECT,
                0ns /* timeout */);
        status_t status = patchRecord->initCheck();
        if (status != NO_ERROR) {
            ALOGE("Secondary output patchRecord init failed: %d", status);
            continue;
        }
        // TODO: We could check compatibility of the secondaryThread with the PatchTrack
        // for fast usage: thread has fast mixer, sample rate matches, etc.;
        // for now, we exclude fast tracks by removing the Fast flag.
        const audio_output_flags_t outputFlags =
                (audio_output_flags_t)(output.flags & ~AUDIO_OUTPUT_FLAG_FAST);
        sp<PlaybackThread::PatchTrack> patchTrack = new PlaybackThread::PatchTrack(
                secondaryThread,
                streamType,
                output.sampleRate,
                input.config.channel_mask,
                input.config.format,
                frameCount,
                patchRecord->buffer(),
                patchRecord->bufferSize(),
                outputFlags,
                0ns /* timeout */,
                frameCountToBeReady);
        status = patchTrack->initCheck();
        if (status != NO_ERROR) {
            ALOGE("Secondary output patchTrack init failed: %d", status);
            continue;
        }
        teePatches.push_back({patchRecord, patchTrack});
        secondaryThread->addPatchTrack(patchTrack);
        // In case the downstream patchTrack on the secondaryThread temporarily outlives
        // our created track, ensure the corresponding patchRecord is still alive.
        patchTrack->setPeerProxy(patchRecord, true /* holdReference */);
        patchRecord->setPeerProxy(patchTrack, false /* holdReference */);
    }
    track->setTeePatches(std::move(teePatches));
}
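To make the buffer sizing concrete, here is a small worked example of the frame-count math above. The thread sizes and sample rates are assumptions chosen for illustration (a 960-frame primary mixer thread and a 1024-frame remote submix thread, all at 48 kHz), not values from the source:

// Worked example of the frame-count math above (all input values are assumptions).
#include <cstddef>
#include <cstdio>

int main() {
    // Assumed primary (source) thread: 960 frames @ 48000 Hz.
    size_t threadFrameCount = 960, threadSampleRate = 48000;
    // Assumed secondary (remote submix) thread: 1024 frames @ 48000 Hz.
    size_t secondaryFrameCount = 1024, secondarySampleRate = 48000;
    // Track sample rate (output.sampleRate in the code above).
    size_t trackSampleRate = 48000;

    size_t sourceFrameCount = threadFrameCount * trackSampleRate / threadSampleRate;      // 960
    size_t sinkFrameCount = secondaryFrameCount * trackSampleRate / secondarySampleRate;  // 1024
    // Two sink buffers plus one source buffer must be ready before the first write.
    size_t frameCountToBeReady = 2 * sinkFrameCount + sourceFrameCount;                   // 3008
    // The total buffer adds a jitter margin of two (source + sink) buffers on top.
    size_t frameCount = frameCountToBeReady + 2 * (sourceFrameCount + sinkFrameCount);    // 6976

    printf("patch buffer: %zu frames total, track starts once %zu frames are ready\n",
            frameCount, frameCountToBeReady);
    return 0;
}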
2. How is the output device for the secondary output chosen?
As mentioned above, checkOutputsForDevice() runs before checkSecondaryOutputs(). It creates the OUT_REMOTE_SUBMIX output and, once the open succeeds, performs the first setOutputDevices() call. Because remote submix devices are distinguished by address (device_distinguishes_on_address()), the freshly opened output is immediately force-routed to the AUDIO_DEVICE_OUT_REMOTE_SUBMIX device at the policy mix's address:
AudioPolicyManager::checkOutputsForDevice()

desc = new SwAudioOutputDescriptor(profile, mpClientInterface);
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
status_t status = desc->open(nullptr, DeviceVector(device),
        AUDIO_STREAM_DEFAULT, AUDIO_OUTPUT_FLAG_NONE, &output);
if (status == NO_ERROR) {
    if (output != AUDIO_IO_HANDLE_NONE) {
        addOutput(output, desc);
        if (audio_is_remote_submix_device(deviceType) && address != "0") {
            sp<AudioPolicyMix> policyMix;
            if (mPolicyMixes.getAudioPolicyMix(deviceType, address, policyMix)
                    == NO_ERROR) {
                policyMix->setOutput(desc);
                desc->mPolicyMix = policyMix;
            }
        }
    }
} else {
    output = AUDIO_IO_HANDLE_NONE;
}
if (output == AUDIO_IO_HANDLE_NONE) {
    // (error handling elided)
} else {
    outputs.add(output);
    if (device_distinguishes_on_address(deviceType)) {
        setOutputDevices(desc, DeviceVector(device), true /*force*/, 0 /*delay*/,
                NULL /*patch handle*/);
    }
}
3. How does the data flow work?
Track::releaseBuffer() -> interceptBuffer() -> PatchRecord::writeFrames()   // writes the data into the bufferProvider
->
A. PatchRecord::getNextBuffer(BufferProvider)
->
A.1. PatchTrack::obtainBuffer(ProxyBuffer) -> mProxy::obtainBuffer()   // gets the available size as a reference for the record side
A.2. RecordTrack::getNextBuffer(ProxyBuffer) -> mServerProxy::obtainBuffer()   // points the bufferProvider at the record's cblk memory
B. memcpy copies the data into the obtained BufferProvider, i.e. into the memory behind the record's cblk. Note that the PatchTrack's mBuffer actually points at the PatchRecord's buffer; it was handed over when AudioFlinger created the PatchTrack.
C. PatchRecord::releaseBuffer(BufferProvider)
->
C.1. PatchTrack::releaseBuffer(ProxyBuffer) -> mProxy::releaseBuffer(ProxyBuffer)
C.2. TrackBase::releaseBuffer() -> PatchRecord's serverProxy::releaseBuffer(ServerProxyBuffer)
(The PatchRecord above fits the usual capture pattern: the server side fills data, the client side consumes it and releases the buffer, and how much the server fills is bounded by the buffer size obtained on the PatchTrack's client side.
Likewise the PatchTrack carries out the usual client-side playback pattern: fill the buffer, releaseBuffer, and wait for the AudioMixer to consume the data and update the buffer bookkeeping.)
The PatchTrack sits on the output thread and keeps writing data down to the HAL whenever enough data is available. The PatchRecord, meanwhile, waits for the moment the mixer releases the AudioTrack's buffer to write that data into the buffer it obtained via obtainBuffer(). Since that buffer is in fact the PatchTrack's buffer, the PatchTrack only needs to call releaseBuffer() through its proxy to update the bookkeeping in mCblk, and the output thread hosting the PatchTrack can then pick up the data and write it to the HAL.
From all of the above, the data path is: AudioTrack -> PatchRecord -> PatchTrack -> the remote submix HAL, and the app then pulls the data back up through its own AudioRecord.
The two key takeaways are that PatchRecord and PatchTrack are the constructs that transport data on the Track (playback) side, and that they share a single buffer.
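To make those two takeaways concrete, here is a minimal single-producer/single-consumer model of the shared buffer, assuming 16-bit frames. It is a sketch with made-up names (SharedAudioBuffer, Cblk), not the AOSP implementation: the producer side plays the role of PatchRecord::writeFrames() (obtainBuffer, memcpy, releaseBuffer), the consumer side plays the role of the output thread draining the PatchTrack, and the single mData array models the one buffer both peers point at, with the Cblk positions standing in for the bookkeeping in mCblk:

// Minimal single-producer/single-consumer model of the shared buffer between
// PatchRecord (producer side) and PatchTrack (consumer side). All names are
// illustrative; this is not AOSP code.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Cblk {                       // stands in for the control block (mCblk)
    std::atomic<uint32_t> writePos{0};
    std::atomic<uint32_t> readPos{0};
};

class SharedAudioBuffer {
public:
    explicit SharedAudioBuffer(uint32_t frames) : mFrames(frames), mData(frames) {}

    // Producer side: modeled after PatchRecord::writeFrames() -> obtainBuffer/memcpy/releaseBuffer.
    uint32_t write(const int16_t* src, uint32_t count) {
        uint32_t w = mCblk.writePos.load(std::memory_order_relaxed);
        uint32_t r = mCblk.readPos.load(std::memory_order_acquire);
        uint32_t avail = mFrames - (w - r);          // free space ("obtainBuffer")
        uint32_t n = std::min(count, avail);
        for (uint32_t i = 0; i < n; i++) {           // the memcpy step
            mData[(w + i) % mFrames] = src[i];
        }
        mCblk.writePos.store(w + n, std::memory_order_release);  // "releaseBuffer"
        return n;
    }

    // Consumer side: modeled after the output thread draining the PatchTrack.
    uint32_t read(int16_t* dst, uint32_t count) {
        uint32_t r = mCblk.readPos.load(std::memory_order_relaxed);
        uint32_t w = mCblk.writePos.load(std::memory_order_acquire);
        uint32_t avail = w - r;                      // filled frames
        uint32_t n = std::min(count, avail);
        for (uint32_t i = 0; i < n; i++) {
            dst[i] = mData[(r + i) % mFrames];
        }
        mCblk.readPos.store(r + n, std::memory_order_release);
        return n;
    }

private:
    const uint32_t mFrames;
    std::vector<int16_t> mData;  // one buffer, shared by both sides
    Cblk mCblk;                  // position bookkeeping, like mCblk in the proxies
};

int main() {
    SharedAudioBuffer buf(8);
    int16_t in[4] = {1, 2, 3, 4}, out[4] = {};
    printf("wrote %u frames\n", buf.write(in, 4));  // producer fills
    printf("read %u frames\n", buf.read(out, 4));   // consumer drains
    return 0;
}

Because both sides operate on the same memory and only exchange positions through the control block, no extra copy is needed between PatchRecord and PatchTrack; this is exactly why the real code passes patchRecord->buffer() into the PatchTrack constructor.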