haikuwebkit/LayoutTests/webrtc/multi-audio.html


Source/WebCore: Use one audio unit for all tracks of a given process
https://bugs.webkit.org/show_bug.cgi?id=212406

Reviewed by Eric Carlson.

Before the patch, we were creating one audio unit per track to render. This is potentially inefficient, as on iOS it requires one IPC round trip per chunk of audio data. Instead, we can have a single remote unit that receives the mixed content of all tracks. For that purpose, introduce AudioMediaStreamTrackRendererUnit as a singleton. AudioMediaStreamTrackRendererCocoa now just registers/unregisters sources with AudioMediaStreamTrackRendererUnit, which starts/stops as needed and does the mixing.

This requires a change in AudioSampleDataSource to support mixing when track volumes differ. If we have to mix tracks with different volumes, we first pull the samples into a scratch buffer, apply the volume, and then mix the result with the other tracks.

In the future, we might also do the audio rendering with the CoreAudioSharedUnit directly so as to improve echo cancellation as much as possible.

Interruption is handled by the fact that all tracks should stop playing, thus stop their renderer, thus unregister themselves from the renderer unit. It might be more future-proof to add the unit as an interruption observer as a follow-up.
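The scratch-buffer mixing strategy described above can be sketched as follows. This is a minimal illustration, not the WebCore implementation: the names `MixableSource` and `mixSources` are hypothetical, and the real code operates on CoreAudio buffer lists rather than `std::vector`.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical source: a run of float samples plus a per-track volume.
struct MixableSource {
    std::vector<float> samples;
    float volume { 1.0f };
};

// Mix all sources into one output buffer. A source whose volume is not 1
// is first copied into a scratch buffer and scaled there, so that the
// gain is applied before summing into the shared output.
std::vector<float> mixSources(const std::vector<MixableSource>& sources, size_t frameCount)
{
    std::vector<float> output(frameCount, 0.0f);
    std::vector<float> scratch(frameCount);
    for (auto& source : sources) {
        const float* toMix = source.samples.data();
        if (source.volume != 1.0f) {
            // Apply the per-track gain in the scratch buffer first.
            for (size_t i = 0; i < frameCount; ++i)
                scratch[i] = source.samples[i] * source.volume;
            toMix = scratch.data();
        }
        for (size_t i = 0; i < frameCount; ++i)
            output[i] += toMix[i];
    }
    return output;
}
```

The fast path (volume of exactly 1) skips the scratch copy entirely and sums the source buffer directly, which is the common case when no page has lowered a track's volume.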
Manually tested, plus LayoutTests/webrtc/multi-audio.html.

* SourcesCocoa.txt:
* WebCore.xcodeproj/project.pbxproj:
* platform/audio/mac/AudioSampleBufferList.cpp:
(WebCore::mixBuffers):
(WebCore::AudioSampleBufferList::mixFrom):
* platform/audio/mac/AudioSampleBufferList.h:
* platform/audio/mac/AudioSampleDataSource.mm:
(WebCore::AudioSampleDataSource::pullSamplesInternal):
* platform/mediastream/mac/AudioMediaStreamTrackRendererCocoa.cpp:
(WebCore::AudioMediaStreamTrackRendererCocoa::start):
(WebCore::AudioMediaStreamTrackRendererCocoa::stop):
(WebCore::AudioMediaStreamTrackRendererCocoa::clear):
(WebCore::AudioMediaStreamTrackRendererCocoa::setVolume):
(WebCore::AudioMediaStreamTrackRendererCocoa::pushSamples):
(WebCore::AudioMediaStreamTrackRendererCocoa::createAudioUnit): Deleted.
(WebCore::AudioMediaStreamTrackRendererCocoa::render): Deleted.
(WebCore::AudioMediaStreamTrackRendererCocoa::inputProc): Deleted.
* platform/mediastream/mac/AudioMediaStreamTrackRendererCocoa.h:
(): Deleted.
* platform/mediastream/mac/AudioMediaStreamTrackRendererUnit.cpp: Added.
(WebCore::AudioMediaStreamTrackRendererUnit::singleton):
(WebCore::AudioMediaStreamTrackRendererUnit::~AudioMediaStreamTrackRendererUnit):
(WebCore::AudioMediaStreamTrackRendererUnit::addSource):
(WebCore::AudioMediaStreamTrackRendererUnit::removeSource):
(WebCore::AudioMediaStreamTrackRendererUnit::createAudioUnitIfNeeded):
(WebCore::AudioMediaStreamTrackRendererUnit::start):
(WebCore::AudioMediaStreamTrackRendererUnit::stop):
(WebCore::AudioMediaStreamTrackRendererUnit::formatDescription):
(WebCore::AudioMediaStreamTrackRendererUnit::createAudioUnit):
(WebCore::AudioMediaStreamTrackRendererUnit::render):
(WebCore::AudioMediaStreamTrackRendererUnit::inputProc):
* platform/mediastream/mac/AudioMediaStreamTrackRendererUnit.h: Copied from Source/WebCore/platform/mediastream/mac/AudioMediaStreamTrackRendererCocoa.h.
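The register/unregister lifecycle the change log describes (the unit starts when it gains its first source and stops when the last one unregisters, which is also how interruptions resolve) can be sketched as below. Names and the `int` source handle are illustrative assumptions, not the WebCore API, and the real unit drives a CoreAudio output unit rather than a flag.

```cpp
#include <cassert>
#include <set>

// Hypothetical singleton renderer unit: sources register/unregister and
// the unit starts/stops itself accordingly.
class RendererUnit {
public:
    static RendererUnit& singleton()
    {
        static RendererUnit unit;
        return unit;
    }

    void addSource(int sourceID)
    {
        bool wasEmpty = m_sources.empty();
        m_sources.insert(sourceID);
        if (wasEmpty)
            m_started = true; // Stand-in for starting the shared audio unit.
    }

    void removeSource(int sourceID)
    {
        m_sources.erase(sourceID);
        if (m_sources.empty())
            m_started = false; // Last source gone: stop the shared audio unit.
    }

    bool isStarted() const { return m_started; }

private:
    std::set<int> m_sources;
    bool m_started { false };
};
```

Because every track that stops playing removes its source, an interruption that stops all tracks drains the set and stops the unit without the unit observing interruptions itself, which is the behavior the change log calls out as a possible follow-up to harden.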
LayoutTests: Use one audio unit for all MediaStreamTracks of a given process
https://bugs.webkit.org/show_bug.cgi?id=212406

Reviewed by Eric Carlson.

* webrtc/multi-audio-expected.txt: Added.
* webrtc/multi-audio.html: Added.

Canonical link: https://commits.webkit.org/225707@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@262710 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2020-06-08 14:00:58 +00:00
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Testing multi audio with multiple volumes</title>
<script src="../resources/testharness.js"></script>
<script src="../resources/testharnessreport.js"></script>
</head>
<body>
<video id="video1" autoplay=""></video>
<video id="video2" autoplay=""></video>
<canvas id="canvas" width="640" height="480"></canvas>
<script src="routines.js"></script>
<script>
var pc1, pc2;
promise_test(async (test) => {
    if (window.testRunner)
        testRunner.setUserMediaPermission(true);

    let remoteStream1, remoteStream2;
    let counter = 0;
    const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    const stream = await new Promise((resolve, reject) => {
        createConnections((firstConnection) => {
            pc1 = firstConnection;
            firstConnection.addTrack(localStream.getAudioTracks()[0], localStream);
            firstConnection.addTrack(localStream.getVideoTracks()[0], localStream);
            const clone = localStream.clone();
            firstConnection.addTrack(clone.getAudioTracks()[0], clone);
            firstConnection.addTrack(clone.getVideoTracks()[0], clone);
        }, (secondConnection) => {
            pc2 = secondConnection;
            secondConnection.ontrack = (trackEvent) => {
                if (!remoteStream1)
                    remoteStream1 = trackEvent.streams[0];
                else if (trackEvent.streams[0] !== remoteStream1)
                    remoteStream2 = trackEvent.streams[0];
                if (++counter >= 4)
                    resolve();
            };
        });
        setTimeout(() => reject("Test timed out"), 5000);
    });

    video1.volume = 0.01;
    video1.srcObject = remoteStream1;
    video2.volume = 1;
    video2.srcObject = remoteStream2;
    await video1.play();
    await video2.play();
    video1.volume = 0.1;
}, "Multi audio with different volumes");
</script>
</body>
</html>