haikuwebkit/Source/WebCore/platform/audio/PushPullFIFO.h

/*
* Copyright 2017 The Chromium Authors. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
* ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once

#include <wtf/Forward.h>
#include <wtf/Noncopyable.h>
#include <wtf/RefPtr.h>

namespace WebCore {

class AudioBus;

// The PushPullFIFO class is intermediate audio sample storage between the
// WebKit WebAudio engine and the renderer. The renderer's hardware callback
// buffer size varies by platform, but WebAudio always renders 128 frames (one
// render quantum, RQ), so a FIFO is needed to handle the general case. A usage
// sketch follows the class definition below.
class PushPullFIFO {
    WTF_MAKE_FAST_ALLOCATED;
    WTF_MAKE_NONCOPYABLE(PushPullFIFO);
public:
    // Maximum FIFO length. (512 render quanta)
    static constexpr size_t maxFIFOLength { 65536 };

    // |fifoLength| cannot exceed |maxFIFOLength|. Otherwise it crashes.
    WEBCORE_EXPORT PushPullFIFO(unsigned numberOfChannels, size_t fifoLength);
    WEBCORE_EXPORT ~PushPullFIFO();

    // Pushes frames rendered by the WebAudio engine.
    //  - The |inputBus| length is 128 frames (1 render quantum), fixed.
    //  - In case of overflow (the FIFO is full while pushing), the existing frames
    //    in the FIFO are overwritten and |indexRead| is forcibly moved to
    //    |indexWrite| to avoid reading overwritten frames.
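    //    (Illustrative example: a 256-frame FIFO holding 200 frames receives one
    //    more 128-frame quantum; the oldest 72 frames are overwritten, the FIFO
    //    then holds the most recent 256 frames, and the next pull() starts at the
    //    oldest surviving frame.)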
    WEBCORE_EXPORT void push(const AudioBus* inputBus);

    // Pulls |framesRequested| frames on behalf of the audio device thread and
    // returns the actual number of frames to be rendered by the source
    // (i.e. the WebAudio graph).
    WEBCORE_EXPORT size_t pull(AudioBus* outputBus, size_t framesRequested);

    size_t framesAvailable() const { return m_framesAvailable; }
    size_t length() const { return m_fifoLength; }
    unsigned numberOfChannels() const;
    AudioBus* bus() const { return m_fifoBus.get(); }

private:
    // The size of the FIFO.
    const size_t m_fifoLength = 0;
    RefPtr<AudioBus> m_fifoBus;

    // The number of frames in the FIFO actually available for pulling.
    size_t m_framesAvailable { 0 };

    size_t m_indexRead { 0 };
    size_t m_indexWrite { 0 };
};

} // namespace WebCore
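
For orientation, here is a minimal sketch of the producer/consumer pattern the comments above describe. It is not part of the header: the helper names and the surrounding threading setup are hypothetical, and the only PushPullFIFO operations shown are the push() and pull() declared above.

#include "AudioBus.h"
#include "PushPullFIFO.h"

namespace WebCore {

// Producer side (WebAudio rendering thread): hand one 128-frame render quantum
// to the FIFO each time the graph finishes rendering a quantum.
static void pushRenderQuantum(PushPullFIFO& fifo, const AudioBus& renderQuantumBus)
{
    // renderQuantumBus holds exactly 128 freshly rendered frames.
    fifo.push(&renderQuantumBus);
}

// Consumer side (audio device thread): fill the hardware callback buffer from
// the FIFO, then report how much the graph should render to refill it.
static size_t pullForDeviceCallback(PushPullFIFO& fifo, AudioBus& deviceBus, size_t deviceCallbackFrames)
{
    // Per the comment on pull() above, the return value is the number of frames
    // the source (the WebAudio graph) should render next.
    return fifo.pull(&deviceBus, deviceCallbackFrames);
}

} // namespace WebCore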