haikuwebkit/LayoutTests/fast/speechrecognition/start-second-recognition.html

<!DOCTYPE html>
<html>
<body>
<script src="../../resources/js-test.js"></script>
<script>
description("Verify that starting a second recognition aborts the ongoing one.");
if (window.testRunner) {
    jsTestIsAsync = true;
}
shouldNotThrow("recognition = new webkitSpeechRecognition();");
receivedStart = false;
recognition.onstart = (event) => {
    receivedStart = true;
};
recognition.onerror = (event) => {
    shouldBeTrue("receivedStart");
    shouldBeEqualToString("event.error", "aborted");
    shouldBeEqualToString("event.message", "Another request is started");
    finishJSTest();
};
shouldNotThrow("recognition.start()");
shouldNotThrow("secondRecognition = new webkitSpeechRecognition();");
shouldNotThrow("secondRecognition.start()");
</script>
</body>
</html>