haikuwebkit/LayoutTests/fast/speechrecognition/start-recognition-in-remove...

Implement basic permission check for SpeechRecognition
https://bugs.webkit.org/show_bug.cgi?id=218476
<rdar://problem/71222638>

Reviewed by Youenn Fablet.

Source/WebCore:

Tests: fast/speechrecognition/permission-error.html
       fast/speechrecognition/start-recognition-in-removed-iframe.html

* Modules/speech/SpeechRecognition.cpp:
(WebCore::SpeechRecognition::startRecognition):
* Modules/speech/SpeechRecognitionConnection.h:
* Modules/speech/SpeechRecognitionRequest.cpp:
(WebCore::SpeechRecognitionRequest::create): Deleted.
* Modules/speech/SpeechRecognitionRequest.h:
(WebCore::SpeechRecognitionRequest::clientOrigin const):
* Modules/speech/SpeechRecognitionRequestInfo.h:
(WebCore::SpeechRecognitionRequestInfo::encode const):
(WebCore::SpeechRecognitionRequestInfo::decode):
* Modules/speech/SpeechRecognitionUpdate.cpp:
(WebCore::SpeechRecognitionUpdate::create):
(WebCore::SpeechRecognitionUpdate::error const):
(WebCore::SpeechRecognitionUpdate::result const):
* Modules/speech/SpeechRecognitionUpdate.h:
* page/DummySpeechRecognitionProvider.h:

Source/WebKit:

Introduce SpeechRecognitionPermissionManager, which checks and requests
speech recognition permissions before we actually start capturing audio
and performing recognition. SpeechRecognitionPermissionManager is
per-page, like SpeechRecognitionServer. The checks include:
1. Sandbox requirement for microphone
2. TCC check for microphone
3. TCC check for SFSpeechRecognizer
4. User permission on speech recognition for origin

Add a delegate function for requesting user permission. By default,
user permission is not granted.

API test: WebKit2.SpeechRecognitionUserPermissionPersistence

* Headers.cmake:
* Shared/API/APIObject.h:
* Shared/API/c/WKBase.h:
* Sources.txt:
* SourcesCocoa.txt:
* UIProcess/API/APIUIClient.h:
(API::UIClient::decidePolicyForSpeechRecognitionPermissionRequest):
* UIProcess/API/C/WKAPICast.h:
* UIProcess/API/C/WKPage.cpp:
(WKPageSetPageUIClient):
* UIProcess/API/C/WKPageUIClient.h:
* UIProcess/API/C/WKSpeechRecognitionPermissionCallback.cpp: Added.
(WKSpeechRecognitionPermissionCallbackGetTypeID):
(WKSpeechRecognitionPermissionCallbackComplete):
* UIProcess/API/C/WKSpeechRecognitionPermissionCallback.h: Added.
* UIProcess/API/Cocoa/WKPreferences.mm:
(-[WKPreferences _speechRecognitionEnabled]):
(-[WKPreferences _setSpeechRecognitionEnabled:]):
* UIProcess/API/Cocoa/WKPreferencesPrivate.h:
* UIProcess/API/Cocoa/WKUIDelegatePrivate.h:
* UIProcess/Cocoa/MediaPermissionUtilities.mm: Added.
(WebKit::checkSandboxRequirementForType):
(WebKit::checkUsageDescriptionStringForType):
(WebKit::checkUsageDescriptionStringForSpeechRecognition):
(WebKit::requestAVCaptureAccessForType):
(WebKit::checkAVCaptureAccessForType):
(WebKit::requestSpeechRecognitionAccess):
(WebKit::checkSpeechRecognitionServiceAccess):
* UIProcess/Cocoa/UIDelegate.h:
* UIProcess/Cocoa/UIDelegate.mm:
(WebKit::UIDelegate::UIClient::decidePolicyForSpeechRecognitionPermissionRequest):
* UIProcess/Cocoa/UserMediaPermissionRequestManagerProxy.mm:
(WebKit::UserMediaPermissionRequestManagerProxy::permittedToCaptureAudio):
(WebKit::UserMediaPermissionRequestManagerProxy::permittedToCaptureVideo):
(WebKit::UserMediaPermissionRequestManagerProxy::requestSystemValidation):
(WebKit::requestAVCaptureAccessForMediaType): Deleted.
* UIProcess/Cocoa/WebProcessProxyCocoa.mm:
* UIProcess/MediaPermissionUtilities.h: Added.
* UIProcess/SpeechRecognitionPermissionManager.cpp: Added.
(WebKit::computeMicrophoneAccess):
(WebKit::computeSpeechRecognitionServiceAccess):
(WebKit::SpeechRecognitionPermissionManager::SpeechRecognitionPermissionManager):
(WebKit::SpeechRecognitionPermissionManager::~SpeechRecognitionPermissionManager):
(WebKit::SpeechRecognitionPermissionManager::request):
(WebKit::SpeechRecognitionPermissionManager::startNextRequest):
(WebKit::SpeechRecognitionPermissionManager::startProcessingRequest):
(WebKit::SpeechRecognitionPermissionManager::continueProcessingRequest):
(WebKit::SpeechRecognitionPermissionManager::completeCurrentRequest):
(WebKit::SpeechRecognitionPermissionManager::requestSpeechRecognitionServiceAccess):
(WebKit::SpeechRecognitionPermissionManager::requestMicrophoneAccess):
(WebKit::SpeechRecognitionPermissionManager::requestUserPermission):
* UIProcess/SpeechRecognitionPermissionManager.h: Added.
* UIProcess/SpeechRecognitionPermissionRequest.h: Added.
(WebKit::SpeechRecognitionPermissionRequest::create):
(WebKit::SpeechRecognitionPermissionRequest::complete):
(WebKit::SpeechRecognitionPermissionRequest::origin const):
(WebKit::SpeechRecognitionPermissionRequest::SpeechRecognitionPermissionRequest):
(WebKit::SpeechRecognitionPermissionCallback::create):
(WebKit::SpeechRecognitionPermissionCallback::complete):
(WebKit::SpeechRecognitionPermissionCallback::SpeechRecognitionPermissionCallback):
* UIProcess/SpeechRecognitionServer.cpp:
(WebKit::SpeechRecognitionServer::SpeechRecognitionServer):
(WebKit::SpeechRecognitionServer::start):
(WebKit::SpeechRecognitionServer::requestPermissionForRequest):
(WebKit::SpeechRecognitionServer::stop):
(WebKit::SpeechRecognitionServer::abort):
(WebKit::SpeechRecognitionServer::invalidate):
(WebKit::SpeechRecognitionServer::handleRequest):
(WebKit::SpeechRecognitionServer::stopRequest):
(WebKit::SpeechRecognitionServer::abortRequest):
(WebKit::SpeechRecognitionServer::sendUpdate):
(WebKit::SpeechRecognitionServer::processNextPendingRequestIfNeeded): Deleted.
(WebKit::SpeechRecognitionServer::removePendingRequest): Deleted.
(WebKit::SpeechRecognitionServer::startPocessingRequest): Deleted.
(WebKit::SpeechRecognitionServer::stopProcessingRequest): Deleted.
* UIProcess/SpeechRecognitionServer.h:
* UIProcess/SpeechRecognitionServer.messages.in:
* UIProcess/WebPageProxy.cpp:
(WebKit::WebPageProxy::didChangeMainDocument):
(WebKit::WebPageProxy::resetState):
(WebKit::WebPageProxy::requestSpeechRecognitionPermission):
* UIProcess/WebPageProxy.h:
* UIProcess/WebProcessProxy.cpp:
(WebKit::WebProcessProxy::createSpeechRecognitionServer):
* WebKit.xcodeproj/project.pbxproj:
* WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.cpp:
(WebKit::WebSpeechRecognitionConnection::start):
(WebKit::WebSpeechRecognitionConnection::didReceiveUpdate):
* WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:

Source/WTF:

* wtf/PlatformHave.h:

Tools:

* MiniBrowser/mac/Info.plist:
* MobileMiniBrowser/MobileMiniBrowser/Info.plist:
* TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj:
* TestWebKitAPI/Tests/WebKitCocoa/SpeechRecognition.mm: Added.
(-[SpeechRecognitionPermissionUIDelegate _webView:requestSpeechRecognitionPermissionForOrigin:decisionHandler:]):
(-[SpeechRecognitionMessageHandler userContentController:didReceiveScriptMessage:]):
(TestWebKitAPI::TEST):
* TestWebKitAPI/Tests/WebKitCocoa/speechrecognition-user-permission-persistence.html: Added.
* WebKitTestRunner/InjectedBundle/Bindings/TestRunner.idl:
* WebKitTestRunner/InjectedBundle/TestRunner.cpp:
(WTR::TestRunner::setIsSpeechRecognitionPermissionGranted):
* WebKitTestRunner/InjectedBundle/TestRunner.h:
* WebKitTestRunner/TestController.cpp:
(WTR::decidePolicyForSpeechRecognitionPermissionRequest):
(WTR::TestController::completeSpeechRecognitionPermissionCheck):
(WTR::TestController::setIsSpeechRecognitionPermissionGranted):
(WTR::TestController::createWebViewWithOptions):
(WTR::TestController::resetStateToConsistentValues):
* WebKitTestRunner/TestController.h:
* WebKitTestRunner/TestInvocation.cpp:
(WTR::TestInvocation::didReceiveSynchronousMessageFromInjectedBundle):

LayoutTests:

* TestExpectations:
* fast/speechrecognition/permission-error-expected.txt: Added.
* fast/speechrecognition/permission-error.html: Added.
* fast/speechrecognition/resources/removed-iframe.html: Added.
* fast/speechrecognition/start-recognition-in-removed-iframe-expected.txt: Added.
* fast/speechrecognition/start-recognition-in-removed-iframe.html: Added.
* platform/wk2/TestExpectations:

Canonical link: https://commits.webkit.org/231575@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@269810 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2020-11-14 02:16:49 +00:00
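The change log above describes a four-step, short-circuiting permission gate that runs before any audio is captured, with the user-permission delegate defaulting to deny. A minimal JavaScript sketch of that ordering follows; every name in it (`sandboxAllowsMicrophone`, `userGrantsPermission`, etc.) is a hypothetical stand-in, not WebKit API:

```javascript
// Illustrative sketch of the four checks run before speech recognition
// starts. The check functions are hypothetical stand-ins supplied by the
// caller; only the ordering and short-circuit behavior mirror the change.
async function requestSpeechRecognitionPermission(origin, checks) {
    // 1. Sandbox requirement for microphone.
    if (!await checks.sandboxAllowsMicrophone())
        return { granted: false, deniedBy: "sandbox" };
    // 2. System (TCC) check for microphone.
    if (!await checks.systemAllowsMicrophone())
        return { granted: false, deniedBy: "microphone" };
    // 3. System (TCC) check for the speech recognition service.
    if (!await checks.systemAllowsSpeechRecognition())
        return { granted: false, deniedBy: "speech-service" };
    // 4. User permission for this origin; with no delegate, the default
    //    is to deny, matching the behavior described above.
    const userGranted = checks.userGrantsPermission
        ? await checks.userGrantsPermission(origin)
        : false;
    if (!userGranted)
        return { granted: false, deniedBy: "user" };
    return { granted: true };
}
```

In WebKit itself these steps live in `SpeechRecognitionPermissionManager` (`startProcessingRequest`, `requestMicrophoneAccess`, `requestUserPermission`, and friends); the sketch only captures the sequence, not the real plumbing.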
<!DOCTYPE html>
<html>
<body>
<script src="../../resources/js-test.js"></script>
<script>
description("Verify that process does not crash when starting recognition in a removed iframe.");
if (window.testRunner) {
    jsTestIsAsync = true;
}
function test()
{
    iframe = document.getElementsByTagName('iframe')[0];
    shouldThrow("iframe.contentWindow.startRecognition()");
}
function removeFrame()
{
    shouldNotThrow("iframe.parentNode.removeChild(iframe)");
    setTimeout(() => finishJSTest(), 0);
}
window.addEventListener('load', test, false);
</script>
<iframe src="resources/removed-iframe.html"></iframe>
</body>
</html>