haikuwebkit/PerformanceTests/Speedometer/index.html

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Speedometer 2.0</title>
<link rel="stylesheet" href="resources/main.css">
<script src="resources/main.js" defer></script>
<script src="resources/benchmark-runner.js" defer></script>
<script src="resources/benchmark-report.js" defer></script>
<script src="../resources/statistics.js" defer></script>
<script src="resources/tests.js" defer></script>
</head>
<body>
<main>
<a id="logo-link" href="javascript:showHome()"><img id="logo" src="resources/logo.png"></a>
<section id="home" class="selected">
<p>
Speedometer is a browser benchmark that measures the responsiveness of Web applications.
It uses demo web applications to simulate user actions such as adding to-do items.
</p>
<p id="screen-size-warning"><strong>
Your browser window is too small. For the most accurate results, please make the viewport at least 850px by 650px.<br>
It's currently <span id="screen-size"></span>.
</strong></p>
<div class="buttons">
<button onclick="startTest()">Start Test</button>
</div>
<p class="show-about"><a href="javascript:showAbout()">About Speedometer</a></p>
</section>
<section id="running">
<div id="testContainer"></div>
<div id="progress"><div id="progress-completed"></div></div>
<div id="info"></div>
</section>
<section id="summarized-results">
<h1>Runs / Minute</h1>
<div class="gauge"><div class="window"><div class="needle"></div></div></div>
<hr>
<div id="result-number"></div>
<div id="confidence-number"></div>
<div class="buttons">
<button onclick="startTest()">Test Again</button>
<button class="show-details" onclick="showResultDetails()">Details</button>
</div>
</section>
<section id="detailed-results">
<h1>Detailed Results</h1>
<table class="results-table"></table>
<table class="results-table"></table>
<div class="arithmetic-mean"><label>Arithmetic Mean:</label><span id="results-with-statistics"></span></div>
<div class="buttons">
<button onclick="startTest()">Test Again</button>
<button id="show-summary" onclick="showResultsSummary()">Summary</button>
</div>
<p class="show-about"><a href="javascript:showAbout()">About Speedometer</a></p>
</section>
<section id="about">
<h1>About Speedometer 2.0</h1>
<p>Speedometer tests a browser's Web app responsiveness by timing simulated user interactions.</p>
<p>
This benchmark simulates user actions for adding, completing, and removing to-do items using multiple examples in TodoMVC.
Each example in TodoMVC implements the same todo application using DOM APIs in different ways.
Some call DOM APIs directly from ECMAScript 5 (ES5), ECMAScript 2015 (ES6), ES6 transpiled to ES5, and Elm transpiled to ES5.
Others use one of eleven popular JavaScript frameworks: React, React with Redux, Ember.js, Backbone.js,
AngularJS, (new) Angular, Vue.js, jQuery, Preact, Inferno, and Flight.
Many of these frameworks are used on the most popular websites in the world, such as Facebook and Twitter.
The performance of these types of operations depends on the speed of the DOM APIs, the JavaScript engine,
CSS style resolution, layout, and other technologies.
</p>
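<!--
    A minimal sketch of the kind of simulated user action described above, assuming a
    TodoMVC-style page with a ".new-todo" text field. The helper and the dispatched
    events here are illustrative only and are not the actual benchmark-runner API;
    the real driver lives in resources/benchmark-runner.js and resources/tests.js.

    function addTodoItems(contentDocument, count) {
        // Fill in the "new todo" field and signal an Enter key press, roughly as a user would.
        const input = contentDocument.querySelector('.new-todo');
        for (let i = 0; i < count; i++) {
            input.value = 'Something to do ' + i;
            input.dispatchEvent(new Event('input', { bubbles: true }));
            input.dispatchEvent(new KeyboardEvent('keyup', { bubbles: true, key: 'Enter' }));
        }
    }
-->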
<p>
Although user-driven actions like mouse movements and keyboard input cannot be accurately emulated in JavaScript,
Speedometer does its best to faithfully replay a typical workload within the demo applications.
To make the run time long enough to measure with the limited precision of JavaScript timers,
we synchronously execute a large number of operations, such as adding one hundred to-do items.
</p>
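<!--
    A sketch of why the operations are batched: a single DOM update is too fast to time
    reliably, so a whole batch of synchronous operations is timed at once. The helper
    name and the batch size are illustrative, not part of the actual benchmark code.

    function measureSyncTime(runOneOperation, operationCount) {
        const start = performance.now();
        for (let i = 0; i < operationCount; i++)
            runOneOperation(i);               // e.g. add one to-do item
        return performance.now() - start;     // total time for the whole batch, e.g. 100 items
    }
-->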
<p>
Modern browser engines execute some work asynchronously as an optimization strategy to reduce the run time of synchronous operations.
While returning control to JavaScript execution as soon as possible is worth pursuing,
the run time cost of such asynchronous work should still be included in a holistic measurement of web application performance.
In addition, some modern JavaScript frameworks such as Vue.js and Preact call into DOM APIs asynchronously as an optimization technique.
Speedometer approximates the run time of this asynchronous work on the UI thread with a zero-second timer
that is scheduled immediately after each batch of synchronous operations.
</p>
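<!--
    A sketch of the zero-second-timer technique described above. The helper and callback
    names are illustrative assumptions; the actual measurement logic lives in
    resources/benchmark-runner.js.

    function measureSyncAndAsyncTime(runSyncWork, didMeasure) {
        const syncStart = performance.now();
        runSyncWork();                        // the simulated user actions, executed synchronously
        const syncEnd = performance.now();

        // A 0 ms timer fires only after the current task and any microtask-scheduled
        // work (e.g. the asynchronous DOM updates that frameworks like Vue.js and
        // Preact queue) have completed, so the gap between the end of the synchronous
        // block and the timer callback approximates that asynchronous cost.
        setTimeout(() => {
            const asyncTime = performance.now() - syncEnd;
            didMeasure(syncEnd - syncStart, asyncTime);
        }, 0);
    }
-->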
<p>
Speedometer does not attempt to measure concurrent asynchronous work that does not directly impact the UI thread,
since such work tends not to affect app responsiveness.
</p>
<p class="note">
<strong>Note:</strong> Speedometer should not be used to compare the performance of different JavaScript frameworks,
as the workload differs greatly from framework to framework.
</p>
</section>
</main>
</body>
</html>