haikuwebkit/PerformanceTests/Speedometer/resources/main.js

247 lines
9.7 KiB
JavaScript
Raw Permalink Normal View History

window.benchmarkClient = {
displayUnit: 'runs/min',
iterationCount: 10,
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
stepCount: null,
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
suitesCount: null,
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
_measuredValuesList: [],
_finishedTestCount: 0,
_progressCompleted: null,
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
willAddTestFrame: function (frame) {
var main = document.querySelector('main');
var style = getComputedStyle(main);
frame.style.left = main.offsetLeft + parseInt(style.borderLeftWidth) + parseInt(style.paddingLeft) + 'px';
frame.style.top = main.offsetTop + parseInt(style.borderTopWidth) + parseInt(style.paddingTop) + 'px';
},
willRunTest: function (suite, test) {
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
document.getElementById('info').textContent = suite.name + ' ( ' + this._finishedTestCount + ' / ' + this.stepCount + ' )';
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
},
didRunTest: function () {
this._finishedTestCount++;
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
this._progressCompleted.style.width = (this._finishedTestCount * 100 / this.stepCount) + '%';
},
didRunSuites: function (measuredValues) {
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
this._measuredValuesList.push(measuredValues);
},
willStartFirstIteration: function () {
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
this._measuredValuesList = [];
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
this._finishedTestCount = 0;
this._progressCompleted = document.getElementById('progress-completed');
document.getElementById('logo-link').onclick = function (event) { event.preventDefault(); return false; }
},
didFinishLastIteration: function () {
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
document.getElementById('logo-link').onclick = null;
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
var results = this._computeResults(this._measuredValuesList, this.displayUnit);
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
this._updateGaugeNeedle(results.mean);
document.getElementById('result-number').textContent = results.formattedMean;
if (results.formattedDelta)
document.getElementById('confidence-number').textContent = '\u00b1 ' + results.formattedDelta;
this._populateDetailedResults(results.formattedValues);
document.getElementById('results-with-statistics').textContent = results.formattedMeanAndDelta;
if (this.displayUnit == 'ms') {
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
document.getElementById('show-summary').style.display = 'none';
showResultDetails();
} else
showResultsSummary();
},
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
_computeResults: function (measuredValuesList, displayUnit) {
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
var suitesCount = this.suitesCount;
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
function valueForUnit(measuredValues) {
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
if (displayUnit == 'ms')
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
return measuredValues.geomean;
return measuredValues.score;
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
}
function sigFigFromPercentDelta(percentDelta) {
return Math.ceil(-Math.log(percentDelta)/Math.log(10)) + 3;
}
function toSigFigPrecision(number, sigFig) {
var nonDecimalDigitCount = number < 1 ? 0 : (Math.floor(Math.log(number)/Math.log(10)) + 1);
return number.toPrecision(Math.max(nonDecimalDigitCount, Math.min(6, sigFig)));
}
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
var values = measuredValuesList.map(valueForUnit);
var sum = values.reduce(function (a, b) { return a + b; }, 0);
var arithmeticMean = sum / values.length;
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
var meanSigFig = 4;
var formattedDelta;
var formattedPercentDelta;
if (window.Statistics) {
var delta = Statistics.confidenceIntervalDelta(0.95, values.length, sum, Statistics.squareSum(values));
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
if (!isNaN(delta)) {
var percentDelta = delta * 100 / arithmeticMean;
meanSigFig = sigFigFromPercentDelta(percentDelta);
formattedDelta = toSigFigPrecision(delta, 2);
formattedPercentDelta = toSigFigPrecision(percentDelta, 2) + '%';
}
}
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
var formattedMean = toSigFigPrecision(arithmeticMean, Math.max(meanSigFig, 3));
return {
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
formattedValues: values.map(function (value) {
return toSigFigPrecision(value, 4) + ' ' + displayUnit;
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
}),
mean: arithmeticMean,
formattedMean: formattedMean,
formattedDelta: formattedDelta,
formattedMeanAndDelta: formattedMean + (formattedDelta ? ' \xb1 ' + formattedDelta + ' (' + formattedPercentDelta + ')' : ''),
};
},
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
_addDetailedResultsRow: function (table, iterationNumber, value) {
var row = document.createElement('tr');
var th = document.createElement('th');
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
th.textContent = 'Iteration ' + (iterationNumber + 1);
var td = document.createElement('td');
td.textContent = value;
row.appendChild(th);
row.appendChild(td);
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
table.appendChild(row);
},
_updateGaugeNeedle: function (rpm) {
var needleAngle = Math.max(0, Math.min(rpm, 140)) - 70;
var needleRotationValue = 'rotate(' + needleAngle + 'deg)';
var gaugeNeedleElement = document.querySelector('#summarized-results > .gauge .needle');
gaugeNeedleElement.style.setProperty('-webkit-transform', needleRotationValue);
gaugeNeedleElement.style.setProperty('-moz-transform', needleRotationValue);
gaugeNeedleElement.style.setProperty('-ms-transform', needleRotationValue);
gaugeNeedleElement.style.setProperty('transform', needleRotationValue);
},
_populateDetailedResults: function (formattedValues) {
var resultsTables = document.querySelectorAll('.results-table');
var i = 0;
resultsTables[0].innerHTML = '';
for (; i < Math.ceil(formattedValues.length / 2); i++)
this._addDetailedResultsRow(resultsTables[0], i, formattedValues[i]);
resultsTables[1].innerHTML = '';
for (; i < formattedValues.length; i++)
this._addDetailedResultsRow(resultsTables[1], i, formattedValues[i]);
},
prepareUI: function () {
window.addEventListener('popstate', function (event) {
if (event.state) {
var sectionToShow = event.state.section;
if (sectionToShow) {
var sections = document.querySelectorAll('main > section');
for (var i = 0; i < sections.length; i++) {
if (sections[i].id === sectionToShow)
return showSection(sectionToShow, false);
}
}
}
return showSection('home', false);
}, false);
function updateScreenSize() {
// FIXME: Detect when the window size changes during the test.
var screenIsTooSmall = window.innerWidth < 850 || window.innerHeight < 650;
document.getElementById('screen-size').textContent = window.innerWidth + 'px by ' + window.innerHeight + 'px';
document.getElementById('screen-size-warning').style.display = screenIsTooSmall ? null : 'none';
}
window.addEventListener('resize', updateScreenSize);
updateScreenSize();
}
}
function enableOneSuite(suites, suiteToEnable)
{
suiteToEnable = suiteToEnable.toLowerCase();
var found = false;
for (var i = 0; i < suites.length; i++) {
var currentSuite = suites[i];
if (currentSuite.name.toLowerCase() == suiteToEnable) {
currentSuite.disabled = false;
found = true;
} else
currentSuite.disabled = true;
}
return found;
}
function startBenchmark() {
if (location.search.length > 1) {
var parts = location.search.substring(1).split('&');
for (var i = 0; i < parts.length; i++) {
var keyValue = parts[i].split('=');
var key = keyValue[0];
var value = keyValue[1];
switch (key) {
case 'unit':
if (value == 'ms')
benchmarkClient.displayUnit = 'ms';
else
console.error('Invalid unit: ' + value);
break;
case 'iterationCount':
var parsedValue = parseInt(value);
if (!isNaN(parsedValue))
benchmarkClient.iterationCount = parsedValue;
else
console.error('Invalid iteration count: ' + value);
break;
case 'suite':
if (!enableOneSuite(Suites, value)) {
alert('Suite "' + value + '" does not exist. No tests to run.');
return false;
}
break;
}
}
}
var enabledSuites = Suites.filter(function (suite) { return !suite.disabled; });
Compute the final score using geometric mean in Speedometer 2.0 https://bugs.webkit.org/show_bug.cgi?id=172968 Reviewed by Saam Barati. Make Speedometer 2.0 use the geometric mean of the subtotal of each test suite instead of the total.. In Speedometer 1.0, we used the total time to compute the final score because we wanted to make the slowest framework and library faster. The fastest suite (FlightJS) still accounted for ~6% and the slowest case (React) accounted for ~25% so we felt the total time, or the arithmetic mean with a constant factor, was a good metric to track. In the latest version of Speedometer 2.0, however, the fastest suite (Preact) runs in ~55ms whereas the slowest suite (Inferno) takes 1.5s on Safari. Since the total time is 6.5s, Preact's suite only accounts for ~0.8% of the total score while Inferno's suite accounts for ~23% of the total score. Since the goal of Speedometer is to approximate different kinds of DOM API use patterns on the Web, we want each framework & library to have some measurement impact on the overall benchmark score. Furthermore, after r221205, we're testing both debug build of Ember.js as well as release build. Since debug build is 4x slower, using the total time or the arithmetic mean thereof will effectively give 4x as much weight to debug build of Ember.js relative to release build of Ember.js. Given only ~5% of websites that deploy Ember.js use debug build, this weighting is clearly not right. This patch, therefore, replaces the arithmetic mean by the geometric mean to compute the final score. It also moves the code to compute the final score to BenchmarkRunner to be shared between main.js and InteractiveRunner.html. * Speedometer/InteractiveRunner.html: (.didRunSuites): Show geometric mean, arithmetic mean, total, as well as the score for completeness since this is a debugging page for developers. * Speedometer/resources/benchmark-runner.js: (BenchmarkRunner.prototype.step): Added mean, geomean, and score as measuredValues' properties. (BenchmarkRunner.prototype._runTestAndRecordResults): Removed the dead code. (BenchmarkRunner.prototype._finalize): Compute and add total, arithmetic mean (just mean in the code), and geometric mean (geomean) to measuredValues. * Speedometer/resources/main.js: (window.benchmarkClient): Replaced testsCount by stepsCount and _timeValues by _measuredValuesList. (window.benchmarkClient.willRunTest): (window.benchmarkClient.didRunTest): (window.benchmarkClient.didRunSuites): Store measuredValues object instead of just the total time. (window.benchmarkClient.didFinishLastIteration): (window.benchmarkClient._computeResults): (window.benchmarkClient._computeResults.valueForUnit): Renamed from totalTimeInDisplayUnit. Now simply retrieves the values computed by BenchmarkRunner's_finalize. (startBenchmark): (computeScore): Deleted. Canonical link: https://commits.webkit.org/193020@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221659 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2017-09-06 02:37:41 +00:00
var totalSubtestsCount = enabledSuites.reduce(function (testsCount, suite) { return testsCount + suite.tests.length; }, 0);
benchmarkClient.stepCount = benchmarkClient.iterationCount * totalSubtestsCount;
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
benchmarkClient.suitesCount = enabledSuites.length;
var runner = new BenchmarkRunner(Suites, benchmarkClient);
runner.runMultipleIterations(benchmarkClient.iterationCount);
return true;
}
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
function showSection(sectionIdentifier, pushState) {
var currentSectionElement = document.querySelector('section.selected');
console.assert(currentSectionElement);
var newSectionElement = document.getElementById(sectionIdentifier);
console.assert(newSectionElement);
currentSectionElement.classList.remove('selected');
newSectionElement.classList.add('selected');
if (pushState)
history.pushState({section: sectionIdentifier}, document.title);
}
function showHome() {
showSection('home', true);
}
function startTest() {
if (startBenchmark())
showSection('running');
Rename DoYouEvenBench 0.17 to Speedometer 1.0 and add a new look. https://bugs.webkit.org/show_bug.cgi?id=133455 Reviewed by Timothy Hatcher. Renamed the benchmark to Speedometer and added the new look designed by Timothy Hatcher. Also changed the unit of measurements from milliseconds to runs-per-minute averaged over the number of the benchmark suites (7 for 1.0). You can divide 420000 by the old benchmark score (in milliseconds) to get the new value for the set of tests that are enabled by default in 1.0. You can continue to see results in milliseconds on Full.html#ms. * DoYouEvenBench/Full.html: Added a bunch of sections and the description of the benchmark. * DoYouEvenBench/resources/benchmark-report.js: Remove the newly added content when ran inside a DRT or WTR so that run-perf-tests wouldn't error. * DoYouEvenBench/resources/benchmark-runner.js: (BenchmarkRunner.prototype._appendFrame): Call a newly added willAddTestFrame callback when it exists. * DoYouEvenBench/resources/gauge.png: Added. * DoYouEvenBench/resources/gauge@2x.png: Added. * DoYouEvenBench/resources/logo.png: Added. * DoYouEvenBench/resources/logo@2x.png: Added. * DoYouEvenBench/resources/main.css: Replaced the style. * DoYouEvenBench/resources/main.js: (window.benchmarkClient.willAddTestFrame): Place the iframe right where #testContainer is shown. (window.benchmarkClient.willRunTest): Show the name of the suite (e.g. EmberJS-TodoMVC) to run next. (window.benchmarkClient.didRunSuites): (window.benchmarkClient.willStartFirstIteration): Initialize _timeValues and _finishedTestCount now that we have an UI to run the benchmark multiple times without reloading the page. (window.benchmarkClient.didFinishLastIteration): Split into smaller pieces. (window.benchmarkClient._computeResults): Computes the mean and the statistics for the given time values, and also format them in a human readable form. (window.benchmarkClient._computeResults.totalTimeInDisplayUnit): Converts ms to runs/min. (window.benchmarkClient._computeResults.sigFigFromPercentDelta): Given a percentage error (e.g. 1%), returns the number of significant digits required for the mean. (window.benchmarkClient._computeResults.toSigFigPrecision): Calls toPrecision with the specified precision constrained to be at least the number of non-decimal digits and at most 6. (window.benchmarkClient._addDetailedResultsRow): Renamed from _addResult. It now takes the table to which to add a row and the iteration number. (window.benchmarkClient._updateGaugeNeedle): Added. Controls the angle of the speed indicator. (window.benchmarkClient._populateDetailedResults): Added. (window.benchmarkClient.prepareUI): Added. It adds an event listener to show a specified section when the push state of the document changes, and shows a warning sign when the view port size is too small. We do this inside a callback to avoid running it inside DRT / WTR. (startBenchmark): (showSection): Added. (startTest): Added. (showResultsSummary): Added. (showResultDetails): Added. (showAbout): Added. Canonical link: https://commits.webkit.org/151480@main git-svn-id: https://svn.webkit.org/repository/webkit/trunk@169540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
2014-06-02 19:57:39 +00:00
}
function showResultsSummary() {
showSection('summarized-results', true);
}
function showResultDetails() {
showSection('detailed-results', true);
}
function showAbout() {
showSection('about', true);
}
window.addEventListener('DOMContentLoaded', function () {
if (benchmarkClient.prepareUI)
benchmarkClient.prepareUI();
});