haikuwebkit/PerformanceTests/resources/runner.js

// There are tests for computeStatistics() located in LayoutTests/fast/harness/perftests
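
// When run inside DumpRenderTree / WebKitTestRunner, the calls below keep the test
// alive until the harness is explicitly told the test has finished, and dump the
// results as plain text instead of a pixel snapshot.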
if (window.testRunner) {
    testRunner.waitUntilDone();
    testRunner.dumpAsText();
}
(function () {
    var logLines = window.testRunner ? [] : null;
    var verboseLogging = false;
    var completedIterations;
    var callsPerIteration = 1;
    var currentTest = null;
    var results;
    var jsHeapResults;
    var mallocHeapResults;
    var iterationCount = undefined;
    var lastResponsivenessTimestamp = 0;
    var _longestResponsivenessDelay = 0;
    var continueCheckingResponsiveness = false;

    var PerfTestRunner = {};

    // To make the benchmark results predictable, we replace Math.random with a
    // 100% deterministic alternative.
    PerfTestRunner.randomSeed = PerfTestRunner.initialRandomSeed = 49734321;

    PerfTestRunner.resetRandomSeed = function() {
        PerfTestRunner.randomSeed = PerfTestRunner.initialRandomSeed;
    }

    PerfTestRunner.random = Math.random = function() {
        // Robert Jenkins' 32 bit integer hash function.
        var randomSeed = PerfTestRunner.randomSeed;
        randomSeed = ((randomSeed + 0x7ed55d16) + (randomSeed << 12)) & 0xffffffff;
        randomSeed = ((randomSeed ^ 0xc761c23c) ^ (randomSeed >>> 19)) & 0xffffffff;
        randomSeed = ((randomSeed + 0x165667b1) + (randomSeed << 5)) & 0xffffffff;
        randomSeed = ((randomSeed + 0xd3a2646c) ^ (randomSeed << 9)) & 0xffffffff;
        randomSeed = ((randomSeed + 0xfd7046c5) + (randomSeed << 3)) & 0xffffffff;
        randomSeed = ((randomSeed ^ 0xb55a4f09) ^ (randomSeed >>> 16)) & 0xffffffff;
        PerfTestRunner.randomSeed = randomSeed;
        return (randomSeed & 0xfffffff) / 0x10000000;
    };
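
    // Illustrative usage (hypothetical test code, not part of this file): a test can
    // reset the seed before each iteration so every run consumes the same
    // pseudo-random sequence.
    //
    //     PerfTestRunner.resetRandomSeed();
    //     var value = PerfTestRunner.random(); // deterministic, in the range [0, 1)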

    PerfTestRunner.now = window.performance && window.performance.now ? function () { return window.performance.now(); } : Date.now;

    PerfTestRunner.logInfo = function (text) {
        if (verboseLogging)
            this.log(text);
    }

    PerfTestRunner.logDetail = function (label, value) {
        if (verboseLogging)
            this.log('    ' + label + ': ' + value);
    }

    PerfTestRunner.loadFile = function (path) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", path, false);
        xhr.send(null);
        return xhr.responseText;
    }

    PerfTestRunner.computeStatistics = function (times, unit) {
        var data = times.slice();

        // Add values from the smallest to the largest to avoid the loss of significance
        data.sort(function(a,b){return a-b;});

        var middle = Math.floor(data.length / 2);
        var result = {
            min: data[0],
            max: data[data.length - 1],
            median: data.length % 2 ? data[middle] : (data[middle - 1] + data[middle]) / 2,
        };
        // Compute the mean and variance using Knuth's online algorithm (has good numerical stability).
        var squareSum = 0;
        result.values = times;
        result.mean = 0;
        for (var i = 0; i < data.length; ++i) {
            var x = data[i];
            var delta = x - result.mean;
            var sweep = i + 1.0;
            result.mean += delta / sweep;
            squareSum += delta * (x - result.mean);
        }
        result.variance = data.length <= 1 ? 0 : squareSum / (data.length - 1);
        result.stdev = Math.sqrt(result.variance);
        result.unit = unit || "ms";

        return result;
    }
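
    // Worked example (illustrative values): computeStatistics([4, 1, 3, 5, 2], "ms")
    // returns { values: [4, 1, 3, 5, 2], min: 1, max: 5, median: 3, mean: 3,
    // variance: 2.5, stdev: ~1.58, unit: "ms" }. Note that `values` keeps the caller's
    // original, unsorted order while min/max/median come from a sorted copy.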

    PerfTestRunner.logStatistics = function (values, unit, title) {
        var statistics = this.computeStatistics(values, unit);
        this.log("");
        this.log(title + " -> [" + statistics.values.join(", ") + "] " + statistics.unit);
        ["mean", "median", "stdev", "min", "max"].forEach(function (name) {
            PerfTestRunner.logDetail(name, statistics[name] + ' ' + statistics.unit);
        });
    }
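
    // Example of the lines emitted by logStatistics([10, 12, 14], "ms", ":Time")
    // (illustrative; the indented detail lines only appear when verboseLogging is
    // true, i.e. outside of test runners):
    //
    //     :Time -> [10, 12, 14] ms
    //         mean: 12 ms
    //         median: 12 ms
    //         stdev: 2 ms
    //         min: 10 ms
    //         max: 14 ms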
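
    // getUsedMallocHeap() and getUsedJSHeap() rely on window.internals, a
    // WebKit-internal testing interface, so they only work inside the test harness
    // and not in an ordinary browser session.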
    function getUsedMallocHeap() {
        var stats = window.internals.mallocStatistics();
        return stats.committedVMBytes - stats.freeListBytes;
    }

    function getUsedJSHeap() {
        return window.internals.memoryInfo().usedJSHeapSize;
    }
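
    // gc() uses the testing-only GCController when present; otherwise it falls back to
    // allocating a burst of short-lived objects in the hope of triggering a collection.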
    PerfTestRunner.gc = function () {
        if (window.GCController)
            window.GCController.collect();
        else {
            function gcRec(n) {
                if (n < 1)
                    return {};
                var temp = {i: "ab" + i + (i / 100000)};
                temp += "foo";
                gcRec(n-1);
            }
            for (var i = 0; i < 1000; i++)
                gcRec(10);
        }
    };

    function logInDocument(text) {
        if (!document.getElementById("log")) {
            var pre = document.createElement("pre");
            pre.id = "log";
            document.body.appendChild(pre);
        }
document.getElementById("log").innerHTML += text + "\n";
window.scrollTo(0, document.body.height);
}
PerfTestRunner.log = function (text) {
if (logLines)
logLines.push(text);
else
logInDocument(text);
}
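
    // Inside test runners, log lines are accumulated in logLines instead of being
    // written into the document; outside a runner they are appended to a
    // <pre id="log"> element by logInDocument().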

    function logFatalError(text) {
        PerfTestRunner.log(text);
        finish();
    }

    function start(test, runner, doNotLogStart) {
if (!test) {
logFatalError("Got a bad test object.");
return;
}
currentTest = test;
// FIXME: We should be using multiple instances of test runner on Dromaeo as well but it's too slow now.
// FIXME: Don't hard code the number of in-process iterations to use inside a test runner.
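// Inside DumpRenderTree/WebKitTestRunner, run-perf-tests launches several test
// runner instances (historically 4, each doing 5 in-process iterations for 20
// total), so only 5 iterations run per process here; a standalone browser runs
// all 20 in one go. Tests can override this via test.customIterationCount.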
iterationCount = test.customIterationCount || (window.testRunner ? 5 : 20);
completedIterations = -1;
results = [];
jsHeapResults = [];
mallocHeapResults = [];
verboseLogging = !window.testRunner;
if (!doNotLogStart) {
PerfTestRunner.logInfo('');
PerfTestRunner.logInfo("Running " + iterationCount + " times");
}
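// completedIterations starts at -1 so that the first measurement is treated as
// a warm-up and discarded by ignoreWarmUpAndLog. Tests that aggregate their own
// values (e.g. Dromaeo) set doNotIgnoreInitialRun so the initial run counts.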
if (test.doNotIgnoreInitialRun)
completedIterations++;
if (runner)
scheduleNextRun(runner);
}
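// Illustrative usage (a sketch only, not executed by the harness): a test page
// hands a test object to one of the public entry points, which eventually
// reaches start() above. The option names below are the ones this file reads;
// the exact set available may vary between WebKit revisions.
//
//     PerfTestRunner.measureTime({
//         customIterationCount: 10,   // optional; defaults to 5 in a test runner, 20 otherwise
//         setup: function () { /* run before each timed iteration */ },
//         run: function () { /* the work being measured */ }
//     });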
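// Runs one iteration of |runner| asynchronously. A GC is forced before each
// iteration, and any exception thrown by the test or by the logging path is
// routed through logFatalError so that notifyDone() still fires and the test
// runner does not sit idle until it times out.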
function scheduleNextRun(runner) {
PerfTestRunner.gc();
window.setTimeout(function () {
try {
if (currentTest.setup)
currentTest.setup();
var measuredValue = runner();
} catch (exception) {
logFatalError("Got an exception while running test.run with name=" + exception.name + ", message=" + exception.message);
return;
}
completedIterations++;
try {
ignoreWarmUpAndLog(measuredValue);
} catch (exception) {
logFatalError("Got an exception while logging the result with name=" + exception.name + ", message=" + exception.message);
return;
}
if (completedIterations < iterationCount)
scheduleNextRun(runner);
else
finish();
}, 0);
}
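// Records |measuredValue| unless it belongs to the warm-up run, in which case
// it is only logged and then dropped. |doNotLogProgress| is passed by
// reportValues, which reports all of its values in one batch instead of
// logging per-iteration progress.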
function ignoreWarmUpAndLog(measuredValue, doNotLogProgress) {
var labeledResult = measuredValue + " " + PerfTestRunner.unit;
if (completedIterations <= 0) {
if (!doNotLogProgress)
PerfTestRunner.logDetail(completedIterations, labeledResult + " (Ignored warm-up run)");
return;
}
results.push(measuredValue);
if (window.internals && !currentTest.doNotMeasureMemoryUsage) {
jsHeapResults.push(getUsedJSHeap());
mallocHeapResults.push(getUsedMallocHeap());
}
if (!doNotLogProgress)
PerfTestRunner.logDetail(completedIterations, labeledResult);
}
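// Note: the two memory samples above are taken only when window.internals is
// available (i.e. inside a test runner) and the test has not set
// doNotMeasureMemoryUsage. Purely as an illustration (the run callback is a
// made-up placeholder), a test could opt out of memory sampling like this:
//
//     PerfTestRunner.measureTime({doNotMeasureMemoryUsage: true, run: function () { /* work */ }});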
function finish() {
try {
var prefix = currentTest.name || '';
if (currentTest.description)
PerfTestRunner.log("Description: " + currentTest.description);
var metric = {'fps': 'FrameRate', 'runs/s': 'Runs', 'pt': 'Score', 'ms': 'Time'}[PerfTestRunner.unit];
var suffix = currentTest.aggregator ? ':' + currentTest.aggregator : '';
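// The statistics line below is labeled "<test or subtest name>:<metric>[:<aggregator>]".
// As an illustration, a subtest named "foo" measured in runs/s with no aggregator is
// reported under "foo:Runs", and a main test measured in ms with a "Total" aggregator
// under ":Time:Total".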
PerfTestRunner.logStatistics(results, PerfTestRunner.unit, prefix + ":" + metric + suffix);
if (jsHeapResults.length) {
PerfTestRunner.logStatistics(jsHeapResults, "bytes", prefix + ":JSHeap");
PerfTestRunner.logStatistics(mallocHeapResults, "bytes", prefix + ":Malloc");
}
if (logLines && !currentTest.continueTesting)
logLines.forEach(logInDocument);
if (currentTest.done)
currentTest.done();
} catch (exception) {
logInDocument("Got an exception while finalizing the test with name=" + exception.name + ", message=" + exception.message);
}
if (!currentTest.continueTesting) {
if (window.testRunner)
testRunner.notifyDone();
return;
}
currentTest = null;
}
PerfTestRunner.prepareToMeasureValuesAsync = function (test) {
PerfTestRunner.unit = test.unit;
start(test);
}
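// prepareToMeasureValuesAsync() only records the unit and initializes the iteration
// state via start(); it does not schedule a runner function. The caller is expected to
// feed one value per iteration through measureValueAsync() below -- see the
// illustrative sketch after that function.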
PerfTestRunner.measureValueAsync = function (measuredValue) {
completedIterations++;
try {
ignoreWarmUpAndLog(measuredValue);
} catch (exception) {
logFatalError("Got an exception while logging the result with name=" + exception.name + ", message=" + exception.message);
return false;
}
if (completedIterations >= iterationCount) {
finish();
return false;
}
return true;
}
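// Illustrative sketch only (not part of the harness): an asynchronous test could drive
// the two entry points above as follows. The 'ms' unit is just an example and
// doSomeAsyncWork() is a made-up placeholder.
//
//     PerfTestRunner.prepareToMeasureValuesAsync({unit: 'ms'});
//     function runOneIteration() {
//         var startTime = PerfTestRunner.now();
//         doSomeAsyncWork(function () {
//             // measureValueAsync() returns true while more iterations are wanted and
//             // false once the test has finished or hit a fatal error.
//             if (PerfTestRunner.measureValueAsync(PerfTestRunner.now() - startTime))
//                 runOneIteration();
//         });
//     }
//     runOneIteration();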
PerfTestRunner.reportValues = function (test, values) {
PerfTestRunner.unit = test.unit;
start(test, null, true);
for (var i = 0; i < values.length; i++) {
completedIterations++;
ignoreWarmUpAndLog(values[i], true);
}
finish();
}
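// Illustrative sketch only: a runner that has already collected all of its samples can
// hand them over in a single call. The values below are made up; 'name', 'unit' and
// 'continueTesting' are the test-object properties read by start() and finish() above.
//
//     PerfTestRunner.reportValues({name: 'example-subtest', unit: 'runs/s', continueTesting: true},
//                                 [42.0, 43.5, 41.8]);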
PerfTestRunner.measureTime = function (test) {
PerfTestRunner.unit = "ms";
start(test, measureTimeOnce);
}
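// Illustrative sketch only: a synchronous test passes a run() callback, which
// measureTimeOnce() below times with PerfTestRunner.now(). The body of run() here is a
// made-up placeholder that just forces layout.
//
//     PerfTestRunner.measureTime({
//         description: 'Hypothetical example test',
//         run: function () { for (var i = 0; i < 1000; i++) document.body.offsetHeight; }
//     });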
function measureTimeOnce() {
var start = PerfTestRunner.now();
var returnValue = currentTest.run();
var end = PerfTestRunner.now();
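// If run() itself returned a number, use that as the measured value (e.g. a self-timed
// duration) instead of the wall-clock time taken here; "returnValue - 0 === returnValue"
// is simply a numeric type check.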
if (returnValue - 0 === returnValue) {
if (returnValue < 0)
PerfTestRunner.log("runFunction returned a negative value: " + returnValue);
return returnValue;
}
return end - start;
}
PerfTestRunner.measureRunsPerSecond = function (test) {
PerfTestRunner.unit = "runs/s";
start(test, measureRunsPerSecondOnce);
}
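// A minimal usage sketch (hypothetical test page, not part of this file): a test
// passes an object whose run function performs the work to be measured, e.g.
//
//     PerfTestRunner.measureRunsPerSecond({
//         run: function () {
//             for (var i = 0; i < 100; i++)
//                 document.createElement("div");
//         }
//     });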
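// Runs the current test repeatedly for at least timeToRun (750) ms and returns the
// rate in runs/s. During the warm-up iteration (while completedIterations is still
// negative) it calibrates the shared callsPerIteration counter, doubling it (with a
// floor of 10) until at least 100 ms have accumulated, so that the cost of the
// PerfTestRunner.now() calls themselves does not dominate the measurement.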
function measureRunsPerSecondOnce() {
var timeToRun = 750;
var totalTime = 0;
var numberOfRuns = 0;
while (totalTime < timeToRun) {
totalTime += callRunAndMeasureTime(callsPerIteration);
numberOfRuns += callsPerIteration;
if (completedIterations < 0 && totalTime < 100)
callsPerIteration = Math.max(10, 2 * callsPerIteration);
}
return numberOfRuns * 1000 / totalTime;
}
function callRunAndMeasureTime(callsPerIteration) {
var startTime = PerfTestRunner.now();
for (var i = 0; i < callsPerIteration; i++)
currentTest.run();
return PerfTestRunner.now() - startTime;
}
PerfTestRunner.startCheckingResponsiveness = function() {
lastResponsivenessTimestamp = PerfTestRunner.now();
_longestResponsivenessDelay = 0;
continueCheckingResponsiveness = true;
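// A zero-delay timer should fire again as soon as the event loop is free; any extra
// time between consecutive firings is recorded as a responsiveness delay.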
var timeoutFunction = function() {
var now = PerfTestRunner.now();
var delta = now - lastResponsivenessTimestamp;
if (delta > _longestResponsivenessDelay)
_longestResponsivenessDelay = delta;
lastResponsivenessTimestamp = now;
if (continueCheckingResponsiveness)
setTimeout(timeoutFunction, 0);
}
timeoutFunction();
}
PerfTestRunner.stopCheckingResponsiveness = function() {
continueCheckingResponsiveness = false;
}
PerfTestRunner.longestResponsivenessDelay = function() {
return _longestResponsivenessDelay;
}
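// A minimal sketch of how a test might use the responsiveness checker around an
// asynchronous operation (startAsyncWork is hypothetical):
//
//     PerfTestRunner.startCheckingResponsiveness();
//     startAsyncWork(function () {
//         PerfTestRunner.stopCheckingResponsiveness();
//         PerfTestRunner.logInfo("Longest delay: "
//             + PerfTestRunner.longestResponsivenessDelay() + " ms");
//     });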
PerfTestRunner.measurePageLoadTime = function(test) {
test.run = function() {
var file = PerfTestRunner.loadFile(test.path);
if (!this.chunkSize)
this.chunkSize = 50000; // Default chunk size in bytes ("this" is the test object inside run(), so tests may override it via test.chunkSize).
var chunks = [];
// The smaller the chunks the more style resolves we do.
// Smaller chunk sizes will show more samples in style resolution.
// Larger chunk sizes will show more samples in line layout.
// Smaller chunk sizes run slower overall, as the per-chunk overhead is high.
var chunkCount = Math.ceil(file.length / this.chunkSize);
for (var chunkIndex = 0; chunkIndex < chunkCount; chunkIndex++) {
var chunk = file.substr(chunkIndex * this.chunkSize, this.chunkSize);
chunks.push(chunk);
}
PerfTestRunner.logInfo("Testing " + file.length + " byte document in " + chunkCount + " " + this.chunkSize + " byte chunks.");
var iframe = document.createElement("iframe");
document.body.appendChild(iframe);
iframe.sandbox = ''; // Prevent external loads which could cause write() to return before completing the parse.
iframe.style.width = "600px"; // Have a reasonable size so we're not line-breaking on every character.
iframe.style.height = "800px";
iframe.contentDocument.open();
for (var chunkIndex = 0; chunkIndex < chunks.length; chunkIndex++) {
iframe.contentDocument.write(chunks[chunkIndex]);
// Note that we won't cause a style resolve until we've encountered the <body> element.
// Thus the number of chunks counted above is not exactly equal to the number of style resolves.
if (iframe.contentDocument.body)
iframe.contentDocument.body.clientHeight; // Force a full layout/style-resolve.
else if (iframe.contentDocument.documentElement && iframe.contentDocument.documentElement.localName == 'html')
iframe.contentDocument.documentElement.offsetWidth; // Force a layout while <body> has not been parsed yet.
}
iframe.contentDocument.close();
document.body.removeChild(iframe);
};
PerfTestRunner.measureTime(test);
}
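// A minimal usage sketch (hypothetical test page): the test object needs a path that
// PerfTestRunner.loadFile can resolve; chunkSize is optional and defaults to 50000 bytes.
//
//     PerfTestRunner.measurePageLoadTime({
//         path: "resources/sample-document.html",  // hypothetical resource
//         chunkSize: 100000
//     });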
window.PerfTestRunner = PerfTestRunner;
})();