2023-03-28 Triage Log

A busy week with plenty of real performance gains; most of the apparent regressions turned out to be noise. The biggest highlight was a set of wins in incremental compilation, yielding modest (roughly 1%) improvements across a majority of the incremental test scenarios. Beyond that, most gains were smaller and more piecemeal. The largest regression came from an LLVM upgrade, though even there nearly as many test cases improved as regressed.

Triage done by @rylev. Revision range: ef03fda3..cbc064b3

Summary:

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 1.7% | [0.5%, 3.5%] | 24 |
| Regressions ❌ (secondary) | 1.2% | [0.2%, 2.6%] | 18 |
| Improvements ✅ (primary) | -1.5% | [-10.9%, -0.3%] | 168 |
| Improvements ✅ (secondary) | -4.0% | [-65.3%, -0.4%] | 119 |
| All ❌✅ (primary) | -1.1% | [-10.9%, 3.5%] | 192 |

3 Regressions, 7 Improvements, 8 Mixed; 5 of them in rollups
46 artifact comparisons made in total

Regressions

Add RANLIB_x86_64_unknown_illumos env for dist-x86_64-illumos dockerfile #109163 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.8% | [0.7%, 0.8%] | 4 |
| Regressions ❌ (secondary) | 0.5% | [0.4%, 0.7%] | 9 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.8% | [0.7%, 0.8%] | 4 |
  • Noise

Make NLL Type Relating Eager #108861 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.6% | [0.3%, 0.8%] | 7 |
| Regressions ❌ (secondary) | 0.5% | [0.3%, 0.6%] | 10 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -0.8% | [-0.8%, -0.8%] | 1 |
| All ❌✅ (primary) | 0.6% | [0.3%, 0.8%] | 7 |
  • A good chunk of the regressions are noise, and the rest are small enough that I don't think it's worth looking too deeply into.

Refactor try_execute_query #109100 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.7% | [0.7%, 0.8%] | 4 |
| Regressions ❌ (secondary) | 0.5% | [0.3%, 0.7%] | 8 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.7% | [0.7%, 0.8%] | 4 |
  • Noise

Improvements

Rollup of 10 pull requests #109442 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.4% | [-0.5%, -0.3%] | 4 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -0.4% | [-0.5%, -0.3%] | 4 |

mv tests/codegen/issue-* tests/codegen/issues/ #109172 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.6% | [-0.7%, -0.6%] | 4 |
| Improvements ✅ (secondary) | -0.5% | [-0.6%, -0.4%] | 2 |
| All ❌✅ (primary) | -0.6% | [-0.7%, -0.6%] | 4 |

Rollup of 7 pull requests #109517 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.4% | [-0.4%, -0.4%] | 1 |
| Improvements ✅ (secondary) | -0.5% | [-0.7%, -0.4%] | 6 |
| All ❌✅ (primary) | -0.4% | [-0.4%, -0.4%] | 1 |

Don't pass TreatProjections separately to fast_reject #109202 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.4% | [0.4%, 0.4%] | 1 |
| Improvements ✅ (primary) | -0.9% | [-2.3%, -0.2%] | 17 |
| Improvements ✅ (secondary) | -2.6% | [-10.5%, -0.4%] | 24 |
| All ❌✅ (primary) | -0.9% | [-2.3%, -0.2%] | 17 |

Add #[inline] to the Into for From impl #109546 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.8% | [0.8%, 0.8%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.8% | [-1.5%, -0.4%] | 15 |
| Improvements ✅ (secondary) | -0.9% | [-1.4%, -0.3%] | 19 |
| All ❌✅ (primary) | -0.7% | [-1.5%, 0.8%] | 16 |
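
For context, the change in this PR is a one-attribute addition: the blanket `Into<U> for T where U: From<T>` impl in `core::convert` simply forwards to `U::from`, and this PR marks that forwarding method `#[inline]`. The sketch below mirrors the shape of that impl using locally defined stand-in traits so it compiles on its own; it is an illustration of the pattern, not the verbatim standard-library source.

```rust
// Local stand-ins for core's From/Into so the sketch is self-contained;
// the real change lives in core::convert's blanket impl.
trait FromLike<T>: Sized {
    fn from(value: T) -> Self;
}

trait IntoLike<U>: Sized {
    fn into(self) -> U;
}

// Mirror of `impl<T, U> Into<U> for T where U: From<T>`, with the
// `#[inline]` hint this PR adds to the trivial forwarding method.
impl<T, U> IntoLike<U> for T
where
    U: FromLike<T>,
{
    #[inline]
    fn into(self) -> U {
        U::from(self)
    }
}

struct Celsius(f64);
struct Fahrenheit(f64);

impl FromLike<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // The conversion goes through the blanket impl's forwarding `into`,
    // which the added `#[inline]` hint encourages LLVM to inline.
    let f: Fahrenheit = IntoLike::into(Celsius(100.0));
    assert_eq!(f.0, 212.0);
}
```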

Optimize incremental_verify_ich #109371 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.9% | [-1.6%, -0.4%] | 78 |
| Improvements ✅ (secondary) | -1.1% | [-2.2%, -0.5%] | 39 |
| All ❌✅ (primary) | -0.9% | [-1.6%, -0.4%] | 78 |

Rollup of 7 pull requests #109581 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.7% | [-0.7%, -0.7%] | 2 |
| Improvements ✅ (secondary) | -0.6% | [-0.7%, -0.6%] | 5 |
| All ❌✅ (primary) | -0.7% | [-0.7%, -0.7%] | 2 |

Mixed

Only clear written-to locals in ConstProp #109087 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.5% | [0.5%, 0.5%] | 3 |
| Improvements ✅ (primary) | -2.7% | [-6.1%, -0.8%] | 27 |
| Improvements ✅ (secondary) | -15.8% | [-63.2%, -0.2%] | 17 |
| All ❌✅ (primary) | -2.7% | [-6.1%, -0.8%] | 27 |
  • Given the overall positive impact of this PR and the complex relationship it has with some other PRs, I think it's safe to say the perf results are fine here.

Rollup of 11 pull requests #109496 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.5% | [0.3%, 0.8%] | 4 |
| Improvements ✅ (primary) | -0.7% | [-0.8%, -0.7%] | 4 |
| Improvements ✅ (secondary) | -0.6% | [-0.7%, -0.4%] | 4 |
| All ❌✅ (primary) | -0.7% | [-0.8%, -0.7%] | 4 |
  • Unfortunately, most of the improvements here appear to be a correction back from earlier noise. The regressions are small enough, though, that I don't think investigation is worth it.

Rollup of 9 pull requests #109503 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.5% | [0.5%, 0.5%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -1.5% | [-1.7%, -1.4%] | 2 |
| Improvements ✅ (secondary) | -2.4% | [-5.3%, -0.3%] | 13 |
| All ❌✅ (primary) | -0.9% | [-1.7%, 0.5%] | 3 |
  • The one regression is outweighed by the many other improvements. Given that this is a rollup, which makes regressions harder to track down, I think it's safe to mark this as triaged.

Add CastKind::Transmute to MIR #108442 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.7% | [0.3%, 1.5%] | 4 |
| Improvements ✅ (primary) | -1.6% | [-1.6%, -1.6%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -1.6% | [-1.6%, -1.6%] | 1 |
  • The regressions are small enough that I don't think this is worth investigating.

rustdoc: Optimize impl sorting during rendering #109399 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.7% | [0.7%, 0.7%] | 1 |
| Improvements ✅ (primary) | -10.1% | [-10.1%, -10.1%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -10.1% | [-10.1%, -10.1%] | 1 |
  • The one regression is noise.

Implement Default for some alloc/core iterators #99929 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.7% | [0.7%, 0.7%] | 4 |
| Regressions ❌ (secondary) | 0.5% | [0.3%, 0.6%] | 9 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -0.3% | [-0.4%, -0.3%] | 6 |
| All ❌✅ (primary) | 0.7% | [0.7%, 0.7%] | 4 |
  • Given the nature of this PR (adding APIs) and the fact that the primary benchmark impacted is noisy, I think we can triage this.
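
For context, this PR only adds API surface (new `Default` impls that yield empty iterators), which is why a genuine compile-time impact would be surprising. As a hedged illustration of what such impls enable, the sketch below assumes `vec::IntoIter`'s `Default` impl is among those added; with it, an iterator-holding field can be reset or taken by value via `std::mem::take` without an `Option` wrapper.

```rust
use std::mem;

// A struct holding an iterator by value. With `Default` available for
// `vec::IntoIter`, the field can be swapped out for an empty iterator.
struct Pending {
    items: std::vec::IntoIter<String>,
}

fn main() {
    let mut pending = Pending {
        items: vec!["a".to_string(), "b".to_string()].into_iter(),
    };

    // `mem::take` requires `Default`: it leaves an empty iterator behind
    // and hands back the previous one by value.
    let drained: Vec<String> = mem::take(&mut pending.items).collect();
    assert_eq!(drained, ["a", "b"]);
    assert!(pending.items.next().is_none());
}
```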

Use SmallVec in bitsets #109458 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.8% | [0.7%, 1.0%] | 8 |
| Improvements ✅ (primary) | -0.8% | [-1.4%, -0.3%] | 15 |
| Improvements ✅ (secondary) | -0.7% | [-1.4%, -0.3%] | 15 |
| All ❌✅ (primary) | -0.8% | [-1.4%, -0.3%] | 15 |

Upgrade to LLVM 16, again #109474 (Comparison Link)

| (instructions:u) | mean | range | count |
|:---:|:---:|:---:|:---:|
| Regressions ❌ (primary) | 1.1% | [0.3%, 3.6%] | 64 |
| Regressions ❌ (secondary) | 0.9% | [0.2%, 2.5%] | 23 |
| Improvements ✅ (primary) | -1.0% | [-2.9%, -0.5%] | 49 |
| Improvements ✅ (secondary) | -1.1% | [-4.1%, -0.3%] | 75 |
| All ❌✅ (primary) | 0.2% | [-2.9%, 3.6%] | 113 |
  • Given that the perf results are somewhat even (though regressions do win out), I think we sort of have to take this as is. I don't imagine we would revert an LLVM upgrade unless the perf results were really bad.