2022-10-11 Triage Log

Overall, a fairly quiet week in which the changes to primary benchmarks ended up breaking exactly even. Secondary benchmarks saw improvements, but not in large enough numbers to be particularly noteworthy.

Triage done by @rylev. Revision range: 02cd79a..1e926f0

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.8%    [0.2%, 1.4%]     19
Regressions ❌ (secondary)   1.0%    [0.3%, 1.8%]     9
Improvements ✅ (primary)    -0.6%   [-1.8%, -0.3%]   29
Improvements ✅ (secondary)  -1.0%   [-6.4%, -0.2%]   39
All ❌✅ (primary)            -0.0%   [-1.8%, 1.4%]    48

3 Regressions, 1 Improvement, 6 Mixed; 4 of them in rollups. 41 artifact comparisons made in total.

Regressions

Reduce CString allocations in std as much as possible #93668 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     1.0%    [1.0%, 1.0%]     4
Regressions ❌ (secondary)   0.2%    [0.2%, 0.2%]     2
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -       -                0
All ❌✅ (primary)            1.0%    [1.0%, 1.0%]     4
  • The hello-world opt benchmarks are dominated by link time.
  • It makes sense that a change to an FFI type like CString could have an impact on these.
  • I don't think there's a need to do anything about it, though.

Rollup of 6 pull requests #102867 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.3%    [0.3%, 0.4%]     6
Regressions ❌ (secondary)   1.4%    [1.2%, 1.6%]     6
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -0.2%   [-0.2%, -0.2%]   1
All ❌✅ (primary)            0.3%    [0.3%, 0.4%]     6
  • The impacted benchmarks are more sensitive to changes to the trait system, so it looks like it might be #102845.
  • Kicked off a perf run to investigate.

tools/remote-test-{server,client}: Use /data/local/tmp on Android #102755 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.7%    [0.6%, 0.9%]     6
Regressions ❌ (secondary)   -       -                0
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -       -                0
All ❌✅ (primary)            0.7%    [0.6%, 0.9%]     6
  • Looks like the diesel benchmark has become noisier lately; you can see that in this graph.

Improvements

Rewrite representability #100720 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     -       -                0
Regressions ❌ (secondary)   -       -                0
Improvements ✅ (primary)    -0.4%   [-0.8%, -0.2%]   38
Improvements ✅ (secondary)  -0.9%   [-3.3%, -0.2%]   21
All ❌✅ (primary)            -0.4%   [-0.8%, -0.2%]   38

Mixed

Remove TypeckResults from InferCtxt #101632 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.7%    [0.5%, 1.2%]     13
Regressions ❌ (secondary)   4.3%    [3.2%, 5.7%]     6
Improvements ✅ (primary)    -0.3%   [-0.6%, -0.2%]   19
Improvements ✅ (secondary)  -0.6%   [-1.6%, -0.2%]   52
All ❌✅ (primary)            0.1%    [-0.6%, 1.2%]    32
  • Looks like specialization_graph::Children::insert is getting called much more often.
  • Perhaps some strategic placement of inline could help.

Rollup of 6 pull requests #102787 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.3%    [0.2%, 0.3%]     2
Regressions ❌ (secondary)   1.4%    [1.1%, 1.9%]     6
Improvements ✅ (primary)    -0.8%   [-1.0%, -0.4%]   8
Improvements ✅ (secondary)  -2.5%   [-3.7%, -0.3%]   7
All ❌✅ (primary)            -0.6%   [-1.0%, 0.3%]    10
  • Most of the regressions are in secondary benchmarks, so I don't think it's worth investigating what caused this.

std: use futex in Once #99505 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%    [0.5%, 0.5%]     1
Regressions ❌ (secondary)   1.7%    [1.0%, 3.3%]     7
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -0.3%   [-0.5%, -0.2%]   9
All ❌✅ (primary)            0.5%    [0.5%, 0.5%]     1
  • The regression results are small and neutral enough that we don't need to investigate.

Rollup of 8 pull requests #102809 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.3%    [0.2%, 0.4%]     13
Regressions ❌ (secondary)   0.4%    [0.3%, 0.6%]     3
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -1.1%   [-1.1%, -1.1%]   1
All ❌✅ (primary)            0.3%    [0.2%, 0.4%]     13

Rollup of 6 pull requests #102875 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.2%    [0.2%, 0.2%]     2
Regressions ❌ (secondary)   -       -                0
Improvements ✅ (primary)    -       -                0
Improvements ✅ (secondary)  -5.0%   [-6.6%, -1.8%]   5
All ❌✅ (primary)            0.2%    [0.2%, 0.2%]     2
  • This is neutral enough that I don't believe it warrants investigation.

slice: #[inline] a couple iterator methods. #96711 (Comparison Link)

(instructions:u)            mean    range            count
Regressions ❌ (primary)     -       -                0
Regressions ❌ (secondary)   0.5%    [0.3%, 0.7%]     4
Improvements ✅ (primary)    -0.8%   [-1.5%, -0.5%]   8
Improvements ✅ (secondary)  -1.4%   [-1.8%, -1.2%]   6
All ❌✅ (primary)            -0.8%   [-1.5%, -0.5%]   8
  • From the reviewer: “Perf results are more positive than negative, I think that's all that matters for this kind of change. The regressions are minor ones in secondary benchmarks”