
Conversation

maggie-lou (Collaborator) commented Oct 30, 2025

                                         │ base_10_30.txt │            new_10_30.txt            │
                                         │     sec/op     │   sec/op     vs base                │
_PausePerformance/memory_size_500_mb-16       3.805 ±  2%   3.306 ± 32%        ~ (p=0.089 n=10)
_PausePerformance/memory_size_1000_mb-16      8.931 ± 15%   6.319 ± 17%  -29.25% (p=0.000 n=10)
_PausePerformance/memory_size_2000_mb-16      13.98 ±  7%   13.93 ±  1%        ~ (p=0.853 n=10)
_PausePerformance/memory_size_4000_mb-16      23.38 ±  2%   23.74 ±  5%        ~ (p=0.247 n=10)
_PausePerformance/memory_size_8000_mb-16      22.99 ±  3%   42.59 ±  1%  +85.29% (p=0.000 n=10)
geomean                                       12.06         12.41         +2.89%
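
For reference, this is benchstat output comparing two Go benchmark result files. The names in the table match a parameterized benchmark along these lines (a minimal sketch; `pauseAndSnapshot` is a hypothetical stand-in for the code under test, not the PR's actual benchmark body):

```go
package vm_test

import (
	"fmt"
	"testing"
)

// pauseAndSnapshot is a hypothetical stand-in for the operation being
// measured: pausing a VM with the given memory size and exporting its
// memory snapshot.
func pauseAndSnapshot(memoryMB int) error {
	// ... drive the VMM here ...
	return nil
}

// Benchmark_PausePerformance yields the sub-benchmark names shown above,
// e.g. _PausePerformance/memory_size_500_mb-16 (benchstat strips the
// "Benchmark" prefix; -16 is GOMAXPROCS).
func Benchmark_PausePerformance(b *testing.B) {
	for _, memMB := range []int{500, 1000, 2000, 4000, 8000} {
		b.Run(fmt.Sprintf("memory_size_%d_mb", memMB), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				if err := pauseAndSnapshot(memMB); err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}
```

The ± and p-value columns come from running each side with `-count=10` (hence `n=10`) and comparing the two result files with `benchstat base_10_30.txt new_10_30.txt`.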

maggie-lou (Author) commented Oct 31, 2025

There's a huge jump in run time for memory_size_8000_mb-16. I ran it again and saw exactly the same thing, so there must be a threshold past which exporting a very large full memory snapshot degrades sharply. Just creating a full snapshot file on an 8GB machine (Firecracker code) consistently takes 35s. Creating a diff snapshot for a machine of the same size takes ~10s, and merging takes 2-3s.
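
For anyone trying to reproduce those one-off timings, here's a minimal sketch against Firecracker's documented snapshot API (the socket and output paths are assumptions; the VM must already be running, and diff snapshots require it to have been booted with `track_dirty_pages` enabled):

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"time"
)

// firecrackerClient talks to Firecracker's API server over its unix socket.
func firecrackerClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

func send(client *http.Client, method, path, body string) error {
	req, err := http.NewRequest(method, "http://localhost"+path, bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("%s %s: %s", method, path, resp.Status)
	}
	return nil
}

func main() {
	client := firecrackerClient("/tmp/firecracker.sock") // assumed socket path

	// The VM must be paused before either snapshot type can be taken.
	if err := send(client, http.MethodPatch, "/vm", `{"state": "Paused"}`); err != nil {
		log.Fatal(err)
	}

	// Full snapshot: writes all guest memory to mem_file_path.
	start := time.Now()
	if err := send(client, http.MethodPut, "/snapshot/create",
		`{"snapshot_type": "Full", "snapshot_path": "/tmp/vm.snap", "mem_file_path": "/tmp/vm.mem"}`); err != nil {
		log.Fatal(err)
	}
	fmt.Println("full snapshot took", time.Since(start))

	// Diff snapshot: writes only pages dirtied since the last snapshot.
	start = time.Now()
	if err := send(client, http.MethodPut, "/snapshot/create",
		`{"snapshot_type": "Diff", "snapshot_path": "/tmp/vm_diff.snap", "mem_file_path": "/tmp/vm_diff.mem"}`); err != nil {
		log.Fatal(err)
	}
	fmt.Println("diff snapshot took", time.Since(start))
}
```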

vanja-p (Contributor) commented Nov 3, 2025

Looking at the baseline performance, it's interesting that increasing memory from 1GB to 2GB and from 2GB to 4GB nearly doubles the snapshot time, but increasing from 4GB to 8GB actually runs faster. Maybe something is up with the benchmark setup?
