
Adjust staticcheck CI settings to lower memory use #181

Merged
1 commit merged on May 5, 2022

Conversation

@bstoll (Collaborator) commented May 5, 2022

GitHub runners have 7 GB of RAM allocated to them. staticcheck analysis sometimes exceeds this limit and crashes. This is mostly due to the large amount of generated code in Ondatra/ygot that this repo uses heavily.

staticcheck runs on my dev machine report the following stats:

/usr/bin/time -v staticcheck ./...
...
Elapsed (wall clock) time (h:mm:ss or m:ss): 10:07.36
Maximum resident set size (kbytes): 5965512
...

By adjusting the GOGC setting, we can lower the memory usage at the cost of some additional CPU time:

GOGC=30 /usr/bin/time -v staticcheck ./...
...
Elapsed (wall clock) time (h:mm:ss or m:ss): 11:14.21
Maximum resident set size (kbytes): 4088248
...

We can also get a big improvement by caching ~/.cache/staticcheck between CI runs. An example of a cached run:

GOGC=30 /usr/bin/time -v staticcheck ./...
...
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.77
Maximum resident set size (kbytes): 73468
...
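In CI, the two changes could look something like the following GitHub Actions fragment. This is only a sketch: the job name, setup steps, versions, and cache key are assumptions for illustration, not taken from this repo's actual workflow file.

```yaml
# Hypothetical workflow fragment; adapt the step names, versions,
# and cache key to the repository's real workflow.
jobs:
  staticcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
      - name: Cache staticcheck analysis
        uses: actions/cache@v3
        with:
          path: ~/.cache/staticcheck
          key: staticcheck-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
      - name: Run staticcheck
        env:
          GOGC: "30"  # lower the GC target to stay under the runner's 7 GB RAM
        run: staticcheck ./...
```

With the cache warm, subsequent runs skip re-analysis of unchanged packages, which is where the second measurement above (a few seconds instead of ~11 minutes) comes from.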

@coveralls

Pull Request Test Coverage Report for Build 2277398881

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 64.254%

Totals (Coverage Status)
  • Change from base Build 2277136135: 0.0%
  • Covered Lines: 586
  • Relevant Lines: 912

💛 - Coveralls

@dominikh commented May 6, 2022

I'm currently looking into some of that poor performance, which is indeed caused largely by ygot.

The first change to Staticcheck, which should land soonish, seems promising, at least on the speed front:

before: 861.65s user 9.52s system 266% cpu 5:26.48 total
after:  193.85s user 8.77s system 852% cpu 23.765 total

@liulk (Contributor) commented May 6, 2022

Oh wow, looks like the new change both reduced total CPU usage and increased parallelism. Very cool!
