
OpenAPI/Swagger parsing is (?) performance bottleneck #3670

Closed
justinsb opened this issue Mar 3, 2021 · 10 comments · Fixed by #4568
Assignees
Labels
  • area/kyaml (issues for kyaml)
  • area/openapi (issues related to OpenAPI in kyaml)
  • kind/regression (categorizes issue or PR as related to a regression from a prior release)
  • lifecycle/frozen (indicates that an issue or PR should not be auto-closed due to staleness)

Comments

justinsb (Contributor) commented Mar 3, 2021

I was wondering why running kustomize (& kpt) is not effectively instant, and I tracked down a large performance bottleneck: initializing the OpenAPI data, i.e. parsing the swagger JSON.

I believe this is the most significant constant overhead in kustomize; obviously, on very large kustomizations the data manipulation will eventually dominate.

I sent #3669, which adds a benchmark to quantify the impact. Based on that, JSON-parsing the swagger itself takes about 900ms, while un-gzipping it from the data embedded in the binary takes only about 20ms. These numbers roughly tally with the overhead I see in the real world.
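The actual benchmark lives in #3669; as a minimal stand-in, the sketch below (synthetic JSON, hypothetical function names) separates the same two steps and times each, which is enough to see that the JSON parse, not the gunzip, is where the time goes:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
	"time"
)

// timeGunzipAndParse gzips rawJSON in memory (standing in for the gzipped
// swagger embedded in the binary), then times the two startup steps
// separately: un-gzipping the blob and json.Unmarshal-ing the result.
func timeGunzipAndParse(rawJSON []byte) (gunzip, parse time.Duration, err error) {
	var compressed bytes.Buffer
	zw := gzip.NewWriter(&compressed)
	if _, err = zw.Write(rawJSON); err != nil {
		return
	}
	if err = zw.Close(); err != nil {
		return
	}

	// Step 1: un-gzip (~20ms for the real swagger).
	start := time.Now()
	zr, zerr := gzip.NewReader(&compressed)
	if zerr != nil {
		err = zerr
		return
	}
	unzipped, rerr := io.ReadAll(zr)
	if rerr != nil {
		err = rerr
		return
	}
	gunzip = time.Since(start)

	// Step 2: JSON parse (~900ms for the real swagger) - the bottleneck.
	start = time.Now()
	var doc map[string]interface{}
	err = json.Unmarshal(unzipped, &doc)
	parse = time.Since(start)
	return
}

func main() {
	g, p, err := timeGunzipAndParse([]byte(`{"definitions":{"io.k8s.api.apps.v1.Deployment":{"type":"object"}}}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("gunzip=%v parse=%v\n", g, p)
}
```

On a toy input both steps are near-instant; the 20ms/900ms split only shows up at the size of the real swagger.json.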

This particularly matters for e.g. kpt, where a kpt setter that uses kyaml also takes at least a second to run.

I think it would be interesting to cache the deserialized form.
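Within a single process, the shape of that cache could be as simple as the sketch below (names are hypothetical, not kyaml's actual API): parse once, hand out the cached result to every caller.

```go
package main

import (
	"encoding/json"
	"sync"
)

// Hypothetical sketch of caching the deserialized form in-process:
// the swagger JSON is parsed at most once, however many callers ask
// for it. (kyaml's real API differs; this only shows the shape.)
var (
	swaggerOnce sync.Once
	swaggerDoc  map[string]interface{}
	swaggerErr  error
	parseCount  int // instrumentation for this example only
)

func cachedSwagger(raw []byte) (map[string]interface{}, error) {
	swaggerOnce.Do(func() {
		parseCount++
		swaggerErr = json.Unmarshal(raw, &swaggerDoc)
	})
	return swaggerDoc, swaggerErr
}

func main() {
	raw := []byte(`{"swagger":"2.0"}`)
	cachedSwagger(raw) // parses
	cachedSwagger(raw) // cache hit; no second parse
}
```

Note this only removes repeat parses within one long-lived process; for short-lived CLI runs like kustomize or a kpt setter, the cost is the first parse, so the real win needs a pre-parsed on-disk or compiled-in form.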

But there's also some low-hanging fruit: for example, the swagger UnmarshalJSON code calls json.Unmarshal twice over the same data (once for SwaggerProps, once for VendorExtensible). Fixing that only gets us about 2x, though, whereas caching could (at a rough guess) yield 100-1000x.

@Shell32-Natsu added the area/kyaml, area/openapi, and kind/regression labels Mar 4, 2021
Shell32-Natsu (Contributor) commented:

@natasha41575

natasha41575 (Contributor) commented Mar 4, 2021

Thank you for filing the issue; I will look into it.

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 2, 2021
natasha41575 (Contributor) commented:

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jun 2, 2021
@KnVerey KnVerey added this to To do in Kustomize performance Jun 9, 2021
k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 31, 2021
k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 30, 2021
natasha41575 (Contributor) commented:

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Sep 30, 2021
natasha41575 (Contributor) commented:

Discussed offline: one option would be to try switching to kube-openapi, which stores the openapi data in proto form. This could be worth looking into.
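The appeal of a proto encoding is that startup no longer re-tokenizes a large JSON text. As a rough stdlib-only illustration of the same idea (gob here is only a stand-in for proto, not what kube-openapi actually uses, and the types are invented):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"encoding/json"
)

// miniSchema stands in for a pre-parsed OpenAPI document.
type miniSchema struct {
	Definitions map[string]string
}

// encodeBoth serializes the same document as gob (binary,
// pre-structured, standing in for proto) and as JSON (text that must
// be fully re-tokenized on every load).
func encodeBoth(s miniSchema) (gobBytes, jsonBytes []byte, err error) {
	var buf bytes.Buffer
	if err = gob.NewEncoder(&buf).Encode(s); err != nil {
		return
	}
	gobBytes = buf.Bytes()
	jsonBytes, err = json.Marshal(s)
	return
}

// decodeGob restores the document with no JSON text parsing at all.
func decodeGob(b []byte) (miniSchema, error) {
	var s miniSchema
	err := gob.NewDecoder(bytes.NewReader(b)).Decode(&s)
	return s, err
}

func main() {
	in := miniSchema{Definitions: map[string]string{"Deployment": "object"}}
	gb, _, err := encodeBoth(in)
	if err != nil {
		panic(err)
	}
	if _, err := decodeGob(gb); err != nil {
		panic(err)
	}
}
```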

@natasha41575 added the lifecycle/frozen label Nov 16, 2021
mengqiy (Member) commented Dec 6, 2021

/assign

natasha41575 (Contributor) commented Apr 19, 2022

Only took a year, but I believe this is fixed now by #4568 :)

7 participants