Slow performance with large schemas #2105
Comments
@mxstbr I think the issue here is related to
You are right! Same configuration except for removing that, and 11s is still too slow though: if I replace the large schema with a much smaller schema it only takes 1s, so there's still a lot of seemingly unnecessary overhead there?
I think it's related to the filtering we are doing before executing the plugin (per file). I'll check :)
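To illustrate why per-file filtering can dominate with a huge schema (this is a sketch of the suspected cost pattern, not the actual codegen source; all names here are hypothetical): if the whole schema is re-scanned for every document file, total work grows with files × schema size, whereas indexing the schema once makes the per-file cost proportional only to the types each file actually uses.

```typescript
// Hypothetical illustration of the per-file filtering cost (not real codegen code).
type SchemaNode = { name: string };

function filterSchema(schema: SchemaNode[], used: Set<string>): SchemaNode[] {
  // O(schema size) scan for every call
  return schema.filter((node) => used.has(node.name));
}

function slowCodegen(schema: SchemaNode[], files: Set<string>[]): number {
  // Re-filters the whole schema for every file: O(files × schema size)
  let generated = 0;
  for (const used of files) {
    generated += filterSchema(schema, used).length;
  }
  return generated;
}

function fastCodegen(schema: SchemaNode[], files: Set<string>[]): number {
  // Index the schema once, then do O(types used) lookups per file
  const byName = new Map(schema.map((n): [string, SchemaNode] => [n.name, n]));
  let generated = 0;
  for (const used of files) {
    for (const name of used) {
      if (byName.has(name)) generated++;
    }
  }
  return generated;
}
```

Both functions produce the same result; only the scaling with schema size differs, which matches the symptom that a smaller schema drops the run from 11s to 1s.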
btw @mxstbr, this PR by @mvestergaard is merged: #2073 (and available as
For reference, the schema I am working with is almost 100k LoC and the file is 1.7MB in size.
That's a little odd. For me there was a big improvement: With 1.3.1:
With
You upgraded all packages related to graphql-codegen to the alpha, I assume? Maybe you can try running a profiler to see where the slowness comes from.
It'll output a flamegraph HTML file to a folder in your project.
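The exact profiler was elided from the comment above; 0x is one tool matching the description (it writes a flamegraph HTML file into a folder in your project), so the suggestion may have looked something like this sketch (the tool choice and CLI name are assumptions):

```shell
# Hypothetical profiling sketch; 0x is an assumption, the actual tool was
# elided from the comment above. 0x writes flamegraph.html into a
# <pid>.0x/ directory in the current project.
npm install --save-dev 0x
# Run codegen under the profiler. The CLI binary name depends on your
# version of graphql-code-generator (gql-gen in older releases).
npx 0x -- node ./node_modules/.bin/gql-gen --config codegen.yml
```

Opening the generated flamegraph in a browser shows which functions dominate the run, which is enough to tell schema-processing time apart from per-document plugin time.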
Oh and with
You are right, I must have done something wrong with my last upgrades, using the latest. Still not super fast, but acceptable.
Closing this for now, since with @mvestergaard's changes it works much better :)
Describe the bug
When generating code from large GraphQL schemas, running graphql-codegen takes >25s, which is way too slow.
To Reproduce
Generate schema types and some queries from the official GitHub GraphQL API, which is very large:
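The reproduction config itself was elided from the report; a sketch of what it may have looked like follows (the file name, output path, and plugin choices are assumptions):

```yaml
# codegen.yml (hypothetical reproduction config; plugins and paths are assumptions)
schema:
  - https://api.github.com/graphql:
      headers:
        Authorization: "Bearer ${GITHUB_PERSONAL_ACCESS_TOKEN}"
documents: ./src/**/*.graphql
generates:
  ./src/generated/graphql.ts:
    plugins:
      - typescript
      - typescript-operations
```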
You are going to need to get a GitHub personal access token from your settings and insert it where it says ${GITHUB_PERSONAL_ACCESS_TOKEN}. A potential example query could be something like:

Expected behavior
graphql-codegen's performance should scale with how many queries/mutations/subscriptions it needs to generate code for, not with how large the input schema is.