Refactor models to improve performance #1266
Conversation
Force-pushed 9416fc4 to 8ef5f92
- Avoid many linear searches
- Cache version construction
Force-pushed 8ef5f92 to 7894430
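The "cache version construction" item above can be sketched as a memoized parse step. This is a hedged illustration, not the PR's actual code: the function name `parse_version` and the tuple representation are assumptions, but the technique (cache the parsed result per unique version string so repeated device updates skip re-parsing) matches the commit message.

```python
from functools import lru_cache

# Hypothetical sketch: constructing a version object from a string on
# every device update is wasteful, since the string rarely changes.
# Caching by the raw string makes repeated construction a dict lookup.
@lru_cache(maxsize=None)
def parse_version(raw: str) -> tuple[int, ...]:
    """Parse a version string like '0.14.0' into a comparable tuple."""
    return tuple(int(part) for part in raw.split("."))
```

With this in place, a device reporting the same firmware version on every poll hits the cache instead of re-parsing.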
Tested with single- and multi-segment setups. I only have 2 WLED devices to test with, though; this PR was the result of reviewing user performance data in beta.
There hasn't been any activity on this pull request recently. This pull request has been automatically marked as stale because of that and will be closed if no further activity occurs within 7 days. Thank you for your contributions.
Not stale
keep-alive
Thanks, @bdraco 👍
../Frenck
Thanks
Proposed Changes
The linear searches produce hundreds of thousands of dict lookups per minute with ~25 devices.
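The fix for that can be sketched as replacing a per-lookup list scan with a dict index keyed by segment id. This is a minimal illustration under assumed names (`Segment`, `segment_id`, the helper functions are all hypothetical, not the PR's actual model classes); the point is the complexity change from O(n) per lookup to O(1).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Segment:
    """Hypothetical stand-in for a device sub-model (e.g. a light segment)."""
    segment_id: int
    brightness: int


# Before: every state update scans the whole list, so with ~25 devices
# polling frequently, lookups multiply into huge numbers of dict gets.
def find_segment_linear(segments: list[Segment], segment_id: int) -> Optional[Segment]:
    for segment in segments:
        if segment.segment_id == segment_id:
            return segment
    return None


# After: build the index once per update; each lookup is a single
# O(1) hash-table access instead of an O(n) scan.
def build_segment_index(segments: list[Segment]) -> dict[int, Segment]:
    return {segment.segment_id: segment for segment in segments}
```

Usage: build the index once when new state arrives, then resolve segments with `index.get(segment_id)` everywhere a scan was used before.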