How to debug catastrophic crash #34
Thank you very much for opening this issue and posting your logs. I notice you're running version 0.4.14. Can you see if the issue occurs on iodine 0.4.19?

1. Debugging and fixing: Do you have any way to replicate the issue? If I can replicate it on my system I can usually debug it. I use both … Another approach I use is re-writing the app in C using the facil.io framework, to test whether the issue is iodine related or facil.io related.

2. Possible workarounds: One possible workaround is to try iodine 0.5.1. The 0.4.x versions are based on a totally different design. The design was changed for a number of reasons that would show up on high-stress machines (such as limitations on the use of pipes in the pub/sub system). A second possible solution is to allow iodine to crash in cluster mode. When a worker crashes in cluster mode, iodine will re-spawn another worker to keep the application running. I think this was true for the 0.4.x versions as well, but I might be mixing things up (it might be a 0.5.x feature).
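The cluster-mode recovery described above can be sketched in plain Ruby. This is only an illustration of the master/worker re-spawn pattern (iodine implements it in C inside facil.io), not iodine's actual code:

```ruby
# Simplified sketch of cluster-mode crash recovery: a master process
# forks workers and re-spawns any worker that exits abnormally, so the
# application as a whole keeps running. Illustrative only.
respawns = 0

2.times do |i|
  pid = fork do
    # Simulate the first worker crashing (non-zero exit status).
    exit!(i.zero? ? 1 : 0)
  end
  _, status = Process.wait2(pid)
  next if status.success?

  # The master notices the crash and spawns a replacement worker.
  respawns += 1
  replacement = fork { exit!(0) }
  Process.wait(replacement)
end

puts "re-spawned #{respawns} worker(s)"
```

A real server loops on `Process.wait` continuously instead of a fixed count, but the recovery logic is the same: a dead worker is replaced rather than taking the whole app down.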
Thanks for your super quick and detailed response 💪
I stress-test iodine before each release, but I use a trivial HTTP/WebSocket "broadcast" application that broadcasts everything to a single channel (the HTTP tests are a simple "hello"). I find it hard to believe that the stress alone would cause an issue.
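The HTTP side of such a stress setup can be as small as a Rack-style "hello" handler. The sketch below is an assumption of what such a test app looks like (the author's actual test code isn't shown in this thread); it uses only plain Ruby, so the Rack response triplet can be inspected without starting a server:

```ruby
# A minimal Rack-style "hello" handler of the kind described for the
# HTTP stress tests (a sketch, not the author's actual test app).
# Under iodine it would live in a config.ru and be run with e.g.:
#   bundle exec iodine -t 3 -w 1
hello_app = proc { |_env| [200, { "Content-Type" => "text/plain" }, ["hello"]] }

# The [status, headers, body] triplet can be called directly:
status, _headers, body = hello_app.call({})
puts "#{status} #{body.join}"
```

Because the handler is a plain `proc` following the Rack calling convention, the same object works under iodine, Puma, or any other Rack server.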
Yes, there was an issue in the C<=>Ruby bridge that I fixed in version 0.4.17. I think this observation might be related to that issue.

1. Any migration is needed on upgrading to 0.5.1? or simply a gem update thing

Hmm... I'd say yes, though it really depends how you use iodine. I would recommend upgrading to 0.4.19 first and seeing if that solves the issue. The 0.5.0 release changes the pub/sub API considerably and uses a pub/sub subscription object instead of an … I'm not sure if this affects your application, but it's part of the reason Plezi is still using the 0.4.19 release (though you can "hack" this by forcing Plezi to version 0.15.0). Also, some of the callbacks used for WebSockets were updated, replacing … Some of the HTTP security features were updated, so if you're using the API to change the server's default settings, that might need to be reviewed. I should point out that the 0.5.x versions were only partly successful and (sadly) more API changes are coming. Plezi itself hasn't migrated to 0.5.x yet and might skip the 0.5.x versions altogether. I'm already working on version 0.6.0 in accordance with some of the requested changes in the newly proposed Rack specification draft. Also, the …

2. I saw you mention cluster mode, how to config that? currently I'm using bundle exec iodine -v -t 3 -w 1

Any value of -w greater than 1 will run iodine in cluster mode, e.g.:

bundle exec iodine -v -t 3 -w 2
Thank you! I think I will try cluster mode before updating the version, because I mainly use Plezi to connect my IoT devices.
You're welcome, and good luck! Please re-open the issue (or start a new one) if you experience anything else. I'm happy to help. B.
I have encountered the following error:
The app is totally dead and needs to be restarted. Do you have an idea of how to better debug this kind of problem, to prevent it from happening?