Ability to control exception depth to avoid CPU spikes #1395
Comments
Wow, I actually ran into this issue after some crazy tracing… sequelize/sequelize#9652 (comment)
I faced this issue in one incident where I would have loved to be able to control the depth. I'm using
I think this might give a hint?
Our issue was also that axios dumps everything relating to the request and response. This turns out to be a massive number of fields. Without a proper limit, raven spins out of control, causing a server slowdown that made the original issue worse.
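To illustrate the failure mode described above: axios attaches the request and response objects to the error, and in Node those structures are large and can even contain reference cycles, so an unbounded recursive walk either visits an enormous number of nodes or never terminates. The sketch below uses a hypothetical, simplified error shape (not real axios internals) to show how a depth cap bounds the work:

```javascript
// Hypothetical illustration: a circular, axios-style error object.
// Without a depth cap, a naive recursive walk over `err` never terminates.
const err = { message: "Request failed with status code 500" };
err.request = { socket: {} };
err.request.socket.parser = { socket: err.request.socket }; // cycle

// Depth-capped walk: visits a bounded number of nodes, cycle or not.
function countNodes(value, maxDepth, depth = 0) {
  if (value === null || typeof value !== "object" || depth >= maxDepth) {
    return 1; // primitives, or anything past the cap, count as one node
  }
  let n = 1;
  for (const key of Object.keys(value)) {
    n += countNodes(value[key], maxDepth, depth + 1);
  }
  return n;
}

console.log(countNodes(err, 3)); // terminates despite the cycle
```

The same traversal without the `depth >= maxDepth` guard would recurse forever on the `socket ↔ parser` cycle, which is why a serialization depth limit matters here.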
Can confirm we're seeing the behaviour described by @scarlac on axios calls that fail with a 500 error. I think this can be improved by checking for a
sentry-javascript/packages/utils/src/object.ts Lines 206 to 210 in 3c10d9c
@adriaanmeuris It's coming :)
@kamilogorek any idea in which version this would become available?
Closing the issue, as it seems like the original issue has been resolved.
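For readers landing here later: the thread doesn't name the release, but current versions of the unified `@sentry/node` SDK (the successor to raven-node) expose a `normalizeDepth` init option that caps how deeply event payloads are walked — which addresses the request in this issue. A minimal config sketch (the DSN is a placeholder):

```javascript
// Hedged example, not part of this thread: recent @sentry/node versions
// accept a `normalizeDepth` option that caps how deeply event payloads
// (including attached exception data) are normalized before sending.
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  normalizeDepth: 3, // stop normalizing below three levels of nesting
});
```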
Do you want to request a feature or report a bug?
Feature.
What is the current behavior?
The Raven client for Node.js currently sets `MAX_SERIALIZE_EXCEPTION_DEPTH = 3`. For large exceptions from external APIs, the exception objects may contain a large amount of data and many nested keys, causing Sentry to spike the Node.js process and block all other work.
What is the expected behavior?
`captureException` should allow control over how deep `serializeException` goes by default. Raven already has a `kwargs` parameter through which this could reasonably be passed down to `serializeException`, whose 2nd parameter is the maximum depth.
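To make the request concrete, here is an illustrative sketch (not the actual Raven/Sentry implementation) of a recursive serializer whose second parameter caps recursion depth, matching the `serializeException(exception, maxDepth)` shape described above:

```javascript
// Illustrative sketch only -- not the real Raven/Sentry code.
// Walks an exception-like object and replaces anything nested deeper
// than `maxDepth` with a short placeholder string.
function serializeException(value, maxDepth = 3, depth = 0) {
  if (value === null || typeof value !== "object") {
    return value; // primitives pass through unchanged
  }
  if (depth >= maxDepth) {
    // Truncate instead of recursing further: this is what keeps the
    // CPU cost bounded for huge API error payloads.
    return Array.isArray(value) ? "[Array]" : "[Object]";
  }
  const out = Array.isArray(value) ? [] : {};
  for (const key of Object.keys(value)) {
    out[key] = serializeException(value[key], maxDepth, depth + 1);
  }
  return out;
}

// A nested error-like object, e.g. what an HTTP client might attach.
const err = {
  message: "boom",
  response: { config: { headers: { auth: "secret" } } },
};
console.log(JSON.stringify(serializeException(err, 2)));
```

Exposing `maxDepth` through `captureException`'s options, as the issue proposes, would let callers lower the cap per call when they know a payload is huge.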