
[object_store] Better Error Handling & Response Propagation #5607

Open
matthewfollegot opened this issue Apr 8, 2024 · 2 comments
Labels
question Further information is requested

Comments


matthewfollegot commented Apr 8, 2024

Is your feature request related to a problem or challenge? Please describe what you are trying to do.

First off, thank you for having a client that works seamlessly for the various cloud providers and a clean interface to go along with it! The problem I'm encountering is related to error messages returned by the client.

Currently, error responses are limited to a short summary rather than the full error response from the store (e.g. S3).
Here's an example error:

"Generic S3 error: Error after 0 retries in 30.16539508s, max_retries:10, retry_timeout:300s, source:error sending request for url (https://s3.ap-northeast-1.amazonaws.com/<bucket>/<key>): operation timed out"

In order to further debug these errors and do things such as opening a support ticket with S3, it would be extremely helpful to include the attributes listed in the S3 docs here, some examples being the error code, message, and especially the request ID.

Describe the solution you'd like

I'd love to see as much of the original error as possible propagated to the caller.
A solution could be to add the fields mentioned above to all relevant variants of `object_store::Error` (or just the `Generic` variant) and propagate them from the store to the caller.
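To make the request concrete, a hypothetical sketch of what such an error payload might carry (this is not the actual `object_store` API; the struct and field names are illustrative, mirroring the attributes S3 returns in its error responses):

```rust
use std::fmt;

// Hypothetical sketch: details that could accompany an S3 error,
// matching the attributes documented for S3 error responses.
#[derive(Debug)]
struct S3ErrorDetails {
    /// S3 error code from the response body, e.g. "SlowDown"
    code: Option<String>,
    /// Human-readable message from the response body
    message: Option<String>,
    /// x-amz-request-id header, needed when opening an AWS support ticket
    request_id: Option<String>,
}

impl fmt::Display for S3ErrorDetails {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "code={}, message={}, request_id={}",
            self.code.as_deref().unwrap_or("<none>"),
            self.message.as_deref().unwrap_or("<none>"),
            self.request_id.as_deref().unwrap_or("<none>"),
        )
    }
}
```

Surfacing these fields in the `Display` output (or as accessors) would let callers log the request ID alongside the existing summary.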

Describe alternatives you've considered

N/A

Additional context

I'm observing a large amount of errors on S3 PUT requests.

@matthewfollegot matthewfollegot added the enhancement Any new improvement worthy of an entry in the changelog label Apr 8, 2024
tustvold (Contributor) commented Apr 8, 2024

The provided error is a timeout, not an error returned by AWS. In cases where AWS returns an error, we print the response body. In this case there is no response body: the request took too long and hit a client-side timeout before anything was returned.

I would double-check that the ClientConfig is set appropriately for the size of the uploads you are performing. If you are uploading very large objects, consider using a multipart upload; it will be faster and more reliable.

@tustvold tustvold added question Further information is requested and removed enhancement Any new improvement worthy of a entry in the changelog labels Apr 8, 2024
matthewfollegot (Author)

> The provided error is a timeout, not an error returned by AWS. In cases where AWS returns an error, we print the response body. In this case there is no response body: the request took too long and hit a client-side timeout before anything was returned.
>
> I would double-check that the ClientConfig is set appropriately for the size of the uploads you are performing. If you are uploading very large objects, consider using a multipart upload; it will be faster and more reliable.

Thanks for the quick response! I have tens of deployments running with the same client config and a maximum object size that is consistent across all of them, yet only a single one of my many staging/sandbox/shadow deployments is hitting this issue consistently. So I'm not sure a multipart upload is strictly necessary, but I'll continue debugging and may try multipart uploads as well, just in case.
