Major issue in OkHttp 4.9.3: MultipartBody file upload to Amazon S3 triggering multiple times #6953

Closed
vlad-roid opened this issue Dec 9, 2021 · 14 comments
Labels: bug (Bug in existing code)

@vlad-roid

I don't have the time to investigate this and pin down where exactly the issue is happening, sorry, but I noticed that only when using 4.9.3, in my code for file uploads to Amazon S3 using MultipartBody and RequestBody, writeTo(sink: BufferedSink) is triggered 3 times in sequence instead of once. After the 3rd time the upload ultimately succeeds. For now I'm sticking with 4.9.2, where the issue does not occur.
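(For illustration only, a minimal sketch of the kind of setup described above, assuming a file-backed part inside a MultipartBody; the class name CountingFileBody and the println are hypothetical instrumentation added here, not the reporter's code. It simply makes repeated writeTo() calls visible.)

    import java.io.File;
    import java.io.IOException;
    import okhttp3.MediaType;
    import okhttp3.RequestBody;
    import okio.BufferedSink;
    import okio.Okio;
    import okio.Source;

    // Hypothetical instrumented request body: counts how often OkHttp invokes writeTo().
    final class CountingFileBody extends RequestBody {
      private final File file;
      private int writeCount = 0;

      CountingFileBody(File file) {
        this.file = file;
      }

      @Override public MediaType contentType() {
        return MediaType.get("application/octet-stream");
      }

      @Override public long contentLength() {
        return file.length();
      }

      @Override public void writeTo(BufferedSink sink) throws IOException {
        System.out.println("writeTo called " + (++writeCount) + " time(s)");
        try (Source source = Okio.source(file)) {
          sink.writeAll(source);
        }
      }
    }

The body would be attached in the usual way, e.g. new MultipartBody.Builder().setType(MultipartBody.FORM).addFormDataPart("file", file.getName(), new CountingFileBody(file)).build(); per the report, with 4.9.3 the counter reaches 3 before the upload finally succeeds.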

vlad-roid added the bug label on Dec 9, 2021
@yschimke
Collaborator

yschimke commented Dec 9, 2021

We can't investigate this as is. But if you can enable the LoggingEventListener it should show some additional details.

    OkHttpClient client = new OkHttpClient.Builder()
        .eventListenerFactory(new LoggingEventListener.Factory())
        .build();

Example output:

    INFO: [294 ms] requestHeadersEnd
    Dec. 09, 2021 7:13:47 AM okhttp3.internal.platform.Platform log
    INFO: [634 ms] responseHeadersStart
    Dec. 09, 2021 7:13:47 AM okhttp3.internal.platform.Platform log
    INFO: [635 ms] responseHeadersEnd: Response{protocol=h2, code=301, message=, url=https://raw.github.com/square/okhttp/master/README.md}
    Dec. 09, 2021 7:13:47 AM okhttp3.internal.platform.Platform log
    INFO: [637 ms] responseBodyStart

https://square.github.io/okhttp/4.x/okhttp-logging-interceptor/okhttp3.logging/-logging-event-listener/
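
(Alternatively, a minimal custom EventListener could surface just the request-body events, which would show how many times the body is actually written on the wire; the listener below is an illustrative sketch, not part of the original suggestion.)

    import okhttp3.Call;
    import okhttp3.EventListener;

    // Hypothetical listener that logs only request-body events.
    final class BodyWriteLoggingListener extends EventListener {
      @Override public void requestBodyStart(Call call) {
        System.out.println("requestBodyStart: " + call.request().url());
      }

      @Override public void requestBodyEnd(Call call, long byteCount) {
        System.out.println("requestBodyEnd: " + byteCount + " bytes");
      }
    }

It would be installed the same way as above, via eventListenerFactory(call -> new BodyWriteLoggingListener()).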

Please reopen if you can help us reproduce.

yschimke closed this as completed on Dec 9, 2021
@yschimke
Collaborator

yschimke commented Dec 9, 2021

It's most likely #6293: the server rejects the request without executing it, and a follow-up request succeeds. Without further information, I'll leave this closed as not reproducible.

@vlad-roid
Author

Sure, will do next year :D or maybe someone else will get to it before then.

@vlad-roid
Author

This was somehow sorted out by 4.10.0; the issue is not happening there anymore.

@yschimke
Collaborator

yschimke commented Aug 8, 2022

Presumably #6293 then. Thanks for following up.

@workmanw

👋 We are having this issue and unfortunately it is not fixed in 4.10.0 for us. The issue is fixed in 5.0.0-alpha.1. Looking at the tag for 4.10.0-RC.1, it doesn't look like #6293 is included. Would it be at all possible to backport that in the next 4.x release? Or is there any kind of timeline for 5.x going GA?

Open to other ideas as well. We're just trying to figure out how best to work around this issue in our application.

@yschimke
Collaborator

It's in the 4.10 branch https://github.com/square/okhttp/commits/okhttp_4.10.x, about a page down.

So perhaps your issue is slightly different?

@workmanw

@yschimke Oh yeah. I was looking at the commit hash from master and didn't realize it had been cherry-picked. My mistake.

The issue we're having seems best described by the following statement from the 5.0.0-alpha.1 CHANGELOG entry:

Fix: Attempt to read the response body even if the server canceled the request. This will cause some calls to return nice error codes like HTTP/1.1 429 Too Many Requests instead of transport errors like SocketException: Connection reset and StreamResetException: stream was reset: CANCEL.

In our case, we get a validation error from the server within 10ms of starting a multipart form upload. This results in a stream reset exception while writing to the socket. We have tested with 4.10 and it is still broken, but the issue seems resolved with 5.0.0-alpha.1.

Any chance you might know which commit that Fix was referencing?

@yschimke
Collaborator

yschimke commented Sep 17, 2022

Possibly #6295?

@workmanw

@yschimke Thanks a lot for sharing that! I was holding out hope that I might be able to come up with a workaround for 4.x using a network interceptor, but I think that is a bit more involved.

Do you have any advice for how I might work around this issue in 4.10 while we wait on 5.x?
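
(For illustration while waiting on a backport, one possible application-level mitigation is sketched below: retry the call once when the write fails with a stream reset. The interceptor name and attempt limit are hypothetical, it assumes the multipart body is repeatable (file-backed rather than one-shot), and it references okhttp3.internal.http2.StreamResetException, which is an internal class rather than public API.)

    import java.io.IOException;
    import okhttp3.Interceptor;
    import okhttp3.Request;
    import okhttp3.Response;
    import okhttp3.internal.http2.StreamResetException;

    // Hypothetical application interceptor: retries once if the HTTP/2 stream is
    // reset while the request is being sent. Register it with addInterceptor();
    // application interceptors are allowed to call proceed() more than once.
    final class StreamResetRetryInterceptor implements Interceptor {
      private static final int MAX_ATTEMPTS = 2; // illustrative limit

      @Override public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        StreamResetException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
          try {
            return chain.proceed(request);
          } catch (StreamResetException e) {
            lastFailure = e; // server reset the stream; retry if attempts remain
          }
        }
        throw lastFailure;
      }
    }

It would be installed with new OkHttpClient.Builder().addInterceptor(new StreamResetRetryInterceptor()).build(). This only masks the symptom; the 5.x behavior quoted from the changelog above (reading the response even when the server cancels) is the real fix.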

@yschimke
Collaborator

There was some concern about the change; I'll take a look at backporting it and see if we think it's now safe.

@workmanw

@yschimke Wow, I would greatly appreciate that. Thanks a lot for the info. Have a great night!

@yschimke
Collaborator

No promises; we may decide to defer. See #7453.

Also, 5.x is production quality; it just has some aspects that are not API stable.

@workmanw

@yschimke 👍. I appreciate the effort either way.
