Streaming of S3 large files using S3AsyncClient and AsyncResponseTransformer.toPublisher(), CRT vs netty backpressure #5158
Comments
I have found something interesting there.

```java
@GetMapping(path = "/{filekey}")
public Mono<ResponseEntity<Flux<ByteBuffer>>> downloadFile(@PathVariable("filekey") String filekey) {
    GetObjectRequest request = GetObjectRequest.builder()
            .bucket(s3config.getBucket())
            .key(filekey)
            .build();
    return Mono.fromFuture(s3client.getObject(request, AsyncResponseTransformer.toPublisher()))
            .map(response -> {
                checkResult(response.response());
                String filename = getMetadataItem(response.response(), "filename", filekey);
                return ResponseEntity.ok()
                        .header(HttpHeaders.CONTENT_TYPE, response.response().contentType())
                        .header(HttpHeaders.CONTENT_LENGTH, Long.toString(response.response().contentLength()))
                        .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + filename + "\"")
                        .body(Flux.from(response));
            });
}
```

I tried to add […] As I said before, I also tried […] Then I tried to wrap the Flux into a MyFlux, which wraps the Subscriber into a MySubscriber, which wraps the Subscription into a MySubscription, in order to override the requested number:

```java
public class MySubscription implements Subscription {

    private final Subscription original; // + constructor

    @Override
    public void request(long n) {
        original.request(2); // request() returns void; always forward a fixed demand of 2
    }

    @Override
    public void cancel() {
        original.cancel(); // Subscription also requires cancel(); just delegate
    }
}
```

And that does override the requested 127 down to 2; however, it seems to completely break the MonoSendMany logic, and I think it stops publishing anything to the channel/consumer. The consumer blocks (remember, I am using blocking IO for the consumer, so it's completely stuck). Let me check what the s3-netty chunk sizes are and how it handles a request for 127.
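To make the stall mechanism above concrete, here is a self-contained sketch using the JDK Flow API (the SDK publishes Reactive Streams types, but the interfaces have an identical shape; all names and numbers here are illustrative, not SDK code). The downstream subscriber believes its full request(127) is outstanding and only re-requests per 127-item window, while the upstream publisher only ever sees a demand of 2, so after two items nobody asks for more:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

class CapStallDemo {

    /** Synchronous publisher that emits at most `total` integers, strictly on demand. */
    static class OnDemandPublisher implements Flow.Publisher<Integer> {
        final int total;
        OnDemandPublisher(int total) { this.total = total; }

        @Override public void subscribe(Flow.Subscriber<? super Integer> sub) {
            sub.onSubscribe(new Flow.Subscription() {
                int next = 0; long demand = 0; boolean emitting = false;
                @Override public void request(long n) {
                    demand += n;
                    if (emitting) return; // re-entrant request: the outer loop drains it
                    emitting = true;
                    while (demand > 0 && next < total) { demand--; sub.onNext(next++); }
                    emitting = false;
                    if (next == total) sub.onComplete();
                }
                @Override public void cancel() { demand = 0; }
            });
        }
    }

    /** Wrapper that caps every downstream request at a fixed limit (like MySubscription). */
    static class CappedSubscription implements Flow.Subscription {
        final Flow.Subscription original; final long cap;
        CappedSubscription(Flow.Subscription original, long cap) {
            this.original = original; this.cap = cap;
        }
        @Override public void request(long n) { original.request(Math.min(n, cap)); }
        @Override public void cancel() { original.cancel(); }
    }

    /** Subscriber that prefetches 127 up front and refills only per 127-item window. */
    static List<Integer> run(long cap) {
        List<Integer> received = new ArrayList<>();
        new OnDemandPublisher(300).subscribe(new Flow.Subscriber<Integer>() {
            Flow.Subscription s;
            @Override public void onSubscribe(Flow.Subscription sub) {
                s = new CappedSubscription(sub, cap);
                s.request(127); // reactor-netty-style prefetch
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                if (received.size() % 127 == 0) s.request(127); // refill per window only
            }
            @Override public void onError(Throwable t) {}
            @Override public void onComplete() {}
        });
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run(Long.MAX_VALUE).size()); // uncapped: all 300 items flow
        System.out.println(run(2).size());              // capped at 2: stalls after 2 items
    }
}
```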
Ok, s3-netty uses 8KB chunks instead of 8MB chunks, which really helps here. So, here is what I think could be a solution: […]

Please advise.
Alright, I just tried

```java
S3AsyncClient.crtBuilder()
        .minimumPartSizeInBytes(1L * 1024 * 1024) // 1 MB
```

And even though the parameter name is "uploadPartSize" and the javadoc refers to the Amazon Multipart Upload Limits, it does control both download and upload chunk sizes. So the javadoc is fine, except for the parameter name. From here, I will let you review this issue to check whether you want to update the javadoc/param names a little bit, or even split them into download and upload chunk sizes. Also, I will wait for an answer on […]
Hi @VadimKirilchuk, the AWS CRT-based S3 client utilizes S3 multipart upload and ranged GET (a GetObject request with a Range header) to achieve parallel transfers. The reason why the default […]

When you invoke GetObject, under the hood the AWS CRT S3 client initially downloads a few parts and buffers them in memory (the buffer size can be configured via https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3CrtAsyncClientBuilder.html#initialReadBufferSizeInBytes(java.lang.Long)), and after that the CRT will send more data only after the existing parts have been consumed. Maybe you can try lowering […]
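Combining the two knobs mentioned in this thread might look like the following sketch (a configuration fragment only; both values are purely illustrative, not recommendations):

```java
// Illustrative tuning sketch: smaller parts mean smaller published chunks,
// and a smaller initial read buffer limits the CRT's internal read-ahead.
S3AsyncClient s3 = S3AsyncClient.crtBuilder()
        .minimumPartSizeInBytes(1L * 1024 * 1024)        // 1 MB parts
        .initialReadBufferSizeInBytes(4L * 1024 * 1024)  // 4 MB read-ahead buffer
        .build();
```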
Hi @zoewangg, we can definitely play with that setting, and it may reduce the CRT read buffer, but the problem between reactor-netty and the S3 client is that reactor-netty requests 128 parts to be pushed from the S3 client to reactor-netty. Since the part size is 8MB by default, that is equal to 1024MB per single GetObject request. We reduced the part size to 1MB, and the overall behavior is more stable now, as it will push at most 128 × 1MB = 128MB until waiting for the next […]

So, is there a way to limitRate on the AsyncResponseTransformer publisher instead of changing the part size? I think the answer is no, but maybe I am missing something. Sounds like I should raise another request for reactor-netty, as they have these MAX_SIZE/REFILL_SIZE values hardcoded, which doesn't play nicely with huge chunks.
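The 1024MB figure above follows from simple arithmetic: worst-case bytes buffered per GetObject is the downstream prefetch count times the size of each published chunk (one CRT part). A small sketch of that calculation (128 is the reactor-netty prefetch from this discussion; the part sizes are the default and the tuned value mentioned above):

```java
// Worst-case direct memory held per GetObject when the downstream
// (reactor-netty's MonoSendMany) prefetches `prefetch` chunks and each
// chunk published by the CRT client is one part of `partSizeBytes`.
class DemandMath {
    static long maxBufferedBytes(long partSizeBytes, long prefetch) {
        return partSizeBytes * prefetch;
    }

    public static void main(String[] args) {
        // Default CRT part size (8 MB) with a prefetch of 128:
        System.out.println(maxBufferedBytes(8L * 1024 * 1024, 128)); // 1073741824 (1 GiB)
        // Tuned 1 MB part size keeps the worst case at 128 MiB:
        System.out.println(maxBufferedBytes(1L * 1024 * 1024, 128)); // 134217728
    }
}
```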
Right, not for the AWS CRT-based S3 client. Currently, the chunk size is the same as the part size, so it can only be configured via the part size. There is a plan on the CRT side to support streaming with smaller chunks, but we don't have a timeline at the moment.
Hi, just posting an update on this one. We haven't experimented with […] I am actually thinking about increasing it back to 80MB, or even more. Basically, by playing with both […] I am also still behind on opening an issue against reactor-netty in regards to the hardcoded MAX_SIZE/REFILL_SIZE.
Describe the bug
To stream data directly instead of writing it to a file or into memory, #391 introduced the `AsyncResponseTransformer.toPublisher()` method, which essentially provides a way to return `Mono<ResponseEntity<Flux<ByteBuffer>>>` to the caller. The javadoc, however, clearly mentions that: "You are responsible for subscribing to this publisher and managing the associated back-pressure. Therefore, this transformer is only recommended for advanced use cases."
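For context on what "managing the associated back-pressure" asks of a subscriber, here is a minimal, self-contained illustration using the JDK Flow API (the same shape as the Reactive Streams types the SDK publishes; this is an assumption-laden sketch, not SDK code): a subscriber that requests one item at a time, so the publisher never has more than one unconsumed item in flight for it.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.atomic.AtomicLong;

class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
    private Flow.Subscription subscription;
    final AtomicLong received = new AtomicLong();
    final CountDownLatch done = new CountDownLatch(1);

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(1);            // initial demand: exactly one item
    }
    @Override public void onNext(Integer item) {
        received.incrementAndGet();
        subscription.request(1); // ask for the next item only after consuming this one
    }
    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete() { done.countDown(); }

    /** Pushes 10 items through a SubmissionPublisher and returns how many arrived. */
    static long runDemo() {
        OneAtATimeSubscriber sub = new OneAtATimeSubscriber();
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);
            for (int i = 0; i < 10; i++) pub.submit(i);
        } // closing the publisher signals onComplete after pending items are delivered
        try {
            sub.done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sub.received.get();
    }

    public static void main(String[] args) {
        System.out.println("received=" + runDemo());
    }
}
```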
I am attaching a project which has two things: a Spring Boot server which can download a large file from S3 using S3AsyncClient (use `spring.profiles.active` in `application.yml` to choose either the CRT or the Netty client) and a very slow consumer: s3-streaming.zip

If you specify a bucket and a large file in application.yml, start the application with the CRT client (the default), and start the slow client, you may observe in the log (spring reactor netty direct memory consumption is printed every 10 seconds) that direct memory quickly fills up to the size of the file while the consumer slowly keeps reading it.
Now, if you have 10 slow consumers, your direct memory will quickly hit the wall and requests will start to error out, causing Premature End of Content errors and closed connections.
I was not able to find any examples of how to maintain backpressure, as `Flux.limitRate` didn't work at all. The CRT client was chosen by us based on the AWS HTTP client configuration page.
It would be nice if someone could look at this to figure out whether there is an issue with the code itself, or the code is fine and this use case simply doesn't play well with CRT.
I need to also mention that once CRT is replaced with netty, memory consumption stays minimal, so backpressure seems to work without any manual intervention if netty is used on both sides (reactor-netty and the S3 SDK netty client). However, we observed a different issue with that: under load, some of the requests idle and are closed by a timeout on the ALB side; it is under investigation.
The point is, while the documentation talks about backpressure, it seems to work when using netty as the S3 HTTP client and doesn't work when using CRT as the client. Please advise.
Expected Behavior
The CRT S3 HTTP client behaves similarly to the S3 netty HTTP client and doesn't push all the data into the server's direct buffers.
Current Behavior
CRT instantly pushes all data to the server's netty direct byte buffers for slow consumers.
That doesn't happen if using netty http client instead of CRT.
Reproduction Steps
Use the attached zip file maven project.
Update application.yml to point to an S3 bucket and a file larger than 200MB.
You may also need to specify AWS credentials provider in the code or in the system.
Start the server. Start the SlowDrainApplication main method.
Observe the server logs to see Netty: direct mem consumption.
All data will get pushed into the direct memory.
Go to application.yml and change spring.profiles.active from 'crt' to 'netty'.
Repeat the exercise.
Observe the server logs to see that no excess memory consumption is happening without any additional backpressure configuration.
Possible Solution
No response
Additional Information/Context
No response
AWS Java SDK version used
2.25.10
CRT Client Version
0.29.14
Reactor netty and netty version
reactor-netty 1.1.3
netty 4.1.108.Final
JDK version used
openjdk version "17.0.3" 2022-04-19
oracle jdk 17
Operating System and version
windows 11 23H2 and some linux image in EKS