
Allow chunked uploads (and maybe even streaming uploads) #459

Open
samschott opened this issue Jan 22, 2023 · 3 comments

Comments

samschott commented Jan 22, 2023

Why is this feature valuable to you? Does it solve a problem you're having?
The requests library allows both streaming and chunked uploads (see https://requests.readthedocs.io/en/latest/user/advanced/#streaming-uploads and https://requests.readthedocs.io/en/latest/user/advanced/#chunk-encoded-requests). This has two benefits:

  1. Only small parts of a file need to be held in memory at any time during upload.
  2. Bandwidth usage can be limited by using a generator that yields chunks at a limited rate.
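Both benefits can come from a single generator passed as the request body. A minimal sketch (the upload URL and file name below are placeholders, not part of any real API):

```python
import time


def rate_limited_chunks(fileobj, chunk_size=2048, max_bytes_per_sec=None):
    """Yield chunks read from fileobj, optionally sleeping between
    chunks to cap the effective upload bandwidth.

    Passing this generator as the ``data`` argument makes requests
    send the body with chunked transfer encoding.
    """
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk
        if max_bytes_per_sec:
            # Sleep long enough that this chunk's size stays within budget.
            time.sleep(len(chunk) / max_bytes_per_sec)


# Usage (placeholder URL and path):
# import requests
# with open("big_file.bin", "rb") as f:
#     requests.post("https://example.com/upload",
#                   data=rate_limited_chunks(f, max_bytes_per_sec=512 * 1024))
```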

The Dropbox API of course already requires upload sessions (files/upload_session_start, files/upload_session_append and files/upload_session_finish) to upload files larger than 150 MB. However, this approach by itself does not replace chunked or streaming uploads because:

  1. Each request body should ideally be >= 4 MB to reduce the total number of API calls (both for efficiency and to avoid exhausting data transport API call limits).
  2. Bandwidth control will be very coarse when performed on 4 MB chunks compared to, for example, 2 kB chunks.
  3. Memory usage will still be higher than with 1 kB or 2 kB chunks, especially for parallel uploads.
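To make point 1 concrete: fine-grained chunks (for bandwidth control) and large per-call payloads (for upload sessions) are not mutually exclusive — small chunks can be regrouped into >= 4 MB batches before each session call. A hypothetical helper illustrating the idea (not part of the SDK):

```python
def batch_chunks(chunks, batch_size=4 * 1024 * 1024):
    """Regroup an iterable of small chunks (e.g. 2 kB each) into
    payloads of at least batch_size bytes, so that each
    upload_session_append-style call carries a full 4 MB body.
    """
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
        while len(buf) >= batch_size:
            yield bytes(buf[:batch_size])
            del buf[:batch_size]
    if buf:
        # Flush the final, possibly smaller, payload.
        yield bytes(buf)
```

Even so, the per-call payload must still be materialized in memory, which is exactly why this does not substitute for true streaming at the HTTP layer.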

Describe the solution you'd like
Requests supports streaming uploads by passing a file-like object as the request body and chunked uploads by passing a generator as the request body. However, the Python SDK explicitly prevents both by requiring the request body to be of type bytes:

if not isinstance(request_binary, (six.binary_type, type(None))):
    # Disallow streams and file-like objects even though the underlying
    # requests library supports them. This is to prevent incorrect
    # behavior when a non-rewindable stream is read from, but the
    # request fails and needs to be re-tried at a later time.
    raise TypeError('expected request_binary as binary type, got %s' %
                    type(request_binary))

It would be good to either drop this limitation entirely, with appropriate warnings in the docstring, or to at least allow chunked uploads (where requests handles the retry / rewind logic) while still disallowing streaming uploads.
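For illustration, a relaxed check could accept iterators in addition to bytes. This is a sketch of the proposal, not the SDK's actual code; the function name and allow_chunked flag are hypothetical:

```python
from collections.abc import Iterator


def check_request_binary(request_binary, allow_chunked=True):
    """Hypothetical relaxed version of the SDK's type check.

    Accepts bytes/None as before and, when allow_chunked is True,
    additionally accepts an iterator (including generators), which
    requests sends with chunked transfer encoding. Retrying a failed
    request would require the caller to supply a fresh iterator.
    """
    if isinstance(request_binary, (bytes, type(None))):
        return
    if allow_chunked and isinstance(request_binary, Iterator):
        return
    raise TypeError('expected request_binary as binary type, got %s'
                    % type(request_binary))
```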

Describe alternatives you've considered
Not at present.

greg-db commented Jan 23, 2023

Thanks for the detailed request! I can't make any promises on if/when this would be supported, but I'm sending this along to the team.

samschott commented Jan 25, 2023

If there is consensus within the team that this is worth considering, I'm happy to create a PR. But if this is not an approach you want to pursue, do let me know and I'll look for other solutions.

greg-db commented Jan 26, 2023

We welcome PRs in general, but I can't say offhand if this in particular is something the team would or wouldn't want to support in the SDK. I'll ask them though to see if I can get some guidance on this.
