Commit
Rate limits are now handled by the library
1 parent fed0df5 · commit 84c675c
Showing 4 changed files with 2 additions and 7 deletions.
@ErikKalkoken I cannot confirm that the rate limits are handled by the library, or at least not all of them. I'm experiencing rate-limit exceptions in conversations_replies() (and possibly elsewhere) in both 1.3.1 and 1.4.0. I'll submit my own proof-of-concept rate-limiting solution as a PR soon.
I am not familiar with the Slack API or any of its clients, but a cursory search through the documentation suggests that this may need to be explicitly enabled using the RateLimitErrorRetryHandler?
I couldn't find any reference to that in the slackchannel2pdf source, but I really only did a simple grep, so maybe I'm totally off here.
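For context, the slack_sdk documentation describes appending a RateLimitErrorRetryHandler to a client's retry_handlers list. The sketch below illustrates that handler-registration pattern without depending on slack_sdk itself; FakeClient, RateLimitRetryHandler, and RateLimitedError are illustrative stand-ins for the SDK's WebClient, RateLimitErrorRetryHandler, and 429 error response, not the real classes.

```python
import time


class RateLimitedError(Exception):
    """Stand-in for a rate-limited API response (HTTP 429 + Retry-After)."""

    def __init__(self, retry_after):
        self.retry_after = retry_after


class RateLimitRetryHandler:
    """Mimics the SDK pattern: decides whether and when a failed call is retried."""

    def __init__(self, max_retry_count=1):
        self.max_retry_count = max_retry_count

    def can_retry(self, error, attempt):
        return isinstance(error, RateLimitedError) and attempt < self.max_retry_count

    def prepare_for_next_attempt(self, error):
        # Wait for the interval the server suggested before retrying.
        time.sleep(error.retry_after)


class FakeClient:
    """Illustrative client; retry_handlers mirrors WebClient.retry_handlers."""

    def __init__(self):
        self.retry_handlers = []

    def api_call(self, func):
        attempt = 0
        while True:
            try:
                return func()
            except RateLimitedError as err:
                handler = next(
                    (h for h in self.retry_handlers if h.can_retry(err, attempt)),
                    None,
                )
                if handler is None:
                    raise  # no handler willing to retry: propagate the error
                handler.prepare_for_next_attempt(err)
                attempt += 1
```

With the real SDK, the equivalent move (per the docs referenced above) would be appending a RateLimitErrorRetryHandler instance to client.retry_handlers; this is opt-in rather than enabled by default.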
Thank you for your feedback. There are two different concepts to distinguish here:
a) Causing a rate limit
b) Reacting to a rate-limit error reported by the API
As I understand it, the newer version of the Slack library has a "smart rate limiter", which should prevent a) by automatically limiting how many requests are sent to the API. This is explained here: slackapi/python-slack-sdk#1101
I also ran some local tests and was not able to trigger any rate limits despite downloading thousands of messages with the current version. But yeah, it may not be perfect yet. I could look into implementing b) as a mitigation.
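Concept b) can be sketched as a small retry wrapper: catch the rate-limit error, sleep for the server-suggested interval, and try again. This is a minimal self-contained illustration, not slackchannel2pdf's actual code; RateLimited is a hypothetical stand-in for the SDK's SlackApiError with a 429 status and Retry-After header, and call_with_backoff is an invented helper name.

```python
import time


class RateLimited(Exception):
    """Hypothetical stand-in for an HTTP 429 response with a Retry-After value."""

    def __init__(self, retry_after):
        self.retry_after = retry_after


def call_with_backoff(func, max_retries=3):
    """Call func(), sleeping for the suggested interval on each rate-limit error.

    Re-raises the error once max_retries retries have been exhausted.
    """
    for attempt in range(max_retries + 1):
        try:
            return func()
        except RateLimited as err:
            if attempt == max_retries:
                raise
            time.sleep(err.retry_after)
```

A caller would wrap each paginated fetch, e.g. call_with_backoff(lambda: fetch_replies(ts)), so a single 429 pauses the export instead of aborting it.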
@ErikKalkoken Thank you, that is helpful context; I indeed did not mentally differentiate between those two concepts.
I'm wondering if I'm doing something wrong, because the "smart rate limiter" does not seem to be in effect for me. I'm just running 1.4.0 against a single channel, and I'm hitting the rate limits pretty reliably: it makes about five requests for metadata, then 18 pages of messages (3,476), then the threads, and after a few hundred of those (<1 min) I get a rate-limiting exception. This isn't even one of my larger channels, but we've always used threads a lot.
Is it possible that you're using a different token, which allows a higher frequency of requests? I created an app just to use slackchannel2pdf and basically googled my way there, with no understanding of what I was doing. So I almost certainly did it wrong. (Yet the token produces correct data.)
Or maybe you're just not using threads much and this is a threading problem? I haven't really surveyed multiple channels and compared notes, because I unfortunately need to get the export project done, and my (soon-to-be-submitted) fixes eliminate the issue for me completely.
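The access pattern described above (paging through messages, then fetching each thread) can also be addressed on side a), by pacing the paginated requests themselves. A minimal sketch, assuming a generic cursor-paginated fetcher rather than any particular SDK call; fetch_all_pages and the (items, next_cursor) shape are illustrative, and the default interval is only a rough approximation of Slack's published per-minute budgets for thread endpoints.

```python
import time


def fetch_all_pages(fetch_page, min_interval=1.2):
    """Collect every page of a cursor-paginated endpoint, pacing requests.

    fetch_page(cursor) must return (items, next_cursor), with next_cursor
    None (or empty) on the last page. Sleeping min_interval seconds between
    pages keeps the request rate near ~50/minute, roughly the documented
    budget tier for thread-reply endpoints.
    """
    items, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if not cursor:
            return items
        time.sleep(min_interval)
```

This throttles proactively instead of reacting to 429s, at the cost of making large exports slower; combining it with a retry-on-429 fallback covers both concepts.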