Paginating search_all_tweets with wait_on_rate_limit=True and no delay between requests will immediately output a Rate limit exceeded error and then sleep for 900s.
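A minimal sketch of the pattern that triggers this (assuming the same client setup as the fixed version below, with process_response standing in for your own handling):

import os

import tweepy

client = tweepy.Client(bearer_token=os.environ["TWITTER_BEARER_TOKEN"], wait_on_rate_limit=True)

query = "tweepy"

# Pages are requested back to back, so the second request trips the
# 1 request/second limit on search_all_tweets; with wait_on_rate_limit=True
# the client then sleeps until the 15-minute window resets (~900s).
for response in tweepy.Paginator(client.search_all_tweets, query=query):
    process_response(response)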
I think what happens here is due to the following rate limits for GET /2/tweets/search/all:

- 300 requests / 15 mins per app
- 1 request / second per user
- 1 request / second per app

I think this might be due to the following line, where the sleep time is taken from the response headers:

https://github.com/tweepy/tweepy/blob/0eac99beedf7f76c9587d91742453312dbca13d8/tweepy/client.py#

I found that changing my code to the below significantly increased the throughput when using the paginator:

from time import sleep
import os

import tweepy

client = tweepy.Client(bearer_token=os.environ["TWITTER_BEARER_TOKEN"], wait_on_rate_limit=True)

query = "tweepy"
for response in tweepy.Paginator(client.search_all_tweets, query=query):
    # Wait 1s per page to stay under the 1 request/second limit
    sleep(1)
    process_response(response)

Would it make sense to update the waiting strategy in the client to try sleeping for 1s before using the response header value?
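As a rough sketch of what that strategy could look like (a hypothetical helper, not tweepy's actual implementation; reset_epoch would come from the x-rate-limit-reset response header):

import time

def sleep_before_retry(attempt, reset_epoch, short_wait=1.0):
    # Hypothetical sketch of the proposed waiting strategy (not tweepy's API):
    # give the first rate-limited attempt a short fixed sleep, and only fall
    # back to the full header-derived wait if a retry is still rate limited.
    if attempt == 0:
        time.sleep(short_wait)
    else:
        time.sleep(max(reset_epoch - time.time(), 0))

For search_all_tweets this would turn most 429s into a 1s pause instead of a full 900s wait, while still respecting the reset header when the short sleep is not enough.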
Ah, wonderful, thank you. I had missed those with the keywords I searched before posting this; apologies for the duplicate.

I think it could be helpful to have a note in the Pagination docs. I completely missed the FAQ, and it's easy not to notice the issue at all when running jobs without monitoring them.