KeyError on self._in_use_connections.remove(connection) #1138
Comments
Can you elaborate on your setup a little more? Are you using redis-py in a forked, multiprocess, or threaded environment?
This is a Flask app (behind Nginx), so I think "threaded" is the correct answer here.
I have the same problem with Python 2.7 + Django + uwsgi. Sometimes I get the following error.
I guess the problem is caused by the shared connection pool used by the uwsgi workers?
We get the same issue. Forked environment; all connections are made after the fork.
@kmerenkov What version of redis-py are you using? 3.2.0+ fixes a lot of the problems with connections in forked processes.
3.2.1
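For context on why forked processes are relevant: redis-py's `ConnectionPool` records the creating process's pid and discards inherited connections when it notices the pid has changed. Below is a minimal sketch of that mechanism under stated assumptions; the `Pool` class here is a simplification for illustration, not the real redis-py implementation, though `reset()`, `_checkpid()`, and the container names mirror the ones discussed in this issue.

```python
import os
import threading

class Pool:
    """Simplified sketch of a fork-aware connection pool (not redis-py itself)."""

    def __init__(self):
        self._check_lock = threading.Lock()
        self.reset()

    def reset(self):
        self._available_connections = []
        self._in_use_connections = set()
        # Recording the pid last marks the reset as complete.
        self.pid = os.getpid()

    def _checkpid(self):
        # After a fork, os.getpid() in the child no longer matches the pid
        # recorded at pool creation, so the child throws away inherited
        # connections instead of sharing sockets with the parent.
        if self.pid != os.getpid():
            with self._check_lock:
                if self.pid != os.getpid():  # re-check after taking the lock
                    self.reset()

pool = Pool()
pool._available_connections.append("conn-from-parent")
pool.pid = -1          # simulate "we are in a freshly forked child"
pool._checkpid()
print(pool._available_connections)  # -> []  (inherited connections dropped)
```

The double check under the lock keeps multiple threads in the same child from racing to reset the pool; the bug discussed below is about the ordering of the statements inside `reset()` itself.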
I guess there's one case: threads A and B, where A gets the lock and then calls …
I have the same issue (on 3.3.11). I have no idea if this is the correct fix, but based on @yht804421715's observation, I moved the update of `self.pid` to the end of `reset()`:

```python
def reset(self):
    self._created_connections = 0
    self._available_connections = []
    self._in_use_connections = set()
    self._check_lock = threading.Lock()
    self.pid = os.getpid()
```

Since then, I can't reproduce it anymore.
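The interleaving that makes this ordering matter can be written out step by step. The sketch below is a hypothetical two-thread timeline (the `BadPool` class and the `reset_step1`/`reset_step2` split are illustration devices, not redis-py code): if `reset()` publishes `self.pid` before clearing the containers, a second thread's pid check passes in that window and it pops a connection the reset is about to discard.

```python
import os

class BadPool:
    """Sketch of the buggy ordering: pid is published *before* the
    containers are cleared, opening a window for another thread."""
    def __init__(self):
        self._available_connections = ["stale-conn"]
        self._in_use_connections = set()
        self.pid = 0  # pretend a fork happened: the recorded pid is stale

    def reset_step1(self):
        # Thread A starts reset() and publishes the new pid first...
        self.pid = os.getpid()

    def reset_step2(self):
        # ...and only later clears the containers.
        self._available_connections = []
        self._in_use_connections = set()

pool = BadPool()
pool.reset_step1()                        # thread A: pid now looks current

# Thread B runs in the window between the two steps:
assert pool.pid == os.getpid()            # B's pid check passes
conn = pool._available_connections.pop()  # B takes a soon-to-be-discarded conn
pool._in_use_connections.add(conn)

pool.reset_step2()                        # thread A finishes the reset

# When B later releases the connection, it is gone from _in_use_connections:
# exactly the KeyError this issue reports.
try:
    pool._in_use_connections.remove(conn)
except KeyError:
    print("KeyError: connection lost by the concurrent reset")
```

Moving the `self.pid` assignment to the end of `reset()` closes this window, because thread B's pid check keeps failing until the containers have already been replaced.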
@gmbnomis Without fully understanding what is happening here, I think you should not move the pid update behind the discarding of the lock. From the comment I read that by setting the pid, the thread communicates that it has done the work. So moving it behind clearing the containers is OK, imho.
This fixes a race condition, where a task can be returned an old connection instead of waiting, while another one already started the reset. Thanks for the analysis done by @yht804421715 and @gmbnomis . fixes redis#1138
I'm not a fan of simply moving the pid update. I just created a new PR #1270 that implements what I believe is a fully thread-safe pool. Can anyone please give it a spin and see if it fixes your issue? Thanks!
Came by this error on the forums: https://forum.sentry.io/t/worker-error-unable-to-incr-internal-metric/13098?u=byk and then found redis/redis-py#1138 which got fixed in version 3.4.0. This patch upgrades it to `3.4.1` which has a fix for a regression introduced in `3.4.0`. No breaking changes.
Version: 2.10.5
Platform: Python 2.7, Ubuntu
This is a very rare occurrence on a very high-traffic server, but I occasionally receive a KeyError from here:
self._in_use_connections.remove(connection)
https://github.com/andymccurdy/redis-py/blob/2.10.5/redis/connection.py#L913
I haven't been able to reproduce this in testing or root out the cause.
Theories:

`self._available_connections` is a list, so you could `append` the same connection more than once. Then when you `self._in_use_connections.add(connection)` (https://github.com/andymccurdy/redis-py/blob/2.10.5/redis/connection.py#L898), you get no error for using the same connection twice. You only find out when you try to release it: `_in_use_connections` is a set, so while calling `add()` multiple times doesn't raise an error, the second `remove()` will.
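This theory is easy to demonstrate in isolation with plain Python containers, independent of redis-py (the names below just mirror the pool's attributes):

```python
conn = object()

available = []            # _available_connections is a plain list
available.append(conn)
available.append(conn)    # same connection appended twice: lists allow it

in_use = set()            # _in_use_connections is a set
in_use.add(available.pop())
in_use.add(available.pop())  # adding the same object again is a silent no-op
print(len(in_use))           # -> 1

in_use.remove(conn)          # first release succeeds
try:
    in_use.remove(conn)      # second release: the KeyError from this issue
except KeyError:
    print("KeyError on second remove")
```

So any code path that returns one connection to the list twice will blow up not at the duplicate `append`, but later, at the second `release`.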