Closed
Description
Is your feature request related to a problem? Please describe.
It would be a nice feature if the mitmproxy events could be run as async functions; this would enable addon developers to use async methods.
Describe the solution you'd like
Make the events async!
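As a sketch only (mitmproxy does not support this at the time of the request), the ask boils down to being able to write addon hooks like the following, with mitmproxy awaiting the coroutine before the flow continues; the helper and header name are made up for illustration:

```python
import asyncio

async def fetch_token() -> str:
    # hypothetical async helper, stands in for real async I/O
    await asyncio.sleep(0.1)
    return "example"

async def request(flow):
    # the proxy would await this coroutine before letting the flow continue
    token = await fetch_token()
    flow.request.headers["x-auth-token"] = token  # illustrative header name
```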
Describe alternatives you've considered
I've tried to schedule futures on the currently running event loop, but the changes don't seem to be applied to the HTTP flow object.
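For context, a sketch of that workaround and why it falls short; everything except the standard `request` event is illustrative:

```python
import asyncio

async def lookup_and_modify(flow):
    await asyncio.sleep(0.1)                   # stand-in for async work
    flow.request.headers["x-too-late"] = "1"   # the flow may already have resumed

def request(flow):
    # Scheduling the coroutine on mitmproxy's running loop works, but the hook
    # returns immediately, so the proxy continues with the unmodified flow and
    # changes made later by the coroutine are not picked up.
    asyncio.ensure_future(lookup_and_modify(flow))
```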
Additional context
Would love to see async possibilities; they could make some advanced addons more efficient and extensible.
Activity
lionel126 commented on Mar 10, 2021
Extremely looking forward to this.
rachmadaniHaryono commented on Mar 28, 2021
The above example is based on #4259 and is for mitmproxy v6.0.2.

e: add simple edit for 7.0.0.dev
e2: more changes based on mitmproxy.tools.main:run
e3: group addons section
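The example referenced here (based on #4259) is not reproduced above; as a rough, unverified sketch of that kind of programmatic startup against the mitmproxy 6.0.x API (per the edit notes, 7.0.0.dev changes this again, see `mitmproxy.tools.main:run`):

```python
from mitmproxy.options import Options
from mitmproxy.tools.dump import DumpMaster

class MyAddon:
    def request(self, flow):
        # plain synchronous hook; this issue asks for `async def request(...)`
        flow.request.headers["x-example"] = "1"

opts = Options(listen_host="127.0.0.1", listen_port=8080)
master = DumpMaster(opts)
master.addons.add(MyAddon())
try:
    master.run()              # blocks and drives mitmproxy's own asyncio loop
except KeyboardInterrupt:
    master.shutdown()
```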
roniemartinez commented on Apr 7, 2021
+1
Prinzhorn commented on May 27, 2021
One thing I'm concerned about is race conditions. They exist with `@concurrent` and maybe we can get rid of them with async events.

I just switched to `@concurrent` to see how things are going. I just ran into a race condition where `server_connected` was racing `request`. This crashed my app because of the assumption that the connection exists when the request arrives (a SQL constraint failed).

`websocket_message` seems to behave as expected: multiple messages from different connections are processed concurrently, but on the same connection message B will only be processed when message A is done. That's perfect. I'd love to see connection-wide serialization in all of mitmproxy. That means the `request` event cannot happen before `server_connected` is done, and so on. Is this feasible?
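For readers unfamiliar with it, `@concurrent` (from `mitmproxy.script`) moves a hook into a worker thread so the proxy no longer waits for it; a minimal sketch:

```python
import time
from mitmproxy.script import concurrent

@concurrent  # the decorated hook runs in a worker thread
def request(flow):
    # mitmproxy does not block on this hook, so other events, including
    # hooks for the same connection, keep firing while it sleeps
    time.sleep(5)  # stand-in for slow work such as a gRPC call
```

Because nothing waits for the decorated hook, relative ordering between it and other hooks is no longer guaranteed, which is how work started in `server_connected` can still be running when `request` is handled.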
mhils commented on Aug 27, 2021

Yes and no. The good news for you is that most events are already serialized, e.g. the respective part in the proxy core is blocking until a specific hook completes. That won't change, i.e. you'll never see `request` appear before `requestheaders` has completed.

The order of `request` and `server_connected` is not deterministic. If a request reuses an existing connection, `server_connected` obviously comes before `request`. If you set `connection_strategy` to `lazy`, `request` should happen even before `server_connect`. So no guarantees here. :)
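To make that ordering visible, a small probe addon like the following can be used (hook signatures assume mitmproxy 7-era events; the addon itself is only illustrative):

```python
from mitmproxy import ctx

class OrderProbe:
    def server_connected(self, data):
        ctx.log.info("server_connected")

    def request(self, flow):
        ctx.log.info(f"request {flow.request.pretty_url}")

addons = [OrderProbe()]
```

Running `mitmdump -s order_probe.py --set connection_strategy=lazy` versus the default `eager` should show the relative order of the two log lines changing, matching the description above.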
mhils commented on Aug 27, 2021

Regarding the actual suggestion: this is something we definitely plan to implement at some point, we can probably/hopefully use this to ditch `.reply` entirely. Let's see.
Prinzhorn commented on Aug 27, 2021

That makes sense, but I am explicitly using `lazy`. I've looked at my exception from back then again and my example above is not accurate. What failed was my `NOT NULL` constraint for `responses.server_connection_id` (not request, which I associate with `requests.client_connection_id`). That happened only with `@concurrent`, and only rarely. Never without `@concurrent`. So `request` was processed by me after `server_connected` because the long-running gRPC call in `server_connected` did not prevent `request` from firing, and somehow `request` arrived earlier on the other end (I have a talent for race conditions).

But if I understand correctly, async hooks solve this because they would "block" in a similar fashion to the current synchronous hooks, but without blocking unrelated events (e.g. other connections). And since I'm now using asyncio gRPC, it should just work and perform better.
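In other words (sketch only, assuming async hooks as they later landed in mitmproxy 7; the class name and sleep are illustrative stand-ins for the asyncio gRPC calls):

```python
import asyncio
from mitmproxy import http

class AuditAddon:
    async def request(self, flow: http.HTTPFlow) -> None:
        # mitmproxy awaits this coroutine, so the flow does not continue
        # until the call below finishes, but only this flow is held up;
        # other connections keep being served on the same event loop.
        await asyncio.sleep(0.2)  # stand-in for an asyncio gRPC call

addons = [AuditAddon()]
```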
mhils commented on Aug 27, 2021
Ah, you are hitting a special case here: we are not waiting for `server_connected` to complete before notifying the HTTP layer that the connection has been established (I didn't think this would be useful to anyone and it surely isn't faster). If you have a concrete use case here, please open a separate issue; this should be straightforward to adjust. In fact, at the moment it's a bit more complicated because we don't block.

Support async hooks. Fixes mitmproxy#4207.