I am relying heavily on the Docker-based code execution, which works well but is quite slow, likely because a new container is instantiated each time. It would also be nice to have a way to initialize the Docker container with packages we already know are likely to be used, which would speed up code execution as well.
Thanks!
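One way to pre-bake likely dependencies would be a custom image. A minimal sketch (the base tag and package list here are just placeholders):

```dockerfile
# Hypothetical custom image with commonly used packages pre-installed,
# so code execution does not pay the install cost on every run.
FROM python:3.11-slim
RUN pip install --no-cache-dir numpy pandas matplotlib requests
```

Build it once (e.g. `docker build -t my-sandbox .`) and point the Docker-based executor at that image instead of the default one.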
Yes, a new container (not image) is spun up for every message that contains code, which is much slower than native code execution. A more viable option is to create one Docker container and run everything in it; it then behaves as if running natively while remaining sandboxed. Many of us already do this, but it would be good if it could be achieved via configuration.
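The single-container pattern could look roughly like this, driving the Docker CLI from Python (a sketch, not the library's actual executor; the image and container names are placeholders):

```python
import subprocess

IMAGE = "python:3.11-slim"   # placeholder base image
CONTAINER = "agent-sandbox"  # placeholder container name

def start_cmd() -> list[str]:
    """docker command that starts one detached, long-lived container."""
    return ["docker", "run", "-d", "--name", CONTAINER, IMAGE,
            "sleep", "infinity"]

def exec_cmd(code: str) -> list[str]:
    """docker command that runs a snippet inside the already-running
    container, avoiding the per-message container startup cost."""
    return ["docker", "exec", CONTAINER, "python", "-c", code]

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

# Usage (requires a running Docker daemon):
#   run(start_cmd())                # startup cost paid once
#   run(exec_cmd("print(1 + 1)"))  # later snippets reuse the container
#   run(exec_cmd("print('hi')"))
```

The container stays alive via `sleep infinity`, so each `docker exec` only pays the cost of starting a process, not a container.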
I second this. MetaGPT and others work as a group sharing a file workspace to create a project with tests, and they usually produce a requirements.txt so the tester agent can install the code and run the tests. It would be useful to have a built-in way to install dependencies. I have been using subprocess, but I think requirements.txt is a better approach.
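For the shared-workspace case, installing the project's requirements.txt into a long-lived container could be sketched like this (the container name and mount path are placeholders, and subprocess is just one option, as mentioned above):

```python
import subprocess

CONTAINER = "agent-sandbox"  # placeholder name of the long-lived container

def install_requirements_cmd(workspace: str = "/workspace") -> list[str]:
    """docker command that installs the shared workspace's requirements.txt
    inside the running container, so the tester agent can run the tests."""
    return ["docker", "exec", CONTAINER, "pip", "install", "-r",
            f"{workspace}/requirements.txt"]

# Usage (assumes the workspace is mounted into the container at /workspace):
#   subprocess.run(install_requirements_cmd(), check=True)
```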