I noticed memory usage was increasing while training an RL agent using the `CarRacing-v0` environment. Looks similar to #1882.
I was able to isolate the issue to the gym environment by running the following code with a memory profiler:
```python
import gym

env = gym.make('CarRacing-v0')
env.reset()
for i in range(500_000):
    # env.render()
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()
```
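The profiler used here isn't named in the issue; for a quick in-process check without external tooling, the standard library's `tracemalloc` can show whether allocations are retained across iterations. A minimal sketch with a stand-in leaky step (the `measure_growth` helper, the lambda, and the 1 KiB buffer are illustrative, not gym code):

```python
import tracemalloc

def measure_growth(step_fn, n_iters):
    """Return the traced-memory growth (bytes) across n_iters calls to step_fn."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(n_iters):
        step_fn()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

# Stand-in for a leaking env.step(): each call retains a 1 KiB buffer.
retained = []
growth = measure_growth(lambda: retained.append(bytearray(1024)), 1000)
print(growth)  # roughly 1 MiB of retained allocations
```

Replacing the lambda with a real `env.step(...)` call would show whether memory attributed to Python allocations grows; leaks inside native (Box2D/rendering) code would need an OS-level measure such as resident set size instead.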
The memory profiler returned the following plot:
When only running `env.reset()` in the loop, I get the following memory growth behavior:
```python
import gym

env = gym.make('CarRacing-v0')
env.reset()
for i in range(500_000):
    env.reset()
env.close()
```
When only calling `reset()`, memory consumption grows more slowly, but it still grows.
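Until the leak is fixed upstream, one possible mitigation (my suggestion, not something confirmed in this thread) is to periodically close and recreate the environment so resources held by the old instance can be reclaimed. A generic sketch, where `run_with_recycling` is a hypothetical helper and `make_env` would be e.g. `lambda: gym.make('CarRacing-v0')`:

```python
def run_with_recycling(make_env, n_steps, recycle_every=10_000):
    """Run a step loop, replacing the env every `recycle_every` steps.

    `make_env` is any zero-arg factory returning an object with the
    (old, 4-tuple) gym interface: reset/step/close and an action_space.
    Returns the number of times the env was recreated.
    """
    env = make_env()
    env.reset()
    recycles = 0
    for i in range(1, n_steps + 1):
        _, _, done, _ = env.step(env.action_space.sample())
        if done:
            env.reset()
        if i % recycle_every == 0:
            env.close()        # drop the old instance and its buffers
            env = make_env()   # start fresh
            env.reset()
            recycles += 1
    env.close()
    return recycles
```

This only helps if the leaked memory is actually released when the instance is closed and garbage-collected, which may not hold for every native resource.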
#2096 addresses this issue.