
Support vectorized video rendering #61

Merged
merged 9 commits into from Mar 4, 2021

Conversation

vwxyzjn
Copy link
Contributor

@vwxyzjn vwxyzjn commented Mar 1, 2021

Hello, this PR enables video recording through VecVideoRecorder from Stable Baselines3 (sb3). Below is an example:

import time

import supersuit as ss
from pettingzoo.butterfly import pistonball_v4
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import VecVideoRecorder

# Build the parallel PettingZoo env and preprocess observations.
env = pistonball_v4.parallel_env()
env = ss.color_reduction_v0(env, mode='B')     # grayscale via the blue channel
env = ss.resize_v0(env, x_size=84, y_size=84)  # downscale to 84x84
env = ss.frame_stack_v1(env, 3)                # stack 3 consecutive frames
# Convert to a vector env and concatenate 8 copies for sb3.
env = ss.pettingzoo_env_to_vec_env_v0(env)
env = ss.concat_vec_envs_v0(env, 8, num_cpus=1, base_class='stable_baselines3')
# Record a 20-frame clip every 1000 steps.
env = VecVideoRecorder(
    env, f'videos/{time.time()}',
    record_video_trigger=lambda x: x % 1000 == 0, video_length=20)
model = PPO('CnnPolicy', env, verbose=3, n_steps=16, device="cpu")
model.learn(total_timesteps=2000000)

It would be great to incorporate this code. Being selfish for a second, this change will help me record videos of agents playing the game throughout training, much like my experiments with the procgen env. https://wandb.ai/cleanrl/cleanrl.benchmark/reports/Procgen-New--Vmlldzo0NDUyMTg
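For anyone reading along, a minimal sketch of how the `record_video_trigger` parameter above behaves (my illustration, not code from the PR): VecVideoRecorder calls the trigger with the current step count, and a new recording of `video_length` frames begins whenever it returns True.

```python
# Sketch (assumption, not from the PR): a trigger factory like the
# lambda in the example, firing at every multiple of `every_n_steps`.
def make_trigger(every_n_steps):
    """Return a trigger that starts a recording every `every_n_steps` steps."""
    return lambda step: step % every_n_steps == 0

trigger = make_trigger(1000)
# Steps in the first 5000 at which a new recording would begin.
fire_steps = [s for s in range(5000) if trigger(s)]
print(fire_steps)  # [0, 1000, 2000, 3000, 4000]
```

With the settings in the example, a fresh 20-frame clip starts every 1000 vector-env steps, which is why many small files are produced over a long run.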


@benblack769
Copy link
Contributor

You can run tests locally with:

pytest test

@benblack769
Copy link
Contributor

And I agree it would be a cool feature, just hasn't been a priority to implement. I'm really glad you are doing this!

@vwxyzjn
Copy link
Contributor Author

vwxyzjn commented Mar 1, 2021

It was my pleasure. Really like the work you guys are doing. Sorry for the flood of CI fixes, but the tests passed locally:

[screenshot: local test run passing]

The problem is that the Ubuntu instance the CI runs on cannot render environments like Pendulum-v0, so I am attempting to fix the CI.

@benblack769
Copy link
Contributor

Hi, so when I try to run the example code on top of your fork, it runs and produces a bunch of .mp4 files, but when I try to play them in a video player, I get the error message "this file is not playable". The files are also tiny, around 1 KB each. Do you happen to know what is going wrong?

@vwxyzjn
Copy link
Contributor Author

vwxyzjn commented Mar 2, 2021

"a bunch of .mp4 file" was due to the video_length=20 parameter. Try with video_length=200. "this file is not playable" is perhaps due to this issue openai/gym#2139. Even though it is fixed in the master branch, I think it hasn't been published in PyPi. Try do pip install gym==0.17.3

@benblack769
Copy link
Contributor

Weird. Well, the example is generating videos now, so one way or another, it seems to be working.

There are still a couple of things that are a little broken. In particular, calling env.render(mode="human") throws an error, and so does env.close(). The close() issue might have been introduced by your previous PR; I'm not sure.

Again, thanks for the contribution. Rendering in vector environments was something we were definitely missing.

@vwxyzjn
Copy link
Contributor Author

vwxyzjn commented Mar 3, 2021

It was my pleasure. I just pushed a fix for the env.close() issue.

@benblack769
Copy link
Contributor

Looks good! I'm happy to see this merged. @justinkterry

@jkterry1 jkterry1 merged commit 0be2760 into Farama-Foundation:master Mar 4, 2021