
Releases: openai/gym

0.22.0

17 Feb 18:59
95063a0

v0.22 Release Notes

This release represents the largest set of changes ever made to Gym, and is a huge step towards the plans for 1.0 outlined here: #2524

Gym now has a new comprehensive documentation site: https://www.gymlibrary.ml/ !

API changes

  • Env.reset now accepts three new arguments (see the sketch after this list):

  • options: Usable for things like controlling curriculum learning without reinitializing the environment, which can be expensive (@RedTachyon)

  • seed: Environment seeds should now be passed to reset through this argument. The old .seed() method is being deprecated in favor of it, though it will continue to function as before until the 1.0 release for backwards compatibility purposes (@RedTachyon)

  • return_info: when set to True, reset will return obs, info. This currently defaults to False, but returning info will become the default behavior in Gym 1.0 (@RedTachyon)

  • Environment names no longer require a version during registration and will suggest intelligent similar names (@kir0ul, @JesseFarebro)

  • Vector environments now support terminal_observation in info and batched action spaces (@vwxyzjn, @tristandeleu)
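
A minimal sketch of the new reset API; CartPole-v1 is used purely as an illustrative environment:

    import gym

    env = gym.make("CartPole-v1")

    # New-style reset: seed the episode and request the info dict up front.
    obs, info = env.reset(seed=42, return_info=True, options={})

    # Default behavior is unchanged for now: reset() still returns only obs.
    obs = env.reset()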

Environment changes

  • The blackjack and frozen lake toy_text environments now have nice graphical rendering using PyGame (@1b15)
  • Moved robotics environments to gym-robotics package (@seungjaeryanlee, @Rohan138, @vwxyzjn) (per discussion in #2456 (comment))
  • The bipedal walker and lunar lander environments were consolidated into one class (@andrewtanJS)
  • Atari environments now use standard seeding API (@JesseFarebro)
  • Fixed major bugs in the car_racing box2d environment and bumped its version (@carlosluis, @araffin)
  • Refactored all box2d and classic_control environments to use PyGame instead of Pyglet, as Pyglet problems have been one of the most frequent sources of GitHub issues over the life of the Gym project (@andrewtanJS)

Other changes

  • Removed the DiscreteEnv class; built-in environments no longer use it (@carlosluis)
  • Large numbers of type hints added (@ikamensh, @RedTachyon)
  • Python 3.10 support
  • Tons of additional code refactoring, cleanup, error message improvements and small bug fixes (@vwxyzjn, @Markus28, @RushivArora, @jjshoots, @XuehaiPan, @Rohan138, @JesseFarebro, @Ericonaldo, @AdilZouitine, @RedTachyon)
  • All environment files now have dramatically improved readmes at the top (that the documentation website automatically pulls from)
  • As part of the seeding changes, Gym's RNG has been modified to use np.random.Generator, as the RandomState API has been deprecated. The methods randint, rand, and randn are replaced by integers, random, and standard_normal, respectively. As a consequence, the default random number generator has changed from MT19937 to PCG64 (see the sketch below).
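
Since this change mirrors NumPy's own RandomState-to-Generator migration, here is a minimal sketch of the old-to-new method mapping in plain NumPy:

    import numpy as np

    rng = np.random.default_rng(42)    # a PCG64-backed np.random.Generator

    a = rng.integers(0, 10, size=3)    # replaces RandomState.randint
    b = rng.random(size=3)             # replaces RandomState.rand
    c = rng.standard_normal(size=3)    # replaces RandomState.randn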

Full Changelog: v0.21.0...0.22.0

v0.21.0

02 Oct 00:37
c755d5c

v0.21.0 Release Notes

  • The old Atari entry point that was broken by the last release and the upgrade to ALE-Py has been fixed (@JesseFarebro)
  • Atari environments now give much clearer error messages and warnings (@JesseFarebro)
  • A new plugin system to enable an easier inclusion of third party environments has been added (@JesseFarebro)
  • Atari environments now use the new plugin system to prevent clobbered names and other issues (@JesseFarebro)
  • pip install gym[atari] no longer distributes the Atari ROMs that the ALE (the Atari emulator used) needs to run the various games. The easiest way to install ROMs into the ALE is to use AutoROM. Gym now has a hook to AutoROM for easier CI automation, so that running pip install gym[accept-rom-license] calls AutoROM to add ROMs to the ALE. You can install the entire suite with the shorthand gym[atari, accept-rom-license]. Note that, as described by the name, installing gym[accept-rom-license] confirms that you have the relevant license to install the ROMs. (@JesseFarebro)
  • An accidental breaking change when loading saved policies trained on old versions of Gym with environments using the Box action space has been fixed. (@RedTachyon)
  • Pendulum has had a minor fix made to its physics logic and its version has been bumped to v1 (@RedTachyon)
  • Tests have been refactored into an orderly structure (@RedTachyon)
  • Dict spaces now have standard dict helper methods (see the sketch after this list) (@Rohan138)
  • Environment properties are now forwarded to the wrapper (@tristandeleu)
  • Gym now properly enforces calling reset before stepping for the first time (@ahmedo42)
  • Proper piping of error messages to stderr (@XuehaiPan)
  • Fix video saving issues (@zlig)
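
A minimal sketch of what the Dict space helpers enable, assuming the standard keys/values/items interface; the subspace names are illustrative:

    from gym import spaces

    space = spaces.Dict({
        "position": spaces.Discrete(3),
        "velocity": spaces.Discrete(2),
    })

    # Iterate over named subspaces just like a regular dict:
    for name, subspace in space.items():
        print(name, subspace)
    print(list(space.keys()), list(space.values()))
    print(space["position"])  # subspace lookup by key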

Also, Gym is compiling a list of third party environments into the new documentation website we're working on. Please submit PRs for any that are missing: https://github.com/openai/gym/blob/master/docs/third_party_environments.md

Full Changelog: v0.20.0...v0.21.0

v0.20.0

14 Sep 13:19
a7b6462

v0.20.0 Release Notes

Major Change

  • Replaced Atari-Py dependency with ALE-Py and bumped all versions. This is a massive upgrade with many changes, please see the full explainer (@JesseFarebro)
  • Note that ALE-Py does not include ROMs. You can install ROMs in two lines of bash with AutoROM, though (pip3 install autorom and then autorom); see https://github.com/PettingZoo-Team/AutoROM. This is the recommended approach for CI, etc.

Breaking changes and new features:

  • Added the RecordVideo wrapper; the Monitor wrapper is deprecated in favor of it and the RecordEpisodeStatistics wrapper (see the sketch after this list) (@vwxyzjn)
  • Dependencies used outside of environments (e.g. for wrappers) are now in gym[other] (@jkterry1)
  • Moved algorithmic and unused toy-text envs (guessing game, hotter colder, nchain, roulette, kellycoinflip) to third party repos (@jkterry1, @Rohan138)
  • Fixed flatten utility and flatdim in MultiDiscrete space (@tristandeleu)
  • Add __setitem__ to dict space (@jfpettit)
  • Major fixes to the .contains method for the Box space (@FirefoxMetzger)
  • Made blackjack environment properly comply with Barto and Sutton book standard, bumped to v1 (@RedTachyon)
  • Added NormalizeObservation and NormalizeReward wrappers (@vwxyzjn)
  • Add __getitem__ and __len__ to MultiDiscrete space (@XuehaiPan)
  • Changed .shape to be a property of box space to prevent unexpected behaviors (@RedTachyon)
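
A minimal sketch of the new recording wrappers replacing Monitor; the environment id and video folder are illustrative:

    import gym
    from gym.wrappers import RecordEpisodeStatistics, RecordVideo

    env = gym.make("CartPole-v1")
    env = RecordEpisodeStatistics(env)               # adds episode return/length to info
    env = RecordVideo(env, video_folder="./videos")  # replaces the deprecated Monitor

    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()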

Bug fixes and upgrades

  • Video recorder gracefully handles closing (@XuehaiPan)
  • Remaining unnecessary dependencies in setup.py are resolved (@jkterry1)
  • Minor acrobot performance improvements (@TuckerBMorgan)
  • Pendulum properly renders when 0 force is sent (@Olimoyo)
  • Make observations dtypes be consistent with observation space dtypes for all classic control envs and bipedal-walker (@RedTachyon)
  • Removed unused and long deprecated features in registration (@Rohan138)
  • The FrameStack wrapper now inherits from ObservationWrapper (@jfpettit)
  • The seed methods for spaces.Tuple and spaces.Dict now function properly, are fully stochastic and fully featured, and behave in the expected manner (see the sketch after this list) (@XuehaiPan, @RaghuSpaceRajan)
  • Replace time() with perf_counter() for better measurements of short duration (@zuoxingdong)
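
A minimal sketch of seeding a composite space now that the fix is in; the particular subspaces are illustrative:

    from gym import spaces

    space = spaces.Tuple((
        spaces.Discrete(4),
        spaces.Box(low=-1.0, high=1.0, shape=(2,)),
    ))
    space.seed(7)          # seeds every subspace, not just the top level
    print(space.sample())  # reproducible for a fixed seed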

Full Changelog: 0.19.0...v0.20.0

0.19.0

13 Aug 04:23
4ede928

Gym 0.19.0 is a large maintenance release, and the first since @jkterry1 became the maintainer. There should be no breaking changes in this release.

New features:

  • Added custom dtype argument to MultiDiscrete space (see the sketch after this list) (@m-orsini)
  • API compliance test added based on SB3 and PettingZoo tests (@amtamasi)
  • RecordEpisodeStatistics works with VectorEnv (@vwxyzjn)
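
A minimal sketch of the new dtype argument; np.int8 is chosen purely for illustration:

    import numpy as np
    from gym import spaces

    space = spaces.MultiDiscrete([3, 5, 2], dtype=np.int8)
    sample = space.sample()
    assert sample.dtype == np.int8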

Bug fixes:

  • Removed unused dependencies, removed unnecessary dependency version requirements that caused installation issues on newer machines, added a full requirements.txt, and moved general dependencies to extras. Notably, "toy_text" is not a used extra. atari-py is now pegged to a precise working version pending the switch to ale-py (@jkterry1)
  • Bug fixes to rewards in FrozenLake and FrozenLake8x8; versions bumped to v1 (@ZhiqingXiao)
  • Removed remaining numpy deprecation warnings (@super-pirata)
  • Fixes to video recording (@mahiuchun, @zlig)
  • EZ pickle argument fixes (@zzyunzhi, @Indoril007)
  • Other very minor (nonbreaking) fixes

Other:

  • Removed small bits of dead code (@jkterry1)
  • Numerous typo, CI and documentation fixes (mostly @cclauss)
  • New readme and updated third party env list (@jkterry1)
  • Code is now all flake8 compliant through black (@cclauss)

0.12.5

29 May 00:54
ff4664b
Fixes fetch/slide environment. (#1511)

v0.9.6

01 Feb 19:05
  • Now your Env and Wrapper subclasses should define step, reset, render, close, seed rather than underscored method names (see the sketch after this list).
  • Removed the board_game, debugging, safety, parameter_tuning environments since they're not being maintained by us at OpenAI. We encourage authors and users to create new repositories for these environments.
  • Changed MultiDiscrete action space to range from [0, ..., n-1] rather than [a, ..., b-1].
  • No more render(close=True), use env-specific methods to close the rendering.
  • Removed the scoreboard directory, since the site doesn't exist anymore.
  • Moved gym/monitoring to gym/wrappers/monitoring
  • Add dtype to Space.
  • No longer using Python's built-in logging module; use gym.logger instead
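
A minimal sketch of an Env subclass under the renamed API, using the plain method names; the environment itself is a trivial placeholder:

    import gym
    from gym import spaces

    class MyEnv(gym.Env):
        def __init__(self):
            self.action_space = spaces.Discrete(2)
            self.observation_space = spaces.Discrete(1)

        def reset(self):                 # formerly _reset
            return 0

        def step(self, action):          # formerly _step
            obs, reward, done, info = 0, 0.0, True, {}
            return obs, reward, done, info

        def render(self, mode="human"):  # formerly _render; render(close=True) is gone
            pass

        def close(self):                 # formerly _close
            pass

        def seed(self, seed=None):       # formerly _seed
            return [seed]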

v0.9.5

26 Jan 21:18
6af4a5b
Migrate to mujoco-py 1.50 (#834):

  • All envs run offscreen
  • Render works
  • Changed mujoco-py version
  • Bump versions
  • Update version and README
  • Same versioning for all mujoco envs
  • Fix typo
  • Fix version
  • Bump version again
  • Revert "Fix version" (reverts commit decc5779811801deb6ae9fad697dfe247d2bdd94)

v0.7.4

05 Mar 21:57
Cut release for v0.7.4

v0.7.3

01 Feb 03:43
Bump version