border-py-gym-env 0.0.5


A wrapper of Gym environments implemented in Python.

[PyGymEnv] is a wrapper of Gym environments built on PyO3. It supports classic control, Atari, and PyBullet environments.

This wrapper accepts array-like observations and actions (Box spaces) as well as discrete actions. To interact with the Python interpreter where gym is running, [PyGymEnvObsFilter] and [PyGymEnvActFilter] provide interfaces for converting Python objects (numpy arrays) to/from ndarrays in Rust. [PyGymEnvObsRawFilter], [PyGymEnvContinuousActRawFilter], and [PyGymEnvDiscreteActRawFilter] perform this conversion for environments whose observations and actions are arrays. Beyond the data conversion between Python and Rust, arbitrary preprocessing can be implemented in these filters. For example, [FrameStackFilter] keeps four consecutive observation frames (images) and outputs a stack of these frames.
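The frame-stacking idea can be sketched with a few lines of plain Rust. This is an illustrative, std-only sketch of the preprocessing a filter like [FrameStackFilter] performs; the struct name, method names, and the flattened `Vec<f32>` observation type are assumptions for illustration, not the crate's actual API.

```rust
use std::collections::VecDeque;

/// Illustrative frame-stacking preprocessor (hypothetical, not the crate's API).
/// Keeps the last `depth` observation frames and outputs them concatenated.
struct FrameStack {
    depth: usize,
    frames: VecDeque<Vec<f32>>, // each frame is a flattened observation
}

impl FrameStack {
    fn new(depth: usize) -> Self {
        Self { depth, frames: VecDeque::with_capacity(depth) }
    }

    /// Push a new frame and return the stacked observation.
    /// On the first frame (e.g. right after a reset), the stack is filled
    /// with copies so the output shape is constant from the start.
    fn push(&mut self, frame: Vec<f32>) -> Vec<f32> {
        if self.frames.is_empty() {
            for _ in 0..self.depth {
                self.frames.push_back(frame.clone());
            }
        } else {
            self.frames.pop_front();
            self.frames.push_back(frame);
        }
        // Concatenate the stacked frames into one observation vector.
        self.frames.iter().flatten().copied().collect()
    }
}

fn main() {
    let mut stack = FrameStack::new(4);
    let obs1 = stack.push(vec![1.0, 1.0]);
    assert_eq!(obs1.len(), 8); // 4 stacked copies of the first 2-element frame
    let obs2 = stack.push(vec![2.0, 2.0]);
    // Oldest frame dropped, newest appended at the end.
    assert_eq!(obs2, vec![1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0]);
    println!("stacked observation: {:?}", obs2);
}
```

In the real library this logic lives inside an observation filter, so the agent only ever sees the stacked observation rather than single frames.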

For Atari environments, a tweaked version of is required to be in PYTHONPATH. The frame-stacking preprocessing is implemented in [FrameStackFilter] as a [PyGymEnvObsFilter].

Examples with a random controller (Policy) are in the examples directory. Examples with border-tch-agents, a collection of RL agents implemented with tch-rs, are available there as well.