Environment

MinimalRLCore.AbstractEnvironment — Type

Represents an abstract environment for reinforcement learning agents. A concrete environment subtypes AbstractEnvironment and implements the methods below; every interface in MinimalRLCore expects a subtype of AbstractEnvironment.
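As a concrete illustration, here is a minimal sketch of a custom environment. The RandomWalk type, its field, and its dynamics are hypothetical and not part of MinimalRLCore; only AbstractEnvironment, reset!, and environment_step! come from the package.

```julia
using MinimalRLCore

# Hypothetical five-state random walk: the agent starts in the middle
# state and moves left (action 1) or right (action 2).
mutable struct RandomWalk <: MinimalRLCore.AbstractEnvironment
    pos::Int
end
RandomWalk() = RandomWalk(3)

# Reset the chain back to the middle state.
function MinimalRLCore.reset!(env::RandomWalk)
    env.pos = 3
    env
end

# One step of the underlying dynamics; invoked through step!, not directly.
function MinimalRLCore.environment_step!(env::RandomWalk, action, args...)
    env.pos += (action == 1 ? -1 : 1)
    env
end
```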

MinimalRLCore.environment_step! — Method
environment_step!(env::AbstractEnvironment, action, args...)

Update the state of the environment based on the underlying dynamics and the given action. This is not called directly; it is invoked through the step! function.

You can implement this method with or without an environment-maintained RNG. If you do not maintain your own RNG, remember that this function is not thread safe.

MinimalRLCore.reset! — Method
reset!(env::AbstractEnvironment, args...)

Reset the environment to its initial conditions, using the random number generator where applicable.

You can implement this method with or without an environment-maintained RNG. If you do not maintain your own RNG, remember that this function is not thread safe.
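One way to satisfy the thread-safety caveat is to give each environment its own RNG and draw from it inside reset!, so the global RNG is never touched. A sketch (the CoinFlip type and its fields are hypothetical, not part of MinimalRLCore):

```julia
using Random
using MinimalRLCore

# Hypothetical environment that owns its RNG, so reset! never touches the
# global RNG and separate instances can be used from separate tasks.
mutable struct CoinFlip <: MinimalRLCore.AbstractEnvironment
    rng::Random.AbstractRNG
    heads::Bool
end
CoinFlip(seed::Integer) = CoinFlip(Random.MersenneTwister(seed), false)

function MinimalRLCore.reset!(env::CoinFlip)
    env.heads = rand(env.rng, Bool)  # draw from the per-environment RNG
    env
end
```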

MinimalRLCore.start! — Method
start!(env::AbstractEnvironment, args...)

Start the passed environment env. There are three variants: two start the environment from a random start state (as implemented by reset!), and the third starts it from a provided start state. Each variant calls the reset! method with the matching call signature.

Returns the starting state of the environment.

MinimalRLCore.step! — Method
step!(env::AbstractEnvironment, action, args...)

Update the state of the passed environment env based on the underlying dynamics and the action.

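The methods above combine into a simple episode loop driven by start! and step!. The sketch below assumes these two entry points need only the overloads documented on this page; the Walk environment, its dynamics, and the random policy are hypothetical, and if MinimalRLCore also queries the state through an accessor, a further overload would be required.

```julia
using MinimalRLCore

# Hypothetical five-state walk; terminal at either end of the chain.
mutable struct Walk <: MinimalRLCore.AbstractEnvironment
    pos::Int
end
MinimalRLCore.reset!(env::Walk) = (env.pos = 3; env)
MinimalRLCore.environment_step!(env::Walk, action, args...) =
    (env.pos += action == 1 ? -1 : 1; env)

env = Walk(3)
MinimalRLCore.start!(env)                 # resets the environment via reset!
while !(env.pos == 1 || env.pos == 5)
    MinimalRLCore.step!(env, rand(1:2))   # random policy; step! calls environment_step!
end
```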