Environment
MinimalRLCore.AbstractEnvironment — Type

Represents an abstract environment for reinforcement learning agents. Concrete subtypes must implement several functions (documented below) to work with the rest of the framework. All interfaces in MinimalRLCore expect an AbstractEnvironment.
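As a minimal sketch, a concrete environment is usually a mutable struct that subtypes AbstractEnvironment and holds whatever state the dynamics need. The SimpleChain type below is hypothetical, introduced only to illustrate the interface in the sketches that follow:

```julia
using MinimalRLCore

# Hypothetical five-state chain used throughout these sketches; `pos`
# tracks the agent's position. Mutable so the interface methods can
# update it in place.
mutable struct SimpleChain <: MinimalRLCore.AbstractEnvironment
    pos::Int
end

SimpleChain() = SimpleChain(3)  # start in the middle of the chain
```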
MinimalRLCore.environment_step! — Method

environment_step!(env::AbstractEnvironment, action, args...)

Update the state of the environment based on the underlying dynamics and the action. This function is not called directly; it is called through the step! function.
You can implement this with or without a personally maintained RNG. If you choose not to maintain your own RNG, remember that this function is not thread safe.
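A minimal sketch of both styles for the hypothetical SimpleChain introduced above (the type is repeated so the snippet stands on its own; the dynamics are made up for illustration):

```julia
using MinimalRLCore
import Random

mutable struct SimpleChain <: MinimalRLCore.AbstractEnvironment
    pos::Int
end

# With an explicitly maintained RNG: thread safe as long as each thread
# owns its own RNG.
function MinimalRLCore.environment_step!(env::SimpleChain, action::Int,
                                         rng::Random.AbstractRNG)
    # Hypothetical dynamics: action 1 moves left, action 2 moves right,
    # and the move succeeds with probability 0.9.
    if rand(rng) < 0.9
        env.pos = clamp(env.pos + (action == 1 ? -1 : 1), 1, 5)
    end
    return nothing
end

# Without a personal RNG: falls back to Julia's default RNG, so this
# variant is not thread safe.
MinimalRLCore.environment_step!(env::SimpleChain, action::Int) =
    MinimalRLCore.environment_step!(env, action, Random.default_rng())
```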
MinimalRLCore.get_reward — Method

get_reward(env::AbstractEnvironment)

Retrieve the reward for the current state of the environment.
MinimalRLCore.get_state — Method

get_state(env::AbstractEnvironment)

Retrieve the current state of the environment.
MinimalRLCore.is_terminal — Method

is_terminal(env::AbstractEnvironment)

Check whether the environment is in a terminal state.
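For the hypothetical SimpleChain, the three query functions above are typically thin accessors over the environment's state. A sketch, with a made-up reward scheme and termination rule (the type is repeated so the snippet runs on its own):

```julia
using MinimalRLCore

mutable struct SimpleChain <: MinimalRLCore.AbstractEnvironment
    pos::Int
end

# Made-up reward scheme: 1.0 on reaching the right end, 0.0 elsewhere.
MinimalRLCore.get_reward(env::SimpleChain) = env.pos == 5 ? 1.0 : 0.0

# The observation here is simply the position.
MinimalRLCore.get_state(env::SimpleChain) = env.pos

# Episodes terminate at either end of the chain.
MinimalRLCore.is_terminal(env::SimpleChain) = env.pos == 1 || env.pos == 5
```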
MinimalRLCore.reset! — Method

reset!(env::AbstractEnvironment, args...)

Reset the environment to its initial conditions, optionally based on a random number generator.
You can implement this with or without a personally maintained RNG. If you choose not to maintain your own RNG, remember that this function is not thread safe.
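A sketch of both reset! styles for the hypothetical SimpleChain, randomizing over made-up non-terminal start states (the type is repeated so the snippet stands alone):

```julia
using MinimalRLCore
import Random

mutable struct SimpleChain <: MinimalRLCore.AbstractEnvironment
    pos::Int
end

# With an explicitly maintained RNG: randomize over the non-terminal
# start states of the chain.
function MinimalRLCore.reset!(env::SimpleChain, rng::Random.AbstractRNG)
    env.pos = rand(rng, 2:4)
    return nothing
end

# Without a personal RNG (not thread safe).
MinimalRLCore.reset!(env::SimpleChain) =
    MinimalRLCore.reset!(env, Random.default_rng())
```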
MinimalRLCore.start! — Method

start!(env::AbstractEnvironment, args...)

Start the passed environment env. There are three variants: two start the environment from a random start state (as implemented by reset!), and one starts the environment from a provided start state. Each variant calls the reset! method with the same call signature.
Returns the starting state of the environment.
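Assuming the hypothetical SimpleChain and its reset! methods from the sketches above, usage of the random-start variants might look like the following; the start-state variant is noted in a comment since its exact signature depends on which reset! methods the environment defines:

```julia
using MinimalRLCore
import Random

# Assumes the hypothetical SimpleChain and its reset! methods from the
# sketches above have been defined.
env = SimpleChain(3)

s0 = MinimalRLCore.start!(env)                      # random start, default RNG
s0 = MinimalRLCore.start!(env, Random.Xoshiro(42))  # random start, personal RNG
# A start!(env, start_state) call would forward to a matching
# reset!(env, start_state) method, if the environment defines one.
```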
MinimalRLCore.step! — Method

step!(env::AbstractEnvironment, action, args...)

Update the state of the passed environment env based on the underlying dynamics and the action.
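Putting the pieces together, a single episode might look like the sketch below. Rather than assuming a particular return value for step!, it reads the outcome back through the query functions documented above; it also assumes, based on the signatures, that step! forwards trailing arguments such as an RNG to environment_step!:

```julia
using MinimalRLCore
import Random

# Assumes the hypothetical SimpleChain and all of its interface methods
# from the sketches above have been defined.
function run_episode!(env, rng)
    s = MinimalRLCore.start!(env, rng)   # starting state via reset!
    total_reward = 0.0
    while !MinimalRLCore.is_terminal(env)
        action = rand(rng, 1:2)                # uniformly random policy
        MinimalRLCore.step!(env, action, rng)  # advance the dynamics
        total_reward += MinimalRLCore.get_reward(env)
        s = MinimalRLCore.get_state(env)       # next state, if needed
    end
    return total_reward
end

run_episode!(SimpleChain(3), Random.Xoshiro(7))
```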