Jupyter notebook containing a solution to Sutton and Barto's gridworld problem with both a random agent and a Q-learning agent.

Gridworld Reinforcement Learning (Q-Learning)

In this exercise, you will implement the interaction of a reinforcement learning agent with its environment. We will use the gridworld environment from the second lecture. You will find a description of the environment below, along with two pieces of relevant material from the lectures: the agent-environment interface and the Q-learning algorithm.

  1. Create an agent that chooses actions randomly in this environment.

  2. Create an agent that uses Q-learning. You can use initial Q values of 0, a stochasticity parameter for the $\epsilon$-greedy policy function $\epsilon=0.05$, and a learning rate $\alpha = 0.1$. But feel free to experiment with other settings of these three parameters. A sketch of such an agent is given after this list.

  3. Plot the mean total reward obtained by the two agents over the episodes. This is called a learning curve. Run enough episodes for the Q-learning agent to converge to a near-optimal policy.
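As a rough illustration of step 2, a tabular Q-learning agent with the suggested parameters could be sketched as follows. The class and method names, and the integer encoding of states, are illustrative assumptions rather than the notebook's actual implementation:

```python
import numpy as np

class QLearningAgent:
    """Tabular Q-learning agent with an epsilon-greedy behaviour policy (illustrative sketch)."""

    def __init__(self, n_states, n_actions, epsilon=0.05, alpha=0.1, gamma=1.0):
        self.q_table = np.zeros((n_states, n_actions))  # initial Q values of 0
        self.epsilon = epsilon  # exploration probability for the epsilon-greedy policy
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # no discounting, as the exercise specifies
        self.n_actions = n_actions

    def choose_action(self, state):
        # With probability epsilon pick a random action, otherwise act greedily.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q_table[state]))

    def update(self, state, action, reward, next_state, terminal):
        # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        target = reward if terminal else reward + self.gamma * np.max(self.q_table[next_state])
        self.q_table[state, action] += self.alpha * (target - self.q_table[state, action])
```

The per-episode total rewards of this agent and of the random agent can then be accumulated in a training loop and plotted to obtain the learning curves asked for in step 3.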

The environment: Navigation in a gridworld

The agent has four possible actions in each state (grid square): west, north, south, and east. The actions are unreliable: they move the agent in the intended direction with probability 0.8, and with probability 0.2 they move the agent in a random other direction. If the direction of movement is blocked, the agent remains in the same grid square. The initial state of the agent is one of the five grid squares at the bottom, selected randomly. The grid squares with the gold and the bomb are terminal states; if the agent finds itself in one of these squares, the episode ends, and a new episode then begins with the agent at a randomly selected initial state.
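A minimal sketch of these transition dynamics is shown below. The grid size is an assumption, and the 0.2 failure probability is assumed to be split uniformly over the other three directions; the actual layout is defined in the notebook:

```python
import numpy as np

# Assumed 5x5 grid; the real layout (and the gold and bomb positions) may differ.
GRID_HEIGHT, GRID_WIDTH = 5, 5
ACTIONS = {"west": (0, -1), "north": (-1, 0), "south": (1, 0), "east": (0, 1)}

def step_position(row, col, action):
    """Apply one unreliable navigation action and return the resulting grid square."""
    if np.random.rand() < 0.8:
        # Move in the intended direction with probability 0.8.
        d_row, d_col = ACTIONS[action]
    else:
        # Otherwise move in one of the other three directions, chosen uniformly.
        other_actions = [a for a in ACTIONS if a != action]
        d_row, d_col = ACTIONS[np.random.choice(other_actions)]
    new_row, new_col = row + d_row, col + d_col
    # If the movement is blocked (here: by the grid boundary), stay in place.
    if not (0 <= new_row < GRID_HEIGHT and 0 <= new_col < GRID_WIDTH):
        return row, col
    return new_row, new_col
```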

You will use a reinforcement learning algorithm to compute the best policy for finding the gold in as few steps as possible while avoiding the bomb. For this, we will use the following reward function: -1 for each navigation action, an additional +10 for finding the gold, and an additional -10 for hitting the bomb. For example, the immediate reward for transitioning into the square with the gold is -1 + 10 = +9. Do not use discounting (that is, set $\gamma = 1$).
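A direct translation of this reward function might look like the following; the gold and bomb coordinates are placeholders, not the positions used in the notebook:

```python
def compute_reward(next_square, gold_square=(0, 4), bomb_square=(0, 3)):
    """Immediate reward for transitioning into next_square (coordinates are assumptions)."""
    reward = -1  # every navigation action costs -1
    if next_square == gold_square:
        reward += 10  # finding the gold: -1 + 10 = +9
    elif next_square == bomb_square:
        reward -= 10  # hitting the bomb: -1 - 10 = -11
    return reward
```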
