In this course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem---function approximation---allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. We will begin this journey by investigating how our policy evaluation or prediction methods like Monte Carlo and TD can be extended to the function approximation setting. You will learn about feature construction techniques for RL, and representation learning via neural networks and backprop. We conclude this course with a deep-dive into policy gradient methods; a way to learn policies directly without learning a value function. In this course you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.
This course is part of the Reinforcement Learning Specialization
About this Course
Prerequisites: Probabilities & Expectations, basic linear algebra, basic calculus, Python 3.0 (at least 1 year), implementing algorithms from pseudocode.
Skills You Will Gain
- Artificial Intelligence (AI)
- Machine Learning
- Reinforcement Learning
- Function Approximation
- Intelligent Systems
Offered by

University of Alberta
UAlberta is considered among the world’s leading public research- and teaching-intensive universities. As one of Canada’s top universities, we’re known for excellence across the humanities, sciences, creative arts, business, engineering and health sciences.

Alberta Machine Intelligence Institute
The Alberta Machine Intelligence Institute (Amii) is home to some of the world’s top talent in machine intelligence. We’re an Alberta-based
Syllabus: What You Will Learn
Welcome to the Course!
Welcome to the third course in the Reinforcement Learning Specialization: Prediction and Control with Function Approximation, brought to you by the University of Alberta, Onlea, and Coursera. In this pre-course module, you'll be introduced to your instructors, and get a flavour of what the course has in store for you. Make sure to introduce yourself to your classmates in the "Meet and Greet" section!
On-policy Prediction with Approximation
This week you will learn how to estimate a value function for a given policy when the number of states is much larger than the memory available to the agent. You will learn how to specify a parametric form of the value function, how to specify an objective function, and how gradient descent can be used to estimate values from interaction with the world.
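To make these ideas concrete, here is a minimal sketch of semi-gradient TD(0) prediction with a linear value function. The environment interface (`env.reset`, `env.step`), the policy, and the feature function `phi` are hypothetical placeholders, not the course's assignment code.

```python
import numpy as np

def semi_gradient_td0(env, policy, phi, num_features,
                      alpha=0.01, gamma=0.99, episodes=100):
    """Estimate v_pi with a linear approximation v_hat(s, w) = w . phi(s).

    Assumed (hypothetical) interfaces:
      env.reset() -> state, env.step(action) -> (next_state, reward, done)
      policy(state) -> action, phi(state) -> feature vector of length num_features.
    """
    w = np.zeros(num_features)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # The target uses the current weights; the gradient of a linear
            # v_hat is just the feature vector, hence "semi-gradient".
            target = reward + (0.0 if done else gamma * w @ phi(next_state))
            td_error = target - w @ phi(state)
            w += alpha * td_error * phi(state)
            state = next_state
    return w
```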
Constructing Features for Prediction
The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system. In this module we discuss two basic strategies for constructing features: (1) fixed basis functions that form an exhaustive partition of the input, and (2) adapting the features while the agent interacts with the world via neural networks and backpropagation. In this week’s graded assessment you will solve a simple but infinite-state prediction task with a neural network and TD learning.
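For a flavour of the first strategy, the sketch below builds state-aggregation features that partition a one-dimensional continuous state range into bins and one-hot encode the bin index. The interval bounds and bin count are made-up example values, not the assignment's settings.

```python
import numpy as np

def make_aggregation_features(low, high, num_bins):
    """Return phi(s): a one-hot feature vector over an exhaustive
    partition of [low, high) into num_bins equal-width bins."""
    width = (high - low) / num_bins
    def phi(state):
        idx = int((state - low) // width)
        idx = min(max(idx, 0), num_bins - 1)   # clip states at the edges
        features = np.zeros(num_bins)
        features[idx] = 1.0
        return features
    return phi

# Example: a 500-state random-walk-style range aggregated into 10 groups.
phi = make_aggregation_features(low=0.0, high=500.0, num_bins=10)
print(phi(123.0))  # one-hot vector with a 1 in bin 2
```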
Control with Approximation
This week, you will see that the concepts and tools introduced in modules two and three allow straightforward extension of classic TD control methods to the function approximation setting. In particular, you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa. We conclude with a discussion of a new problem formulation for RL---average reward---which will undoubtedly be used in many applications of RL in the future.
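As a rough illustration of that extension, here is a minimal sketch of episodic semi-gradient Sarsa with linear action values and epsilon-greedy action selection. The environment interface and the state-action feature function `phi(s, a)` are assumed placeholders.

```python
import numpy as np

def semi_gradient_sarsa(env, phi, num_features, num_actions,
                        alpha=0.1, gamma=1.0, epsilon=0.1, episodes=200):
    """Episodic semi-gradient Sarsa with q_hat(s, a, w) = w . phi(s, a).

    Assumed (hypothetical) interfaces:
      env.reset() -> state, env.step(a) -> (next_state, reward, done)
      phi(state, action) -> feature vector of length num_features.
    """
    w = np.zeros(num_features)

    def epsilon_greedy(state):
        if np.random.rand() < epsilon:
            return np.random.randint(num_actions)
        return int(np.argmax([w @ phi(state, a) for a in range(num_actions)]))

    for _ in range(episodes):
        state = env.reset()
        action = epsilon_greedy(state)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            if done:
                # Terminal update: no bootstrap term.
                td_error = reward - w @ phi(state, action)
            else:
                next_action = epsilon_greedy(next_state)
                td_error = (reward + gamma * w @ phi(next_state, next_action)
                            - w @ phi(state, action))
            w += alpha * td_error * phi(state, action)
            if not done:
                state, action = next_state, next_action
    return w
```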
Policy Gradient
Every algorithm you have learned about so far estimates a value function as an intermediate step towards the goal of finding an optimal policy. An alternative strategy is to directly learn the parameters of the policy. This week you will learn about these policy gradient methods, and their advantages over value-function based methods. You will also learn how policy gradient methods can be used to find the optimal policy in tasks with both continuous state and action spaces.
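As a rough sketch of learning the policy parameters directly, the code below implements a one-step actor-critic update with a linear Gaussian policy for a single continuous action, broadly in the spirit of a pendulum-style task. All interfaces (`env`, `phi`) are assumed placeholders, not the course's RLGlue code.

```python
import numpy as np

def one_step_actor_critic(env, phi, num_features,
                          alpha_w=0.01, alpha_theta=0.001, gamma=0.99, episodes=200):
    """One-step actor-critic with a linear Gaussian policy for one
    continuous action: a ~ N(theta_mu . phi(s), exp(theta_sigma . phi(s))).

    Assumed (hypothetical) interfaces:
      env.reset() -> state, env.step(a) -> (next_state, reward, done)
      phi(state) -> feature vector of length num_features.
    """
    w = np.zeros(num_features)            # critic: state-value weights
    theta_mu = np.zeros(num_features)     # actor: mean of the Gaussian
    theta_sigma = np.zeros(num_features)  # actor: log std of the Gaussian

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            x = phi(state)
            mu, sigma = theta_mu @ x, np.exp(theta_sigma @ x)
            action = np.random.normal(mu, sigma)

            next_state, reward, done = env.step(action)
            target = reward + (0.0 if done else gamma * w @ phi(next_state))
            td_error = target - w @ x

            # Critic: semi-gradient TD(0).
            w += alpha_w * td_error * x
            # Actor: gradient of the Gaussian log-likelihood, scaled by the TD error.
            theta_mu += alpha_theta * td_error * (action - mu) / (sigma ** 2) * x
            theta_sigma += alpha_theta * td_error * (((action - mu) ** 2) / (sigma ** 2) - 1.0) * x

            state = next_state
    return theta_mu, theta_sigma, w
```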
Reviews
- 5 stars: 84.18%
- 4 stars: 13.03%
- 3 stars: 1.94%
- 2 stars: 0.55%
- 1 star: 0.27%
Top reviews for PREDICTION AND CONTROL WITH FUNCTION APPROXIMATION
Well paced and thoughtfully explained course. Highly recommended for anyone willing to set solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.
Solid intro course. Wish we covered more using neural nets. The neural net equations used very non-standard notation. Wish the assignments were a little more creative. Too much grid world.
Great learning; the best part was the Actor-Critic algorithm for a small pendulum swing-up task, all from scratch using the RLGlue library. Loved learning how experimentation in RL works.
Really enjoyed every part of the course. Programming assignments are helpful in consolidating the theoretical understanding of the subject.
About the Reinforcement Learning Specialization
The Reinforcement Learning Specialization consists of 4 courses exploring the power of adaptive learning systems and artificial intelligence (AI).
