
Course Catalog: Prediction and Control with Function Approximation Training
Course Outline:

    Prediction and Control with Function Approximation Training

Welcome to the Course!

Welcome to the third course in the Reinforcement Learning Specialization:

Prediction and Control with Function Approximation, brought to you by the University of Alberta,

Onlea, and Coursera.

In this pre-course module, you'll be introduced to your instructors,

and get a flavour of what the course has in store for you.

Make sure to introduce yourself to your classmates in the "Meet and Greet" section!

On-policy Prediction with Approximation

This week you will learn how to estimate a value function for a given policy,

when the number of states is much larger than the memory available to the agent.

You will learn how to specify a parametric form of the value function,

how to specify an objective function, and how stochastic gradient descent can be used to estimate values from interaction with the world.
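
For orientation, below is a minimal sketch of on-policy prediction with a linear parametric value function trained by semi-gradient TD(0); the `env`, `policy`, and `feature_vector` interfaces are hypothetical placeholders rather than the course's code.

```python
import numpy as np

# Minimal sketch of on-policy prediction with linear function approximation.
# The value estimate is v_hat(s, w) = w . x(s), updated with semi-gradient TD(0).
# `env`, `policy`, and `feature_vector` are hypothetical placeholders.

def semi_gradient_td0(env, policy, feature_vector, num_features,
                      alpha=0.01, gamma=0.99, num_episodes=100):
    w = np.zeros(num_features)                 # weights parameterizing v_hat
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            x = feature_vector(state)          # features of the current state
            x_next = feature_vector(next_state)
            # Bootstrapped TD target using the current estimate of the next state
            target = reward + (0.0 if done else gamma * np.dot(w, x_next))
            td_error = target - np.dot(w, x)
            # Semi-gradient: differentiate only the estimate, not the target
            w += alpha * td_error * x
            state = next_state
    return w
```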

Constructing Features for Prediction

The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system.

In this module we discuss two basic strategies for constructing features: (1) fixed basis functions that form an exhaustive partition of the input,

and (2) adapting the features while the agent interacts with the world via Neural Networks and Backpropagation.
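
As a concrete instance of the first strategy, here is a rough sketch of state aggregation on a one-dimensional state space, where a fixed set of disjoint intervals exhaustively partitions the input; the interval boundaries and group count are illustrative assumptions, not the graded assignment.

```python
import numpy as np

# Illustrative state aggregation: a fixed basis that exhaustively partitions
# a 1-D state space [low, high) into `num_groups` disjoint intervals.
# Each state activates exactly one binary feature (a one-hot encoding).

def aggregate_features(state, low=0.0, high=1000.0, num_groups=10):
    group_width = (high - low) / num_groups
    index = int((state - low) // group_width)
    index = min(max(index, 0), num_groups - 1)   # clamp to a valid group
    x = np.zeros(num_groups)
    x[index] = 1.0
    return x

# With these features, the linear estimate w . x(s) assigns the same value to
# every state in a group, giving a piecewise-constant approximation.
```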

In this week’s graded assessment you will solve a simple but infinite state prediction task with a Neural Network and

TD learning.

Control with Approximation

This week,

you will see that the concepts and tools introduced in modules two and three allow straightforward extension of classic

TD control methods to the function approximation setting. In particular,

you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient

TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa.
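
A rough sketch of how such a combination might look, using episodic semi-gradient Sarsa with a linear action-value estimate and an epsilon-greedy policy; `env`, `actions`, and `feature_vector` are hypothetical stand-ins rather than the course's code.

```python
import numpy as np

# Sketch of episodic semi-gradient Sarsa with q_hat(s, a, w) = w . x(s, a).
# `env`, `actions`, and `feature_vector` are hypothetical stand-ins.

def epsilon_greedy(w, state, actions, feature_vector, epsilon=0.1):
    if np.random.rand() < epsilon:
        return np.random.choice(actions)
    q_values = [np.dot(w, feature_vector(state, a)) for a in actions]
    return actions[int(np.argmax(q_values))]

def semi_gradient_sarsa(env, actions, feature_vector, num_features,
                        alpha=0.01, gamma=0.99, epsilon=0.1, num_episodes=100):
    w = np.zeros(num_features)
    for _ in range(num_episodes):
        state = env.reset()
        action = epsilon_greedy(w, state, actions, feature_vector, epsilon)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            x = feature_vector(state, action)
            if done:
                td_error = reward - np.dot(w, x)
            else:
                next_action = epsilon_greedy(w, next_state, actions,
                                             feature_vector, epsilon)
                x_next = feature_vector(next_state, next_action)
                td_error = reward + gamma * np.dot(w, x_next) - np.dot(w, x)
                state, action = next_state, next_action
            w += alpha * td_error * x
    return w
```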

We conclude with a discussion of a new problem formulation for RL---average reward---which will undoubtedly

be used in many applications of RL in the future.
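
For reference, in the average-reward (differential) setting the discount factor is replaced by a running estimate of the reward rate, and the semi-gradient TD error takes the form below; this is standard textbook notation stated here as an assumption about the formulation, not a quote from the course.

```latex
% Differential semi-gradient TD error and average-reward updates
\delta_t = R_{t+1} - \bar{R}_t + \hat{v}(S_{t+1}, \mathbf{w}_t) - \hat{v}(S_t, \mathbf{w}_t),
\qquad
\bar{R}_{t+1} = \bar{R}_t + \beta \, \delta_t,
\qquad
\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \, \delta_t \, \nabla \hat{v}(S_t, \mathbf{w}_t)
```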

Policy Gradient

Every algorithm you have learned about so far estimates

a value function as an intermediate step towards the goal of finding an optimal policy.

An alternative strategy is to directly learn the parameters of the policy.

This week you will learn about these policy gradient methods, and their advantages over value-function based methods.

You will also learn how policy gradient methods can be used

to find the optimal policy in tasks with both continuous state and action spaces.
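
As an illustration of learning policy parameters directly, here is a rough sketch of a REINFORCE-style update with a Gaussian policy over a continuous action whose mean is linear in the state features; all names, the rollout format, and the update form are illustrative assumptions rather than the course's implementation.

```python
import numpy as np

# Sketch of a REINFORCE-style policy-gradient update with a Gaussian policy
# over a continuous action: a ~ N(mean = theta . x(s), sigma^2).
# `feature_vector` and the episode format are hypothetical placeholders.

def sample_action(theta, x, sigma=1.0):
    mean = np.dot(theta, x)
    return np.random.normal(mean, sigma)

def reinforce_update(theta, episode, feature_vector,
                     alpha=0.001, gamma=0.99, sigma=1.0):
    """episode: list of (state, action, reward) tuples from one rollout."""
    G = 0.0
    returns = []
    for (_, _, reward) in reversed(episode):
        G = reward + gamma * G                 # return following each step
        returns.append(G)
    returns.reverse()
    for (state, action, _), G in zip(episode, returns):
        x = feature_vector(state)
        mean = np.dot(theta, x)
        # Gradient of log N(action | mean, sigma^2) with respect to theta
        grad_log_pi = (action - mean) / (sigma ** 2) * x
        theta = theta + alpha * G * grad_log_pi
    return theta
```

Because the action is sampled from a parameterized distribution, the same machinery applies to continuous action spaces, which is one of the advantages over value-function based control discussed this week.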
