Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
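As a toy-sized illustration of what "encoding a joint distribution" means here, the following Python sketch factorizes a distribution over three binary variables according to a chain-structured network A -> B -> C; all variable names and numbers are made up for illustration.

    # A joint distribution over three binary variables, factorized according to
    # the chain-structured Bayesian network A -> B -> C:
    #     P(A, B, C) = P(A) * P(B | A) * P(C | B)
    p_A = {0: 0.6, 1: 0.4}                         # P(A)
    p_B_given_A = {0: {0: 0.7, 1: 0.3},            # P(B | A = 0)
                   1: {0: 0.2, 1: 0.8}}            # P(B | A = 1)
    p_C_given_B = {0: {0: 0.9, 1: 0.1},            # P(C | B = 0)
                   1: {0: 0.5, 1: 0.5}}            # P(C | B = 1)

    def joint(a, b, c):
        """P(A=a, B=b, C=c) assembled from the three local factors."""
        return p_A[a] * p_B_given_A[a][b] * p_C_given_B[b][c]

    # The factorized form uses 1 + 2 + 2 = 5 free parameters instead of the 7
    # needed for an explicit table over all 2**3 joint outcomes, yet it still
    # defines a proper distribution:
    total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
    print(total)   # 1.0 (up to floating-point rounding)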

Stanford University
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto.
Course Syllabus: What You Will Learn
Learning: Overview
This module presents some of the learning tasks for probabilistic graphical models that we will tackle in this course.
Review of Machine Learning Concepts from Prof. Andrew Ng's Machine Learning Class (Optional)
This module contains some basic concepts from the general framework of machine learning, taken from Professor Andrew Ng's Stanford class offered on Coursera. Many of these concepts are highly relevant to the problems we'll tackle in this course.
Parameter Estimation in Bayesian Networks
This module discusses the simplest and most basic of the learning problems in probabilistic graphical models: parameter estimation in a Bayesian network. We discuss maximum likelihood estimation and the issues with it. We then discuss Bayesian estimation and how it can ameliorate these problems.
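The following Python sketch (not the course's assignment code; the counts and the equivalent sample size are made up) contrasts the two estimators for a single CPD P(X | U): maximum likelihood normalizes raw counts and can assign zero probability to unseen events, while Bayesian estimation with a BDeu-style Dirichlet prior adds pseudo-counts before normalizing.

    import numpy as np

    # Counts of (U, X) pairs in a complete data set; rows index the parent value
    # u, columns the child value x.  The second row is deliberately sparse.
    counts = np.array([[30.0, 10.0],    # (U=0, X=0), (U=0, X=1)
                       [ 2.0,  0.0]])   # (U=1, X=0), (U=1, X=1)

    # Maximum likelihood estimate: normalize each row of raw counts.
    mle = counts / counts.sum(axis=1, keepdims=True)
    # Problem with sparse data: P_mle(X=1 | U=1) = 0, so any future case with
    # U=1, X=1 is judged impossible.

    # Bayesian estimate with a Dirichlet (BDeu-style) prior of equivalent sample
    # size alpha: spread alpha as pseudo-counts over the table, then normalize.
    alpha = 2.0
    pseudo = alpha / counts.size
    bayes = (counts + pseudo) / (counts + pseudo).sum(axis=1, keepdims=True)

    print("MLE estimate of P(X | U):\n", mle)
    print("Bayesian estimate of P(X | U):\n", bayes)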
Learning Undirected Models
In this module, we discuss the parameter estimation problem for Markov networks - undirected graphical models. This task is considerably more complex, both conceptually and computationally, than parameter estimation for Bayesian networks, due to the issues presented by the global partition function.
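As a rough illustration of where the partition function enters, the sketch below runs gradient ascent on the log-likelihood of a tiny log-linear Markov network. The gradient is the gap between empirical and model feature expectations, and the model expectations are computed here by brute-force enumeration, which is exactly the partition-function computation that becomes intractable for large networks. The data and features are made up for illustration.

    import numpy as np
    from itertools import product

    # Log-linear Markov network over three binary variables with one agreement
    # feature per edge:  P(x) = exp(sum_k theta_k * f_k(x)) / Z(theta).
    edges = [(0, 1), (1, 2)]
    features = [lambda x, e=e: float(x[e[0]] == x[e[1]]) for e in edges]
    theta = np.zeros(len(features))

    # Toy data set and its empirical feature expectations.
    data = [(0, 0, 0), (1, 1, 1), (1, 1, 1), (0, 0, 1), (1, 0, 0), (0, 1, 1)]
    empirical = np.array([np.mean([f(x) for x in data]) for f in features])

    def model_expectations(theta):
        """Exact E_theta[f_k], by enumerating all 2**3 joint states.
        The normalization of 'scores' is the partition function Z(theta); for
        realistic networks this step requires (often approximate) inference at
        every gradient step, which is what makes the problem hard."""
        states = list(product((0, 1), repeat=3))
        scores = np.array([np.exp(sum(t * f(x) for t, f in zip(theta, features)))
                           for x in states])
        probs = scores / scores.sum()
        return np.array([np.dot(probs, [f(x) for x in states]) for f in features])

    # Gradient ascent on the log-likelihood:
    # gradient_k = empirical expectation of f_k minus model expectation of f_k.
    for _ in range(200):
        theta += 0.5 * (empirical - model_expectations(theta))
    print(theta)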
Learning BN Structure
This module discusses the problem of learning the structure of Bayesian networks. We first discuss how this problem can be formulated as an optimization problem over a space of graph structures, and how different structures can be scored so as to trade off fit to the data against model complexity. We then talk about how the optimization problem can be solved: exactly in a few cases, approximately in most others.
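The sketch below illustrates one such decomposable score, BIC, for a single variable's family: the maximized log-likelihood of the child given a candidate parent set, minus a penalty that grows with the number of parameters. The synthetic data and the binary-variables-only assumption are purely illustrative.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    N = 500
    A = rng.integers(0, 2, N)
    B = (A ^ (rng.random(N) < 0.1)).astype(int)   # B is a noisy copy of A
    C = rng.integers(0, 2, N)                     # C is independent noise

    def family_bic(child, parents):
        """BIC score of one family (child | parents), all variables binary:
        maximized log-likelihood minus (log N / 2) * number of free parameters."""
        data = np.stack([child] + list(parents), axis=1)
        loglik = 0.0
        for pa_config in product((0, 1), repeat=len(parents)):
            mask = (np.all(data[:, 1:] == pa_config, axis=1)
                    if parents else np.ones(N, dtype=bool))
            n = mask.sum()
            if n == 0:
                continue
            n1 = data[mask, 0].sum()
            for count in (n1, n - n1):
                if count > 0:
                    loglik += count * np.log(count / n)
        n_params = 2 ** len(parents)    # one free parameter per parent configuration
        return loglik - 0.5 * np.log(N) * n_params

    # Higher is better: making A a parent of B pays for its extra parameter,
    # while also adding C does not.
    print("score(B, parents={}):     ", family_bic(B, []))
    print("score(B, parents={A}):    ", family_bic(B, [A]))
    print("score(B, parents={A, C}): ", family_bic(B, [A, C]))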
Learning BNs with Incomplete Data
In this module, we discuss the problem of learning models in cases where some of the variables in some of the data cases are not fully observed. We discuss why this situation is considerably more complex than the fully observable case. We then present the Expectation Maximization (EM) algorithm, which is used in a wide variety of problems.
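A minimal illustration of EM, assuming a naive Bayes model whose class variable is never observed (a mixture of independent Bernoullis): the E-step computes soft class assignments for each data case under the current parameters, and the M-step re-estimates the parameters from those expected counts. The synthetic data and the number of classes are arbitrary choices made for the sketch.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic binary data drawn from two latent clusters; the cluster label
    # itself is never part of the data set.
    X = np.vstack([rng.random((100, 4)) < 0.8,
                   rng.random((100, 4)) < 0.2]).astype(float)

    N, D, K = X.shape[0], X.shape[1], 2
    pi = np.full(K, 1.0 / K)                       # P(class)
    theta = rng.uniform(0.3, 0.7, size=(K, D))     # P(feature_d = 1 | class)

    for _ in range(50):
        # E-step: responsibilities r[n, k] = P(class = k | x_n) under the
        # current parameters (computed in log space for numerical stability).
        log_p = (np.log(pi)
                 + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)          # shape (N, K)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: ordinary maximum likelihood estimation, but with the soft
        # (expected) counts from the E-step in place of observed counts.
        Nk = r.sum(axis=0)
        pi = Nk / N
        theta = np.clip((r.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)

    print("mixing weights:", pi.round(2))
    print("per-class feature probabilities:\n", theta.round(2))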
Reviews
Top reviews for PROBABILISTIC GRAPHICAL MODELS 3: LEARNING
An amazing course! The assignments and quizzes can be insanely difficult, especially towards the conclusion. Requires textbook reading and re-listening to lectures to gather the nuances.
Very good course for learning PGMs and machine learning programming concepts. Some of the descriptions in the final exam quiz are a bit unclear, which leads to some confusion.
Great course! Very informative course videos and challenging yet rewarding programming assignments. Hope that the mentors can be more helpful by responding to questions in a timely manner.
Great course, especially the programming assignments. Textbook is pretty much necessary for some quizzes, definitely for the final one.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
Will I earn university credit for completing the course?
Have more questions? Visit the Learner Help Center.