Today we will cover the motivation behind Probabilistic Graphical Models (PGMs) and then delve into one type of PGM known as the Bayesian Network.

Fundamentals

Probabilistic models can be challenging to design and apply due to the lack of knowledge about the conditional dependence between random variables.

A common approach to addressing this would be to assume that the random variables in the model are conditionally independent. The Naive Bayes classification model is one implementation of this assumption.
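To make the conditional independence assumption concrete, here is a minimal Naive Bayes sketch on a hypothetical toy dataset (the data and binary features are invented for illustration, not from the text). Because every feature is assumed conditionally independent of the others given the class, the likelihood factorises into a product of per-feature terms:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature_tuple, label).
    Returns class priors and per-(class, feature) value counts."""
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)  # (label, feature_index) -> Counter of values
    for features, label in samples:
        for i, value in enumerate(features):
            counts[(label, i)][value] += 1
    return priors, counts

def predict(priors, counts, features):
    """Pick the class maximising log P(label) + sum_i log P(feature_i | label),
    i.e. the naive (fully factorised) posterior, with add-one smoothing.
    The +2 in the denominator assumes binary feature values."""
    total = sum(priors.values())
    best_label, best_score = None, float("-inf")
    for label, prior in priors.items():
        score = math.log(prior / total)
        for i, value in enumerate(features):
            c = counts[(label, i)]
            score += math.log((c[value] + 1) / (sum(c.values()) + 2))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

For example, training on a handful of labelled feature pairs and calling `predict(priors, counts, (1, 1))` classifies the new point using only the factorised per-feature statistics, never the joint distribution of the features.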

An alternative approach would be to build a probabilistic model with partial conditional independence assumptions, that is, we assume some variables are conditionally independent and some are not. The benefit is a middle ground: we avoid the overly strict global assumption of conditional independence, yet remain tractable by not having to work with a fully conditionally dependent model.

But how do we illustrate these assumptions? One way is via Probabilistic Graphical Models, which represent a probabilistic model with a graph structure.

Describing Probabilities with Graphs

In a PGM, the nodes in the graph represent the random variables, and the edges that connect the nodes represent the relationships between the random variables. This can be difficult to visualize at this point, but fret not: we will be diving extensively into one type of PGM, the Bayesian Network.

Bayesian Networks

A Bayesian Network is a probabilistic graphical model in which the edges of the graph are directed, meaning each edge can be traversed in only one direction, and cycles are not allowed. Such a graph is referred to as a directed acyclic graph (DAG).
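As a quick sketch of this structural requirement, the snippet below represents a network's edges as directed parent-to-child pairs and checks acyclicity with Kahn's topological-sort algorithm. The variable names (`Rain`, `Sprinkler`, `WetGrass`) are hypothetical examples, not the model used in these notes:

```python
from collections import defaultdict, deque

def is_dag(edges):
    """Return True if the directed graph given as (parent, child) pairs
    has no cycles, via Kahn's algorithm: repeatedly remove nodes with
    no remaining incoming edges; a cycle leaves nodes unremovable."""
    children = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1
        nodes.update((parent, child))
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return visited == len(nodes)  # all nodes removed iff no cycle

# Two direct dependencies: Rain -> WetGrass, Sprinkler -> WetGrass.
edges = [("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]
```

Adding a reverse edge such as `("WetGrass", "Rain")` would create a cycle, which `is_dag` rejects; this is exactly the structure a Bayesian Network forbids.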

A running example that we will be using throughout is the student model. The model is represented by 5 random variables,