CS5340 ASSIGNMENT 1: BELIEF PROPAGATION AND MAXIMAL PROBABILITY

1. OVERVIEW

Welcome to the course! In this first assignment, you will write code to perform inference on simple tree-like graphs using belief propagation. You will implement both the sum-product and the max-product algorithms.

References: Lectures 4 and 5.

Honour Code. This coding assignment constitutes 15% of your final grade in CS5340. Note that plagiarism will not be condoned! You may discuss with your classmates and check the internet for references, but you MUST NOT submit code/report that is copied directly from other sources!

2. SUBMISSION INSTRUCTIONS

Items to be submitted:

  • Source code (lab1.py). This is where you fill in all your code.
  • Report (report.pdf). This should describe your implementation and be no more than one page.

Please clearly indicate your name and student number (the one that looks like A1234567X) in the report as well as at the top of your source code. Zip the two files together and name the archive in the following format: A1234567X_lab1.zip (replace with your student number).

3. GETTING STARTED

This assignment, as well as the subsequent ones, requires Python 3.5 or later. You will need certain Python packages, which can be installed using the following command:

pip install -r requirements.txt

If you have any issues with the installation, please post them in the forum so that other students or the instructors can help.

4. TEST CASES

To help with your implementation, we have provided a few sample datasets and their expected outputs in the data/ folder for debugging. The code that loads the sample data and checks your program's output is in test_lab1.py.

We will mainly work with the Bayesian network stored in graph_small.json, shown in Figure 1 for your convenience. In addition, we provide a larger graphical model in graph_large.json for you to further test your code.

Note that during grading, we will run your code on hidden test cases in addition to the two provided graphs.

[Figure 1 in the original handout shows the graph structure over X0, ..., X4 together with its conditional probability tables. The diagram itself is not reproduced here; the recoverable tables are:]

P(X4 | X0):

    X0   X4   P(X4|X0)
    0    0    0.9
    1    0    0.6
    2    0    0.1
    0    1    0.1
    1    1    0.3
    2    1    0.4
    0    2    0.0
    1    2    0.1
    2    2    0.5

P(X1 | X0):

    X0   X1   P(X1|X0)
    0    0    0.6
    1    0    0.3
    2    0    0.1
    0    1    0.2
    1    1    0.4
    2    1    0.2
    0    2    0.1
    1    2    0.2
    2    2    0.45
    0    3    0.1
    1    3    0.1
    2    3    0.25

P(X3 | X1):

    X1   X3   P(X3|X1)
    0    0    0.7
    1    0    0.5
    2    0    0.1
    3    0    0.0
    0    1    0.2
    1    1    0.2
    2    1    0.4
    3    1    0.1
    0    2    0.1
    1    2    0.3
    2    2    0.5
    3    2    0.9

Figure 1: The Bayesian network you will perform inference on.

5. BASIC FACTOR OPERATIONS

As you recall from the lectures, we use factor tables to represent the conditional probability distributions of a Bayesian network. You will now implement the core functionality for manipulating factor tables:

  • factor_product(): This function should compute the product of two factors.
  • factor_marginalize(): This function should sum over the indicated variable(s) and return the resulting factor.
  • observe_evidence(): This function should take in a list of factors and the observed evidence, and set all factor entries inconsistent with the evidence to zero.
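To make these operations concrete, here is a minimal sketch of the factor-product idea in NumPy. The small Factor class below, with fields var, card and val, is a hypothetical stand-in for whatever representation lab1.py actually provides; treat it as an illustration of how two tables are aligned on their shared variables and multiplied, not as the assignment's API.

import numpy as np

class Factor:
    """Toy factor table (hypothetical representation, not the lab1.py API).
    var:  ids of the variables in the factor's scope, e.g. [0, 4]
    card: cardinality of each variable, e.g. [3, 3]
    val:  table of values, one axis per variable (shape == card)"""
    def __init__(self, var, card, val):
        self.var = list(var)
        self.card = list(card)
        self.val = np.asarray(val, dtype=float).reshape(self.card)

def _expand(f, out_var, out_card):
    """Broadcast f's table onto the axes of the larger scope out_var."""
    # Reorder f's axes so its variables appear in out_var order.
    order = sorted(range(len(f.var)), key=lambda i: out_var.index(f.var[i]))
    val = np.transpose(f.val, order)
    # Insert singleton axes for the variables f does not mention.
    shape = [c if v in f.var else 1 for v, c in zip(out_var, out_card)]
    return np.broadcast_to(val.reshape(shape), out_card)

def factor_product(a, b):
    """Product of two factors; the result's scope is the union of the two
    scopes, with shared variables matched up."""
    out_var = sorted(set(a.var) | set(b.var))
    cards = {**dict(zip(a.var, a.card)), **dict(zip(b.var, b.card))}
    out_card = [cards[v] for v in out_var]
    return Factor(out_var, out_card,
                  _expand(a, out_var, out_card) * _expand(b, out_var, out_card))

# Example: p(X4 | X0) from Figure 1 as a factor with scope (X0, X4).
p_x4_given_x0 = Factor([0, 4], [3, 3], [[0.9, 0.1, 0.0],
                                        [0.6, 0.3, 0.1],
                                        [0.1, 0.4, 0.5]])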

6. NAÏVE SUMMATION

To motivate the need for belief propagation, we will first implement the naïve summation algorithm. That is, you will compute the required (conditional) probabilities by computing the full joint distribution, then marginalising out the irrelevant variables.

6.1 COMPUTING THE JOINT DISTRIBUTION

Complete the function compute_joint_distribution(), which computes the joint distribution over a Bayesian network. Recall that for a Bayesian network over variables $X_1, \dots, X_N$, the joint distribution can be computed as:

$$P(X_1, \dots, X_N) = \prod_i P(X_i \mid \mathrm{parents}(X_i))$$
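In the toy representation sketched in Section 5, the joint is simply a fold of factor_product over all of the network's CPT factors; the real compute_joint_distribution() should do the same with the assignment's own factor type:

from functools import reduce

def compute_joint_distribution(factors):
    """Joint distribution = product of all CPTs p(x_i | parents(x_i))."""
    joint = reduce(factor_product, factors)
    # Sanity check: a full joint distribution sums to 1 over all assignments.
    assert np.isclose(joint.val.sum(), 1.0)
    return joint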

6.2 COMPUTING MARGINALS

Having computed the joint distribution, we can obtain the marginal probabilities of a variable by marginalising out the irrelevant variables. If we have observed evidence, we also need to reduce the joint distribution by setting the assignments inconsistent with the evidence to zero. Complete compute_marginals_naive() with your implementation of the full naïve summation algorithm. In summary, there are three steps:

  1. Compute the joint distribution over all variables.
  2. Reduce the joint distribution by the evidence.
  3. Marginalise out the irrelevant variables.

After performing these operations, you will notice that the resulting factor is un-normalised, so be sure to normalise the final probability distribution.
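Using the same toy factor type, the three steps could look like the outline below. The signature and the way evidence is represented here ({variable: observed value}) are assumptions made for this sketch; adapt them to lab1.py's conventions.

def compute_marginals_naive(query_var, factors, evidence):
    """Naive summation: joint -> reduce by evidence -> marginalise -> normalise."""
    # 1) Compute the full joint distribution over all variables.
    joint = compute_joint_distribution(factors)
    val = joint.val.copy()
    # 2) Zero out every assignment inconsistent with the evidence.
    for var, observed in evidence.items():
        axis = joint.var.index(var)
        mask = np.zeros(joint.card[axis])
        mask[observed] = 1.0
        shape = [joint.card[i] if i == axis else 1 for i in range(len(joint.var))]
        val = val * mask.reshape(shape)
    # 3) Sum out everything except the query variable...
    keep = joint.var.index(query_var)
    marginal = val.sum(axis=tuple(i for i in range(len(joint.var)) if i != keep))
    # ...and normalise, since the reduced table no longer sums to 1.
    return marginal / marginal.sum()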

If you have implemented it correctly, your marginals for the small Bayesian network in Figure 1, conditioned on the provided evidence, will match the expected outputs included with the test cases.

7. SUM-PRODUCT ALGORITHM

Notice that even for our small graph in Figure 1, the joint distribution has 3 × 4 × 2 × 3 × 3 = 216 different assignments. This does not scale! Standard variable elimination reduces this computational complexity, but requires a separate run for every variable whose marginal we want to compute.

An efficient solution is the belief propagation (or sum-product) algorithm, which computes all single-node marginals of a tree-like graph in a single run by "reusing" messages. Recall that the sum-product algorithm consists of the following phases:

  • Apply evidence
  • Send messages inward towards the root
  • Send messages outward towards the leaves
  • Compute the marginal of the relevant variables

The belief propagation algorithm requires you to traverse the graph. For this purpose, we use a NetworkX graph. We have provided a function generate_graph_from_factors() that converts a list of factors into a NetworkX graph.

NetworkX provides useful functions for easy graph traversal. For example, the neighbours of variable i can be accessed through:

>>> graph.neighbors(i)

For convenience, unary and pairwise factors are also stored as node and edge attributes. To access the unary factor of variable i, read the corresponding node attribute; similarly, the pairwise factor of an edge is stored on that edge (see generate_graph_from_factors() for the exact attribute names).
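To illustrate, here is a minimal recursive "collect" (inward) pass over a NetworkX tree, reusing the toy Factor and factor_product from the Section 5 sketch. The 'factor' node and edge attribute keys below are assumptions made for this sketch; check generate_graph_from_factors() for the attribute names the assignment actually uses.

from functools import reduce

def sum_out(f, var):
    """Marginalise a single variable out of a toy factor."""
    axis = f.var.index(var)
    return Factor([v for v in f.var if v != var],
                  [c for v, c in zip(f.var, f.card) if v != var],
                  f.val.sum(axis=axis))

def collect(graph, j, i, messages):
    """Compute the message m_{j->i}: multiply the pairwise factor on edge
    (j, i), j's unary factor (if any) and all messages flowing into j from
    its other neighbours, then sum out j. Cached for the outward pass."""
    incoming = [collect(graph, k, j, messages)
                for k in graph.neighbors(j) if k != i]
    local = [graph.edges[j, i]['factor']]           # pairwise factor (assumed key)
    if graph.nodes[j].get('factor') is not None:    # unary factor (assumed key)
        local.append(graph.nodes[j]['factor'])
    msg = sum_out(reduce(factor_product, local + incoming), j)
    messages[(j, i)] = msg
    return msg

The outward pass mirrors this from the root back towards the leaves; the marginal of a variable is then the normalised product of its unary factor and all of its incoming messages.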

8. MAP INFERENCE USING MAX-PRODUCT

In the final part, you will write code to compute the maximum a posteriori (MAP) configuration and its probability. To avoid underflow, we will work in log-space for this assignment; otherwise, the probability of any single assignment in a large enough network is typically low enough to cause numerical issues. You should therefore take the logarithm of all probabilities before message passing and perform all operations in log-space. That is, max-product is actually implemented as max-sum.
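A quick numerical illustration of why we work in log-space:

import numpy as np

p = np.full(400, 0.1)     # a product of 400 probabilities of 0.1
print(p.prod())           # 0.0 -- underflows float64 (true value is 1e-400)
print(np.log(p).sum())    # about -921.03, perfectly representable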

To facilitate this, first implement the relevant factor operations for log-space:

  • factor_sum(): Analogous to factor_product(), but for log space.
  • factor_max_marginalize(): This function should max over the indicated variable(s) and return the resulting factor. You should also keep track of the maximising values in the field factor.val_argmax.
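For intuition, here is a sketch of maxing out a single variable in log-space while recording the argmax, again on the toy factor from the Section 5 sketch; the val_argmax attribute below simply mirrors the role of the assignment's factor.val_argmax.

def factor_max_marginalize(f, var):
    """Max out `var`, recording, for every remaining assignment, the value
    of `var` that attained the maximum (needed to backtrack the MAP)."""
    axis = f.var.index(var)
    out = Factor([v for v in f.var if v != var],
                 [c for v, c in zip(f.var, f.card) if v != var],
                 f.val.max(axis=axis))
    out.val_argmax = f.val.argmax(axis=axis)   # maximising value of `var`
    return out

def to_log(f):
    """Convert a probability factor to log-space; log(0) = -inf is harmless
    under max, so the divide-by-zero warning is silenced."""
    with np.errstate(divide='ignore'):
        return Factor(f.var, f.card, np.log(f.val))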

Now you are ready to complete the code for map_eliminate(). The function should take in a list of factors and, optionally, evidence, and compute the most probable joint assignment together with its log probability.
