
CS 6601 Assignment 3: Bayes Nets

In this assignment, you will work with probabilistic models known as Bayesian networks to efficiently calculate the answer to probability questions concerning discrete random variables.

Resources

You will find the following resources helpful for this assignment.

Canvas Videos:
Lecture 5 on Probability
Lecture 6 on Bayes Nets

Textbook:
4th edition:
Chapter 12: Quantifying Uncertainty
Chapter 13: Probabilistic Reasoning

3rd edition:
Chapter 13: Quantifying Uncertainty
Chapter 14: Probabilistic Reasoning

Setup

  1. Clone the project repository from GitHub

Substitute your actual username where the angle brackets are.

  2. Navigate to the assignment_3/ directory

  3. Activate the environment you created during Assignment 0

    conda activate ai_env

    If you used a different environment name, you can list all the environments on your machine by running conda env list.

  4. Run the following commands in the command line to install and update the required packages

    pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
    pip install --upgrade -r requirements.txt

Submission

Please include all of your own code for submission in submission.py.

Important: There is a TOTAL submission limit of 5 on Gradescope for this assignment. This means you can submit a maximum of 5 times during the duration of the assignment. Please use your submissions carefully and do not submit until you have thoroughly tested your code locally.

If you're at 4 submissions, use your fifth and last submission wisely. The submission marked as ‘Active’ in Gradescope will be the submission counted towards your grade.

Restrictions

You are not allowed to use the following modules from the pgmpy library:

  • pgmpy.sampling.*
  • pgmpy.factor.*
  • pgmpy.estimators.*

Part 1: Bayesian network tutorial

[35 points total]

To start, design a basic probabilistic model for the following system:

James Bond, Q (Quartermaster), and M (Head of MI6) are in charge of the security system at MI6, the Military Intelligence Unit of the British Secret Service. MI6 runs a special program called “Double-0”, where secret spy agents are trained and deployed to gather information concerning national security. A terrorist organization named “Spectre” is planning an espionage mission and its aim is to gain access to the secret “Double-0” files stored in the MI6 database. Q has designed a special security system to protect the secret “Double-0” files. In order to gain access to these files, Spectre needs to steal from MI6 a cipher and the key to crack this cipher. Q stores this cipher in his personal database, which is guarded by heavy security protocols. The key to cracking the cipher is known only to M, who is protected by Bond.

1a: Casting the net

[10 points]

Thus, Spectre can carry out their mission by performing the following steps:

  • Hire professional hackers who can write programs to launch a cyberattack on Q’s personal database.
  • Buy a state-of-the-art computer called “Contra” to actually launch this cyberattack.
  • Hire ruthless mercenaries to kidnap M and get access to the key.
  • Make sure Bond is not available with M at the time of the kidnapping.
  • Use the cipher and key to access the target “Double-0” files.

Sensing the imminent danger, MI6 has hired you to design a Bayes Network for modeling this espionage mission, so that it can be avoided. MI6 requires that you use the following name attributes for the nodes in your Bayes Network:

  • “H”: The event that Spectre hires professional hackers
  • “C”: The event that Spectre buys Contra
  • “M”: The event that Spectre hires mercenaries
  • “B”: The event that Bond is guarding M at the time of the kidnapping
  • “Q”: The event that Q’s database is hacked and the cipher is compromised
  • “K”: The event that M gets kidnapped and has to give away the key
  • “D”: The event that Spectre succeeds in obtaining the “Double-0” files

Based on their previous encounters with Spectre, MI6 has provided the following classified information that can help you design your Bayes Network:

  • Spectre will not be able to find and hire skilled professional hackers (call this false) with a probability of 0.5.
  • Spectre will get their hands on Contra (call this true) with a probability of 0.3.
  • Spectre will be unable to hire the mercenaries (call this false) with a probability of 0.2.
  • Since Bond is also assigned to another mission, the probability that he will be protecting M at a given moment (call this true) is just 0.5!
  • The professional hackers will be able to crack Q’s personal database (call this true) without using Contra with a probability of 0.55. However, if they get their hands on Contra, they can crack Q’s personal database with a probability of 0.9. In case Spectre can not hire these professional hackers, their less experienced employees will launch a cyberattack on Q’s personal database. In this case, Q’s database will remain secure with a probability of 0.75 if Spectre has Contra and with a probability of 0.95 if Spectre does not have Contra.
  • When Bond is protecting M, the probability that M stays safe (call this false) is 0.85 if mercenaries conduct the attack. When mercenaries are not present, the probability that M stays safe is as high as 0.99! However, if M is not accompanied by Bond, M gets kidnapped with a probability of 0.95 with mercenaries present and 0.75 without them.
  • With both the cipher and the key, Spectre can access the “Double-0” files (call this true) with a probability of 0.99! If Spectre has none of these, then this probability drops down to 0.02! In case Spectre has just the cipher, the probability that the “Double-0” files remain uncompromised is 0.4. On the other hand, if Spectre has just the key, then this probability changes to 0.65.
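Before touching pgmpy, it can help to transcribe one of the bullets above into a plain lookup table as a sanity check on your reading of it. The sketch below (the variable name is illustrative, not part of the assignment API) encodes the P(Q = true | H, C) bullet; note that the "no hackers" cases invert the stated "database remains secure" probabilities:

```python
# Hypothetical transcription of the P(Q = true | H, C) bullet.
# Keys are (H, C); the "no hackers" rows invert the stated "stays secure" numbers.
p_q_true = {
    (True, True):   0.90,        # hackers with Contra crack the database w.p. 0.9
    (True, False):  0.55,        # hackers without Contra
    (False, True):  1.0 - 0.75,  # no hackers, with Contra: secure w.p. 0.75
    (False, False): 1.0 - 0.95,  # no hackers, no Contra: secure w.p. 0.95
}
```

Each value here would feed one column of the eventual TabularCPD, with the matching P(Q = false) entry being its complement.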

Use the description of the model above to design a Bayesian network for this model. Use the pgmpy package to represent the nodes and the conditional probability arcs connecting them. Don't worry about the probabilities for now. Use the functions below to create the net. You will write your code in submission.py.

Fill in the function make_security_system_net()

The following commands will create a BayesNet instance and add a node named "node_name":

BayesNet = BayesianModel()
BayesNet.add_node("node_name")

You will use BayesNet.add_edge(<parent node name>, <child node name>) to connect nodes. For example, to connect the parent and child nodes that you've already made (i.e. assuming that the parent affects the child's probability):

BayesNet.add_edge("parent","child")

After you have implemented make_security_system_net(), you can run the following test in the command line to make sure your network is set up correctly.

python probability_tests.py ProbabilityTests.test_network_setup

1b: Setting the probabilities

[15 points]

Now set the conditional probabilities for the necessary variables on the network you just built.

Fill in the function set_probability()

Using pgmpy's factors.discrete.TabularCPD class: if you wanted to set the distribution for node 'A', which has two possible values, so that P(A) is 70% true and 30% false, you would invoke the following commands:

cpd_a = TabularCPD('A', 2, values=[[0.3], [0.7]])

NOTE: Use index 0 to represent FALSE and index 1 to represent TRUE, or you may run into testing issues.

If you wanted to set the distribution for P(G|A) to be

A P(G=true given A)
T 0.75
F 0.85

you would invoke:

cpd_ga = TabularCPD('G', 2, values=[[0.15, 0.25], \
                    [ 0.85, 0.75]], evidence=['A'], evidence_card=[2])

Reference for the function: https://pgmpy.org/_modules/pgmpy/factors/discrete/CPD.html

Modeling a three-variable relationship is a bit trickier. If you wanted to set the distribution for P(T|A,G) to be

A G P(T=true given A and G)
T T 0.15
T F 0.6
F T 0.2
F F 0.1

you would invoke

cpd_tag = TabularCPD('T', 2, values=[[0.9, 0.8, 0.4, 0.85], \
                    [0.1, 0.2, 0.6, 0.15]], evidence=['A', 'G'], evidence_card=[2, 2])

The key is to remember that the first row represents P(T = false) and the second row represents P(T = true).

Add the tabular conditional probability distributions to the Bayesian model instance with the following command.

bayes_net.add_cpds(cpd_a, cpd_ga, cpd_tag)

You can check your probability distributions in the command line with

python probability_tests.py ProbabilityTests.test_probability_setup

1c: Probability calculations: Perform inference

[10 points]

To finish up, you're going to perform inference on the network to calculate the following probabilities:

  • What is the marginal probability that the “Double-0” files get compromised?
  • You just received an update that the British Elite Forces have successfully secured and shut down Contra, making it unavailable for Spectre. Now, what is the conditional probability that the “Double-0” files get compromised?
  • Despite shutting down Contra, MI6 still believes that an attack is imminent. Thus, Bond is reassigned full-time to protect M. Given this new update and Contra still shut down, what is the conditional probability that the “Double-0” files get compromised?

You'll fill out the "get_prob" functions to calculate the probabilities:

  • get_marginal_double0()
  • get_conditional_double0_given_no_contra()
  • get_conditional_double0_given_no_contra_and_bond_guarding()

Here's an example of how to do inference for the marginal probability of the "A" node being True (assuming bayes_net is your network):

solver = VariableElimination(bayes_net)
marginal_prob = solver.query(variables=['A'], joint=False)
prob = marginal_prob['A'].values

To compute the conditional probability, set the evidence variables before computing the marginal as seen below (here we're computing P('A' = false | 'B' = true, 'C' = False)):

solver = VariableElimination(bayes_net)
conditional_prob = solver.query(variables=['A'],evidence={'B':1,'C':0}, joint=False)
prob = conditional_prob['A'].values

NOTE: marginal_prob and conditional_prob return two probabilities, corresponding to the [False, True] cases. You must index into the correct position in prob to obtain the particular probability value you are looking for.

If you need to sanity-check to make sure you're doing inference correctly, you can run inference on one of the probabilities that we gave you in 1a. For instance, running inference on P(M=false) should return 0.20 (i.e. 20%). However, due to imprecision on some machines it could appear as 0.199xx. You can also calculate the answers by hand to double-check.

Part 2: Sampling

[65 points total]

For the main exercise, consider the following scenario.

There are three frisbee teams who play each other: the Airheads, the Buffoons, and the Clods (A, B and C for short). Each match is between two teams, and each team can either win, lose, or draw in a match. Each team has a fixed but unknown skill level, represented as an integer from 0 to 3. The outcome of each match is probabilistically proportional to the difference in skill level between the teams.

Sampling is a method for ESTIMATING a probability distribution when it is prohibitively expensive (even for inference!) to completely compute the distribution.

Here, we want to estimate the outcome of the matches, given prior knowledge of previous matches. Rather than using inference, we will do so by sampling the network using two Markov Chain Monte Carlo (MCMC) methods: Gibbs sampling (2c) and Metropolis-Hastings (2d).

2a: Build the network.

[10 points]

For the first sub-part, consider a network with 3 teams: the Airheads, the Buffoons, and the Clods (A, B and C for short). 3 total matches are played. Build a Bayes Net to represent the three teams and their influences on the match outcomes.

Fill in the function get_game_network()

Assume the following variable conventions:

variable name  description
A              A's skill level
B              B's skill level
C              C's skill level
AvB            the outcome of A vs. B (0 = A wins, 1 = B wins, 2 = tie)
BvC            the outcome of B vs. C (0 = B wins, 1 = C wins, 2 = tie)
CvA            the outcome of C vs. A (0 = C wins, 1 = A wins, 2 = tie)

Use the following name attributes:

  • "A"
  • "B"
  • "C"
  • "AvB"
  • "BvC"
  • "CvA"

Assume that each team has the following prior distribution of skill levels:

skill level P(skill level)
0 0.15
1 0.45
2 0.30
3 0.10

In addition, assume that the differences in skill levels correspond to the following probabilities of winning:

skill difference (T2 - T1)  T1 wins  T2 wins  Tie
0                           0.10     0.10     0.80
1                           0.20     0.60     0.20
2                           0.15     0.75     0.10
3                           0.05     0.90     0.05
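Since the table only lists non-negative differences, the mirrored cases (T2 - T1 < 0) follow by swapping the two win probabilities. One possible way to expand the table into the 3 x 16 values matrix that a TabularCPD over a match node expects is sketched below (the helper names are illustrative, not required by the assignment):

```python
# (T2 - T1) -> (P(T1 wins), P(T2 wins), P(tie)), straight from the table above.
diff_table = {
    0: (0.10, 0.10, 0.80),
    1: (0.20, 0.60, 0.20),
    2: (0.15, 0.75, 0.10),
    3: (0.05, 0.90, 0.05),
}

def match_column(t1_skill, t2_skill):
    """Return [P(T1 wins), P(T2 wins), P(tie)] for one skill pairing."""
    d = t2_skill - t1_skill
    if d >= 0:
        return list(diff_table[d])
    # Negative difference: mirror the table by swapping the win probabilities.
    p1, p2, tie = diff_table[-d]
    return [p2, p1, tie]

# One column per (T1, T2) pairing, iterating in evidence order [T1, T2].
columns = [match_column(a, b) for a in range(4) for b in range(4)]
# Transpose into the row-major shape TabularCPD expects: 3 rows x 16 columns.
values = [[col[i] for col in columns] for i in range(3)]
```

Each column sums to 1, which is a useful invariant to assert while debugging your CPDs.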

You can check your network implementation in the command line with

python probability_tests.py ProbabilityTests.test_games_network

2b: Calculate posterior distribution for the 3rd match.

[5 points]

Suppose that you know the following outcome of two of the three games: A beats B and A draws with C. Calculate the posterior distribution for the outcome of the BvC match in calculate_posterior().

Use the provided VariableElimination to perform inference.

You can check your posteriors in the command line with

python probability_tests.py ProbabilityTests.test_posterior

NOTE: In the following sections, we'll be arriving at the same values by using sampling.

NOTE: pgmpy's VariableElimination may sometimes produce incorrect posterior probability distributions. While this doesn't have an impact on this assignment, we discourage using it beyond the scope of this assignment.

Hints regarding sampling for Parts 2c, 2d, and 2e

Hint 1: In both Metropolis-Hastings and Gibbs sampling, you'll need access to each node and its probability distribution. You can access these by calling:

A_cpd = bayes_net.get_cpds('A')      
team_table = A_cpd.values
AvB_cpd = bayes_net.get_cpds("AvB")
match_table = AvB_cpd.values

Hint 2: While performing sampling, you will have to generate your initial sample by sampling uniformly at random an outcome for each non-evidence variable and by keeping the outcome of your evidence variables (AvB and CvA) fixed.

Hint 3: You'll also want to use the random package, e.g. random.randint() or random.choice(), for the probabilistic choices that sampling makes.

Hint 4: In order to count the sample states later on, you'll want to make sure the sample that you return is hashable. One way to do this is by returning the sample as a tuple.
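Putting Hints 2 through 4 together, generating an initial state might look like the sketch below. The variable ordering and evidence values here are illustrative assumptions (AvB = 0 encoding "A beats B" and CvA = 2 encoding the draw), not a required convention:

```python
import random

def initial_sample(avb=0, cva=2):
    """Uniform initial state with the evidence variables AvB and CvA held fixed."""
    a, b, c = (random.randint(0, 3) for _ in range(3))  # skills drawn from {0,...,3}
    bvc = random.randint(0, 2)                          # non-evidence match outcome
    # Returned as a tuple so the sample is hashable (Hint 4).
    return (a, b, c, avb, bvc, cva)
```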

2c: Gibbs sampling

[15 points]

Implement the Gibbs sampling algorithm, which is a special case of Metropolis-Hastings. You'll do this in Gibbs_sampler(), which takes a Bayesian network and an initial state value as parameters and returns a sample state drawn from the network's distribution. In the case of Gibbs, the returned state differs from the input state in at most one variable (randomly chosen).

The method should just consist of a single iteration of the algorithm. If an initial value is not given (the initial state is None or an empty list), default to a state chosen uniformly at random from the possible states.

Note: DO NOT USE the given inference engines or pgmpy samplers to run the sampling method, since the whole point of sampling is to calculate marginals without running inference.

 "YOU WILL SCORE 0 POINTS ON THIS ASSIGNMENT IF YOU USE THE GIVEN INFERENCE ENGINES FOR THIS PART"

You may find this helpful in understanding the basics of Gibbs sampling over Bayesian networks.
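The core move in a Gibbs step is drawing the chosen variable from its conditional distribution given everything else. A minimal sketch of just that draw, assuming you have already multiplied out the relevant (unnormalized) conditional weights from the CPD tables:

```python
import random

def draw_index(weights):
    """Sample an index in proportion to a list of unnormalized weights."""
    total = sum(weights)
    r = random.random() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

Inside Gibbs_sampler(), the weights for the resampled variable would come from multiplying together the entries of every CPD that mentions it, with all other variables held at their current values.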

2d: Metropolis-Hastings sampling

[15 points]

Now you will implement the independent Metropolis-Hastings sampling algorithm in MH_sampler(), which is another method for estimating a probability distribution. The general idea of MH is to build an approximation of a latent probability distribution by repeatedly generating a "candidate" value for each sample vector comprising the random variables in the system, and then probabilistically accepting or rejecting the candidate value based on an underlying acceptance function. Unlike Gibbs, in the case of MH, the returned state can differ from the initial state in more than one variable. This slide deck provides a nice intro.

This method should just perform a single iteration of the algorithm. If an initial value is not given, default to a state chosen uniformly at random from the possible states.

Note: DO NOT USE the given inference engines to run the sampling method, since the whole point of sampling is to calculate marginals without running inference.

 "YOU WILL SCORE 0 POINTS IF YOU USE THE PROVIDED INFERENCE ENGINES, OR ANY OTHER SAMPLING METHOD"
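For the independent MH variant described above, the acceptance function reduces to comparing the joint probabilities of the candidate and current states. A hedged sketch of just that accept/reject step (computing each joint probability from the CPDs is up to you; the function name is illustrative):

```python
import random

def mh_accept(p_candidate, p_current):
    """Accept the candidate with probability min(1, p_candidate / p_current)."""
    if p_current == 0.0:
        return True  # anything beats an impossible current state
    return random.random() < min(1.0, p_candidate / p_current)
```

Note that a candidate at least as probable as the current state is always accepted, since the ratio is capped at 1 and random.random() is always below 1.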

2e: Comparing sampling methods

[19 points]

Now we are ready for the moment of truth.

Given the same outcomes as in 2b, A beats B and A draws with C, you should now estimate the likelihood of different outcomes for the third match by running Gibbs sampling until it converges to a stationary distribution. We'll say that the sampler has converged when, for "N" successive iterations, the estimated outcome for the 3rd match differs from the previous estimate by less than "delta". N is a positive integer and delta lies in (0,1). For the most stationary convergence, delta should be very small. N could typically take values like 10, 20, ..., 100 or even more.

Use the functions from 2c and 2d to measure how many iterations it takes for Gibbs and MH to converge to a stationary distribution over the posterior. See for yourself how close (or not) this stable distribution is to what the inference engine returned in 2b. If it differs, try tuning those parameters (N and delta). (You might find the concept of a "burn-in" period useful.)

You can choose any N and delta (with the bounds above), as long as the convergence criterion is eventually met. For the purpose of this assignment, we'd recommend using a delta approximately equal to 0.001 and N at least as big as 10.
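The convergence test described above (N successive estimates each moving by less than delta) can be phrased as a small helper; the names here are illustrative:

```python
def has_converged(history, n=10, delta=0.001):
    """True once the last n successive estimates each changed by less than delta."""
    if len(history) < n + 1:
        return False
    recent = history[-(n + 1):]
    return all(abs(recent[i + 1] - recent[i]) < delta for i in range(n))
```

Here history would hold the running estimate of the BvC posterior (say, the frequency of one outcome) after each sampling iteration; a burn-in period can be handled by simply not recording the first batch of samples.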

Repeat this experiment for Metropolis-Hastings sampling.

Fill in the function compare_sampling() to perform your experiments.

Which algorithm converges more quickly? By approximately what factor? For instance, if Metropolis-Hastings takes twice as many iterations to converge as Gibbs sampling, you'd say that Gibbs converged faster by a factor of 2. Fill in sampling_question() to answer both parts.

2f: Return your name

[1 point]

A simple task to wind down the assignment. Return your name from the function aptly called return_your_name().
