
COMP 7607: Natural Language Processing - Assignment 1: N-gram Language Model



Zoie comes from Beijing, and she is learning English. She sometimes finds it difficult to decide which preposition to use. For example, should she say “they live in a small town” or “they live at a small town”? One day she visited HKU and came across a student who happens to be taking COMP7607 this semester.

“You must be an expert in language!” said Zoie.

“We learned about language models recently!” said the student with excitement.

“That’s nice! Could you please write a program for me that tells me which preposition I should use in a sentence?” asked Zoie, her eyes shining like stars.

“Sure! Anything!” said the student.

And here comes Assignment 1.

Grade: Your grade will be based on code quality, output results/logs, and discussion. We do not evaluate your submission by its perplexity in Section 2, but better prediction accuracy in Section 3 increases your score. There are some optional problems in this assignment, and solving them increases your grade. Please don’t waste your time manually searching for hyperparameters; you are more than welcome to introduce a hyperparameter optimization method to find more reasonable values.

Submit: You should submit two files to Moodle: a UniversityNumber.ipynb file and the final prediction file UniversityNumber.test.out from Section 3. All code and discussion should be included in your submitted .ipynb file. Make sure your code does not depend on your local files so that the results are reproducible. Before submitting, please run your notebook and keep all running logs so that we can check the results.

1 N-gram Language Model

A language model is a probability function p that assigns probabilities to word sequences such as w = (i, love, hong, kong). Let’s envision p(w) as the probability that, if you were to tune in to the radio at any random moment, the following four words would be “i love hong kong”. This probability can be evaluated even in the context of a longer sentence, such as “the students of the COMP7607 course say, i love hong kong more than ever.” Often, we evaluate p(w) to compare it with alternative sequences and determine our preference.

Formally, in the context of language modeling, each element W belonging to the event space E represents a potential value of the infinite sequence of words that would be emitted from the radio once it is turned on. The notation p(w) serves as a shorthand for p(prefix(W, |w|) = w), where |w| represents the length of the sequence w. Consequently, p(i, love, hong, kong) encapsulates the cumulative probability of all infinite word sequences W that commence with the phrase “i love hong kong...”.

Suppose $w = w_1 w_2 \dots w_n$ (a sequence of $n$ words). A trigram language model defines

$$p(w) \stackrel{\text{def}}{=} p(w_1) \cdot p(w_2 \mid w_1) \cdot p(w_3 \mid w_1 w_2) \cdot p(w_4 \mid w_2 w_3) \cdots p(w_n \mid w_{n-2} w_{n-1})$$

on the assumption that the sequence was generated in the order $w_1, w_2, w_3, \dots$ (“from left to right”) with each word chosen in a way dependent on the previous two words. (But the first word $w_1$ is not dependent on anything, since we turned on the radio at an arbitrary moment.)
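For concreteness, applying this definition to the four-word example above factors the probability as:

$$p(\text{i, love, hong, kong}) = p(\text{i}) \cdot p(\text{love} \mid \text{i}) \cdot p(\text{hong} \mid \text{i love}) \cdot p(\text{kong} \mid \text{love hong})$$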

Q1: Expand the above definition of p(w) using naive estimates of the parameters, such as

$$p(w_4 \mid w_2, w_3) \stackrel{\text{def}}{=} \frac{C(w_2 w_3 w_4)}{C(w_2 w_3)}$$

where $C(w_2 w_3 w_4)$ denotes the count of times the trigram $w_2 w_3 w_4$ was observed in a training corpus.

Note: In practice, we refer to simple parameter estimates like these as "maximum-likelihood estimates" (MLE). The advantage of MLE is that it maximizes the probability, or equivalently minimizes the perplexity, of the training data. However, such estimates tend to perform poorly on test data.

Q2: One could also define a kind of reversed trigram language model $p_{\text{reversed}}$ that instead assumes the words were generated in reverse order (“from right to left”):

$$p_{\text{reversed}}(w) \stackrel{\text{def}}{=} p(w_n) \cdot p(w_{n-1} \mid w_n) \cdot p(w_{n-2} \mid w_{n-1} w_n) \cdot p(w_{n-3} \mid w_{n-2} w_{n-1}) \cdots p(w_2 \mid w_3 w_4) \cdot p(w_1 \mid w_2 w_3)$$

By manipulating the notation, show that the two models are identical (i.e., $p(w) = p_{\text{reversed}}(w)$) for any $w$, provided that both models use MLE parameters estimated from the same training data (see Q1 above).

2 N-gram Language Model Implementation

In this section, you will build your own language model. We provide a dataset with three files containing many whitespace-tokenized sentences:

train.txt: data for training your language model.
dev.txt: data for developing and optimizing your language model.

test.txt: data for only evaluating your language model.

You will first build a vocabulary with the training data, and then build your own unigram, bigram, and trigram language models with some smoothing methods.

2.1 Building Vocabulary

You will download and preprocess the tokenized training data to build the vocabulary. To handle out-of-vocabulary (OOV) words, you will convert tokens that occur fewer than three times in the training data into a special unknown token <UNK>. You should also add start-of-sentence <s> and end-of-sentence </s> tokens.
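A minimal sketch of this preprocessing is given below. The helper names (build_vocab, preprocess) and the choice to prepend n − 1 start tokens for an n-gram model are illustrative assumptions, not requirements:

```python
from collections import Counter

UNK, BOS, EOS = "<UNK>", "<s>", "</s>"

def build_vocab(path="train.txt", min_count=3):
    """Keep tokens seen at least min_count times; rarer ones map to <UNK>."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    vocab = {w for w, c in counts.items() if c >= min_count}
    vocab.update({UNK, BOS, EOS})
    return vocab

def preprocess(path, vocab, n=2):
    """Yield sentences with <UNK> replacement and sentence-boundary tokens."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = [w if w in vocab else UNK for w in line.split()]
            yield [BOS] * (n - 1) + tokens + [EOS]
```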

Please show the vocabulary size and discuss the number of parameters of n-gram models.

2.2 N-gram Language Modeling

After preparing your vocabulary, you are expected to build unigram and bigram language models and report their perplexity on the training set and the dev set. Please discuss your experimental results. If you encounter any problems, please analyze them and explain why.
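One possible shape for the unsmoothed (MLE) model and the perplexity computation is sketched below, reusing preprocess from above; all names are illustrative. Note that any n-gram unseen in training gets probability zero here, so the logarithm fails on the dev set — this is one of the problems the paragraph above invites you to analyze:

```python
import math
from collections import Counter

def train_mle(sentences, n):
    """Collect n-gram counts and their (n-1)-gram context counts."""
    ngrams, contexts = Counter(), Counter()
    for sent in sentences:
        for i in range(len(sent) - n + 1):
            gram = tuple(sent[i:i + n])
            ngrams[gram] += 1
            contexts[gram[:-1]] += 1
    return ngrams, contexts

def perplexity(sentences, ngrams, contexts, n):
    """exp of the average negative log-probability per predicted token."""
    log_prob, num_tokens = 0.0, 0
    for sent in sentences:
        for i in range(n - 1, len(sent)):
            gram = tuple(sent[i - n + 1:i + 1])
            # MLE estimate; zero for unseen n-grams, so log(0) raises an error.
            p = ngrams[gram] / contexts[gram[:-1]]
            log_prob += math.log(p)
            num_tokens += 1
    return math.exp(-log_prob / num_tokens)
```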

2.3 Smoothing

In this section, you will implement two smoothing methods to improve your language models.

2.3.1 Add-one (Laplace) smoothing

Please improve your bigram language model with add-one smoothing. Report its perplexity on the training set and dev set. Briefly discuss the differences in results between this and the bigram model you built in Section 2.2.
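With the counts from the sketch above, add-one smoothing is a small change to the probability estimate; V is the vocabulary size, and the function name is an illustrative assumption:

```python
def prob_add_one(bigram, ngrams, contexts, V):
    """Laplace-smoothed bigram probability: every bigram gets a pseudo-count of 1."""
    return (ngrams[bigram] + 1) / (contexts[bigram[:-1]] + V)
```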

2.3.2 Add-k smoothing

One alternative to add-one smoothing is to move a bit less of the probability mass from the seen to the unseen events. Instead of adding 1 to each count, we add a fractional count k (.5? .05? .01?). This algorithm is therefore called add-k smoothing, as shown here in the bigram case:

$$P_{\text{Add-}k}(w_i \mid w_{i-1}) = \frac{C(w_{i-1} w_i) + k}{C(w_{i-1}) + kV} \tag{3}$$

Please optimize the perplexity on the dev set by trying different values of k (no more than three). Report the resulting perplexity on the training set and dev set. Briefly discuss the differences in results between this and the bigram model with add-one smoothing.
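A sketch, again with illustrative names: generalize the Laplace estimate to a fractional k (k = 1 recovers add-one) and compare at most three candidate values on the dev set:

```python
def prob_add_k(bigram, ngrams, contexts, V, k):
    """Add-k smoothed bigram probability, as in Eq. (3)."""
    return (ngrams[bigram] + k) / (contexts[bigram[:-1]] + k * V)

def pick_k(candidates, dev_perplexity):
    """Return the k with the lowest dev-set perplexity.

    dev_perplexity is a callable k -> perplexity that you would implement by
    plugging prob_add_k into the perplexity computation sketched earlier.
    """
    return min(candidates, key=dev_perplexity)

# e.g. best_k = pick_k([0.5, 0.05, 0.01], dev_perplexity)
```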

2.3.3 Linear Interpolation

Please implement linear interpolation smoothing between unigram, bigram, and trigram models. Report its perplexity on the training set and dev set. Please optimize on the dev set by trying different hyperparameter sets. Finally, report the perplexity on the test set with the best hyperparameter set you find. Briefly discuss the results.
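A sketch of the interpolated estimate, assuming unigram/bigram/trigram counts are Counter objects keyed by tuples as in the earlier sketches; the interpolation weights are hyperparameters that must sum to 1:

```python
def interp_prob(trigram, uni, bi, tri, lambdas, total_tokens):
    """p(w3 | w1 w2) = l1*p_uni(w3) + l2*p_bi(w3|w2) + l3*p_tri(w3|w1 w2)."""
    w1, w2, w3 = trigram
    l1, l2, l3 = lambdas  # l1 + l2 + l3 == 1
    p_uni = uni[(w3,)] / total_tokens
    p_bi = bi[(w2, w3)] / uni[(w2,)] if uni[(w2,)] else 0.0
    p_tri = tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

# e.g. p = interp_prob(("i", "love", "hong"), uni, bi, tri, (0.1, 0.3, 0.6), N)
```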

Optimization. So far, we have manually chosen the hyperparameters that minimize the perplexity on the dev set and evaluated them on the test set. There are various ways to find this optimal set of hyperparameters. Do you know any other learning algorithms? Please give an example.


3 Preposition Prediction

In this section, you will use your language model to decide which preposition to use in a sentence.

dev.in: The whitespace-tokenized sentences for developing your language model, where 5 prepositions (at, in, of, for, on) are replaced with a special token <PREP>.

dev.out: The correct answers for the dev set.
test.in: Input sentences for testing in the same format as dev.in, which will be released a week before the deadline.

Your task is to use your language model to predict, for each special token <PREP> in the validation (and later the test) set, which preposition is the proper one there. You should optimize the accuracy of your model on the dev set and predict the prepositions of the sentences in the test set. Please strictly follow the format of dev.out for your output test.out as well.
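One simple strategy (a sketch, not the required method): for each <PREP>, score the sentence with each candidate preposition under your best language model from Section 2 and keep the argmax. sentence_logprob is a hypothetical callable you would supply:

```python
PREPS = ["at", "in", "of", "for", "on"]

def fill_preps(tokens, sentence_logprob):
    """Replace each <PREP> with the candidate that maximizes the sentence score.

    sentence_logprob: callable list-of-tokens -> float, e.g. the sum of log
    interpolated trigram probabilities from your tuned model.
    """
    out = list(tokens)
    for i, tok in enumerate(out):
        if tok == "<PREP>":
            out[i] = max(PREPS, key=lambda p: sentence_logprob(out[:i] + [p] + out[i + 1:]))
    return out
```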

Please explain your method, report the accuracy of your model on the dev set, and discuss the results. Based on your hyperparameters and results, explain the relationship between a model's test perplexity and its performance in downstream applications.

Go for it, Zoie. The journey of language modeling is about to start!
