A collection of test questions, with answers and explanations, for the Natural Language Processing course at moodle.iitdh.ac.in.
Given a sentence with k tokens, how many n-grams with frequency greater than zero can be obtained from the sentence, where n is an arbitrary natural number?
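A small sketch of the counting argument, assuming all tokens in the sentence are distinct so that every contiguous span is a distinct n-gram: for each n there are k − n + 1 n-grams, and summing over n = 1 … k gives k(k + 1)/2. The token list below is my own example.

```python
# Count every n-gram that actually occurs in a k-token sentence.
tokens = ["I", "love", "natural", "language", "processing"]  # k = 5 (illustrative)
k = len(tokens)

total = 0
for n in range(1, k + 1):
    # There are k - n + 1 contiguous n-grams of length n.
    total += len([tuple(tokens[i:i + n]) for i in range(k - n + 1)])

print(total, k * (k + 1) // 2)  # 15 15
```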
You are estimating the probability of the bigram {“A”, “B”}, where “B” and “A” are tokens and “B” follows “A”. The frequency of the bigram {“A”, “B”} in the corpus is 999, and “A” appears 2000 times in the corpus. You apply Laplace smoothing, and there are 10,000 unique tokens in the corpus. What is the resulting estimate of the bigram {“A”, “B”}?
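The add-one (Laplace) smoothing formula is P(B | A) = (count(A, B) + 1) / (count(A) + V), where V is the vocabulary size. Plugging in the numbers from the question:

```python
count_ab = 999       # frequency of the bigram {"A", "B"}
count_a = 2000       # frequency of "A"
vocab_size = 10_000  # unique tokens in the corpus

p = (count_ab + 1) / (count_a + vocab_size)
print(p)  # 1000 / 12000 ≈ 0.0833
```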
Consider the dynamic-programming solution for finding the minimum edit distance between two strings of different lengths. Let the substitution cost be S, the insertion cost be I, and the deletion cost be D, and let the element in the r-th row and c-th column of the DP table be denoted (r, c). After filling in r rows of the DP table, we attempt to fill in the c-th column of the (r + 1)-th row. It is observed that (r, c) = 2, (r + 1, c − 1) = 3, and (r, c − 1) = 4. Furthermore, the characters corresponding to cell (r + 1, c) are not equal. Deduce the entry at (r + 1, c) from the above information.

Consider three language models A, B, and C. Upon evaluating each of their performances on a test set, we observe that A obtains a perplexity score of 962, B a perplexity score of 170, and C a perplexity score of 109. Which of the following most likely explains the difference in performance among the three?
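For the perplexity question, recall that perplexity is the inverse probability of the test set normalized by its length, PP(W) = P(w1 … wN)^(−1/N), so a lower score means the model assigns higher probability to the test data. A minimal Python sketch; the per-token probabilities below are invented purely for illustration:

```python
import math

token_probs = [0.1, 0.2, 0.05, 0.1]  # P(w_i | history) for each test token (made up)
log_prob = sum(math.log(p) for p in token_probs)
perplexity = math.exp(-log_prob / len(token_probs))
print(perplexity)  # lower is better
```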
Considering the definition of edit distance that assigns a weight of 1 to insertions and deletions and a weight of 2 to substitutions, what is the minimum edit distance between “lead” and “deal”?
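A minimal sketch of the Wagner–Fischer dynamic program behind the two edit-distance questions above, with parameterizable costs (the function name and defaults are my own). Under the usual convention that a vertical step is a deletion, a horizontal step an insertion, and a diagonal step a substitution, the cell (r + 1, c) in the earlier question is min((r, c) + D, (r + 1, c − 1) + I, (r, c − 1) + S) when the characters differ.

```python
def min_edit_distance(a, b, ins=1, dele=1, sub=2):
    # dp[i][j] = minimum cost of converting a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * dele
    for j in range(1, len(b) + 1):
        dp[0][j] = j * ins
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # characters match: free diagonal
            else:
                dp[i][j] = min(
                    dp[i - 1][j] + dele,      # delete a[i - 1]
                    dp[i][j - 1] + ins,       # insert b[j - 1]
                    dp[i - 1][j - 1] + sub,   # substitute a[i - 1] -> b[j - 1]
                )
    return dp[len(a)][len(b)]

print(min_edit_distance("lead", "deal"))  # 4
```

With insertion/deletion cost 1 and substitution cost 2, this prints 4 for “lead” vs. “deal”.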
Consider a vocabulary consisting of k tokens. How many n-grams can you construct from the vocabulary, where n is an arbitrary natural number? The frequency of the n-gram need not be greater than zero.
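For a fixed n, an n-gram is an ordered sequence of n tokens with repetition allowed, so there are k^n of them; `itertools.product` enumerates exactly these. The toy vocabulary below is my own:

```python
from itertools import product

vocab = ["a", "b", "c"]  # k = 3 (illustrative)
n = 2

ngrams = list(product(vocab, repeat=n))  # all ordered n-token sequences
print(len(ngrams), len(vocab) ** n)      # 9 9
```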
A spam filter classifies emails as Spam (S) or Not Spam (¬S) using the Naïve Bayes algorithm. Given a dataset, the following probabilities are known:
P(S) = 0.4 (40% of emails are spam)
P(¬S) = 0.6 (60% of emails are not spam).
70% of spam emails contain "offer" and 10% of non-spam emails contain "offer".
If a new email contains the word "offer", find the probability that it is spam.
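A short sketch applying Bayes' rule to the numbers in the question (the variable names are my own): P(S | offer) = P(offer | S) P(S) / P(offer), where P(offer) is obtained by the law of total probability.

```python
p_spam = 0.4              # P(S)
p_ham = 0.6               # P(not S)
p_offer_given_spam = 0.7  # P(offer | S)
p_offer_given_ham = 0.1   # P(offer | not S)

p_offer = p_offer_given_spam * p_spam + p_offer_given_ham * p_ham
p_spam_given_offer = p_offer_given_spam * p_spam / p_offer
print(round(p_spam_given_offer, 4))  # 0.28 / 0.34 ≈ 0.8235
```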
Naive Bayes is a generative model. Let P(d | c) be the probability of observing a document d given that it belongs to class c, and let P(c) be the fraction of documents belonging to class c. What are P(d | c) and P(c) called, respectively?
Which of the following is a valid bigram from the sentence "I love NLP"?
In a certain programming language, the regular expression /\b99\b/ matches “99” and “99a” but not “299”, “$99”, or “_99”. From this information, what is included in the set of characters that form a word in that programming language?
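For comparison, here is how /\b99\b/ behaves under Python's standard word-character set [A-Za-z0-9_]. The hypothetical language in the question must define word characters differently, and the mismatches below are precisely what lets you deduce its word-character set (test strings taken from the question):

```python
import re

pattern = re.compile(r"\b99\b")
for s in ["99", "99a", "299", "$99", "_99"]:
    print(s, "->", bool(pattern.search(s)))

# Under Python's rules: 99 -> True, 99a -> False ('a' is a word char),
# 299 -> False, $99 -> True ('$' is not a word char), _99 -> False
```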