Crowdly

Natural Language Processing

Looking for Natural Language Processing test answers and solutions? Browse our comprehensive collection of verified answers for Natural Language Processing at moodle.iitdh.ac.in.


In vector semantics, the meaning of a word can change based on context. If the word "bark" is represented in two different contexts—one with "tree" and another with "dog"—what does this imply about the vector representation?
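
A brief sketch of the idea being tested (the 2-D vectors below are made-up toy values, not from any trained model): a single static embedding for "bark" has to average its tree-related and dog-related senses, whereas context-dependent representations place the two uses in different regions of the space.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Toy 2-D vectors, purely illustrative.
    bark_near_tree = np.array([0.9, 0.1])                 # "bark" used alongside "tree"
    bark_near_dog  = np.array([0.1, 0.9])                 # "bark" used alongside "dog"
    static_bark    = (bark_near_tree + bark_near_dog) / 2 # one vector forced to cover both senses

    print(cosine(bark_near_tree, bark_near_dog))  # low: the two contextual uses diverge
    print(cosine(static_bark, bark_near_tree))    # higher: the averaged static vector sits between the senses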

During forward propagation in an RNN, the output at time step t is heavily influenced by earlier inputs. Which mathematical property of RNN weight matrices is primarily responsible for this temporal influence and also the root cause of gradient issues?
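
As a rough illustration of the property the question points at (the recurrent weight matrix is multiplied in at every step, so its eigenvalues / spectral radius govern both how far influence propagates and whether gradients vanish or explode), here is a toy sketch; the matrix sizes and scales are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    for scale, label in [(0.3, "small recurrent weights"), (1.5, "large recurrent weights")]:
        W_hh = rng.normal(scale=scale, size=(4, 4))  # recurrent weight matrix
        prod = np.eye(4)
        for _ in range(20):                          # 20 steps of backpropagation through time
            prod = prod @ W_hh                       # the gradient picks up another factor of W_hh each step
        print(label, np.linalg.norm(prod))           # shrinks toward 0 or blows up geometrically with depth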


Which combination of gates and operations in an LSTM ensures that long-term dependencies are preserved better than in traditional RNNs?
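
A minimal sketch of the cell-state update the question alludes to, with made-up gate values: the forget gate together with the additive input-gate path lets the cell state carry information across many steps largely unchanged when the forget gate stays near 1.

    # One illustrative LSTM cell-state update: c_t = f_t * c_prev + i_t * c_tilde.
    c_prev  = 1.0    # long-term memory carried from earlier steps
    f_t     = 0.98   # forget gate close to 1 -> old memory mostly preserved
    i_t     = 0.05   # input gate small -> little new information written
    c_tilde = 0.4    # candidate cell content at this step

    c_t = f_t * c_prev + i_t * c_tilde
    print(c_t)       # ~1.0: the additive update avoids the repeated-matrix shrinkage seen in plain RNNs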

The words "bank," "river," and "finance" are projected into a 2D vector space. If "bank" is closer to "finance" than to "river," what does this imply about the training corpus?
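
A toy distance check of the situation described (coordinates invented for illustration): under the distributional hypothesis, "bank" ends up nearer to "finance" when the training corpus uses it mostly in financial contexts.

    import numpy as np

    # Invented 2-D projections, illustrative only.
    vectors = {
        "bank":    np.array([0.8, 0.2]),
        "finance": np.array([0.9, 0.1]),
        "river":   np.array([0.1, 0.9]),
    }
    for word in ("finance", "river"):
        dist = np.linalg.norm(vectors["bank"] - vectors[word])
        print(word, round(dist, 3))  # "finance" is closer: "bank" co-occurred mainly with financial contexts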


Given an LSTM network, if the forget gate outputs 0.1 and the cell state from the previous step is 0.9, what portion of the old memory will be retained in the current state?
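
A worked version of the arithmetic, assuming the standard update c_t = f_t * c_prev + i_t * c_tilde: the portion of old memory carried forward is the element-wise product of the forget gate and the previous cell state.

    forget_gate     = 0.1
    prev_cell_state = 0.9

    retained_memory = forget_gate * prev_cell_state  # old-memory contribution to c_t
    print(retained_memory)                           # 0.09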


Which of the following forms of gradient descent updates the weights after computing the gradient over the entire dataset?
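
For reference, a sketch of full-batch gradient descent, the variant that computes the gradient over the whole dataset before each weight update; the linear-regression objective and the toy data here are placeholders.

    import numpy as np

    def batch_gradient_descent(X, y, lr=0.1, epochs=100):
        """One weight update per epoch, using the gradient averaged over ALL examples."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error over the full dataset
            w -= lr * grad                     # single update after seeing every example
        return w

    X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([1.0, 2.0, 3.0])
    print(batch_gradient_descent(X, y))        # converges close to [1, 2]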

In a sentiment analysis task using logistic regression, it is observed that the true label is 1 and the predicted probability for y is 0.7. What is the cross-entropy loss for the above prediction?
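
A worked check of the number, assuming the usual binary cross-entropy L = -[y log p + (1 - y) log(1 - p)] with the natural logarithm (a base-2 log would give a different value).

    import math

    y_true = 1
    p_hat  = 0.7

    loss = -(y_true * math.log(p_hat) + (1 - y_true) * math.log(1 - p_hat))
    print(round(loss, 4))  # 0.3567 with natural log (about 0.5146 if log base 2 is used instead)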

What are the precision and recall, respectively, for the urgent class?
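
The confusion matrix this question refers to is not reproduced here, so only the recipe can be sketched: for the urgent class, precision = TP / (TP + FP) uses the counts in the urgent prediction column, and recall = TP / (TP + FN) uses the counts in the urgent gold-label row. The numbers below are placeholders, not the question's values.

    # Hypothetical counts for the "urgent" class (placeholders only).
    tp = 8   # gold urgent, predicted urgent
    fp = 2   # gold some other class, predicted urgent
    fn = 4   # gold urgent, predicted some other class

    precision = tp / (tp + fp)  # 0.8
    recall    = tp / (tp + fn)  # about 0.667
    print(precision, recall)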


Compute the micro-averaged precision for the above confusion matrix.
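
Again without the actual matrix, only the method: micro-averaged precision pools true positives and false positives across all classes before dividing, and for a single-label multi-class confusion matrix it equals overall accuracy (the trace divided by the total count). The matrix below is a placeholder.

    import numpy as np

    # Placeholder 3-class confusion matrix: rows = gold labels, columns = predictions.
    cm = np.array([
        [8, 1, 1],
        [2, 6, 2],
        [0, 1, 9],
    ])

    tp_per_class = np.diag(cm)
    fp_per_class = cm.sum(axis=0) - tp_per_class  # column totals minus the diagonal

    micro_precision = tp_per_class.sum() / (tp_per_class.sum() + fp_per_class.sum())
    print(micro_precision)  # 23/30 ~ 0.767, the same as overall accuracy here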

What would be the adjusted count of the bigram {“A”, “B”} if we had to observe the above maximum likelihood estimate for {“A”, “B”} without applying Laplace smoothing? Do not be concerned with finding a whole number.
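
The MLE value the question refers to is not shown, so only the standard relationship can be sketched: with add-one (Laplace) smoothing over a vocabulary of size V, the smoothed bigram probability is (c(A,B) + 1) / (c(A) + V), and the adjusted (reconstituted) count is the count c* that would give that same probability under the unsmoothed MLE, c* = (c(A,B) + 1) * c(A) / (c(A) + V), which is generally not a whole number. The counts below are placeholders.

    # Placeholder counts, not the question's values.
    c_AB = 4    # observed count of the bigram ("A", "B")
    c_A  = 20   # observed count of the unigram "A"
    V    = 100  # vocabulary size used for add-one smoothing

    p_laplace = (c_AB + 1) / (c_A + V)  # add-one smoothed P(B | A)
    c_star    = p_laplace * c_A         # adjusted count: c* / c(A) reproduces the smoothed estimate
    print(p_laplace, c_star)            # about 0.0417 and 0.833 (not a whole number)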

