Test questions and solutions for Natural Language Processing at moodle.iitdh.ac.in.
In vector semantics, the meaning of a word can change based on context. If the word "bark" is represented in two different contexts—one with "tree" and another with "dog"—what does this imply about the vector representation?
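This contrasts a single static embedding with context-dependent ones. A minimal sketch, assuming made-up 3-dimensional vectors purely for illustration: if "bark" receives different contextual vectors near "tree" and near "dog", their cosine similarity is low, something a single static vector could not express.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical contextual vectors for "bark" (values invented for illustration).
# A contextual model assigns a different vector per occurrence; a static
# embedding such as word2vec would reuse one vector for both senses.
bark_near_tree = np.array([0.9, 0.1, 0.2])  # "... the bark of the tree ..."
bark_near_dog  = np.array([0.1, 0.8, 0.3])  # "... the dog began to bark ..."

print(cosine(bark_near_tree, bark_near_dog))  # low similarity => distinct senses
```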
During forward propagation in an RNN, the output at time step t is heavily influenced by earlier inputs. Which mathematical property of RNN weight matrices is primarily responsible for this temporal influence and also the root cause of gradient issues?
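The property in question is repeated multiplication by the same recurrent weight matrix: signals and gradients carried over many steps behave like powers of that matrix, so its eigenvalues (spectral radius) govern both the temporal influence and vanishing/exploding gradients. A small sketch with toy diagonal matrices, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
h0 = rng.normal(size=4)

# The same recurrent matrix is applied at every time step, so contributions
# propagated over T steps pick up roughly a factor of W_hh ** T.
W_shrink = 0.5 * np.eye(4)  # spectral radius < 1 -> vanishing contributions
W_grow   = 1.5 * np.eye(4)  # spectral radius > 1 -> exploding contributions

for name, W in [("shrinking", W_shrink), ("growing", W_grow)]:
    v = h0.copy()
    for _ in range(20):  # 20 repeated multiplications ~ 20 time steps
        v = W @ v
    print(name, np.linalg.norm(v))
```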
Which combination of gates and operations in an LSTM ensures that long-term dependencies are preserved better than in traditional RNNs?
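For reference, a single-step LSTM sketch with assumed parameter shapes (not any specific course implementation): the forget and input gates feed an additive cell-state update, which is what lets information and gradients flow across many steps better than in a plain RNN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, Wf, Wi, Wo, Wc, bf, bi, bo, bc):
    """One LSTM time step (shapes assumed: all gates act on [h_prev; x])."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf)        # forget gate: how much old memory to keep
    i = sigmoid(Wi @ z + bi)        # input gate: how much new content to write
    o = sigmoid(Wo @ z + bo)        # output gate: how much memory to expose
    c_tilde = np.tanh(Wc @ z + bc)  # candidate cell content
    c = f * c_prev + i * c_tilde    # additive update -> long-term dependencies
    h = o * np.tanh(c)
    return h, c
```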
The words "bank," "river," and "finance" are projected into a 2D vector space. If "bank" is closer to "finance" than to "river," what does this imply about the training corpus?
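The proximity reflects distributional statistics: "bank" must have co-occurred with finance-related contexts more often than with river-related ones in the corpus. A toy 2D example with invented coordinates:

```python
import numpy as np

# Invented 2D embeddings, only to illustrate relative distances.
emb = {
    "bank":    np.array([0.9, 0.2]),
    "finance": np.array([0.8, 0.3]),
    "river":   np.array([0.1, 0.9]),
}

def dist(a, b):
    return float(np.linalg.norm(emb[a] - emb[b]))

# Words appearing in similar contexts during training end up nearby.
print("bank-finance:", dist("bank", "finance"))  # smaller distance
print("bank-river:  ", dist("bank", "river"))    # larger distance
```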
Given an LSTM network, if the forget gate outputs 0.1 and the cell state from the previous step is 0.9, what portion of the old memory will be retained in the current state?
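Using the standard cell-state update c_t = f_t * c_{t-1} + i_t * c̃_t, only the first term describes retained old memory, so the retained portion is 0.1 * 0.9 = 0.09:

```python
# Retained old memory = forget gate output * previous cell state.
f_t = 0.1      # forget gate output
c_prev = 0.9   # previous cell state
print(f_t * c_prev)  # 0.09
```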
Which of the following forms of gradient descent updates the weights only after computing the gradient over the entire dataset?
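That variant is (full-)batch gradient descent. A minimal sketch for linear regression with made-up data, showing exactly one weight update per pass over the whole dataset:

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.1, epochs=200):
    """Full-batch gradient descent: the gradient is averaged over the ENTIRE
    dataset before each single weight update (unlike SGD / mini-batch)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n  # gradient over all n examples
        w -= lr * grad                # exactly one update per full pass
    return w

# Tiny made-up dataset: y = 1 + 1 * x, with a bias column in X.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
print(batch_gradient_descent(X, y))  # approaches [1, 1]
```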
What would be the adjusted count of the bigram {"A", "B"} if we wanted to obtain the above estimate for {"A", "B"} as a maximum likelihood estimate without applying Laplace smoothing? Do not be concerned with finding a whole number.
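Since the counts referred to "above" are not reproduced here, this is only a sketch of the general formula with hypothetical numbers: under add-one (Laplace) smoothing the probability is (c + 1) / (N + V), and the adjusted count is the value whose plain maximum likelihood estimate gives that same probability, c* = (c + 1) * N / (N + V), which is typically not a whole number.

```python
def laplace_adjusted_count(c, N, V):
    """Adjusted (reconstituted) count under add-one smoothing:
    c* = (c + 1) * N / (N + V), i.e. the count whose plain MLE c*/N equals
    the Laplace-smoothed probability (c + 1) / (N + V)."""
    return (c + 1) * N / (N + V)

# Hypothetical values (the actual counts from the question are not shown here):
# c = observed count of the bigram {"A", "B"},
# N = total count of bigrams starting with "A",
# V = vocabulary size used for smoothing.
print(laplace_adjusted_count(c=3, N=10, V=20))  # 4 * 10 / 30 = 1.333...
```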