Looking for answers and test solutions for Machine Learning-(BSCS-2, BSDS-2)? Browse our large collection of verified answers for Machine Learning-(BSCS-2, BSDS-2) at moodle.ucu.ac.ug.
Get instant access to accurate answers and detailed explanations for your course questions. Our community-built platform helps students succeed!
Two Decision Trees are trained on the same data. Tree A: max_depth=None (fully grown), test accuracy 82%. Tree B: max_depth=5, test accuracy 86%. Which conclusion is best supported?
Which algorithm uses Gini Impurity as its default splitting criterion?
Using distance-weighted KNN with K=3, a test point has three neighbours: A (class Red, distance 1), B (class Blue, distance 2), C (class Blue, distance 4). Using weights = 1/distance, what is the predicted class?
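The weighted vote can be checked directly; a minimal sketch using the three neighbours given in the question (variable names are mine):

```python
# Distance-weighted KNN vote with weights = 1/distance.
# Neighbours from the question: (class, distance).
neighbours = [("Red", 1.0), ("Blue", 2.0), ("Blue", 4.0)]

votes = {}
for label, dist in neighbours:
    votes[label] = votes.get(label, 0.0) + 1.0 / dist

predicted = max(votes, key=votes.get)
print(votes)      # Red: 1/1 = 1.0, Blue: 1/2 + 1/4 = 0.75
print(predicted)  # Red
```

The single close Red neighbour outweighs the two farther Blue ones, so the distance-weighted prediction differs from a plain majority vote.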
Which distance metric computes the straight-line distance between two points in Euclidean space and is the most commonly used default in KNN?
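For reference, the straight-line (Euclidean, L2) distance is the square root of the summed squared coordinate differences; a minimal sketch with made-up points:

```python
import math

p, q = (1.0, 2.0), (4.0, 6.0)

# Euclidean distance: sqrt(sum_i (p_i - q_i)^2)
d = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
print(d)                # 5.0 (a 3-4-5 right triangle)
print(math.dist(p, q))  # same result via the stdlib helper
```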
Consider a node with class distribution [50%, 50%]. After a split, one criterion yields children [90%, 10%] and [30%, 70%], while another criterion yields children [100%, 0%] and [40%, 60%]. In practice, which statement is most accurate about Entropy vs Gini for these two splits?
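The two criteria can be compared numerically. The question gives only class proportions, not child sizes, so the sketch below assumes equal-sized children (an assumption of mine, flagged in the code):

```python
import math

def entropy(dist):
    # H = -sum(p * log2(p)), skipping zero-probability classes
    return -sum(p * math.log2(p) for p in dist if p > 0)

def gini(dist):
    # G = 1 - sum(p^2)
    return 1.0 - sum(p * p for p in dist)

parent = [0.5, 0.5]
splits = {
    "split A": ([0.9, 0.1], [0.3, 0.7]),
    "split B": ([1.0, 0.0], [0.4, 0.6]),
}

# Assumption: equal-sized children, so the weighted child impurity
# is a simple average of the two children's impurities.
gains = {}
for name, (left, right) in splits.items():
    for crit in (entropy, gini):
        gain = crit(parent) - 0.5 * (crit(left) + crit(right))
        gains[(name, crit.__name__)] = round(gain, 4)
        print(name, crit.__name__, "gain:", gains[(name, crit.__name__)])
```

Under this assumption both criteria prefer split B, but entropy rewards the fully pure child more strongly (gain 0.5145 vs 0.3249) than Gini does (0.26 vs 0.20), which illustrates the usual contrast the question is probing.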
A fully grown Decision Tree (no pruning, no depth limit) tends to overfit. Which explanation best captures why?
The formula H(S) = -Σ pi log2(pi) is used to compute which metric in Decision Trees?
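A quick check of the formula's behaviour at its extremes (example distributions are mine):

```python
import math

def entropy(probs):
    # H(S) = -sum_i p_i * log2(p_i); zero-probability terms contribute 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # maximally impure binary node -> 1 bit
print(entropy([0.8, 0.2]))  # partially impure -> between 0 and 1
print(entropy([1.0, 0.0]))  # pure node -> 0 bits
```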
Recall is defined as the ratio of true positives to the sum of true positives and false positives.
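The two confusion-matrix ratios in play can be computed side by side; a minimal sketch with illustrative counts of my own choosing:

```python
# Precision vs recall from confusion-matrix counts (illustrative numbers).
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)  # TP / (TP + FP)
recall = tp / (tp + fn)     # TP / (TP + FN)

print("precision:", precision)         # 0.8
print("recall:", round(recall, 3))     # 0.667
```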
You want to combine the interpretability of a Decision Tree with the flexibility of KNN. Which pipeline design best achieves this?
A Decision Tree achieves 62% training accuracy and 60% test accuracy on a task where state-of-the-art is ~90%. The tree has max_depth=2 and min_samples_leaf=50. What is the most likely issue?