Looking for Data Mining and Decision Support (Lecture, Section 1, Fall 2025) test answers and solutions? Below is a collection of verified answers and explanations for the course questions hosted at moodle.nu.edu.kz.
Assume you want to perform supervised learning to predict the price of houses according to their size; this is an example of a clustering problem.
(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)
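For reference, predicting a continuous house price from house size is a supervised regression task, not clustering. A minimal sketch with scikit-learn (the sizes and prices below are made-up illustrative values):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic example: house size (m^2) as the single feature, price as the label
X = np.array([[50], [80], [120], [150], [200]])
y = np.array([100_000, 160_000, 240_000, 300_000, 400_000])

# Supervised learning: the model is fit on labelled (size, price) pairs
model = LinearRegression().fit(X, y)
print(model.predict([[100]]))  # predicted price for a 100 m^2 house
```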
For the learning rate in gradient descent, if it is too big, it will require a lot of training time (assuming there is convergence).
(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)
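To see why the learning rate matters, here is a small illustrative sketch (not from the course materials) that minimises a simple quadratic with plain gradient descent; a tiny rate converges slowly, while an overly large one overshoots and diverges:

```python
def gradient_descent(lr, steps=100):
    """Minimise f(w) = (w - 3)^2 with a fixed learning rate lr."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2
        w -= lr * grad
    return w

# Small lr: slow progress; moderate lr: converges; too-large lr: diverges
for lr in (0.01, 0.1, 1.1):
    print(lr, gradient_descent(lr))
```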
Choose the methods used to evaluate features for feature selection.
(NOTE: Multiple answers are allowed).
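Feature-evaluation approaches for feature selection are commonly grouped into filter, wrapper, and embedded methods. A hedged sketch of one example of each, assuming scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Filter method: score each feature independently (here an ANOVA F-test)
filt = SelectKBest(score_func=f_classif, k=5).fit(X, y)

# Wrapper method: recursive feature elimination around an estimator
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

# Embedded method: L1-regularised coefficients act as built-in feature scores
emb = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)

print(filt.get_support(), rfe.support_, emb.coef_, sep="\n")
```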
From the code snippet for a GridSearchCV-based cross-validation over the Ridge alpha hyper-parameter, choose all the correct answers.
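The original snippet is not reproduced here; as a point of reference, a typical GridSearchCV set-up for the Ridge alpha hyper-parameter looks roughly like the sketch below (the data and the candidate alpha values are illustrative assumptions):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=500, n_features=100, noise=10, random_state=0)

# Candidate values for the regularisation strength alpha (illustrative only)
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}

search = GridSearchCV(Ridge(), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)

print(search.best_params_)   # alpha with the best mean validation score
print(search.best_score_)
```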
As its name suggests, the Logistic Regression algorithm is mainly used to solve a regression problem.
(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)
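Despite its name, logistic regression predicts discrete class labels (via class probabilities) rather than a continuous target. A brief sketch, assuming scikit-learn and its built-in breast-cancer dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model outputs class labels / probabilities, not continuous values
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)

print(clf.predict(X_test[:5]))         # discrete class predictions (0 or 1)
print(clf.predict_proba(X_test[:5]))   # per-class probabilities
```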
Between k-Fold cross-validation (CV) and leave-p-out CV, which technique belongs to the exhaustive (or more exhaustive) methods?
(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)
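Leave-p-out is the exhaustive scheme: it evaluates every possible combination of p held-out instances, whereas k-fold CV uses only k fixed splits. A small sketch comparing the number of splits, assuming scikit-learn:

```python
import numpy as np
from sklearn.model_selection import KFold, LeavePOut

X = np.arange(10).reshape(-1, 1)   # 10 instances

kf = KFold(n_splits=5)
lpo = LeavePOut(p=2)

# k-fold produces exactly k train/test splits ...
print(kf.get_n_splits(X))    # 5
# ... while leave-p-out enumerates all C(n, p) combinations (exhaustive)
print(lpo.get_n_splits(X))   # C(10, 2) = 45
```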
In the CV (cross-validation) process, which error is more important for choosing the best parameters (including hyper-parameters)?
(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)
The information in the weight table (weight and variance estimates) can be visualized in a weight plot. The following plot shows the results from a linear regression model.
Choose all correct interpretations/explanations.
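The plot referenced in the question is not reproduced here. As general context, a weight plot shows each estimated regression weight as a point with its confidence interval; a hedged sketch of how such a plot can be produced (using statsmodels and the scikit-learn diabetes dataset purely as stand-ins):

```python
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = sm.OLS(y, sm.add_constant(X)).fit()

# Weight plot: point estimate of each weight with its 95% confidence interval
coefs = model.params
conf = model.conf_int()
xerr = np.vstack([coefs - conf[0], conf[1] - coefs])

plt.errorbar(coefs, np.arange(len(coefs)), xerr=xerr, fmt="o")
plt.yticks(np.arange(len(coefs)), coefs.index)
plt.axvline(0, linestyle="--")
plt.xlabel("Weight estimate")
plt.show()
```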
In the training phase of a CNN (Convolutional Neural Network) model, how can the window size and window stride in the pooling layer be chosen/selected?
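Note that a pooling layer has no trainable weights: its window (kernel) size and stride are hyper-parameters fixed by the network designer rather than learned during training. A minimal sketch, assuming PyTorch and illustrative values:

```python
import torch
import torch.nn as nn

# Window size and stride are design choices, not learned parameters
pool = nn.MaxPool2d(kernel_size=2, stride=2)   # illustrative choices

x = torch.randn(1, 3, 32, 32)   # (batch, channels, height, width)
print(pool(x).shape)            # torch.Size([1, 3, 16, 16])
```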
From the code snippet for a GridSearchCV-based cross-validation over multilayer perceptron (MLP) hyper-parameters, choose all the correct answers (assuming that the number of independent variables/features could be large (e.g., 100) and the number of instances is relatively small (e.g., 500)).
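Again, the original snippet is not reproduced here. A rough sketch of what a GridSearchCV over MLP hyper-parameters could look like in this setting (all grid values are illustrative assumptions; with 100 features and only ~500 instances, stronger regularisation and smaller hidden layers are typical candidates):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Many features (100) and relatively few instances (500), as in the question
X, y = make_regression(n_samples=500, n_features=100, noise=5, random_state=0)

pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(max_iter=2000, random_state=0))

# Illustrative hyper-parameter grid
param_grid = {
    "mlpregressor__hidden_layer_sizes": [(50,), (100,), (50, 50)],
    "mlpregressor__alpha": [1e-4, 1e-2, 1.0],
}

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```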