Data Mining and Decision Support-Lecture,Section-1-Fall 2025

Looking for Data Mining and Decision Support-Lecture,Section-1-Fall 2025 test answers and solutions? Browse our comprehensive collection of verified answers for Data Mining and Decision Support-Lecture,Section-1-Fall 2025 at moodle.nu.edu.kz.

Get instant access to accurate answers and detailed explanations for your course questions. Our community-driven platform helps students succeed!

Of the k-fold cross-validation (CV) and leave-p-out CV techniques, which one is an exhaustive method?

(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

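For context, a minimal sketch (assuming scikit-learn is installed) contrasting the two: leave-p-out is exhaustive because it enumerates every possible combination of p held-out samples, while k-fold produces only k splits.

    import numpy as np
    from sklearn.model_selection import KFold, LeavePOut

    X = np.arange(10).reshape(-1, 1)  # 10 toy samples

    # Leave-p-out is exhaustive: one split per C(n, p) combination.
    lpo = LeavePOut(p=2)
    print(lpo.get_n_splits(X))  # C(10, 2) = 45 splits

    # k-fold is non-exhaustive: only k splits.
    kf = KFold(n_splits=5)
    print(kf.get_n_splits(X))  # 5 splits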

Let's assume that a data scientist splits a dataset into training and test sets using a test size of 0.3, and then adopts a k-fold (k = 5) cross-validation technique. If so, which dataset can be used as the validation set? (NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

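A minimal sketch of this setup (scikit-learn assumed): the 30% test set is held out untouched, and each of the 5 folds of the remaining training data serves once as the validation set.

    import numpy as np
    from sklearn.model_selection import KFold, train_test_split

    X, y = np.arange(100).reshape(-1, 1), np.arange(100)

    # Hold out 30% as the test set; it is never used for tuning.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # 5-fold CV carves the validation set out of the training data:
    # each fold acts as the validation set exactly once.
    kf = KFold(n_splits=5)
    for train_idx, val_idx in kf.split(X_train):
        X_tr, X_val = X_train[train_idx], X_train[val_idx]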

While high bias and low variance could result in over-fitting, low bias and high variance could result in under-fitting. (NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

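For reference, a toy sketch (scikit-learn assumed) of the underlying trade-off: a degree-1 polynomial (high bias) under-fits a noisy sine curve, while a degree-15 polynomial (high variance) over-fits it.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.RandomState(0)
    X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=40)
    X_tr, X_te, y_tr, y_te = X[::2], X[1::2], y[::2], y[1::2]

    for degree in (1, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_tr, y_tr)
        # Degree 1: high train AND test error (under-fit).
        # Degree 15: low train but high test error (over-fit).
        print(degree,
              mean_squared_error(y_tr, model.predict(X_tr)),
              mean_squared_error(y_te, model.predict(X_te)))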

Because bagging techniques use randomized bootstrap samples, they cannot prevent (or reduce) over-fitting problems.

(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

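A quick sketch (scikit-learn assumed) of how bagging behaves in practice: averaging many trees trained on bootstrap samples lowers variance relative to a single deep tree.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # A single fully grown tree tends to over-fit; bagging averages
    # 50 trees, each trained on a randomized bootstrap sample.
    tree = DecisionTreeClassifier(random_state=0)
    bag = BaggingClassifier(tree, n_estimators=50, random_state=0)

    print(cross_val_score(tree, X, y, cv=5).mean())  # single tree
    print(cross_val_score(bag, X, y, cv=5).mean())   # bagged ensemble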

Let's assume that we have 3 classifiers and a 3-class classification problem, where we assign equal weights to all classifiers: w1 = 1, w2 = 1, w3 = 1; and assume each classifier's predicted class probabilities are as given below. Which class label will be chosen by hard (majority-rule) voting?

(NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

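The probability table the question refers to is not reproduced here, so the sketch below uses hypothetical predictions; it only illustrates the mechanics: hard voting ignores the probabilities and counts each classifier's predicted label, weighted by w.

    import numpy as np

    # Hypothetical predicted labels from the 3 classifiers (the question's
    # probability table is not reproduced); equal weights w1 = w2 = w3 = 1.
    weights = np.array([1, 1, 1])
    predictions = np.array([0, 1, 1])  # each classifier's argmax class

    # Hard (majority-rule) voting: weighted count of votes per class label.
    votes = np.bincount(predictions, weights=weights, minlength=3)
    print(votes.argmax())  # class 1 wins, 2 votes to 1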

To prevent under-fitting, regularization techniques are useful. (NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

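For reference, a minimal Ridge sketch (scikit-learn assumed): the L2 penalty shrinks coefficients toward zero, constraining model complexity.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.RandomState(0)
    X = rng.normal(size=(20, 10))  # few samples, many features
    y = X[:, 0] + rng.normal(scale=0.1, size=20)

    # Ridge adds an L2 penalty that shrinks the coefficients.
    ols = LinearRegression().fit(X, y)
    ridge = Ridge(alpha=1.0).fit(X, y)
    print(np.abs(ols.coef_).sum())    # larger coefficient mass
    print(np.abs(ridge.coef_).sum())  # shrunk coefficients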
How can over-fitting problems be prevented? (NOTE: multiple answers are allowed.)


An equation like y = Xh can be solved by inverting X (i.e., h = X⁻¹y), either manually or with Python libraries, when X is square. But if X is not square (or not invertible), there is a well-known recipe for solving the system approximately, called the pseudo-inverse (i.e., Ordinary Least Squares).

Using the Python references below, drag and drop the code for solving ĥ, assuming X and y are NumPy arrays/matrices and all required modules are imported.

numpy.linalg.inv(a)

    Compute the (multiplicative) inverse of a matrix.

numpy.linalg.pinv(a)  (FYI: the second parameter need not be considered)

    Compute the (Moore-Penrose) pseudo-inverse of a matrix.

numpy.dot(a, b, out=None)

    Dot product of two arrays.

numpy.transpose(a, axes=None)

    Permute the dimensions of an array.

numpy.ndarray.T

    Same as self.transpose(), except that self is returned if self.ndim < 2.

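A minimal NumPy sketch using exactly the functions referenced above: ĥ = pinv(X)·y, which (for full-column-rank X) matches the normal-equations form ĥ = (XᵀX)⁻¹Xᵀy.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))  # non-square design matrix
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

    # Moore-Penrose pseudo-inverse: h_hat = pinv(X) . y
    h_hat = np.dot(np.linalg.pinv(X), y)

    # Equivalent OLS via the normal equations: (X^T X)^-1 X^T y
    h_norm = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)

    print(np.allclose(h_hat, h_norm))  # True when X^T X is invertible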

You are given data about seismic activity (caused by earthquakes) in Japan, and you want to predict the magnitude of the next earthquake; this is an example of unsupervised learning. (NOTE: Points are deducted for wrong answers; 0 points for 'Not Answering'.)

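For contrast, a tiny sketch (hypothetical feature names and values) of what magnitude prediction looks like as a learning task: the training data carries labelled targets (past magnitudes).

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical labelled seismic records: features (e.g., depth, fault
    # distance) paired with observed magnitudes, i.e., target labels.
    X = np.array([[10.0, 3.4], [25.0, 1.0], [5.0, 2.2]])
    y = np.array([4.5, 5.1, 3.8])

    model = LinearRegression().fit(X, y)
    print(model.predict([[12.0, 2.0]]))  # predicted magnitude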
