Data Analytics (Eng) / Data Analitika (Ing) - 344

Looking for Data Analytics (Eng) / Data Analitika (Ing) - 344 test answers and solutions? Browse our comprehensive collection of verified answers for Data Analytics (Eng) / Data Analitika (Ing) - 344 at stemlearn.sun.ac.za.

Get instant access to accurate answers and detailed explanations for your course questions. Our community-driven platform helps students succeed!

During the induction of a tree using the ID3 algorithm, assume the root node of the tree is at level 0. This root node represents a dataset (D) of size 10. The root node's direct child nodes are at level 1 and they represent datasets (D1 and D2) of sizes 5 and 5, respectively. D3 is a dataset of size 3 that is represented by a direct child of the node that represents D2. In D3, 1 and 2 instances are of target class A and B, respectively. The ID3 algorithm is provided with D3 and d (the current set of descriptive features) as input. If the maximum permissible depth of the tree has not been reached and the ID3 algorithm returns a leaf node with a class label of B, how many descriptive features are used to represent the instances in D?
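One way to reason about this, sketched below in Python, is to look at the base cases of a textbook ID3 that removes each categorical feature from d once it has been used for a split. The function and variable names are illustrative, not taken from the course material, and only the base cases are shown:

```python
from collections import Counter

def id3(D, d, depth, max_depth):
    """Base cases of a textbook ID3 (sketch).

    D is a list of (features, label) instances; d is the set of
    descriptive features not yet used on the path to this node.
    """
    labels = [label for _, label in D]
    if len(set(labels)) == 1:            # pure node: leaf with that class
        return ("leaf", labels[0])
    if not d or depth == max_depth:      # features exhausted or depth cap hit
        majority = Counter(labels).most_common(1)[0][0]
        return ("leaf", majority)        # leaf labelled with the majority class
    # ...otherwise choose the feature in d with the highest information
    # gain, partition D on it, and recurse with d minus that feature.
```

D3 is impure (1 A, 2 B) and the depth cap has not been reached, so the only base case that can return a leaf labelled B (the majority class of D3) is an empty d. The node for D3 sits at level 2, and one feature is consumed at level 0 and one at level 1 on the path to it, so under these assumptions the instances in D are represented by 2 descriptive features.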

Regression trees can use the Gini index as a measure of homogeneity, such that a Gini index of 0 represents complete homogeneity.
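For reference, the Gini index is defined over the class distribution of a node; for a dataset D with class proportions $p_l$ (standard notation, not quoted from the course material):

$$\mathrm{Gini}(D) = 1 - \sum_{l} p_l^{2}$$

It reaches 0 exactly when a single class has proportion 1, i.e. complete homogeneity. Note that this definition presupposes a categorical target; regression trees predict continuous targets and typically measure homogeneity with the variance of the target values instead.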

When inducing a classification tree using the information gain metric, the feature that yields the largest reduction in node impurity is selected for partitioning the dataset.
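The usual entropy-based formulation (standard notation, not taken verbatim from the course material) makes the equivalence explicit:

$$\mathrm{IG}(D, f) = H(D) - \sum_{v \in \mathrm{values}(f)} \frac{|D_v|}{|D|}\, H(D_v), \qquad H(D) = -\sum_{l} p_l \log_2 p_l$$

where $D_v$ is the subset of D in which feature f takes the value v. Maximising the information gain is the same as choosing the split that yields the largest drop in entropy-measured impurity.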

If a dataset (D) has equal numbers of class A, class B, and class C instances, and a split on the dataset results in 3 partitions, each with equal numbers of only two of the classes and no instances of the third class, what is the information gain based on entropy?
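One way the arithmetic can work out, assuming the split is consistent with the stated class counts (which forces the three partitions to be the same size): before the split,

$$H(D) = -3 \cdot \tfrac{1}{3}\log_2 \tfrac{1}{3} = \log_2 3 \approx 1.585 \text{ bits},$$

while each partition holds two classes in equal proportion, so $H(D_i) = 1$ bit. The weighted remainder is $\sum_{i} \tfrac{1}{3} \cdot 1 = 1$ bit, giving

$$\mathrm{IG} = \log_2 3 - 1 \approx 0.585 \text{ bits}.$$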

Classification trees are better suited than regression trees for predicting the probability of rain tomorrow, since this probability lies between 0 and 1.

Which of the following statements is true about decision tree pruning?

In general, oblique trees are less overfitted than non-oblique trees when both trees are trained on the same training set.

In general, model trees are less sensitive to outliers in the training set than ordinary regression trees, when both trees are trained on the same training set.

Rules extracted from a decision tree are symmetric, i.e. the rule antecedent implies the consequent/decision and vice versa.

In general, during tree induction, entropy is more sensitive to outliers than the Gini index.
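A quick way to see where this claim comes from is to compare how the two impurity measures react when a few outliers nudge an almost-pure node. The snippet below is a minimal illustration using the binary forms of both measures, not course material:

```python
import math

def entropy(p):
    """Binary entropy in bits, treating 0 * log(0) as 0."""
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

def gini(p):
    """Binary Gini index: 1 - p^2 - (1 - p)^2."""
    return 1 - p**2 - (1 - p)**2

# A handful of outliers move the majority-class proportion of an
# almost-pure node from 0.99 to 0.95; entropy reacts more strongly
# because of the logarithm's steepness near the extremes.
for p in (0.99, 0.95):
    print(f"p={p}: entropy={entropy(p):.3f} bits, gini={gini(p):.3f}")
```

Running this, entropy rises from about 0.081 to 0.286 bits while the Gini index rises only from about 0.020 to 0.095, which is the behaviour the statement alludes to.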
