Q151: __________ has been used to train vehicles to steer correctly and autonomously on roads.

(A) Machine learning

(B) Data mining

(C) Robotics

(D) Neural networks

Q152: This type of learning is used when there is no prior idea about the class or label of the data.

(A) Supervised learning algorithm

(B) Unsupervised learning algorithm

(C) Semi-supervised learning algorithm

(D) Reinforcement learning algorithm

Q153: For understanding the relationship between two variables, ____ can be used.

(A) Box plot

(B) Scatter plot

(C) Histogram

(D) None of the above

Q154: Feature ___ involves transforming a given set of input features to generate a new set of more powerful features.

(A) Selection

(B) Engineering

(C) Transformation

(D) Re-engineering

Q155: This approach is quite similar to the wrapper approach, as it also uses an inductive algorithm to evaluate the generated feature subsets.

(A) Embedded approach

(B) Filter approach

(C) Wrapper approach

(D) Hybrid approach

Q156: The covariance between two random variables X and Y measures the degree to which X and Y are (linearly) related, i.e., how X varies with Y and vice versa. What is the formula for Cov(X, Y)?

(A) Cov(X, Y) = E(XY) − E(X)E(Y)

(B) Cov(X, Y) = E(XY) + E(X)E(Y)

(C) Cov(X, Y) = E(XY) / E(X)E(Y)

(D) Cov(X, Y) = E(X)E(Y) / E(XY)
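The identity in question can be checked numerically. Below is a minimal sketch (the data and function name are illustrative, not part of the question) that computes the population covariance directly from the expectations E(XY), E(X), and E(Y):

```python
def covariance(xs, ys):
    """Population covariance via Cov(X, Y) = E(XY) - E(X)E(Y)."""
    n = len(xs)
    e_x = sum(xs) / n                               # E(X)
    e_y = sum(ys) / n                               # E(Y)
    e_xy = sum(x * y for x, y in zip(xs, ys)) / n   # E(XY)
    return e_xy - e_x * e_y

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]          # Y = 2X, so Cov(X, Y) = 2 * Var(X)
print(covariance(xs, ys))      # prints 4.0 (Var(X) = 2 here)
```

Since Y = 2X, the covariance equals twice the variance of X, which matches the printed value.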

Q157: What is the output produced when training data is run through the algorithm called?

(A) Program

(B) Training

(C) Training Information

(D) Learned Function

Q158: What would be the relationship between the training time taken by 1-NN, 2-NN, and 3-NN?

(A) 1-NN > 2-NN > 3-NN

(B) 1-NN < 2-NN < 3-NN

(C) 1-NN ~ 2-NN ~ 3-NN

(D) None of these

Q159: Which of the following algorithms is an example of the ensemble learning algorithm?

(A) Random Forest

(B) Decision Tree

(C) NN

(D) SVM
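Q159 hinges on what "ensemble" means: Random Forest combines many decision trees by majority vote. A hedged sketch of just the voting step (the lambda "trees" below are toy stand-ins, not real decision trees):

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority-vote combination, as used by ensembles like Random Forest."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy "trees", each a threshold rule on a single number.
trees = [
    lambda x: "pos" if x > 0 else "neg",
    lambda x: "pos" if x > -1 else "neg",
    lambda x: "pos" if x > 1 else "neg",
]
print(ensemble_predict(trees, 0.5))  # prints "pos" (two of three vote "pos")
```

The other options (a single decision tree, a neural network, an SVM) are individual learners; the ensemble property lies in the combination of many such models.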

Q160: Which of the following is not an inductive bias in a decision tree?

(A) It prefers longer trees over shorter trees

(B) Trees that place nodes near the root with high information gain are preferred

(C) Overfitting is a natural phenomenon in a decision tree

(D) Prefer the shortest hypothesis that fits the data
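Option (B) in Q160 refers to the preference for splitting on high-information-gain attributes near the root. A minimal sketch of entropy and information gain (the example labels are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction from splitting `labels` into the partition `groups`."""
    n = len(labels)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups)

labels = ["yes", "yes", "no", "no"]
# A split that perfectly separates the classes recovers the full 1 bit.
print(information_gain(labels, [["yes", "yes"], ["no", "no"]]))  # prints 1.0
```

An attribute whose split yields the larger information gain is placed closer to the root, which is the inductive bias the option describes.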