Which of the following statement(s) can

1.1 and 2

2.1 and 3

3.2 and 4

4.none of above

________ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.

1. removing the whole line

2. creating a sub-model to predict those features

3. using an automatic strategy to impute them according to the other known values

4.All of the above
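The strategies above can be sketched in a few lines; the tiny array below and the use of scikit-learn's SimpleImputer for the automatic strategy are illustrative assumptions, not part of the question.

```python
# Sketch of two missing-data strategies on a made-up array:
# dropping rows (the drastic option) vs. automatic imputation.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [5.0, 6.0]])

# Strategy: remove the whole line containing a missing value.
X_dropped = X[~np.isnan(X).any(axis=1)]

# Strategy: impute missing entries from the other known values
# (here, the column mean: (1 + 5) / 2 = 3).
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)
```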

________ showed better performance than other approaches, even without a context-based model.

1.machine learning

2.deep learning

3.reinforcement learning

4.supervised learning

________ learning can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm.

1.supervised

2.semi-supervised

3.reinforcement

4.clusters

When it is necessary to allow the model to develop a generalization ability and avoid a common problem called ________.

1. overfitting

2.overlearning

3.classification

4.regression

Which of the following options is true regarding regression and correlation? Note: y is the dependent variable and x is the independent variable.

1. the relationship is symmetric between x and y in both.

2.the relationship is not symmetric between x and y in both.

3.the relationship is not symmetric between x and y in case of correlation but in case of regression it is symmetric.

4.the relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric.

For the given weather data, what is the probability that the players will play if the weather is sunny?

1.0.5

2.0.26

3.0.73

4.0.6
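The weather table is not reproduced in this set; assuming the classic play-tennis counts (14 days, 9 played; 5 sunny days, of which 3 were play days), Bayes' theorem gives 0.6:

```python
# Assumed counts from the classic play-tennis weather table
# (not shown in the question): 14 days, 9 played; 5 sunny days,
# of which 3 were play days.
from fractions import Fraction

p_yes = Fraction(9, 14)              # P(play)
p_sunny_given_yes = Fraction(3, 9)   # P(sunny | play)
p_sunny = Fraction(5, 14)            # P(sunny)

# Bayes: P(play | sunny) = P(sunny | play) * P(play) / P(sunny)
p_play_given_sunny = p_sunny_given_yes * p_yes / p_sunny
```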

Generally, which of the following method(s) is used for predicting a continuous dependent variable?

1. Linear Regression
2. Logistic Regression

1.1 and 2

2.only 1

3.only 2

4.None of These

Identify the various approaches for machine learning.

1.concept vs classification learning

2.symbolic vs statistical learning

3.inductive vs analytical learning

4.all above

If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for?

1. underfitting

2. nothing, the model is perfect

3.overfitting

4.None of these

If a linear regression model fits the data perfectly, i.e., the train error is zero, then:

1.test error is also always zero

2.test error is non zero

3. can't comment on test error

4.test error is equal to train error

In a linear regression problem, we are using R-squared to measure goodness-of-fit. We add a feature to the linear regression model and retrain the same model. Which of the following options is true?

1. if r squared increases, this variable is significant.

2. if r squared decreases, this variable is not significant.

3. individually, R-squared cannot tell about variable importance; we can't say anything about it right now

4.None of These

In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ________ valid options.

1.1

2.2

3.3

4.4
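Two of the encoding options scikit-learn provides can be sketched as follows; the toy labels are made up for illustration.

```python
# LabelEncoder maps each category to an integer; LabelBinarizer maps
# each category to a one-hot vector (only one feature is 1 per row).
from sklearn.preprocessing import LabelEncoder, LabelBinarizer

labels = ["red", "green", "blue", "green"]

le = LabelEncoder()
ints = le.fit_transform(labels)       # integers 0..2, classes sorted

lb = LabelBinarizer()
onehot = lb.fit_transform(labels)     # shape (4, 3), one 1 per row
```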

Suppose you have trained an SVM with a linear decision boundary. After training, you correctly infer that your SVM model is underfitting. Which of the following options would you be more likely to consider when iterating on the SVM next time?

1.you want to increase your data points

2.you want to decrease your data points

3.you will try to calculate more variables

4. you will try to reduce the features

What do you mean by generalization error in terms of the SVM?

1.how far the hyperplane is from the support vectors

2.how accurately the svm can predict outcomes for unseen data

3. the threshold amount of error in an svm

4.None of the above

What is the purpose of performing cross-validation?

1.to assess the predictive performance of the models

2.to judge how the trained model performs outside the sample on test data

3.both a and b

4.None of These
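A minimal sketch of cross-validation on a synthetic dataset; the model and data choices here are illustrative, not prescribed by the question.

```python
# Score a model on several held-out folds to estimate how it performs
# outside the training sample.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
# One accuracy per fold; their mean estimates predictive performance.
```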

________, which can accept a NumPy RandomState generator or an integer seed.

1.make_blobs

2.random_state

3.test_size

4.training_size
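A sketch of how an integer seed passed to random_state makes results reproducible; the use of make_blobs and the split sizes are illustrative choices.

```python
# The same seed produces the same generated data and the same split.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=100, centers=3, random_state=42)

Xa, _, ya, _ = train_test_split(X, y, test_size=0.25, random_state=0)
Xb, _, yb, _ = train_test_split(X, y, test_size=0.25, random_state=0)
# Identical seeds -> identical training subsets.
```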

Which of the following is true about Naive Bayes?

1. assumes that all the features in a dataset are equally important

2.assumes that all the features in a dataset are independent

3. both a and b

4.none of the above option

Which of the following is true about Ridge or Lasso regression methods in the case of feature selection?

1. ridge regression uses subset selection of features

2.lasso regression uses subset selection of features

3. both use subset selection of features

4. none of above

100 people are at a party. The given data shows how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves. What is the probability that this guest is a man?

1.0.4

2.0.2

3.0.6

4.0.45

For the given weather data, calculate the probability of not playing.

1.0.4

2.0.64

3. 0.36

4. 0.5

For the given weather data, calculate the probability of playing.

1.0.4

2.0.64

3.0.29

4.0.75

How many coefficients do you need to estimate in a simple linear regression model (One independent variable)?

1.1

2.2

3.3

4.4
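A worked sketch of why simple linear regression needs exactly two estimated coefficients (intercept and slope), using the closed-form least-squares estimates on a made-up dataset:

```python
# y = b0 + b1*x: two coefficients. Closed-form least-squares fit.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]            # exactly y = 1 + 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of (x, y) divided by variance of x.
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
     / sum((x - mean_x) ** 2 for x in xs)
# Intercept: forces the line through the mean point.
b0 = mean_y - b1 * mean_x
```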

Hyperplanes are ________ boundaries that help classify the data points.

1.usual

2.decision

3.parallel

4.None of these

In reinforcement learning, this feedback is usually called ________.

1.overfitting

2.overlearning

3.reward

4.None of the above

In the syntax of the linear model lm(formula, data, ...), data refers to a

1.matrix

2.vector

3.array

4.list

In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ________.

1.deep learning

2.machine learning

3.reinforcement learning

4.unsupervised learning

In which of the following is each categorical label first turned into a positive integer and then transformed into a vector where only one feature is 1 while all the others are 0?

1. LabelEncoder class

2. DictVectorizer

3. LabelBinarizer class

4. FeatureHasher

Let's say you are working with categorical feature(s), and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset?

1.all categories of categorical variable are not present in the test dataset.

2. frequency distribution of categories is different in train as compared to the test dataset.

3.train and test always have same distribution.

4. both a and b

Reinforcement learning is particularly efficient when:

1. the environment is not completely deterministic

2. it's often very dynamic

3. it's impossible to have a precise error measure

4. All of the above

scikit-learn also provides functions for creating dummy datasets from scratch:

1.make_classification()

2.make_regression()

3. make_blobs()

4.All of the above
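All three factory functions can be exercised in a few lines; the sample sizes and feature counts below are arbitrary.

```python
# Each function builds a synthetic dataset from scratch:
# classification labels, regression targets, or Gaussian blobs.
from sklearn.datasets import make_blobs, make_classification, make_regression

Xc, yc = make_classification(n_samples=50, n_features=4, random_state=0)
Xr, yr = make_regression(n_samples=50, n_features=3, random_state=0)
Xb, yb = make_blobs(n_samples=50, centers=2, random_state=0)
```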

Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-square error on this data. You found that the correlation coefficient of one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?

1.relation between the x1 and y is weak

2.relation between the x1 and y is strong

3. relation between the x1 and y is neutral

4. correlation can't judge the relationship

Suppose we fit Lasso regression to a data set which has 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?

1. it is more likely for x1 to be excluded from the model

2. it is more likely for x1 to be included in the model

3. can't say

4. none of these

Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. Now suppose you want to build an SVM model which has a quadratic kernel function of polynomial degree 2 that uses slack variable C as one of its hyperparameters. What would happen when you use a very large value of C (C → infinity)?

1. we can still classify data correctly for the given setting of hyperparameter C

2. we cannot classify data correctly for the given setting of hyperparameter C

3. can't say

4.None of These

Suppose you are training a linear regression model. Now consider these points:

1. Overfitting is more likely if we have less data.
2. Overfitting is more likely when the hypothesis space is small.

Which of the above statement(s) are correct?

1.both are false

2.1 is false and 2 is true

3.1 is true and 2 is false

4.both are true

Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?

1. I will add more variables
2. I will start introducing polynomial degree variables
3. I will remove some variables

1. 1 and 2

2.2 and 3

3.1 and 3

4.1, 2 and 3

The ________ of the hyperplane depends upon the number of features.

1.dimension

2.classification

3.reduction

4.None of These

The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?

1.large datasets

2.small datasets

3.medium sized datasets

4.size does not matter

There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ________.

1. regression

2.accuracy

3. model-free

4.scalable

We can also compute the coefficients of linear regression with the help of an analytical method called the Normal Equation. Which of the following is/are true about the Normal Equation?

1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. No need to iterate

1.1 and 2

2.1 and 3.

3.2 and 3

4.1,2 and 3.
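A minimal sketch of the Normal Equation with NumPy; the synthetic data and the true coefficients are made up for illustration.

```python
# Normal Equation: theta = (X^T X)^(-1) X^T y.
# No learning rate, no iterations; but forming and solving X^T X grows
# roughly cubically in the number of features, hence slow when that
# number is very large.
import numpy as np

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 2))
X = np.hstack([np.ones((100, 1)), X_raw])   # prepend intercept column
y = X @ np.array([1.0, 2.0, -3.0])          # known true coefficients

theta = np.linalg.solve(X.T @ X, X.T @ y)   # solves the normal equation
```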

We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error?

1.increase

2.decrease

3.remain constant

4. can't say

We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen to bias and variance as you increase the size of the training data?

1. bias increases and variance increases

2. bias decreases and variance increases

3.bias decreases and variance decreases

4.bias increases and variance decreases

We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?

1. We do feature normalization so that the new feature will dominate others
2. Sometimes, feature normalization is not feasible in the case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM

1.1

2. 1 and 2

3.1 and 3

4. 2 and 3
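A sketch of normalizing features before an RBF-kernel SVM: the Gaussian kernel depends on Euclidean distances, so a feature on a much larger scale dominates them unless the features are standardized. The synthetic data and the deliberately inflated feature scale are illustrative.

```python
# Standardize features, then fit an RBF-kernel SVM.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
X[:, 0] *= 1000.0                      # blow up one feature's scale

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
accuracy = model.score(X, y)           # training accuracy
```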

What are the two methods used for calibration in supervised learning?

1. Platt calibration and isotonic regression

2. statistics and information retrieval

3.both (a) and (b)

4.None of These

What does learning exactly mean?

1. robots are programmed so that they can

2. a set of data is used to discover the

3. learning is the ability to change

4. it is a set of data used to discover the

What is the function of unsupervised learning?

1. find clusters of the data and find low-dimensional representations of the data

2. find interesting directions in data and find novel observations

3.interesting coordinates and correlations

4.all

When the C parameter is set to infinite, which of the following holds true?

1. the optimal hyperplane if exists, will be the one that completely separates the data

2. the soft-margin classifier will separate the data

3.both (a) and (b)

4.none of the above

Which of the following are models for feature extraction?

1.regression

2.classification

3.both (a) and (b)

4.None of the above

Which of the following assumptions do we make while deriving linear regression parameters?

1. The true relationship between dependent y and predictor x is linear
2. The model errors are statistically independent
3. The errors are normally distributed with a 0 mean and constant standard deviation
4. The predictor x is non-stochastic and is measured error-free

1.1,2 and 3.

2.1,3 and 4.

3.1 and 3.

4.all of above.

Which of the following is not supervised learning?

1. PCA

2.decision tree

3. naive bayesian

4. linear regression

Which of the following methods is used to find the optimal features for cluster analysis?

1.k-means

2.density-based spatial clustering

3. spectral clustering

4.All of the above

Which of the following sentence is FALSE regarding regression?

1.it relates inputs to outputs.

2. it is used for prediction.

3.it may be used for interpretation.

4.it discovers causal relationships.

