____ showed better performance than other approaches, even without a context-based model.

1.machine learning

2.deep learning

3.reinforcement learning

4.supervised learning

____ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.

1.removing the whole line

2. creating a sub-model to predict those features

3.using an automatic strategy to input them according to the other known values

4.All of the above

Gaussian Naïve Bayes Classifier is based on a ____ distribution

1.continuous

2.discrete

3.binary

4.None of These

In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ____ valid options.

1.1

2.2

3.3

4.4


____ can accept a NumPy RandomState generator or an integer seed.

1.make_blobs

2.random_state

3.test_size

4.training_size
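For reference, the parameter in question is scikit-learn's random_state, which accepts either an integer seed or a NumPy RandomState generator. A minimal sketch (dataset sizes here are arbitrary):

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

# Small synthetic dataset; random_state fixes the random generator seed,
# so repeated runs produce the same data and the same split.
X, y = make_blobs(n_samples=100, centers=3, random_state=42)

# random_state may be an int seed or a numpy RandomState instance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

print(X_train.shape, X_test.shape)  # (75, 2) (25, 2)
```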


For the given weather data, what is the probability that players will play if the weather is sunny?

1. 0.5

2. 0.26

3.0.73

4.0.6

If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for?

1.underfitting

2.nothing, the model is perfect

3.overfitting

4.None of These

If there is only a discrete number of possible outcomes (called categories), the process becomes a ____.

1.regression

2.classification

3.modelfree

4.categories


In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ____.

1.deep learning

2.machine learning

3.reinforcement learning

4.unsupervised learning

It's possible to specify whether the scaling process must include both mean and standard deviation using the parameters ____.

1. with_mean=true/false

2. with_std=true/false

3.both a & b

4.none of the mentioned
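As a sketch of how those parameters behave in scikit-learn's StandardScaler (the data values here are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# with_mean / with_std independently toggle centering (subtract mean)
# and scaling (divide by standard deviation).
scaler = StandardScaler(with_mean=True, with_std=True)
Xs = scaler.fit_transform(X)

print(np.allclose(Xs.mean(axis=0), 0.0))  # True: columns are centered
print(np.allclose(Xs.std(axis=0), 1.0))   # True: columns have unit std
```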

Multinomial Naïve Bayes Classifier is based on a ____ distribution

1.continuous

2.discrete

3.binary

4.None of these

Reinforcement learning is particularly efficient when ____.

1. the environment is not completely deterministic

2. it's often very dynamic

3. it's impossible to have a precise error measure

4.All of the above

scikit-learn also provides functions for creating dummy datasets from scratch:

1. make_classification()

2. make_regression()

3.make_blobs()

4.All of the above
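A brief sketch of the three dummy-dataset generators (all sizes chosen arbitrarily):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# Each helper builds a dummy dataset from scratch with a chosen shape.
Xc, yc = make_classification(n_samples=50, n_features=4, random_state=0)
Xr, yr = make_regression(n_samples=50, n_features=3, random_state=0)
Xb, yb = make_blobs(n_samples=50, centers=2, random_state=0)

print(Xc.shape, Xr.shape, Xb.shape)  # (50, 4) (50, 3) (50, 2)
```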

Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify?

1. the model would consider even far-away points from the hyperplane for modeling

2. the model would consider only the points close to the hyperplane for modeling

3. the model would not be affected by the distance of points from the hyperplane

4. none of the above

The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?

1.large datasets

2. small datasets

3. medium sized datasets

4.size does not matter

Which of the following sentences is FALSE regarding regression?

1. it relates inputs to outputs.

2. it is used for prediction.

3. it may be used for interpretation.

4. it discovers causal relationships.


100 people are at a party. The given data gives information about how many wear pink or not, and whether each is a man or not. Imagine a pink-wearing guest leaves; what is the probability of that guest being a man?

1.0.4

2.0.2

3.0.6

4. 0.45

Common deep learning applications include

1. image classification, real-time visual tracking

2.autonomous car driving, logistic optimization

3.bioinformatics, speech recognition

4.All of the above

During the last few years, many algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state.

1. logical

2.classical

3.classification

4.None of the above

For the given weather data, calculate the probability of playing.

1.0.4

2.0.36

3.0.36

4.0.5

For the given weather data, calculate the probability of not playing.

1.0.4

2.0.64

3. 0.36

4.0.5

Function used for linear regression in R is

1.lm(formula, data)

2.lr(formula, data)

3.lrm(formula, data)

4. regression.linear(formula, data)
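R's lm(formula, data) fits a linear model; for comparison, a rough Python analogue using scikit-learn (the toy data below is made up and lies exactly on a line):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data lying exactly on y = 1 + 2x.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Rough equivalent of R's lm(y ~ x): ordinary least squares fit.
model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_[0])  # ~1.0 and ~2.0
```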





In reinforcement learning, this feedback is usually called the ____.

1.overfitting

2.overlearning

3.reward

4. none of above

In the mathematical equation of linear regression Y = β1 + β2X + ε, (β1, β2) refers to ____.

1. (x-intercept, slope)

2. (slope, x-intercept)

3. (y-intercept, slope)

4. (slope, y-intercept)
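A worked check of the interpretation (β1 as the y-intercept, β2 as the slope), using NumPy on made-up points:

```python
import numpy as np

# Points generated from Y = beta1 + beta2*X with beta1 = 2.0, beta2 = 0.5.
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 2.0 + 0.5 * X

# np.polyfit returns coefficients from highest degree down: [slope, intercept].
beta2, beta1 = np.polyfit(X, Y, deg=1)
print(beta1, beta2)  # recovers the y-intercept 2.0 and the slope 0.5
```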


Let's say you are working with categorical feature(s), and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges might you face if you have applied OHE on a categorical variable of the train dataset?

1.all categories of categorical variable are not present in the test dataset.

2. frequency distribution of categories is different in train as compared to the test dataset.

3.train and test always have same distribution.

4.both a and b
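The unseen-category problem (option 1) can be demonstrated with scikit-learn's OneHotEncoder; handle_unknown="ignore" is one way to survive it (the color values below are made up):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train = np.array([["red"], ["green"], ["blue"]])
test = np.array([["red"], ["purple"]])  # "purple" never appears in training

# handle_unknown="ignore" encodes an unseen test category as all zeros
# instead of raising an error at transform time.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train)
out = enc.transform(test).toarray()
print(out)  # the row for "purple" is all zeros
```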


Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?

1. the relation between X1 and Y is weak

2. the relation between X1 and Y is strong

3. the relation between X1 and Y is neutral

4. correlation can't judge the relationship

Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda.

1. in case of very large lambda; bias is low, variance is low

2. in case of very large lambda; bias is low, variance is high

3. in case of very large lambda; bias is high, variance is low

4. in case of very large lambda; bias is high, variance is high
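The shrinking effect of a large lambda (called alpha in scikit-learn) can be seen directly; the synthetic data below is arbitrary:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X @ np.array([1.0, 2.0, 3.0, 4.0, 5.0]) + 0.1 * rng.randn(50)

# A very large penalty shrinks coefficients toward zero: a simpler,
# high-bias / low-variance model.
small = Ridge(alpha=0.01).fit(X, y)
large = Ridge(alpha=1e6).fit(X, y)
print(np.abs(large.coef_).max() < np.abs(small.coef_).max())  # True
```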

Techniques that involve the usage of both labeled and unlabeled data are called ____.

1. supervised

2. semi-supervised

3.unsupervised

4.none of the above

The cost parameter in the SVM means:

1. the number of cross-validations to be made

2. the kernel to be used

3. the tradeoff between misclassification and simplicity of the model

4.none of the above


The SVMs are less effective when:

1. the data is linearly separable

2. the data is clean and ready to use

3. the data is noisy and contains overlapping points

4. None of these

We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error?

1. increase

2. decrease

3. remain constant

4. can't say

We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen with bias and variance as you increase the size of training data?

1.bias increases and variance increases

2. bias decreases and variance increases

3. bias decreases and variance decreases

4.bias increases and variance decreases


We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that the new feature will dominate other features 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use the Gaussian kernel in SVM

1.1

2.1 and 2

3.1 and 3

4. 2 and 3
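Scaling before the RBF (Gaussian) kernel keeps one large-scale feature from dominating the kernel's distance computation; a common pattern is a pipeline (synthetic data, arbitrary settings):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# Standardize features before the RBF (Gaussian) kernel so no single
# feature dominates the kernel distances.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy (sanity check only)
```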


What do you mean by generalization error in terms of the SVM?

1. how far the hyperplane is from the support vectors

2. how accurately the SVM can predict outcomes for unseen data

3. the threshold amount of error in an SVM

4. None of the above

What is 'Overfitting' in Machine learning?

1. when a statistical model describes random error or noise instead of the underlying relationship

2. robots are programmed so that they can perform the task based on data they gather from sensors

3. while involving the process of learning, 'overfitting' occurs

4. a set of data is used to discover the potentially predictive relationship

What is a 'Test set'?

1. a test set is used to test the accuracy of the hypotheses generated by the learner.

2. it is a set of data used to discover the potentially predictive relationship.

3.both a & b

4.none of above

What is the function of 'Supervised Learning'?

1.classifications, predict time series, annotate strings

2.speech recognition, regression

3.both a & b

4. none of above

What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model 2. When lambda is 0, the model doesn't work like a linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity

1. 1 and 3

2.1 and 4

3.2 and 3

4.2 and 4
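Statement 1 can be checked directly: with lambda (alpha in scikit-learn) set to 0, Ridge reduces to ordinary least squares (random toy data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(1)
X = rng.randn(30, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.randn(30)

# alpha is scikit-learn's name for lambda; alpha=0 removes the penalty,
# so the coefficients coincide with ordinary least squares.
ols = LinearRegression().fit(X, y)
ridge0 = Ridge(alpha=0.0).fit(X, y)
print(np.allclose(ols.coef_, ridge0.coef_, atol=1e-6))  # True
```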


It is necessary to allow the model to develop a generalization ability and avoid a common problem called ____.

1.overfitting

2.overlearning

3.classification

4.regression

Which of the following are real world applications of the SVM?

1. text and hypertext categorization

2. image classification

3. clustering of news articles

4.all of the above

Which of the following is not supervised learning?

1. PCA

2. decision tree

3. naive Bayesian

4. linear regression

Which of the following method(s) does not have a closed-form solution for its coefficients?

1. ridge regression

2. lasso

3. both ridge and lasso

4. neither of them

Which of the following selects the best K high-score features?

1. SelectPercentile

2. FeatureHasher

3. SelectKBest

4.All of the above
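A sketch of SelectKBest in scikit-learn (score function and sizes chosen arbitrarily):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=100, n_features=10,
                           n_informative=3, random_state=0)

# SelectKBest keeps the K features with the highest univariate scores.
selector = SelectKBest(score_func=f_classif, k=3)
X_new = selector.fit_transform(X, y)
print(X_new.shape)  # (100, 3)
```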


