
Supervised Learning MCQs

A) We can still classify the data correctly for the given setting of hyperparameter C
B) We cannot classify the data correctly for the given setting of hyperparameter C
C) Can't say
D) None of these
Solution: A. For large values of C, the penalty for misclassifying points is very high, so the decision boundary will separate the data perfectly if that is possible.

When we take the natural log of the odds function, we get a range of values from -∞ to ∞.

After that, the machine is provided with a new set of examples (data) so that the supervised learning algorithm analyses the …

This unsupervised clustering algorithm terminates when the mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration.
a) agglomerative clustering
b) conceptual clustering
c) K-Means clustering
d) expectation maximization
Solution: C

Random Forest is a black-box model; you will lose interpretability after using it. Now, the data has only 2 classes.

True-False: Linear regression is mainly used for regression.
A) TRUE
B) FALSE
Solution: A. Linear regression has dependent variables that take continuous values.

In such a case, training error will be zero, but test error may not be zero.

Which of the following evaluation metrics cannot be applied to logistic regression output to compare with the target?
A) AUC-ROC
B) Accuracy
C) Log loss
D) Mean squared error
Solution: D. Since logistic regression is a classification algorithm, its output is not a real-valued quantity, so mean squared error cannot be used to evaluate it.

Sometimes it is very useful to plot the data in lower dimensions.

Supervised learning infers a function from labeled training data consisting of a set of training examples.
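The role of C in the questions above can be made concrete with the soft-margin SVM objective. Below is a minimal NumPy sketch of that objective; the toy data, weights, and bias are made up purely for illustration.

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    """Soft-margin SVM objective: 0.5*||w||^2 + C * sum of hinge losses.
    As C grows, margin violations dominate the cost, pushing the optimizer
    toward a boundary that separates the data exactly (if possible)."""
    margins = y * (X @ w + b)               # signed margins y_i * f(x_i)
    hinge = np.maximum(0.0, 1.0 - margins)  # hinge loss per point
    return 0.5 * np.dot(w, w) + C * hinge.sum()

# Toy 1-D data with labels in {-1, +1}; one point violates the margin
# for this particular (w, b).
X = np.array([[-2.0], [-1.0], [1.0], [0.2]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = np.array([1.0]), 0.0

small_C = soft_margin_objective(w, b, X, y, C=0.1)
large_C = soft_margin_objective(w, b, X, y, C=1000.0)
print(small_C, large_C)  # the same margin violation costs far more at large C
```

With a huge C the single margin violation swamps the regularization term, which is why the "C set to infinite" question concludes that only a perfectly separating hyperplane remains optimal.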
Clustering. As we know, the syllabus of the upcoming final exams contains only the first four units of this course, so the MCQs below cover the first 4 units of the ML subject.

Unit 4. Supervised Learning: Supervised learning is a method in which the machine learns using labeled data.

In such a situation, which of the following options would you consider?

In scatter plot "a", you correctly classified all data points using logistic regression (the black line is a decision boundary). We can take examples like y = |x| or y = x^2.

The number of trees should be as large as possible.

Several sets of data related to each other are used to make decisions in machine learning algorithms. A multiple regression model has the form: y = 2 + 3x1 + 4x2.

Since the data is fixed and we are fitting more polynomial terms (parameters), the algorithm starts memorizing everything in the data.

Which of the following are real-world applications of the SVM?
A) Text and hypertext categorization
B) Image classification
C) Clustering of news articles
D) All of the above
Solution: D. SVMs are highly versatile models that can be used for practically all real-world problems, ranging from regression to clustering and handwriting recognition.

We build N regression models with N bootstrap samples. Since the data is fixed, the SVM doesn't need to search a big hypothesis space.

Machine learning techniques differ from statistical techniques in that machine learning methods
a) typically assume an underlying distribution for the data.
b) are better able to deal with missing and noisy data.
c) are not able to explain their behavior.
d) have trouble with large-sized datasets.
Solution: B

Supervised learning learns a mapping Y = f(X).

In such a case, is it right to conclude that V1 and V2 do not have any relation between them?
A) TRUE
B) FALSE
Solution: B. The Pearson correlation coefficient between two variables might be zero even when they have a relationship between them.
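The V1/V2 answer above (zero Pearson correlation despite a relationship) can be verified directly with the y = |x| example the text mentions. This is a small NumPy check; the sample values are illustrative.

```python
import numpy as np

# A symmetric sample: V1 = x and V2 = |x| are perfectly (deterministically)
# related, yet their Pearson correlation is exactly zero, because the
# positive and negative halves of x cancel in the covariance.
v1 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
v2 = np.abs(v1)

r = np.corrcoef(v1, v2)[0, 1]
print(r)  # 0.0 — zero linear correlation despite a deterministic relationship
```

Pearson correlation only measures *linear* association, which is exactly why a zero value does not rule out a nonlinear relationship.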
The adjusted multiple coefficient of determination accounts for
a) the number of dependent variables in the model
b) the number of independent variables in the model
c) unusually large predictors
d) none of the above
Solution: B. Adjusted R² penalizes R² for the number of independent variables in the model.

Supervised learning learns a function that maps an input to an output based on example input-output pairs.

PCA always performs better than t-SNE for smaller-sized data.

Both methods can be used for a regression task.
A) 1
B) 2
C) 3
D) 4
E) 1 and 4
Solution: E. Both algorithms are designed for classification as well as regression tasks.

Which of the following options is true?
A) Linear regression error values have to be normally distributed, but in the case of logistic regression this is not the case
B) Logistic regression error values have to be normally distributed, but in the case of linear regression this is not the case
C) Both linear regression and logistic regression error values have to be normally distributed
D) Neither linear regression nor logistic regression error values have to be normally distributed
Solution: A

Below are two different logistic models with different values for β0 and β1.

Sometimes, feature normalization is not feasible in the case of categorical variables.

In the previous question, after increasing the complexity you found that training accuracy was still 100%. What do you think will happen?
A) Increasing the complexity will overfit the data
B) Increasing the complexity will underfit the data
C) Nothing will happen, since your model was already 100% accurate
D) None of these
Solution: A. Increasing the complexity of the model would make the algorithm overfit the data.

It has a substantially high time complexity, of order O(n³).
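The adjusted R² answer above can be illustrated numerically. The formula below is standard; the particular values of R², n, and p are made up for the example.

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1):
    it penalizes R^2 for the number of independent variables p,
    given n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Same raw R^2 = 0.75 and n = 25 observations: more predictors, lower score.
few = adjusted_r2(0.75, 25, 2)
many = adjusted_r2(0.75, 25, 8)
print(few, many)  # the 8-predictor model is penalized harder
```

This is why adjusted R², unlike plain R², can decrease when a useless independent variable is added to the model.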
What will happen when you fit a degree-2 polynomial in linear regression (given that a degree-3 polynomial fits the data perfectly)?
A) There is a high chance that the degree-2 polynomial will overfit the data
B) There is a high chance that the degree-2 polynomial will underfit the data
C) Can't say
D) None of these
Solution: B. If a degree-3 polynomial fits the data perfectly, it is highly likely that a simpler model (a degree-2 polynomial) will underfit the data.

But testing accuracy increases if the feature is found to be significant.

Input and output data are labelled for classification to provide a learning basis for future data processing. So linear regression is sensitive to outliers.

A naive Bayes classifier assumes attributes are statistically independent of one another given the class value.

This subject gives knowledge from the introduction of machine learning terminologies and types like supervised, unsupervised, etc.

K-means is not deterministic and it also consists of a number of iterations.
a) True
b) False
Answer: a. K-means clustering produces a final estimate of cluster centroids that depends on the initialization.

Naive Bayes and Support Vector Machine.

When the C parameter is set to infinite, which of the following holds true?
A) The optimal hyperplane, if it exists, will be the one that completely separates the data
B) The soft-margin classifier will separate the data
C) None of the above
Solution: A. At such a high level of misclassification penalty, the soft margin will not hold, as there will be no room for error.

For a multiple regression model, SST = 200 and SSE = 50. C is also simply referred to as the cost of misclassification.

Individual trees are built on the full set of observations.
A) 1 and 3
B) 1 and 4
C) 2 and 3
D) 2 and 4
Solution: A. Random forest is based on the bagging concept: it considers a fraction of the samples and a fraction of the features for building the individual trees.

A kernel function maps low-dimensional data to a high-dimensional space.
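The underfitting claim in the degree-2 question can be demonstrated numerically: fit both degrees to data generated by a cubic and compare training errors. The data here is synthetic and purely illustrative.

```python
import numpy as np

# Hypothetical data generated by a cubic signal plus small noise:
# a degree-3 fit recovers it almost exactly, while a degree-2 fit
# leaves a large residual error (underfitting).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = x**3 - 2 * x + rng.normal(scale=0.1, size=x.size)

def fit_error(degree):
    """Mean squared training error of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return np.mean(residuals**2)

print(fit_error(2), fit_error(3))  # degree 2 leaves a much larger error
```

The degree-2 model simply cannot represent the odd cubic component, so no choice of coefficients removes that residual: the textbook signature of underfitting.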
Now, you are using ridge regression with penalty x. Choose the option which best describes the bias.
A) In case of very large x, bias is low
B) In case of very large x, bias is high
C) We can't say anything about the bias
D) None of these
Solution: B. If the penalty is very large, the model is less complex, and therefore the bias would be high.

Which of the following evaluation metrics can be used to evaluate a model while modeling a continuous output variable?
A) AUC-ROC
B) Accuracy
C) Log loss
D) Mean squared error
Solution: D. Since linear regression gives output as continuous values, we use the mean squared error metric to evaluate the model's performance.

Another name for an output attribute.
a) predictive variable
b) independent variable
c) estimated variable
d) dependent variable
Solution: D. The output attribute is the dependent variable.

Suppose you are using a bagging-based algorithm, say a Random Forest, in model building.

In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). It contains a model that is able to predict with the help of a labeled dataset.

What is reinforcement learning?
a) All data is unlabelled and the algorithms learn the inherent structure from the input data
b) All data is labelled and the algorithms learn to predict the output from the input data
c) It is a framework for learning where an agent interacts with an environment and receives a reward for each interaction
d) Some data is labelled but most of it is unlabelled, and a mixture of supervised and unsupervised techniques can be used
Solution: C

What does this (negative correlation) value tell you?
a) The attributes are not linearly related.
b) As the value of one attribute increases, the value of the second attribute also increases.
c) As the value of one attribute decreases, the value of the second attribute increases.
d) The attributes show a curvilinear relationship.
Solution: C
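The ridge-penalty question above can be seen directly in the closed-form ridge solution: a large penalty shrinks the coefficients toward zero, producing a simpler, higher-bias model. The toy data below (true slope 2) is made up for illustration.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge regression: beta = (X^T X + lam*I)^{-1} X^T y.
    Uses np.linalg.solve rather than an explicit inverse for stability."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy data whose true slope is 2.0 (illustrative only)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 2.0 * X[:, 0]

small = ridge_coefficients(X, y, lam=0.0)[0]
large = ridge_coefficients(X, y, lam=1000.0)[0]
print(small, large)  # zero penalty recovers 2.0; a huge penalty shrinks it
```

With the coefficient forced toward zero, the model systematically underestimates the true relationship on every input, which is exactly what "high bias" means.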
The attributes have 3, 2, 2, and 2 possible values each.

The third model is overfitting more as compared to the first and second. Now, say for training one time in a one-vs-all setting, the SVM takes 10 seconds.

[True-False] Standardisation of features is required before training a logistic regression.
A) TRUE
B) FALSE
Solution: B. Standardization isn't required for logistic regression.

A kernel function maps low-dimensional data to a high-dimensional space.

Can a logistic regression classifier do a perfect classification on the below data? It is robust to outliers.

The process of forming general concept definitions from examples of concepts to be learned.
a) Deduction
b) Abduction
c) Induction
d) Conjunction
Solution: C

The second model is more robust than the first and third because it will perform best on unseen data.

We can also compute the coefficients of linear regression with the help of an analytical method called the "Normal Equation".

Which of the following is applied on a warehouse? … determine a best set of input attributes for supervised learning; evaluate the likely performance of a supervised learner model.

Simple regression assumes a __________ relationship between the input attribute and the output attribute.
a) linear
b) quadratic
c) reciprocal
d) inverse
Solution: A

Consider V1 as x and V2 as |x|. Supervised learning is the type of machine learning in which the response variable is known. Unsupervised learning is a type of machine learning …
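The "Normal Equation" mentioned above solves linear regression analytically, with no iterative optimization. A minimal NumPy sketch, using made-up data with intercept 1 and slope 2:

```python
import numpy as np

# Normal Equation: beta = (X^T X)^{-1} X^T y.
# Toy data (illustrative): y = 1 + 2x, with a column of ones for the intercept.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
X = np.column_stack([np.ones_like(x), x])

# Solving the linear system is preferred over forming an explicit inverse.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # recovers intercept ≈ 1.0 and slope ≈ 2.0
```

Because this is a closed-form solution, there is no learning rate to tune, unlike gradient descent; the trade-off is the cost of solving the system, which scales poorly with the number of features.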
Which of the following algorithms is not an example of an ensemble learning algorithm?
A) Random Forest
B) AdaBoost
C) Extra Trees
D) Gradient Boosting
E) Decision Trees
Solution: E. A decision tree doesn't aggregate the results of multiple trees, so it is not an ensemble algorithm.

We don't have to choose the learning rate.

As part of DataFest 2017, we organized various skill tests so that data scientists can assess themselves on these critical skills.

The class has 3 possible values.

Random Forest is used for regression whereas Gradient Boosting is used for classification tasks. In bagging trees, individual trees are independent of each other.

Note: You can use only the X1 and X2 variables, where X1 and X2 can take only two binary values (0, 1).
A) TRUE
B) FALSE
C) Can't say
D) None of these
Solution: B. Logistic regression only forms a linear decision surface, but the examples in the figure are not linearly separable.

Which of the following methods do we use to best fit the data in logistic regression?
A) Least squared error
B) Maximum likelihood
C) Jaccard distance
D) Both A and B
Solution: B. Logistic regression uses the maximum likelihood estimate for training.

But testing accuracy increases if the feature is found to be significant.

A higher-degree (right graph) polynomial might have a very high accuracy on the training population but is expected to fail badly on the test dataset. Suppose you have the same distribution of classes in the data.

Supervised learning differs from unsupervised clustering in that supervised learning requires
a) at least one input attribute.
b) input attributes to be categorical.
c) at least one output attribute.
d) output attributes to be categorical.
Solution: C. Supervised learning requires a labeled output attribute to learn from.
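The bagging idea behind Random Forest (bootstrap samples, independent models, averaged predictions) can be sketched in a few lines. The "models" below are deliberately trivial mean predictors over bootstrap samples, and the data is synthetic; this is only meant to show the resample-then-average mechanic.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic training targets centered near 5.0 (illustrative only)
y_train = rng.normal(loc=5.0, scale=2.0, size=200)

def bagged_prediction(y, n_models=50):
    """Bagging sketch: each bootstrap sample trains one (trivial) model,
    and the ensemble averages the individual predictions."""
    preds = []
    for _ in range(n_models):
        sample = rng.choice(y, size=y.size, replace=True)  # bootstrap sample
        preds.append(sample.mean())  # stand-in for a fitted model's prediction
    return np.mean(preds)

estimate = bagged_prediction(y_train)
print(estimate)  # close to the sample mean of y_train
```

Because each tree (here, each mean predictor) sees an independent bootstrap sample, averaging them reduces variance, which is the core reason bagging-based ensembles like Random Forest are more stable than a single tree.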
The possibility of overfitting exists, as the criteria used for training the …

For a low cost, you aim for a smooth decision surface, and for a higher cost, you aim to classify more points correctly. What do you conclude after seeing this visualization?

What can be said about employee salary and years worked?
a) There is no relationship between salary and years worked.
b) Individuals that have worked for the company the longest have higher salaries.
c) Individuals that have worked for the company the longest have lower salaries.
d) The majority of employees have been with the company a long time.
e) The majority of employees have been with the company a short period of time.
Solution: B

Suppose you applied a logistic regression model on given data and got a training accuracy X and a testing accuracy Y.

Logistic regression is a ________ regression technique that is used to model data having a ________ outcome.
a) linear, numeric
b) linear, binary
c) nonlinear, numeric
d) nonlinear, binary
Solution: D

The multiple coefficient of determination is
a) 0.25
b) 4.00
c) 0.75
d) none of the above
Solution: C. With SST = 200 and SSE = 50, R² = (SST − SSE) / SST = 150 / 200 = 0.75.

This clustering algorithm merges and splits nodes to help modify nonoptimal partitions.
a) agglomerative clustering
b) expectation maximization
c) conceptual clustering
d) K-Means clustering
Solution: C. Conceptual clustering systems (e.g. COBWEB) use merge and split operators to recover from nonoptimal partitions.

This clustering algorithm initially assumes that each data instance represents a single cluster.
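The coefficient-of-determination question above reduces to a one-line calculation from the sums of squares given earlier in the quiz (SST = 200, SSE = 50):

```python
# Coefficient of determination from the question's sums of squares:
# R^2 = 1 - SSE / SST = (SST - SSE) / SST.
SST = 200.0  # total sum of squares
SSE = 50.0   # sum of squared errors (residuals)

r_squared = 1.0 - SSE / SST
print(r_squared)  # 0.75
```

So the model explains 75% of the variance in the dependent variable, matching option c.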
For data points to be in a cluster, they must be within a distance threshold of a core point.

Question Context 37-38: Suppose you got a situation where you find that your linear regression model is underfitting the data.

The most popularly used dimensionality reduction algorithm is Principal Component Analysis (PCA).

Suppose the above decision boundaries were generated for different values of regularization. What do you expect will happen with bias and variance as you increase the size of the training data?
A) Bias increases and variance increases
B) Bias decreases and variance increases
C) Bias decreases and variance decreases
D) Bias increases and variance decreases
E) Can't say
Solution: D. As we increase the size of the training data, the bias would increase while the variance would decrease.

1. A kernel function maps low-dimensional data to a high-dimensional space. 2. It's a similarity function.
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C. Both of the given statements are correct.

Choose the options that are correct regarding machine learning (ML) and artificial intelligence (AI): (A) ML is an alternate way of programming intelligent machines.

A) Underfitting
B) Nothing, the model is perfect
C) Overfitting
Solution: C. If we're achieving 100% training accuracy very easily, we need to check whether we're overfitting our data.

Here, β0 is the intercept and β1 is the coefficient.
A) β1 for green is greater than black
B) β1 for green is lower than black
C) β1 for both models is the same
D) Can't say
Solution: B. β0 = 0, β1 = 1 for the black curve, and β0 = 0, β1 = −1 for the green curve.

Context 58-60: Below are the three scatter plots (A, B, C, left to right) and hand-drawn decision boundaries for logistic regression.

Supervised learning is the basis of deep learning.

14) Following is an example of active learning:
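The β0/β1 question above, and the earlier note that the log-odds range over -∞ to ∞, both come down to the shape of the logistic function. A small sketch; the specific x value is arbitrary:

```python
import math

def logistic(x, beta0, beta1):
    """Logistic model: P(y=1|x) = 1 / (1 + exp(-(beta0 + beta1 * x))).
    The linear term beta0 + beta1*x is the log-odds, which ranges over
    (-inf, inf); flipping the sign of beta1 mirrors the S-curve."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

black = logistic(2.0, beta0=0.0, beta1=1.0)   # rising curve (beta1 = 1)
green = logistic(2.0, beta0=0.0, beta1=-1.0)  # falling curve (beta1 = -1)
print(round(black, 3), round(green, 3))
```

At the same x, the black model (β1 = 1) outputs a probability above 0.5 while the green model (β1 = -1) outputs one below 0.5, which is how the two drawn curves are told apart.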
