- Machine Learning - ML
- 2018
- PYQ
**Biju Patnaik University of Technology (BPUT)** - Information Technology Engineering
- B.Tech

Registration No : Total Number of Pages : 03
6th Semester Regular Examination 2017-18
MACHINE LEARNING
BRANCH : IT
Time : 3 Hours
Max Marks : 100
Q.CODE : C450
Answer Part-A, which is compulsory, and any four from Part-B.
The figures in the right-hand margin indicate marks.

Part – A (Answer all the questions)

Q1 Answer the following questions (multiple-choice or fill-in-the-blank type):

a) A feature F1 can take the values A, B, C, D, E and F, and represents the grade of students from a college. Which of the following statements is true in this case?
A) Feature F1 is an example of a nominal variable
B) Feature F1 is an example of an ordinal variable
C) It doesn't belong to either of the above categories
D) Both of these

b) Which of the following is true when you choose the fraction of observations used for building the base learners in a tree-based algorithm?
A) Decreasing the fraction of samples used to build a base learner will result in a decrease in variance
B) Decreasing the fraction of samples used to build a base learner will result in an increase in variance
C) Increasing the fraction of samples used to build a base learner will result in a decrease in variance
D) Increasing the fraction of samples used to build a base learner will result in an increase in variance

c) How do you select the best hyperparameters in tree-based models?
A) Measure performance over the training data
B) Measure performance over the validation data
C) Both of these
D) None of these

d) Say you are using activation function X in the hidden layers of a neural network. At a particular neuron, for a given input, you get the output "0.0001". Which of the following activation functions could X represent?
A) ReLU
B) tanh
C) Sigmoid
D) None of these

e) Suppose we have a dataset which can be trained with 100% accuracy with the help of a decision tree of depth 6. Now consider the points below and choose the option based on them. (Note: all other hyperparameters are the same and other factors are not affected.)
1. Depth 4 will have high bias and low variance
2. Depth 4 will have low bias and low variance

A) Only 1
B) Only 2
C) Both 1 and 2
D) None of the above

B.Tech. PIT6J008
(2 x 10)

f) Which of the following will be true about k in k-NN in terms of bias?
A) When you increase k, the bias increases
B) When you decrease k, the bias increases
C) Can't say
D) None of these

g) Which of the following statements is true for k-NN classifiers?
A) The classification accuracy is better with larger values of k
B) The decision boundary is smoother with smaller values of k
C) The decision boundary is linear
D) k-NN does not require an explicit training step

h) If the sequence has the property that whenever two samples are in the same cluster at level K they remain together at all higher levels, then the sequence is said to be:
A) Hierarchically clustered
B) On-line clustered
C) Tree-clustered
D) All of the above

i) In K-means clustering, K indicates:
A) The number of cluster centres
B) The number of samples
C) Both A and B
D) None of these

j) The shape of a cluster is determined by:
A) The mean vector
B) The covariance matrix
C) Both A and B
D) None of these

Q2 Answer the following short-answer questions: (2 x 10)

a) Explain the terms pattern and classification, with examples.
b) What is a Naïve Bayes classifier?
c) What is a feature vector?
d) Differentiate between supervised learning and semi-supervised learning.
e) What is a classifier that uses linear discriminant functions called?
f) What is the difference between parametric and non-parametric pattern recognition methods?
g) What factors determine the centre and shape of a cluster?
h) What is the Hadamard transform? Explain.
i) What is a multilayer perceptron?
j) Explain the concept of criterion functions for clustering.

Part – B (Answer any four questions)

Q3 a) What are the important objectives of machine learning? Discuss different important examples of machine learning. (10)
b) Explain the mistake-bound model of learning for the Find-S and Halving algorithms. (5)
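An answer to Q3 b) would describe how Find-S starts from the most specific hypothesis and generalizes it only on positive examples. A minimal sketch in Python (the EnjoySport-style toy dataset below is an assumed illustration, not taken from the paper):

```python
def find_s(examples):
    """Find-S: return the most specific conjunctive hypothesis
    consistent with the positive examples.
    Each example is (attribute_tuple, label); '?' matches any value."""
    h = None
    for attrs, label in examples:
        if not label:
            continue                     # Find-S ignores negative examples
        if h is None:
            h = list(attrs)              # first positive example: copy verbatim
        else:                            # generalize attributes that disagree
            h = [hi if hi == a else "?" for hi, a in zip(h, attrs)]
    return h

# Toy data in the style of the classic EnjoySport example (illustrative).
data = [
    (("Sunny", "Warm", "Normal", "Strong"), True),
    (("Sunny", "Warm", "High",   "Strong"), True),
    (("Rainy", "Cold", "High",   "Strong"), False),
    (("Sunny", "Warm", "High",   "Strong"), True),
]
print(find_s(data))  # ['Sunny', 'Warm', '?', 'Strong']
```

Each positive example can force at most one generalization per attribute, which is why Find-S makes only a bounded number of mistakes on conjunctive targets.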
Q4 a) Describe in brief: (10)
1) Hypothesis space search in decision trees
2) Inductive bias in decision tree learning
b) Write a short note, with diagrams, on decision trees, which are nonlinear, nonmetric classifiers. (5)
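The hypothesis space search asked about in Q4 a) is typically driven by an information-gain criterion: at each node the tree greedily chooses the attribute whose split most reduces entropy. A short sketch of both quantities (the tiny weather dataset is an assumed illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in entropy from splitting on attribute index `attr`."""
    n = len(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Tiny illustrative dataset: (outlook, windy) -> play?
rows   = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["yes", "yes", "yes", "no"]
print(information_gain(rows, labels, 0))  # gain from splitting on outlook
```

Preferring high-gain attributes near the root is precisely the inductive bias of ID3-style learners: shorter trees with informative splits placed early are favoured over equally consistent deeper trees.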

Q5 a) Consider a multilayer feed-forward neural network. Enumerate and explain the steps in the back-propagation algorithm used to train the network. (10)
b) Draw the diagram of a single-layer, two-input, one-output perceptron. State its weight-update equation. (5)

Q6 a) Why is the back-propagation algorithm so called? What is the significance of its activation function in relation to its cost function? (10)
b) Explain Bayesian belief networks and conditional independence, with an example. (5)

Q7 a) State Bayes' rule and explain how it is applied to pattern classification problems. Show that in a multiclass classification task the Bayes decision rule minimizes the error probability. (10)
b) Give an account of radial basis functions. (5)

Q8 a) What is the difference between classification and clustering? State and explain various techniques used for clustering. (10)
b) Describe the basic steps that must be followed in order to develop a clustering task. (5)

Q9 a) Describe the k-nearest-neighbour algorithm. Why is it called instance-based learning? (10)
b) Give an account of the Gaussian mixture model. (5)
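The weight-update equation asked for in Q5 b) is the classic perceptron rule w ← w + η(t − o)x, with the bias treated as a weight on a constant input of 1. A minimal sketch that applies it to learn the AND function (the dataset, learning rate, and epoch count are illustrative assumptions):

```python
def train_perceptron(samples, lr=1.0, epochs=20):
    """Single-layer perceptron trained with the update rule
    w <- w + lr * (target - output) * x, bias as w[0] with input 1."""
    w = [0.0, 0.0, 0.0]                      # [bias, w1, w2]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            x = (1.0, x1, x2)                # prepend the constant bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (t - out) * xi for wi, xi in zip(w, x)]
    return w

# Learn the linearly separable AND function (toy task for illustration).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the updates stop after finitely many mistakes; on XOR the same loop would never converge, which motivates the multilayer networks of Q5 a).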
