12 points Q1) Write the full word “True” or “False”. If true, explain why in at most two sentences; if false, explain why or give a brief counterexample in at most two sentences.
- (True or False?) The logistic regression function glm (with family = binomial()) can be used to predict the class of a categorical variable with more than two categories.
False. glm with family = binomial() models only the probability of success versus failure, so it handles exactly two categories; a multinomial method is needed for more.
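To illustrate the distinction, here is a minimal sketch in Python (scikit-learn rather than R's glm; the synthetic three-class data and all variable names are hypothetical). scikit-learn's LogisticRegression fits a multinomial model when more than two classes are present, which is the analogue of a multinomial method rather than of glm(family = binomial()):

```python
# Sketch: a three-class problem that binomial logistic regression cannot fit directly.
# Assumes scikit-learn is installed; data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Three well-separated classes, two features each.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 50)

# With three labels, scikit-learn fits a multinomial model automatically,
# unlike glm(family = binomial()), which is restricted to success/failure.
clf = LogisticRegression().fit(X, y)
print(len(clf.classes_))  # three classes are modeled
```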
- (True or False?) Overfitting is more likely when the training data set is large.
False. The larger the training set, the more information we have, which leads to more appropriate modeling; overfitting is more likely with small training sets.
- (True or False?) The predictors in the k-variable model out of 𝑀 > 𝑘 identified by backward stepwise regression are the same k variables identified by forward stepwise regression.
False. The two procedures follow different search paths and need not select the same set of predictors.
- (True or False?) In the KNN algorithm, increasing the number of neighbors “k” will make the boundary more complex.
False. The fewer neighbors considered, the more complex the boundary; increasing k smooths the decision boundary.
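The effect of k on model complexity can be seen numerically: with k = 1 the classifier memorizes the training set, while a large k averages over many neighbors. A short sketch (scikit-learn; the noisy synthetic data is hypothetical):

```python
# Sketch: smaller k -> more complex boundary, shown via training-set fit.
# Assumes scikit-learn; data is synthetic with deliberately noisy labels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + rng.normal(scale=0.8, size=200) > 0).astype(int)  # noisy labels

# k = 1 memorizes every training point (most complex boundary, zero training error);
# k = 51 averages over many neighbors, giving a smoother, simpler boundary.
acc_k1 = KNeighborsClassifier(n_neighbors=1).fit(X, y).score(X, y)
acc_k51 = KNeighborsClassifier(n_neighbors=51).fit(X, y).score(X, y)
print(acc_k1, acc_k51)  # acc_k1 == 1.0; acc_k51 is lower because the boundary is smoother
```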
12 points Q2) Given the following density plots for the predictors Palmitic, Stearic, Oleic and Linoleic, fill in the blanks. Answers may not be repeated.
- a) Which predictor is the worst at distinguishing among the three regions 1, 2 and 3? __Stearic__
- b) Which predictor is the best at distinguishing regions 2 and 3 combined from region 1? __Palmitic__
- c) Which predictor is the best at distinguishing among the three regions 1, 2 and 3? __Oleic__
- d) Which predictor is the best at distinguishing regions 1 and 2 combined from region 3? __Linoleic__
16 points Q3) Part A (4 points): Consider the datasets data1 in figure 3(A) and data2 in figure 3(B).
- In each of these datasets there are two classes, ’+’ and ’o’.
- Each class has the same number of points.
- Each data point has two real-valued features, the X and Y coordinates.
For each of these datasets, draw the decision boundary that a Gaussian Naive Bayes classifier will learn, and describe the boundary function that best separates the two classes.
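As a numerical aid (the figures themselves are not reproduced here), the sketch below fits a Gaussian Naive Bayes classifier to hypothetical two-class data with equal class sizes, mirroring the setup above. With roughly equal per-class variances the learned boundary is approximately linear; unequal variances bend it into a quadratic curve.

```python
# Sketch: the kind of boundary Gaussian Naive Bayes learns.
# Assumes scikit-learn; the '+' and 'o' classes are simulated, equal-sized blobs.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X_plus = rng.normal(loc=[-2, 0], scale=1.0, size=(100, 2))   # class '+'
X_o = rng.normal(loc=[2, 0], scale=1.0, size=(100, 2))       # class 'o'
X = np.vstack([X_plus, X_o])
y = np.array([0] * 100 + [1] * 100)

gnb = GaussianNB().fit(X, y)
# Equal per-class variances -> an approximately linear (vertical) boundary
# midway between the two class means; unequal variances would make it quadratic.
print(gnb.score(X, y))
```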
12 points Q3) Part B: You are a reviewer for the International Mega-Conference on Algorithms for Radical Learning of Outrageous Stuff, and you read papers with the following experimental setups. Would you accept or reject each paper? Provide a one sentence justification. (This conference has short reviews.)
- accept/reject: “My algorithm is better than yours. Look at the training error rates!”
Reject. The small training errors may simply be the result of overfitting; comparison should be based on test error.
- accept/reject: “My algorithm is better than yours. My tuning parameter has 16 decimal places!”
Reject. A tuning-parameter value specified to 16 decimal places is tuned extremely finely to the data at hand, which suggests overfitting.
- accept/reject: “My algorithm is better than yours. My best KNN performance happened when k = 1, which is a very simple model!”
Reject. With k = 1, KNN has its most complex decision boundary, so the model is not simple.
- accept/reject: “My algorithm is better than yours. My model includes high-degree polynomials of predictors, even though the highest-degree terms are not statistically significant.”
Reject. Since the coefficients of the highest-degree terms are not significant, the polynomial should be simplified by dropping them.
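The overfitting theme running through these reviews can be shown numerically. The sketch below (plain NumPy; the quadratic data-generating curve and all names are hypothetical) fits polynomials of two degrees to the same noisy sample: the high-degree fit always achieves a lower training error, yet it chases the noise and typically tracks the underlying curve less well.

```python
# Sketch: low training error alone does not make a model better.
# Pure NumPy; synthetic data from a quadratic curve plus noise.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 30)
y = x**2 + rng.normal(scale=0.1, size=30)    # noisy training sample
x_grid = np.linspace(-1, 1, 200)
y_true = x_grid**2                           # noise-free underlying curve

def fit_errors(degree):
    """Return (training MSE, MSE against the true curve) for a polynomial fit."""
    coef = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coef, x) - y) ** 2)
    curve_mse = np.mean((np.polyval(coef, x_grid) - y_true) ** 2)
    return train_mse, curve_mse

train2, curve2 = fit_errors(2)     # matches the true model's complexity
train12, curve12 = fit_errors(12)  # over-flexible model

# The degree-12 fit is guaranteed a lower *training* error (nested least squares),
# but it typically fits the underlying curve worse than the simpler degree-2 model.
print(train2, train12)
```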