Tasks | Regression (week 1):

First, using the book review data, let’s see whether ratings can be predicted as a function of review length, or by using temporal features associated with a review.

1. (CSE158 only) What is the distribution of ratings and review lengths in the dataset? Report the number of 1-, 2-, 3-star (etc.) ratings, and show the relationship with length (e.g. via a scatterplot) (1 mark).

2. Train a simple predictor that estimates rating from review length, i.e.,
star rating ‘ 0 + 1  [review length in characters]

Report the values 0 and 1, and the Mean Squared Error of your predictor (on the entire dataset) (1 mark).

3. Extend your model to include (in addition to the length) features based on the time of the review. You can parse the time data as follows:

import dateutil.parser
t = dateutil.parser.parse(d['date_added'])
t.weekday(), t.year  # etc.

Using a one-hot encoding for the weekday and year, write down feature vectors for the first two examples (1 mark).
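One way to lay out such a feature vector (a sketch only: the year range, the date format string, and the choice to drop one category from each encoding are all assumptions; keeping every category alongside a bias term would make the features collinear):

```python
from datetime import datetime

# Assumed range of years present in the data (adjust after inspecting it).
years = [2015, 2016, 2017]

def feature(d):
    # stdlib equivalent of dateutil.parser.parse for this assumed format
    t = datetime.strptime(d['date_added'], '%a %b %d %H:%M:%S %z %Y')
    feat = [1]                                                    # bias term
    feat += [len(d['review_text'])]                               # review length
    feat += [1 if t.weekday() == w else 0 for w in range(1, 7)]   # Monday dropped
    feat += [1 if t.year == y else 0 for y in years[1:]]          # first year dropped
    return feat

# Hypothetical example record (July 7, 2017 is a Friday, weekday 4).
example = {'date_added': 'Fri Jul 07 09:15:49 -0700 2017',
           'review_text': 'loved it'}
```

Writing down `feature(dataset[0])` and `feature(dataset[1])` then answers the question.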

4. Train models that

• use the weekday and year values directly as features, i.e.,
star rating ‘ 0 + 1  [review length in characters] + 2  [t.weekday()] + 3  [t.year]

• use the one-hot encoding from Question 3. Report the MSE of each (1 mark).
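The first variant can be sketched as below (the `parsed_date` key is hypothetical; it assumes `date_added` has been parsed into a datetime beforehand). The one-hot variant is identical except that the feature function from Question 3 replaces `feature_direct`:

```python
from datetime import datetime
import numpy as np

# Toy records standing in for the real dataset.
dataset = [
    {'review_text': 'great', 'rating': 5,
     'parsed_date': datetime(2017, 7, 7)},
    {'review_text': 'not my kind of thing', 'rating': 2,
     'parsed_date': datetime(2016, 1, 4)},
    {'review_text': 'fine', 'rating': 4,
     'parsed_date': datetime(2015, 3, 2)},
]

def feature_direct(d):
    # weekday and year fed in as raw numbers, per the first bullet
    t = d['parsed_date']
    return [1, len(d['review_text']), t.weekday(), t.year]

X = np.array([feature_direct(d) for d in dataset], dtype=float)
y = np.array([d['rating'] for d in dataset], dtype=float)
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = float(np.mean((X @ theta - y) ** 2))
```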

5. Repeat the above question, but this time split the data into a training and test set. You should split the data randomly into 50%/50% train/test fractions. Report the MSE of each model separately on the training and test sets.
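The splitting step might look like the sketch below (toy `(length, rating)` pairs stand in for the real features; either feature function from Question 4 can be substituted):

```python
import random
import numpy as np

random.seed(0)                      # fixed seed so the split is repeatable
data = [(10, 3.0), (200, 4.0), (50, 5.0), (120, 2.0),
        (80, 4.0), (15, 1.0)]      # toy (length, rating) pairs
random.shuffle(data)
half = len(data) // 2
train, test = data[:half], data[half:]

def design(pairs):
    X = np.array([[1.0, length] for length, _ in pairs])
    y = np.array([rating for _, rating in pairs])
    return X, y

Xtr, ytr = design(train)
Xte, yte = design(test)
theta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)   # fit on train only
mse_train = float(np.mean((Xtr @ theta - ytr) ** 2))
mse_test  = float(np.mean((Xte @ theta - yte) ** 2))
```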

6. (CSE258 only) Show that for a trivial predictor, i.e., y = θ0, the best possible value of θ0 in terms of the Mean Absolute Error is the median of the label y. Hint: compute the derivative of the model’s MAE and solve for θ0.
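A sketch of the derivative argument (valid wherever θ0 does not coincide with one of the yi; a subgradient argument covers the remaining points):

```latex
\mathrm{MAE}(\theta_0) = \frac{1}{n}\sum_{i=1}^{n} |y_i - \theta_0|,
\qquad
\frac{\partial\,\mathrm{MAE}}{\partial \theta_0}
  = \frac{1}{n}\sum_{i=1}^{n} -\operatorname{sgn}(y_i - \theta_0)
  = \frac{\#\{i : y_i < \theta_0\} - \#\{i : y_i > \theta_0\}}{n}.
```

Setting the derivative to zero requires as many labels above θ0 as below it, which is exactly the condition satisfied by the median.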

Tasks | Classification (week 2):

In this question, using the beer review data, we’ll try to predict ratings (positive or negative) based on characteristics of beer reviews. Load the 50,000 beer review dataset, and construct a label vector by considering whether a review score is four or above, i.e.,

y = [d['review/overall'] >= 4 for d in dataset]

7. Fit a logistic regressor that estimates the binarized score from review length, i.e.,

p(rating is positive) ‘ (0 + 1  [length])

Using the class_weight='balanced' option, report the True Positive, True Negative, False Positive,
False Negative, and Balanced Error Rates of the predictor (1 mark).
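The metric bookkeeping can be sketched as follows. Here `pred` is a placeholder vector; in the actual solution it would come from a `sklearn.linear_model.LogisticRegression(class_weight='balanced')` model fit on the length feature:

```python
import numpy as np

# Placeholder labels and predictions standing in for the fitted model's output.
y    = np.array([1, 1, 1, 0, 0, 1, 0, 0])
pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

TP = int(np.sum((pred == 1) & (y == 1)))
TN = int(np.sum((pred == 0) & (y == 0)))
FP = int(np.sum((pred == 1) & (y == 0)))
FN = int(np.sum((pred == 0) & (y == 1)))

# Balanced Error Rate: 1 minus the average of the per-class accuracies.
TPR, TNR = TP / (TP + FN), TN / (TN + FP)
BER = 1 - 0.5 * (TPR + TNR)
```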

8. Plot the precision@K of your classifier for K ∈ {1, …, 10000} (i.e., the x-axis of your plot should be K, and the y-axis of your plot should be the precision@K) (1 mark).
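Precision@K amounts to sorting by confidence and taking the fraction of true positives among the top K. A sketch (the `scores` vector stands in for the classifier's confidence outputs, e.g. from `decision_function` or `predict_proba`):

```python
import numpy as np

# Placeholder labels and confidence scores.
y      = np.array([1, 0, 1, 1, 0, 1])
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])

order = np.argsort(-scores)          # most confident first

def precision_at_k(k):
    # fraction of positives among the k most confident predictions
    return float(np.mean(y[order[:k]]))

curve = [precision_at_k(k) for k in range(1, len(y) + 1)]
# plt.plot(range(1, len(y) + 1), curve) would then give the requested plot
```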

9. Our precision@K plot from Question 8 only measures precision with regard to the positive class. For this type of binary classification, we may be equally interested in the classifier’s accuracy for both the positive and negative classes. Recompute confidence scores for your classifier so that the ‘most confident’ predictions include either the most confident positive or the most confident negative predictions (i.e., probability closest to 1 or probability closest to zero).

The precision@K now measures whether the classifier has the correct label (either ‘positive’ or ‘negative’) among the K most confident entries. Report this precision@K for K ∈ {1, 100, 10000} and include a plot as in Question 8.
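One way to sketch the symmetric confidence score: rank by how far the predicted probability is from 0.5 (equivalently, by max(p, 1 − p)), and count a hit when the hard prediction matches the label (the `probs` vector is a placeholder for the model's p(positive) outputs):

```python
import numpy as np

# Placeholder labels and predicted positive-class probabilities.
y     = np.array([1, 0, 1, 0, 1])
probs = np.array([0.95, 0.05, 0.6, 0.45, 0.1])

conf  = np.maximum(probs, 1 - probs)    # closeness to either 0 or 1
pred  = (probs >= 0.5).astype(int)      # hard prediction
order = np.argsort(-conf)               # most confident (either class) first

def precision_at_k(k):
    # fraction of the k most confident entries whose label is predicted correctly
    top = order[:k]
    return float(np.mean(pred[top] == y[top]))
```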
