LFD Book Forum  

#1 | 02-13-2013, 08:49 PM
vikasatkin (Caltech; joined Sep 2011; 39 posts)

Discussion of Lecture 11 "Overfitting"

Links: [Lecture 11 slides] [all slides] [Lecture 11 video]

Question: (Slide 10/23) It seems that this situation is not typical, because the data points are clustered together. Shouldn't we space the points evenly? Would we get a different result in that case?

Answer: To make sure that the general result is not a fluke, the experiment was repeated multiple times for each value of the noise level and each order of the polynomial. In each run the points were chosen independently according to the uniform distribution. You can see the averaged result on slide 13/23. You may notice some coincidences in the example on slide 10/23, but such coincidences average out over the many runs.

So the short answer is: you may interpret the figure on slide 10/23 as an illustration and the figure on slide 13/23 as the final result.
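
For concreteness, here is a minimal Python sketch of that averaging protocol. The function run_experiment is a hypothetical placeholder for a single run of the experiment (generate a target and a dataset, fit the models, return the measured quantity), and the parameter grids are only illustrative, not the ones used for the slides.

[CODE]
import numpy as np

def average_over_runs(run_experiment, Q_f_values, sigma2_values, N, num_runs=1000):
    """Average the outcome of run_experiment over many independent runs
    for each (Q_f, sigma^2) combination."""
    rng = np.random.default_rng(0)
    results = np.zeros((len(Q_f_values), len(sigma2_values)))
    for i, Q_f in enumerate(Q_f_values):
        for j, sigma2 in enumerate(sigma2_values):
            # Each run draws a fresh target and a fresh dataset (points chosen
            # independently and uniformly), so single-run coincidences average out.
            runs = [run_experiment(Q_f, sigma2, N, rng) for _ in range(num_runs)]
            results[i, j] = np.mean(runs)
    return results
[/CODE]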
#2 | 02-13-2013, 09:00 PM
vikasatkin (Caltech; joined Sep 2011; 39 posts)

Discussion of Lecture 11 "Overfitting"

Question: (Slide 11/23) How did you generate the polynomials? How did you choose the coefficients?

Answer: Here is a technical description of the process of generating the target function and the dataset (which may be useful if you want to reproduce the pictures from slide 13/23). It is actually described in the "Learning From Data" book on p. 123 (Section 4.1.2, "Catalysts for Overfitting").

The process of generating the target function depends on two parameters: Q_f (the degree of the generated polynomial) and \sigma^2 (the noise level). Of course, you also need N, the number of points in the dataset.

1. Take the Legendre polynomials P_0,\dots,P_{Q_f}. Note that they are normalized by their value at x=1 (i.e. P_q(1)=1), not by their mean square.
2. Choose coefficients a_0,\dots,a_{Q_f} independently according to the standard normal distribution.
3. Generate N points x_1,\dots,x_N, picking them independently and uniformly at random from [-1,1].
4. For every point x_i generate the noise \epsilon(x_i); the size of the noise is controlled by the noise level \sigma^2.

The target is given by y = c\sum_{q=0}^{Q_f} a_q P_q(x) + \epsilon(x). Here c is a normalization constant that depends only on Q_f. It is chosen so that the mean square value of f(x) = c\sum_{q=0}^{Q_f} a_q P_q(x) equals 1, where the mean is taken over both x and the random choices made during this process: \mathbb{E}_{a,x}\left[(f(x))^2\right] = 1. One can compute that
c = \left( \sum_{q=0}^{Q_f} \frac{1}{2q+1} \right)^{-1/2},
since the a_q are independent with zero mean and unit variance, and \mathbb{E}_x\left[(P_q(x))^2\right] = \frac{1}{2q+1} for x uniform on [-1,1].
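
If you want to reproduce this in code, below is a minimal Python/NumPy sketch of the generation process. It follows the four steps and the formula for c above; the one added assumption is that the noise \epsilon(x_i) is drawn i.i.d. from a Gaussian with mean 0 and variance \sigma^2.

[CODE]
import numpy as np
from numpy.polynomial.legendre import legval

def generate_dataset(Q_f, sigma2, N, rng=None):
    """Generate a random degree-Q_f target and a noisy dataset of N points.

    Assumes the noise epsilon(x_i) is i.i.d. Gaussian with mean 0 and
    variance sigma2 (the noise-level parameter).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Step 2: coefficients a_0, ..., a_{Q_f} from the standard normal distribution.
    a = rng.standard_normal(Q_f + 1)

    # Normalization constant c, chosen so that E_{a,x}[f(x)^2] = 1.
    c = np.sum(1.0 / (2 * np.arange(Q_f + 1) + 1)) ** -0.5

    # Step 3: N inputs drawn independently and uniformly from [-1, 1].
    x = rng.uniform(-1.0, 1.0, size=N)

    # f(x) = c * sum_q a_q P_q(x); legval evaluates the Legendre expansion
    # with the standard normalization P_q(1) = 1.
    f = c * legval(x, a)

    # Step 4: additive noise (assumed Gaussian with variance sigma2).
    y = f + np.sqrt(sigma2) * rng.standard_normal(N)
    return x, y, a, c

if __name__ == "__main__":
    # Quick Monte Carlo check of the normalization: with no noise, the average
    # of y^2 = f(x)^2 over many draws of a and x should be close to 1.
    rng = np.random.default_rng(0)
    means = [np.mean(generate_dataset(Q_f=10, sigma2=0.0, N=100, rng=rng)[1] ** 2)
             for _ in range(2000)]
    print(np.mean(means))  # expected to be close to 1
[/CODE]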