- **Homework 6**
(*http://book.caltech.edu/bookforum/forumdisplay.php?f=135*)

- **Discussion of Lecture 11 "Overfitting"**
(*http://book.caltech.edu/bookforum/showthread.php?t=3995*)

**Discussion of Lecture 11 "Overfitting"**

Links: [Lecture 11 slides] [all slides] [Lecture 11 video]

**Question:** (Slide 10/23) This situation does not seem typical, because the data points are clumped together. Shouldn't we space the points evenly? Would we get a different result in that case?

**Answer:** To make sure that the general result is not a fluke, the experiment was repeated many times for each value of the noise level and each order of the polynomial. In each run the points were chosen independently according to the uniform distribution. You can see the aggregated result on slide 13/23. The example on slide 10/23 may contain some coincidences, but they were averaged out.

So the short answer is: you may interpret the figure on slide 10/23 as an illustration, and slide 13/23 as the final result.
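The point about clumping versus averaging can be illustrated with a small simulation (a sketch of the idea, not the lecture's actual code, with names of my own choosing): a single uniform draw of a few points often looks uneven, but bin counts averaged over many independent draws come out nearly flat.

```python
import random

def bin_counts(points, bins=5, lo=-1.0, hi=1.0):
    """Count how many points fall into each of `bins` equal sub-intervals."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in points:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    return counts

random.seed(0)
N, runs, bins = 15, 10000, 5

# A single draw of N uniform points often looks "clumped" ...
single = bin_counts([random.uniform(-1, 1) for _ in range(N)], bins)

# ... but averaging the bin counts over many independent draws
# gives close to N/bins points per bin.
totals = [0] * bins
for _ in range(runs):
    draw = [random.uniform(-1, 1) for _ in range(N)]
    for i, c in enumerate(bin_counts(draw, bins)):
        totals[i] += c
average = [t / runs for t in totals]
```

Any one `single` can be quite lopsided, while `average` sits near `N / bins` in every bin, which is why repeating the experiment and averaging (slide 13/23) washes out the coincidences of any single draw (slide 10/23).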

**Discussion of Lecture 11 "Overfitting"**

**Question:** (Slide 11/23) How did you generate the polynomials? How did you choose the coefficients?

**Answer:** Here is a technical description of the process of generating the target function and the dataset (which may be useful if you want to reproduce the pictures from slide 13/23). It is actually described in the "Learning From Data" book on p. 123 (section 4.1.2, "Catalysts for Overfitting").

The process of generating the target function depends on two parameters: $Q_f$ (the degree of the generated polynomial) and $\sigma$ (the noise level). Of course, you also need $N$, the number of points in the dataset.

1. Take the Legendre polynomials $L_0, L_1, \dots, L_{Q_f}$. Note that they are normalized according to their value at $x = 1$ (i.e. $L_q(1) = 1$), not their average square.
2. Choose coefficients $a_0, a_1, \dots, a_{Q_f}$ independently according to the standard normal distribution.
3. Generate $N$ points $x_1, \dots, x_N$ (pick them randomly from $[-1, 1]$, independently of each other).
4. For every point $x_n$ generate the noise $\epsilon_n \sim \mathcal{N}(0, 1)$. The target is given by $y_n = f(x_n) + \sigma \epsilon_n$, where $f(x) = \alpha \sum_{q=0}^{Q_f} a_q L_q(x)$. Here $\alpha$ is a normalization constant, which depends only on $Q_f$. It is chosen in such a way that the mean square value of $f$ is equal to 1 (the mean with respect to both $x$ and the choices made during this process): $\mathbb{E}_{a,x}\!\left[f^2\right] = 1$. One can compute that $\alpha = \left(\sum_{q=0}^{Q_f} \frac{1}{2q+1}\right)^{-1/2}$.
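The four steps above can be sketched in plain Python (a minimal illustration, assuming the recipe as described; function and variable names are my own). The Legendre polynomials are evaluated with the standard three-term recurrence, and the normalization constant uses the fact that the mean of $L_q^2$ over the uniform distribution on $[-1, 1]$ is $1/(2q+1)$.

```python
import math
import random

def legendre(q, x):
    """Evaluate the Legendre polynomial L_q at x via the recurrence
    (k+1) L_{k+1}(x) = (2k+1) x L_k(x) - k L_{k-1}(x),
    normalized so that L_q(1) = 1."""
    if q == 0:
        return 1.0
    p_prev, p = 1.0, x  # L_0, L_1
    for k in range(1, q):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def make_dataset(Qf, sigma, N, rng=random):
    """Generate a random target f and a noisy dataset of N points,
    following the recipe from LFD section 4.1.2."""
    # Step 2: coefficients a_q chosen i.i.d. from the standard normal.
    a = [rng.gauss(0.0, 1.0) for _ in range(Qf + 1)]
    # Normalization: E_{a,x}[f^2] = alpha^2 * sum_q 1/(2q+1), so
    # alpha = (sum_q 1/(2q+1))^(-1/2) makes the mean square of f equal 1.
    alpha = 1.0 / math.sqrt(sum(1.0 / (2 * q + 1) for q in range(Qf + 1)))

    def f(x):
        return alpha * sum(a[q] * legendre(q, x) for q in range(Qf + 1))

    # Step 3: N points uniform on [-1, 1]; step 4: add noise sigma * eps_n.
    xs = [rng.uniform(-1.0, 1.0) for _ in range(N)]
    ys = [f(x) + sigma * rng.gauss(0.0, 1.0) for x in xs]
    return f, xs, ys
```

For example, `make_dataset(Qf=15, sigma=0.1, N=20)` reproduces the kind of target and dataset used in the slide 13/23 experiments; averaging the fit results over many such datasets gives the overfitting picture discussed above.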


The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.