Re: Hw 6 q1
Thanks, everyone. I reviewed the lecture video and that cleared things up. :)

Re: Hw 6 q1
Does "H prime a subset of H" mean any subset of H?

Re: Hw 6 q1
Lecture 11 (overfitting) has been my favorite to date. I can't wait for Lectures 12 (regularization) and 13 (validation) to see how the issue of overfitting is tackled. I thought I was understanding the material; however, I read Q1 in HW6 and could not answer it outright. I realized that I am still somewhat confused and would appreciate some clarification.

I think one of my issues is how the flow of the lectures slides between situations where the target function is known and situations where it is not known (real-world cases). I am not stating this as a criticism; it is just that I still don't know how to clearly "read the signals" that we are moving from one regime (f known) to the other (f unknown). For example, to calculate variance and bias (deterministic noise), we need to know the target function. In real-world cases, however, we don't know the target function, so it would be impossible to calculate the variance and bias.

Q1 says that "f is fixed." This is a case where f is known. I am unclear about what it means for f to be fixed; would f not being fixed mean a "moving target"? Are variance and bias useful concepts in real-world cases, or are they only of an academic nature, perhaps as a stepping stone to better understand the underlying concepts of machine learning?

I hope that these questions come out sounding right and that I will receive some responses. This issue of overfitting has been the most enlightening thing that I have learned in this course, and I just wish to understand it really well. Thank you. Juan
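Juan's point that computing bias and variance requires knowing f can be made concrete in simulation. A minimal sketch, where the target f(x) = sin(πx), the hypothesis set h(x) = ax, and the two-point datasets are all assumptions in the spirit of the lecture's example rather than taken from the homework:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)   # the target -- knowable only in simulation

n_datasets, n_points = 5000, 2
test_x = np.linspace(-1, 1, 201)

# For each simulated dataset, learn h(x) = a*x from two noiseless points.
slopes = np.empty(n_datasets)
for d in range(n_datasets):
    x = rng.uniform(-1, 1, n_points)
    slopes[d] = (x @ f(x)) / (x @ x)          # least-squares slope through origin

g_bar = slopes.mean() * test_x                # the "average hypothesis"
bias = np.mean((g_bar - f(test_x)) ** 2)      # needs f -- Juan's point exactly
var = np.mean((np.outer(slopes, test_x) - g_bar) ** 2)

print(f"bias = {bias:.3f}, variance = {var:.3f}")
```

The `bias` line makes the dependence explicit: without f there is nothing to compare ḡ against, which is why bias and variance are analysis tools for simulated or known-target settings rather than quantities you compute on real data.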
Re: Hw 6 q1
Got it! Thank you, professor.

Re: Hw 6 q1
Just want to check if I have the idea right here:
Deterministic noise is the bias: the difference between the target function and the best the hypotheses in a given hypothesis set can do. If H' is smaller than H, then in general it will be less able to get close to the target, and the deterministic noise will be bigger; at the very least, it cannot be smaller. However, even though the deterministic noise is bigger for the smaller set, there is another effect that often works in the opposite direction: the larger hypothesis set may give us the dubious ability to fit the deterministic noise. Since we have more hypotheses to choose from, we may fit more of the noise with the larger hypothesis set and end up worse off. Does that sound right?
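The first half of this claim — that shrinking the hypothesis set cannot decrease the deterministic noise — can be checked numerically for nested sets. A minimal sketch, assuming polynomial hypothesis sets of increasing degree and a stand-in target f(x) = sin(πx), neither of which comes from the problem itself:

```python
import numpy as np

f = lambda x: np.sin(np.pi * x)    # stand-in target (assumption)
grid = np.linspace(-1, 1, 1001)
y = f(grid)

# Deterministic noise of H_d: error of the best degree-d polynomial in H_d,
# approximated here by a least-squares fit on a dense grid.
det_noise = {}
for degree in (1, 2, 5, 10):
    best = np.polyval(np.polyfit(grid, y, degree), grid)
    det_noise[degree] = np.mean((best - y) ** 2)
    print(f"degree {degree:2d}: deterministic noise = {det_noise[degree]:.2e}")
```

Because the sets are nested, the printed errors never increase with degree. The second half of the post is the catch: this monotonicity is about the *best* hypothesis in each set, while with finite data the hypothesis we actually pick from the larger set may chase the deterministic noise and do worse out of sample.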
Re: Hw 6 q1
This is what made me think, when I first saw this issue, that it was necessary to have some knowledge about the distribution of the possible target functions in order to assess the quality of a particular machine learning algorithm for function approximation in a real application. However, I now believe that cross-validation gives an objective way of studying out-of-sample performance for function approximation, one that should allow probabilistic conclusions roughly analogous to Hoeffding's inequality. (I am familiar with this technique from the optimization of hyperparameters when using SVMs.)

One of the great things about doing this course is getting to grips with issues like this. In fact, I was using the C hyperparameter without really knowing what it was before we got to regularization in the lectures! I hope I've got the right end of the stick now. :)
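The hyperparameter selection described above can be sketched without any SVM machinery. A minimal example, using ridge regression's regularization strength λ as a stand-in for the C hyperparameter mentioned in the post; the synthetic data, the polynomial features, and the λ grid are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (assumption): noisy samples of a target unknown to the learner.
x = rng.uniform(-1, 1, 40)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=x.size)

def design(x, degree=8):
    """Polynomial feature matrix (columns x^degree ... x^0)."""
    return np.vander(x, degree + 1)

def ridge_fit(X, y, lam):
    """Regularized least squares: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(x, y, lam, k=5):
    """k-fold cross-validation error for one regularization strength."""
    folds = np.array_split(rng.permutation(x.size), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(design(x[train]), y[train], lam)
        errs.append(np.mean((design(x[test]) @ w - y[test]) ** 2))
    return np.mean(errs)

lams = [10.0 ** p for p in range(-4, 3)]
scores = {lam: cv_error(x, y, lam) for lam in lams}
best = min(scores, key=scores.get)
print("CV errors:", {lam: round(s, 3) for lam, s in scores.items()})
print("selected lambda:", best)
```

The point matching the post: the selection uses only held-out error, never the target function, which is why cross-validation gives an objective handle on out-of-sample performance in real applications.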
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.