Thread: Hw 6 q1
#14, 05-10-2013, 04:49 AM
jcmorales1564
Re: Hw 6 q1

Lecture 11 (overfitting) has been my favorite to date. I can’t wait for Lectures 12 (regularization) and 13 (validation) to see how the issue of overfitting is tackled. I thought I was understanding the material, but then I read Q1 in HW6 and could not answer it outright. I realized that I am still somewhat confused and would appreciate some clarification.

I think one of my issues is how the lectures move between situations where the target function is known and situations where it is not known (real-world cases). I am not saying this as a criticism; it is just that I still don’t know how to clearly “read the signals” that we are moving from one regime (f known) to the other (f not known). For example, to calculate variance and bias (deterministic noise), we need to know the target function. However, in real-world cases we don’t know the target function, so it would be impossible to calculate the variance and bias.

In Q1, it says that “f is fixed”. I take this to be a case where f is known, but I am unclear about what it actually means for f to be fixed. Would not being fixed mean a “moving target”? Are variance and bias useful concepts in real-world cases, or are they only of an academic nature, perhaps a stepping stone to better understand the underlying concepts of machine learning?
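
To make my confusion concrete, here is a small sketch I put together (a toy setup of my own, loosely based on the f(x) = sin(pi*x) example from the bias-variance lecture, not anything from the homework). It estimates bias and variance by simulating many data sets from a known target f; the bias term (g_bar(x) - f(x))^2 contains f explicitly, which is exactly why I don't see how one would compute it when f is unknown, while the variance term only compares hypotheses to their own average.

[code]
# Toy sketch (my own assumptions): Monte Carlo estimate of bias and variance
# for the hypothesis set of lines h(x) = a*x + b, fit to N = 2 points drawn
# from a known target f(x) = sin(pi*x) on [-1, 1].
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.sin(np.pi * x)      # known target function
n_datasets = 10000                   # number of simulated data sets D
N = 2                                # points per data set

# Fit a line to each data set D and store its coefficients (a, b)
coeffs = np.empty((n_datasets, 2))
for i in range(n_datasets):
    x = rng.uniform(-1, 1, size=N)
    y = f(x)                          # noiseless labels from the known f
    coeffs[i] = np.polyfit(x, y, 1)   # least-squares line g^(D)(x) = a*x + b

# Evaluate every fitted line on a fine grid to take expectations over x
x_test = np.linspace(-1, 1, 1000)
g_all = coeffs[:, 0:1] * x_test + coeffs[:, 1:2]   # shape (n_datasets, 1000)

g_bar = g_all.mean(axis=0)                 # average hypothesis g_bar(x)
bias = np.mean((g_bar - f(x_test)) ** 2)   # needs f -- cannot compute if f is unknown
var = np.mean((g_all - g_bar) ** 2)        # compares g^(D) to g_bar only; no f needed

print(f"bias ~ {bias:.3f}, var ~ {var:.3f}, bias + var ~ {bias + var:.3f}")
[/code]

This is only meant to show where f enters the calculation, not to answer Q1 itself.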

I hope these questions come out sounding right and that I will receive some responses. Overfitting has been the most enlightening topic I have encountered in this course, and I just want to understand it really well.

Thank you.

Juan