LFD Book Forum  

#1
02-08-2013, 08:13 PM
shirin
Junior Member
Join Date: Feb 2013
Posts: 2
Doubt from lecture 2 (Is learning feasible?)

At the 31-minute mark, the professor assumes that the input samples come from a probability distribution. My question is: why do we make this assumption? Throughout the lecture, we don't seem to make use of it anywhere.
#2
02-08-2013, 09:17 PM
yaser
Caltech
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: Doubt from lecture 2 (Is learning feasible?)

The assumption makes it possible to invoke the Hoeffding Inequality. Without a probability distribution, one cannot talk about the probability of an event (the left-hand side of the inequality). The specifics of the probability distribution don't matter here; any distribution will do.
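
For reference, the bin version of the Hoeffding Inequality used in the lecture is (with \nu the sample frequency, \mu the bin frequency, N the sample size, and \epsilon the tolerance):

\mathbb{P}\left[\,|\nu - \mu| > \epsilon\,\right] \le 2 e^{-2\epsilon^2 N}

The probability on the left-hand side is only defined once the sample is drawn according to some probability distribution on the inputs.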
__________________
Where everyone thinks alike, no one thinks very much
#3
02-09-2013, 09:49 PM
shirin
Junior Member
Join Date: Feb 2013
Posts: 2
Re: Doubt from lecture 2 (Is learning feasible?)

Can't I start talking about the hypothesis analogy without making this assumption?

I mean, if I say that a hypothesis is analogous to a bin, and then I say that for any hypothesis there is a probability that it will make a wrong classification in the bin and in the sample, with probabilities \mu and \nu respectively, and then go ahead with Hoeffding's Inequality.

In doing so, do I really need that assumption?
#4
02-09-2013, 10:01 PM
yaser
Caltech
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: Doubt from lecture 2 (Is learning feasible?)

Quote:
Originally Posted by shirin
Can't I start talking about the hypothesis analogy without making this assumption?

I mean, if I say that a hypothesis is analogous to a bin, and then I say that for any hypothesis there is a probability that it will make a wrong classification in the bin and in the sample, with probabilities \mu and \nu respectively, and then go ahead with Hoeffding's Inequality.

In doing so, do I really need that assumption?
The introduction of a probability is not needed to make the analogy between a hypothesis and a bin, but it is needed to invoke the Hoeffding Inequality on the bin (and the hypothesis). Think of it this way. If I choose 3,000 voters according to a deterministic criterion (say the richest 3,000 people in the country) and poll them about who they are going to vote for, this sample will not indicate how the population as a whole will vote. If I introduce a probability distribution (say each voter in the population is as likely to be chosen for the poll as any other voter), then I can apply statistical results like Hoeffding's to infer from a random sample of 3,000 people how the population as a whole will vote.
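
A quick simulation makes the contrast concrete. This is just an illustrative sketch, not something from the lecture; the "wealth" scores and the preference model below are made up so that the deterministic selection is biased.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Population: each "voter" prefers candidate A (1) or B (0).
# Preference is correlated with a made-up "wealth" score, so that
# picking voters by wealth gives a biased sample.
population_size = 1_000_000
wealth = rng.normal(size=population_size)
prefers_A = (rng.random(population_size) < 0.4 + 0.3 * (wealth > 1)).astype(int)

mu = prefers_A.mean()          # true fraction of the population preferring A

N = 3000

# Random sample: every voter equally likely to be polled.
random_idx = rng.choice(population_size, size=N, replace=False)
nu_random = prefers_A[random_idx].mean()

# Deterministic sample: the N "richest" voters.
richest_idx = np.argsort(wealth)[-N:]
nu_rich = prefers_A[richest_idx].mean()

# Hoeffding bound on P[|nu - mu| > eps] for the random sample.
eps = 0.03
bound = 2 * np.exp(-2 * eps**2 * N)

print(f"mu (population)      = {mu:.3f}")
print(f"nu (random sample)   = {nu_random:.3f}")   # close to mu
print(f"nu (richest {N})     = {nu_rich:.3f}")     # systematically off
print(f"Hoeffding bound at eps={eps}: {bound:.3g}")

The random sample's \nu lands within the Hoeffding tolerance of \mu with high probability, while the deterministically chosen sample stays off by a large margin no matter how large N is, which is the point of the voter analogy.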
__________________
Where everyone thinks alike, no one thinks very much