LFD Book Forum  

LFD Book Forum > Course Discussions > Online LFD course > Homework 1

  #1  
Old 07-11-2012, 03:48 PM
samirbajaj samirbajaj is offline
Member
 
Join Date: Jul 2012
Location: Silicon Valley
Posts: 48
Default PLA - Need Guidance

Greetings!

I am working on the Perceptron part of the homework, and having spent several hours on it, I'd like to know if I am proceeding in the right direction:

1) My implementation converges in 'N' iterations. This looks rather fishy. Any comments would be appreciated. (Otherwise I may have to start over :-( maybe in a different programming language)

2) I don't understand the Pr( f(x) != g(x) ) expression -- what exactly does this mean? Once the algorithm has converged, presumably g(x) matches f(x) on all the data, so the difference is zero.
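For context, the update loop I am using is essentially the following (a minimal NumPy sketch with my own variable names, not code from the homework):

```python
import numpy as np

rng = np.random.default_rng(0)

def pla(X, y, max_iters=10000):
    """Perceptron learning algorithm.

    X is an (N, 3) array whose first column is the constant x0 = 1;
    y holds labels in {-1, +1}. Returns the final weights and the
    number of updates performed before convergence.
    """
    w = np.zeros(X.shape[1])                  # start from the zero vector
    for updates in range(max_iters):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:                   # all points classified correctly
            return w, updates
        i = rng.choice(wrong)                 # random misclassified point
        w = w + y[i] * X[i]                   # PLA update: w <- w + y_n * x_n
    return w, max_iters

# Tiny usage example on a linearly separable toy set:
w_target = np.array([0.0, 1.0, -1.0])         # a made-up target line
X = np.column_stack([np.ones(10), rng.uniform(-1, 1, size=(10, 2))])
y = np.sign(X @ w_target)
w, n_updates = pla(X, y)
```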


Thanks.

-Samir
  #2  
Old 07-11-2012, 05:13 PM
yaser's Avatar
yaser yaser is offline
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,476
Default Re: PLA - Need Guidance

Quote:
Originally Posted by samirbajaj
I don't understand the Pr( f(x) != g(x) ) expression -- what exactly does this mean? Once the algorithm has converged, presumably g(x) matches f(x) on all the data, so the difference is zero.
On all data, yes. However, the probability is with respect to {\bf x} over the entire input space, not restricted to {\bf x} being in the finite data set used for training.
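For concreteness, one common way to estimate that probability is to sample fresh points from the input space and count disagreements. A minimal sketch, assuming (as in this homework) that {\bf x} is uniform on [-1, 1] x [-1, 1] and that f and g are given by weight vectors with x0 = 1; the function name and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_disagreement(w_f, w_g, n_points=10000):
    """Monte Carlo estimate of Pr[ f(x) != g(x) ].

    Fresh points are drawn uniformly from the input space
    [-1, 1] x [-1, 1] -- NOT taken from the training set -- and we
    return the fraction on which the two hypotheses disagree.
    """
    X = np.column_stack([np.ones(n_points),
                         rng.uniform(-1, 1, size=(n_points, 2))])
    return np.mean(np.sign(X @ w_f) != np.sign(X @ w_g))
```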
__________________
Where everyone thinks alike, no one thinks very much
  #3  
Old 07-12-2012, 07:30 AM
jakvas jakvas is offline
Member
 
Join Date: Jul 2012
Posts: 17
Default Re: PLA - Need Guidance

If we try to evaluate Pr( f(x) != g(x) ) experimentally, how many random verification points should we use to get a statistically significant answer?

I am tempted to believe that Hoeffding's inequality applies here to a single experiment, but since we are averaging over many experiments, I'm not sure how to choose the number of verification points (I ultimately used 10,000 per experiment just to be sure).
  #4  
Old 07-12-2012, 09:56 AM
yaser's Avatar
yaser yaser is offline
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,476
Default Re: PLA - Need Guidance

Quote:
Originally Posted by jakvas
I am tempted to believe that Hoeffding's inequality applies here to a single experiment, but since we are averaging over many experiments, I'm not sure how to choose the number of verification points (I ultimately used 10,000 per experiment just to be sure).
Indeed, the average helps smooth out statistical fluctuations. Your choice of 10,000 points is pretty safe.
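To make "pretty safe" quantitative, one can invert the Hoeffding bound Pr[ |nu - mu| > eps ] <= 2 exp(-2 eps^2 N) to get the number of fresh points needed for a single experiment. A small sketch (the tolerance and confidence values below are illustrative):

```python
import math

def hoeffding_points(eps, delta):
    """Smallest N with 2*exp(-2*eps**2*N) <= delta, so that a single
    estimate nu is within eps of the true probability mu with
    confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# For example, to be within 0.05 of the true value with 95% confidence:
n = hoeffding_points(eps=0.05, delta=0.05)   # 738 points suffice
```

Plugging in N = 10000, the same bound gives eps of about 0.014 at 95% confidence, so 10,000 points per experiment is indeed comfortably safe, and averaging over many runs tightens things further.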
__________________
Where everyone thinks alike, no one thinks very much
  #5  
Old 07-16-2012, 09:19 PM
jtwang jtwang is offline
Junior Member
 
Join Date: Jul 2012
Posts: 1
Default Re: PLA - Need Guidance

How would you determine that f(x) == g(x) exactly? Since the set of possible hypotheses is infinite (three real parameters), wouldn't Pr( f(x) != g(x) ) == 1? Obviously you could choose some arbitrary epsilon, but then it wouldn't be "exact."
  #6  
Old 07-16-2012, 09:39 PM
yaser's Avatar
yaser yaser is offline
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,476
Default Re: PLA - Need Guidance

Quote:
Originally Posted by jtwang
How would you determine that f(x) == g(x) exactly? Since the set of possible hypotheses is infinite (three real parameters), wouldn't Pr( f(x) != g(x) ) == 1? Obviously you could choose some arbitrary epsilon, but then it wouldn't be "exact."
f({\bf x})=g({\bf x}) is per point {\bf x}. It may be true for some {\bf x}'s and false for others, hence the notion of probability that it's true (probability with respect to {\bf x}). We are not saying that f is identically equal to g.
__________________
Where everyone thinks alike, no one thinks very much
  #7  
Old 01-08-2013, 02:15 PM
dobrokot dobrokot is offline
Junior Member
 
Join Date: Jan 2013
Posts: 3
Default Re: PLA - Need Guidance

Quote:
Originally Posted by jakvas
I'm not sure how to choose the number of verification points (I ultimately used 10,000 per experiment just to be sure).
The Hoeffding inequality given in the same lecture can help you choose the number of points: the event g(x) != f(x) can be thought of as drawing a red marble.
  #8  
Old 01-09-2013, 07:18 AM
nroger nroger is offline
Member
 
Join Date: Jan 2013
Posts: 10
Default Re: PLA - Need Guidance

I still don't understand this Pr() function. Given two (linear) functions f and g, what is the Pr() of f and g?
Thanks...Neil
  #9  
Old 01-09-2013, 08:12 AM
yaser's Avatar
yaser yaser is offline
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,476
Default Re: PLA - Need Guidance

Quote:
Originally Posted by nroger
I still don't understand this Pr() function. Given two (linear) functions f and g, what is the Pr() of f and g?
Thanks...Neil
This is the probability of an event -- in the case discussed in this thread, the event that f({\bf x})\ne g({\bf x}). That is, you pick {\bf x} at random according to the probability distribution over the input space {\cal X}, and evaluate "the fraction of time" that f does not give the same value as g for the {\bf x} you pick.

BTW, anyone who wants to refresh some of the prerequisite material for the course, here are some recommendations:

http://book.caltech.edu/bookforum/showthread.php?t=3720
__________________
Where everyone thinks alike, no one thinks very much
  #10  
Old 01-12-2013, 02:21 PM
sricharan92 sricharan92 is offline
Junior Member
 
Join Date: Jan 2013
Posts: 1
Default Re: PLA - Need Guidance

Sir,

If I understand correctly, we use N = 10 training points, generated at random and labeled according to the target function f, for perceptron learning, and Pr( f(x) != g(x) ) should be evaluated over the entire input space, not just the training data. Am I right?

Tags
convergence, iterations, perceptron, pla



The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.