LFD Book Forum  

Go Back   LFD Book Forum > Course Discussions > Online LFD course > Homework 3

  #1  
Old 04-23-2013, 09:54 AM
heeler
Junior Member
 
Join Date: Apr 2013
Posts: 1
*ANSWER* Q10 maybe

On question 10, I found it hard to visualise how the pattern changed until I realised I could map the given problem into one of the examples. This left me wondering if this is generally true for learning models.

I used r = \sqrt{x_1^2 + x_2^2} to transform the problem from \mathbb{R}^2 to \mathbb{R}.

It seems like the connection is there, but I wanted to ask in case I was leading myself astray.
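To sketch the reduction numerically (the interval endpoints a, b and the uniform sampling below are my own illustrative choices, not from the homework): a hypothesis that labels +1 between two concentric circles in \mathbb{R}^2 gives exactly the same labels as the corresponding interval hypothesis on r in \mathbb{R}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: h(x) = +1 iff a <= sqrt(x1^2 + x2^2) <= b
a, b = 0.5, 1.5

def h_2d(X):
    """Concentric-circle hypothesis in R^2: +1 between the circles, -1 elsewhere."""
    r = np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2)
    return np.where((a <= r) & (r <= b), 1, -1)

def h_1d(r):
    """Interval hypothesis in R after the transform r = sqrt(x1^2 + x2^2)."""
    return np.where((a <= r) & (r <= b), 1, -1)

# The two hypotheses agree on every point, so the 2-D problem
# reduces to a 1-D interval problem on the transformed data.
X = rng.uniform(-2, 2, size=(1000, 2))
r = np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2)
assert np.array_equal(h_2d(X), h_1d(r))
```

Since every hypothesis in the 2-D set corresponds to exactly one interval in r-space and vice versa, the two models generate the same dichotomies on any data set.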
  #2  
Old 04-23-2013, 09:59 AM
yaser
Caltech
 
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477
Re: *ANSWER* Q10 maybe

Quote:
Originally Posted by heeler View Post
On question 10, I found it hard to visualise how the pattern changed until I realised I could map the given problem into one of the examples. This left me wondering if this is generally true for learning models.

I used r = \sqrt{x_1^2 + x_2^2} to transform the problem from \mathbb{R}^2 to \mathbb{R}.

It seems like the connection is there but I wanted to ask in case I was leading myself astray.
This is true here because the circles are concentric. In general, it may not be possible to reduce the learning model to an equivalent one-dimensional version.
__________________
Where everyone thinks alike, no one thinks very much
  #3  
Old 04-23-2013, 10:09 AM
Elroch
Invited Guest
 
Join Date: Mar 2013
Posts: 143
Re: *ANSWER* Q10 maybe

Quote:
Originally Posted by yaser View Post
This is true here because the circles are concentric. In general, it may not be possible to reduce the learning model to an equivalent one-dimensional version.
There's something about this problem that seems to make a lot of us think, "can this really be right?" I think we agree it is. One way I thought of it was to observe that the hypothesis set can never separate two points at the same radius, so without loss of generality you only need to consider one representative point at each radius.

This has an analogy to an idea in topology: when points share all the same neighbourhoods, they are effectively the same point, and a quotient space can be formed that merges all the inseparable points with each other. I feel there may be potential for more connections between hypothesis sets and topology to be drawn, although there are major differences as well as similarities.
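A quick sanity check of the "one representative point per radius" observation (the sample radii and interval endpoints are my own illustrative choices): two distinct points at the same radius receive the same label under every hypothesis in the concentric-circle set, so no hypothesis can ever separate them.

```python
import numpy as np

def h(X, a, b):
    """A hypothesis from the concentric-circle set with radii a <= b."""
    r = np.hypot(X[:, 0], X[:, 1])
    return np.where((a <= r) & (r <= b), 1, -1)

# Two distinct points, both at radius 1.
p = np.array([[1.0, 0.0], [0.0, 1.0]])

# For any choice of (a, b), the two points get identical labels,
# so the pair contributes nothing extra to the set of dichotomies.
for a, b in [(0.2, 0.7), (0.5, 1.5), (1.2, 3.0)]:
    labels = h(p, a, b)
    assert labels[0] == labels[1]
```

In the quotient-space picture, the map x \mapsto \|x\| is exactly the projection that identifies all such inseparable points.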
The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.