LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 1 - The Learning Problem (http://book.caltech.edu/bookforum/forumdisplay.php?f=108)
-   -   The concept "h is fixed before you generate the data set" is extremely vague (http://book.caltech.edu/bookforum/showthread.php?t=4879)

Fromdusktilldawn 03-20-2019 09:11 PM

The concept "h is fixed before you generate the data set" is extremely vague
 
Can someone please explain to me the concept of "h is fixed before you generate the data set" as appears on page 22 of the text?

As it stands, this is an extremely vague statement. What is meant by "fixed", and what is meant by "generate"?

Here is a typical modern machine learning pipeline for most students.

Find some data somewhere, typically Kaggle (you don't generate it yourself at all; someone else does it for you, through unknown means).

Observe the data to get a sense of its dimensionality and the number of data points. If the data is too large, it cannot even be loaded into a computer. Therefore, the parameters associated with the data MUST be known in order to do machine learning.

Based on the data, categorize it into a typical problem. For example, classification, prediction, etc.

Pick a hypothesis h known to do well for the problem, say an SVM. Tune the hypothesis h so that it can at least accept the data. For example, the dimensionality of the weights in the hypothesis is obtained from the dimensionality of the data. Otherwise, a dimension mismatch error will be thrown by MATLAB and no machine learning can be done.

Train your hypothesis h, parameterized by the weights w, until h achieves the lowest in-sample error. Call that the final hypothesis g.

Use final hypothesis g on test set.
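The six steps above can be sketched in code. This is a minimal illustration, not the book's method: a synthetic "given" data set stands in for a Kaggle download, and a pocket-perceptron-style linear classifier stands in for the SVM, so that everything is self-contained. The point it makes concrete is the one in the post: the weight dimension of h is read off the data *after* the data is seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2: the data is "given" (synthesized here in place of a download).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)          # hidden target, unknown to the learner
y = np.sign(X @ w_true)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Step 4: h's weight dimension is taken FROM the data -- the "h is
# adjusted after seeing the data" point made in the post.
w = np.zeros(X_train.shape[1])

def error(w, X, y):
    """Fraction of misclassified points (in-sample or out-of-sample)."""
    return float(np.mean(np.sign(X @ w) != y))

# Step 5: train until in-sample error is as low as we can get (pocket:
# keep the best weights seen so far).
best_w, best_err = w.copy(), error(w, X_train, y_train)
for _ in range(1000):
    mistakes = np.sign(X_train @ w) != y_train
    if not mistakes.any():
        break
    i = np.flatnonzero(mistakes)[0]
    w = w + y_train[i] * X_train[i]  # perceptron update on one mistake
    e = error(w, X_train, y_train)
    if e < best_err:
        best_w, best_err = w.copy(), e

g = best_w  # the final hypothesis g

# Step 6: use g on the test set.
print("in-sample error:", error(g, X_train, y_train))
print("test error     :", error(g, X_test, y_test))
```

Note that g here was chosen *because of* the training data, which is exactly why the single-hypothesis bound on page 22 does not directly apply to it.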

In this pipeline, the data is not generated; it is given. h is not fixed; it is adjusted based on the data (the type of data, the dimensionality of the data). If we do not know the data at all, we cannot possibly construct a hypothesis. It would be akin to using a low-pass filter for 1D signals when your data is actually a continuous stream of 3D video frames. The data must be given prior to constructing h, and h must be adjusted based on the problem at hand. This is not a "before"; it is clearly an "after".

Why does it seem that this typical learning pipeline does not fit into the learning model described in the book? What does "h is fixed before you generate the data set" mean in a practical sense?

htlin 03-23-2019 06:24 AM

Re: The concept "h is fixed before you generate the data set" is extremely vague
 
Good question. Yes, the statement on page 22 does not fit into the actual learning scenario yet, as explained in your words and similarly on page 23. If you read on, you'll gradually see how we move closer to the actual scenario. What page 22 tries to say is that the fixed h (i.e. a readily-colored bin) is the assumption that the bin model needs. The closest real-world scenario is perhaps when someone hands you a hypothesis before anyone looks at the data (generated by someone else, say, on Kaggle). If you assume that the data generator gathers/generates the data i.i.d. from some distribution, you can *test* the hypothesis using the results on page 22.
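The "test a fixed hypothesis" scenario can be simulated directly. In the bin model, once h is fixed, each i.i.d. data point is "red" (h is wrong on it) independently with some unknown probability mu, so the in-sample error nu is a binomial fraction, and Hoeffding's inequality bounds P[|nu - mu| > eps] by 2·exp(-2·eps²·N). This is a sketch with made-up numbers (mu, N, eps chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Bin model: h is fixed FIRST, so each marble is "red" (h wrong)
# independently with probability mu.
mu = 0.3        # true error of the fixed h (unknown in practice)
N = 1000        # size of each i.i.d. data set
eps = 0.05      # tolerance
trials = 20000  # number of independently generated data sets

# Generate many data sets AFTER h is fixed; nu is the in-sample error
# measured on each one.
nu = rng.binomial(N, mu, size=trials) / N

bad_freq = float(np.mean(np.abs(nu - mu) > eps))
hoeffding = 2 * np.exp(-2 * eps**2 * N)

print(f"observed P[|nu - mu| > {eps}] ~ {bad_freq:.4f}")
print(f"Hoeffding bound 2*exp(-2*eps^2*N) = {hoeffding:.4f}")
```

The observed frequency of a bad sample stays below the Hoeffding bound, which is what licenses *testing* a hypothesis that was fixed before the data was drawn. The whole argument breaks if h is tuned to the data first, since then the marbles' colors are no longer decided before the sample is taken.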

Hope this helps.


The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.