#2 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595 Re: Learning Approach vs. Function Approximation

This is a good question. Your general conclusion is correct: function approximation, arising from the statistics community, and supervised learning, arising from the learning community, address more or less the same problem with different lingo. But if you read a statistics book on function approximation, it will look very different from the text related to this forum. So while the problem is the same (using input-output examples to learn a function f), the approaches taken in the two fields are different.

Largely, the difference lies in the assumptions made and the nature of the results. In the statistics approach, one usually makes distributional assumptions about the data and then derives how a particular model, such as the linear model, will behave. Function approximation in statistics typically discusses only regression problems. In learning, we make very mild assumptions and obtain different types of results, with a particular focus on classification.
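To make the parallel concrete, here is a minimal sketch (my own illustration, not from the book): the same least-squares linear fit can be read either way. The target f(x) = 2x + 1, the noise level, and the data below are all invented for the example.

```python
import numpy as np

# Toy input-output examples (x, y): an assumed target f(x) = 2x + 1
# observed with a little noise. Both the target and the noise model
# are illustrative assumptions, not anything from the post.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)
y = 2 * x + 1 + 0.1 * rng.standard_normal(20)

# Both fields fit the same linear model g(x) = w0 + w1*x by least squares.
# Statistics would call g an approximant of f (often under distributional
# assumptions on the noise); learning would call it the final hypothesis
# selected from the linear hypothesis set using the data.
A = np.column_stack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# The learning community's focus on classification reuses the same
# machinery: threshold the linear signal to get h(x) = sign(w0 + w1*x).
labels = np.sign(A @ w)
```

With mild noise the fitted weights land close to the true (1, 2); the difference between the two viewpoints is less in this computation than in what is assumed about the data and what kind of guarantee is then derived.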

Section 1.2 gives a short discussion of different types of learning that may be helpful.

Quote:
 Originally Posted by cygnids I'm trying to firm up my sense of the differences in the way functions are generally sought using the learning paradigm vs. classical function approximation approaches. I.e., is it the same, or are there differences in thought/approach? As I see it, seeking linear models in supervised learning seems no different from similar methods in function approximation (variations in error measures and their minimization aside). In both instances we use input-output sets to accomplish the task, i.e., to find an approximant. In learning, we call the underlying generative process the target function and, after learning, the resulting approximant the hypothesis. Approximation theory has its own lingo. I suspect there must be more to this than my superficial characterization. So, in general, do approaches adopted in supervised learning run in parallel to approaches in function approximation? Are there some philosophical differences? If someone would kindly suggest how & why I should view them as separate developments, I'd much appreciate it. Thank you.
__________________
Have faith in probability