I'm thinking about...
Imagine that I've got a set of variables $y_1, \dots, y_n$ that I want to run a regression on, but I can only observe a different set of variables $x_1, \dots, x_m$, with $x_j = f_j(y_1, \dots, y_n)$, the $f_j$ not known and the $y_i$ being hidden variables.
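
To make the setup concrete, here is a minimal toy sketch in Python; the dimensions, the particular $f_j$, and the relationship to the target are all made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hidden variables y_1, y_2: these are what I actually want to regress on.
y = rng.normal(size=(n, 2))

# Some target that depends on the hidden y's (made-up relationship).
target = 2.0 * y[:, 0] - y[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

# But I only observe x_1 = f_1(y), x_2 = f_2(y), with the f's unknown to me
# (here they are invented just so the example runs).
x = np.column_stack([
    np.exp(y[:, 0]),             # f_1
    np.tanh(y[:, 0] + y[:, 1]),  # f_2
])
```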
I guess that I can make assumptions about the form of the functions $f_j$ (which can be tested later by validation with different sets of $x$'s), and then derive certain nonlinear transformations that correspond to those $f_j$ and relate the $x$'s and the $y$'s.
Then I can run a nonlinear regression on the $x$'s. But it seems a bit convoluted, since I'm applying nonlinear transformations twice.
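
Roughly, what I have in mind for this two-step route is something like the sketch below. It continues the toy data above and assumes a hypothetical form for the $f_j$; the inverse transformations and the kernel ridge model are just placeholders:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Assumed (hypothetical) form of f: x_1 = exp(y_1), x_2 = tanh(y_1 + y_2).
# Inverting those assumptions gives estimates of the hidden y's.
y1_hat = np.log(x[:, 0])
y2_hat = np.arctanh(np.clip(x[:, 1], -0.999, 0.999)) - y1_hat
y_hat = np.column_stack([y1_hat, y2_hat])

# Second nonlinear step: a nonlinear regression on the recovered y's.
# The assumed form of f would be checked by validation on held-out x's.
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)
print(cross_val_score(model, y_hat, target, cv=5, scoring="r2").mean())
```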
Does anyone know of a particular theory in machine learning that deals with this kind of problem? Or should I simply consider a set of nonlinear transformations big enough that it contains the transformations needed to fit the $y$'s and to map them later to the $x$'s?
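
For that second option, I picture something like this (again continuing the toy data, with an arbitrary flexible regressor standing in for the "big enough" family of transformations):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# One flexible model fitted directly on the observed x's, hoping the family of
# functions it spans is rich enough to absorb both the unknown f's and the
# regression I actually care about on the hidden y's.
flexible = GradientBoostingRegressor(random_state=0)
print(cross_val_score(flexible, x, target, cv=5, scoring="r2").mean())
```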
Thanks a lot