LFD Book Forum Linear Regression and x0
#1
04-14-2012, 01:44 PM
 tcristo Member Join Date: Apr 2012 Posts: 23
Linear Regression and x0

For our linear regression implementation, do we include x0 as part of our original X matrix? If we do, that means our original vector X=(x0, x1, x2) would then be transposed to XT=(x2, x1, x0).
#2
04-14-2012, 01:48 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Linear Regression and x0

Quote:
 Originally Posted by tcristo For our linear regression implementation, do we include x0 as part of our original X matrix? If we do, that means our original vector X=(x0, x1, x2) would then be transposed to XT=(x2, x1, x0).
Transposition changes column to row and vice versa, but not the order of the elements within the column/row.
__________________
Where everyone thinks alike, no one thinks very much
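
A quick sketch of this point in numpy (the values are made up for illustration): transposing turns a column into a row, but x0, x1, x2 stay in the same order.

```python
import numpy as np

# A column vector x = (x0, x1, x2); hypothetical values for illustration.
x = np.array([[1.0], [0.5], [-2.0]])   # shape (3, 1): a column

xT = x.T                               # shape (1, 3): a row
print(xT)                              # same order: x0, x1, x2
```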
#3
04-14-2012, 02:01 PM
 tcristo Member Join Date: Apr 2012 Posts: 23
Re: Linear Regression and x0

Quote:
 Originally Posted by yaser Transposition changes column to row and vice versa, but not the order of the elements within the column/row.
That makes more sense to me! So I assume we are including our constant (x0) in our matrix since we want the final results to provide three weight values?
#4
04-14-2012, 10:17 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Linear Regression and x0

Quote:
 Originally Posted by tcristo That makes more sense to me! So I assume we are including our constant (x0) in our matrix since we want the final results to provide three weight values?
Exactly.

In the mathematical formulation, x0 is an integral part of the input vector x, and the corresponding weight w0 is an integral part of the weight vector w.
__________________
Where everyone thinks alike, no one thinks very much
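
A minimal sketch of what this looks like in code (not from the thread; the data and target are made up, and it assumes the one-step least-squares solution w = pseudo-inverse(X) y from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
# Two input features; values are arbitrary, for illustration only.
X_raw = rng.uniform(-1, 1, size=(N, 2))
# Prepend the constant coordinate x0 = 1 to every example.
X = np.column_stack([np.ones(N), X_raw])      # shape (N, 3)
y = np.sign(X_raw[:, 0] + X_raw[:, 1] + 0.3)  # an arbitrary target

# One-step linear regression: w = pseudo-inverse(X) @ y
w = np.linalg.pinv(X) @ y
print(w.shape)  # three weights: w0 (for x0), w1, w2
```

Because the x0 = 1 column is part of X, the solution comes back with three components, w0 being the bias weight.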
#5
04-15-2012, 09:20 AM
 mathprof Invited Guest Join Date: Apr 2012 Location: Bakersfield, California Posts: 36
Re: Linear Regression and x0

Without x0=1, our line (or hyperplane) would be constrained to go through the origin, which would very rarely be useful.
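
A small sketch of this effect (my own toy example, not from the thread): fitting a line whose true intercept is 5, with and without the x0 = 1 column.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 5.0                  # true line: slope 2, intercept 5

# Without x0: model y ~ w1 * x, forced through the origin.
X_no_bias = x.reshape(-1, 1)
w_no = np.linalg.pinv(X_no_bias) @ y

# With x0 = 1: model y ~ w0 + w1 * x.
X_bias = np.column_stack([np.ones_like(x), x])
w_yes = np.linalg.pinv(X_bias) @ y

print(w_no)   # a single slope; the intercept is unrecoverable
print(w_yes)  # recovers both intercept and slope: [5., 2.]
```

The origin-constrained fit has to tilt its slope to compensate for the missing intercept, while the fit with x0 recovers the true line exactly.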

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.