LFD Book Forum SVM equation from Slides
#1
04-04-2013, 10:18 AM
 udaykamath Junior Member Join Date: Jan 2013 Posts: 9
SVM equation from Slides

Prof. Yaser,
Even though this is not related to a book chapter but to the lecture slides, I thought it made sense to ask here.

In the SVM slides for Lecture 14:

On slide 13 we had just converted the problem to a minimization and had

minimize Lagrangian(alpha) = ...

On slide 14 we have

maximize Lagrangian w.r.t. alpha subject to ...

How did this change from minimize to maximize?

Also, on slide 15 we convert the maximization problem to a minimization by flipping the sign.

So the jump from the first minimize to the maximize of the Lagrangian is not clear.

Thanks
Uday Kamath
#2
04-04-2013, 01:12 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: SVM equation from Slides

Quote:
 Originally Posted by udaykamath In SVM slides Lecture 14 On page 13 we had just converted the problem to minimization and had minimize Lagrangian(alpha)=.... On page 14 we have maximize Lagrangian w.r.t alpha subject to... How did this change?
Thank you for asking. In slide 13, the minimization is w.r.t. some of the variables (w and b), and for the rest of the variables (alpha) it is maximization, per the Lagrange/KKT method. The minimization over w and b has already been carried out in the derivation in slide 13, so only the maximization part remains in slide 14.
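To make this concrete, here is a minimal numerical sketch (not from the slides; the toy data set and names X, y, neg_dual are my own) of the remaining maximization from slide 14: maximize sum(alpha) - 1/2 alpha^T Q alpha subject to alpha >= 0 and sum(alpha_n y_n) = 0, then recover w and b from the conditions obtained in slide 13:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (hypothetical example)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Gram-style matrix Q_nm = y_n y_m x_n . x_m from the dual objective
Yx = y[:, None] * X
Q = Yx @ Yx.T

def neg_dual(alpha):
    # maximizing sum(alpha) - 0.5 a^T Q a  <=>  minimizing its negative
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

cons = [{"type": "eq", "fun": lambda a: a @ y}]  # sum alpha_n y_n = 0
bnds = [(0.0, None)] * len(y)                    # alpha_n >= 0
res = minimize(neg_dual, np.ones(len(y)), bounds=bnds, constraints=cons)
alpha = res.x

# Recover w from the stationarity condition: w = sum alpha_n y_n x_n
w = (alpha * y) @ X
# Recover b from a support vector (largest alpha): y_n (w . x_n + b) = 1
sv = int(np.argmax(alpha))
b = y[sv] - w @ X[sv]
```

The w-minimization and b-minimization never appear explicitly here because they were already absorbed into the dual objective and the equality constraint, which is exactly the point of slides 13-14.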
__________________
Where everyone thinks alike, no one thinks very much
#3
04-04-2013, 02:38 PM
 udaykamath Junior Member Join Date: Jan 2013 Posts: 9
Re: SVM equation from Slides

Thanks for your answer. So minimizing the Lagrangian L(w, b, alpha) means minimizing w.r.t. w and b while maximizing w.r.t. alpha.

Also, you mention in the video that the first KKT condition, replacing min_n y_n(w x_n + b) = 1, is equivalent to using the inequality together with a squared slack variable and adjusting. You said you would explain that in the Q&A, but no one asked about it there. I was wondering if you could explain here (or somewhere) the squared-slack trick and how the min constraint gets changed to an inequality by adding the slack.

Thanks again for the wonderful lectures and book!
Forever indebted!
Uday
#4
04-04-2013, 03:17 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: SVM equation from Slides

Quote:
 Originally Posted by udaykamath Thanks for your answer. So minimizing the Lagrangian L(w, b, alpha) means minimizing w.r.t. w and b while maximizing w.r.t. alpha. Also, you mention in the video that the first KKT condition, replacing min_n y_n(w x_n + b) = 1, is equivalent to using the inequality together with a squared slack variable and adjusting. You said you would explain that in the Q&A, but no one asked about it there. I was wondering if you could explain here (or somewhere) the squared-slack trick and how the min constraint gets changed to an inequality by adding the slack. Thanks again for the wonderful lectures and book! Forever indebted! Uday
The slack argument is probably available online in writeups about KKT. The basic idea is to add a squared variable to one side of an inequality to turn it into an equality, and because the variable is squared, there are no restrictions on the value of the slack variable itself.
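A tiny sketch of that idea (my own toy numbers, not from the lecture): an inequality g >= 0 is rewritten as the equality g - s^2 = 0, and since s enters only as s^2, s is free to take any real value while g = s^2 >= 0 is enforced automatically:

```python
import math

def to_equality_slack(g):
    """Return s such that g - s**2 == 0, for a satisfied constraint g >= 0."""
    assert g >= 0, "constraint violated; no real slack variable exists"
    return math.sqrt(g)

# Example with the SVM margin constraint y_n (w . x_n + b) - 1 >= 0
w, b = [1.0, 1.0], 0.0
x_n, y_n = [2.0, 2.0], 1.0
g = y_n * (w[0] * x_n[0] + w[1] * x_n[1] + b) - 1.0  # slack in the margin
s = to_equality_slack(g)

# g - s**2 == 0 holds, and either sign of s works equally well,
# which is why s itself is unrestricted in the equality version.
assert abs(g - s**2) < 1e-12
assert abs(g - (-s)**2) < 1e-12
```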

The contents of this forum are to be used ONLY by readers of the Learning From Data book by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, and participants in the Learning From Data MOOC by Yaser S. Abu-Mostafa. No part of these contents is to be communicated or made accessible to ANY other person or entity.