#1
Hello, I have this answer for Exercise 4.6, but I'm not sure if it's right.

Edit: I have just remembered that the growth function has already taken care of the issue of many hypotheses representing the same hyperplane (and this issue does not affect the …).
#2
I have the same question. Can someone help here?

From my understanding, having small weights is not ideal for sign(s), since the signal s will often be close to 0, so a small change in just one input has a high chance of flipping the sign and producing a completely different output. It would therefore seem better to have large weights, so the signal is pushed away from 0 and the sign is more stable. But maybe I'm just wrong here.
#3
Correct again.

So let us differentiate between the theory of machine learning and its implementation on finite precision computers. In theory, if you have an infinite precision machine, then the size of the weights does not matter, because it is a mathematical fact that, for any positive α, sign(α wᵀx) = sign(wᵀx).

In finite precision, you typically want the weights to be around 1 and the inputs rescaled to be around 1 too (this is called input preprocessing and you can read about it in e-Chapter 9).
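To see the scale invariance concretely, here is a minimal sketch (NumPy, with a made-up weight vector and random inputs, not taken from the exercise) checking that multiplying the weights by any positive constant leaves every classification unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a fixed weight vector and some random inputs.
w = np.array([0.3, -1.2, 0.7])       # any weight vector
X = rng.normal(size=(1000, 3))       # 1000 random input points

alpha = 100.0                        # any positive scaling factor
y_original = np.sign(X @ w)          # sign(w^T x)
y_scaled = np.sign(X @ (alpha * w))  # sign(alpha * w^T x)

# The two labelings agree on every point: scaling w by a positive
# constant never changes the classification.
print(np.array_equal(y_original, y_scaled))  # True
```

In exact arithmetic this holds for any positive α; with floating point, an extreme α can overflow or underflow, which is one reason to keep the weights and the rescaled inputs around 1 in practice.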
__________________
Have faith in probability
#4
Thanks for this clarification. It helps my understanding a lot.
#5
Yes, the soft order constraint does not impact classification. It is better to regularize with the hard order constraint, or to use the soft order constraint with the "regression for classification" algorithm.
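As an illustration of the "regression for classification" route, here is a minimal sketch assuming ±1 labels and a standard regularized least-squares (weight decay) fit whose output is then thresholded with sign; the toy data, the helper name, and the choice of lam are placeholders, not from the thread:

```python
import numpy as np

def regression_for_classification(X, y, lam=0.1):
    """Regularized least-squares on +/-1 labels, then classify with sign.

    Solves (X^T X + lam * I) w = X^T y; the weight-decay term lam * ||w||^2
    is where the soft order constraint actually changes the solution.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical toy data: two Gaussian blobs labeled +1 and -1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+1.0, 1.0, size=(50, 2)),
               rng.normal(-1.0, 1.0, size=(50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = regression_for_classification(X, y, lam=0.1)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

The point of pairing the soft order constraint with regression is that the squared-error fit, unlike the final sign, is sensitive to the size of the weights, so the penalty genuinely constrains the learned hypothesis.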
__________________
Have faith in probability
#6
Thank you very much for your reply!
#7
Thanks for the helpful discussion. A follow-up question: why does the hard constraint imply that the weights will be larger?