Quote:
Originally Posted by yongxien
Hi, I can solve the problem, but I cannot understand how it shows that the perceptron algorithm will converge. Can someone explain what the proof shows? I mean, what does each step of the problem mean? Thanks
The proof essentially shows that the normalized inner product between the weight vector w_t and the separating weights w_f, namely (w_f^T w_t) / (||w_f|| ||w_t||), grows with every update. But by the Cauchy-Schwarz inequality this normalized inner product is upper bounded by 1 and cannot grow arbitrarily large. Hence PLA can only make a finite number of updates, i.e. it will converge.
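You can see this behavior numerically. Below is a minimal PLA sketch on a toy linearly separable dataset (the data, the target weights `w_true`, and all variable names are illustrative assumptions, not from the thread): the loop keeps updating on misclassified points and terminates once everything is classified correctly, and the cosine between the learned weights and the separating weights stays at most 1 throughout.

```python
import numpy as np

# Toy linearly separable data (illustrative assumption, not from the thread).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
w_true = np.array([0.5, 1.0, -1.0])        # hypothetical separating weights w_f
Xb = np.hstack([np.ones((50, 1)), X])      # prepend bias coordinate x0 = 1
y = np.sign(Xb @ w_true)                   # labels induced by w_f

w = np.zeros(3)                            # w_0 = 0
updates = 0
while True:
    mis = np.where(np.sign(Xb @ w) != y)[0]  # misclassified point indices
    if mis.size == 0:
        break                              # converged: all points correct
    i = mis[0]
    w += y[i] * Xb[i]                      # PLA update: w_{t+1} = w_t + y_n * x_n
    updates += 1

# Normalized inner product (cosine) between w and w_f: grows with the
# updates but, by Cauchy-Schwarz, can never exceed 1.
cos = (w @ w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
print(updates, cos)
```

Because the data is separable by construction, the loop is guaranteed to terminate; that is exactly the convergence the proof establishes.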