True. If we were applying batch mode, the permutation would not change anything, since the weight update is done at the end of the epoch and takes all the examples into consideration regardless of the order in which they were presented. In stochastic gradient descent, however, the update is done after each example, so the order changes the outcome. These permutations ensure that the order is randomized each epoch, so we get the benefits of randomness that were mentioned briefly in Lecture 9.
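To make the distinction concrete, here's a minimal sketch in plain NumPy on a toy 1-D linear regression (all names and numbers are illustrative, not from the lecture code): batch gradient descent averages the gradient over all examples, so shuffling has no effect, while SGD updates the weight after each example, so re-permuting per epoch changes the trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise (purely illustrative).
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

def batch_gd(X, y, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        # One update per epoch using the mean gradient over ALL examples,
        # so the presentation order of the examples is irrelevant.
        grad = np.mean((w * X[:, 0] - y) * X[:, 0])
        w -= lr * grad
    return w

def sgd(X, y, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        # Re-permute the examples each epoch: the weight changes after
        # every example, so the visiting order affects the trajectory.
        for i in rng.permutation(len(y)):
            grad = (w * X[i, 0] - y[i]) * X[i, 0]
            w -= lr * grad
    return w

print("batch GD w =", batch_gd(X, y))
print("SGD      w =", sgd(X, y))
```

Both converge near w = 2 here, but only the SGD run depends on the per-epoch permutation.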
__________________
Where everyone thinks alike, no one thinks very much