#1
In question 6 on the homework, we are asked to compute the variance across all data sets.

If we are sampling uniformly from the interval [-1, 1] for the calculation of g_bar, as well as for each data set (each g^(D)), why would the variance be anything but a very small quantity? In the general case, when the data sets are not drawn from a uniform distribution, a non-zero variance makes sense; but if there is sufficient overlap in the data sets, it seems intuitive that the variance should be close to zero. I ask because my simulation results support the above (potentially flawed) theory. Please answer the question in general terms -- I don't care about the homework answer; I was merely using it as an example. Thanks for any input.

-Samir
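For reference, these are the standard definitions from the bias-variance decomposition (general, not specific to the homework):

\bar{g}(x) = \mathbb{E}_{\mathcal{D}}\left[ g^{(\mathcal{D})}(x) \right],
\qquad
\text{variance} = \mathbb{E}_{x}\left[ \mathbb{E}_{\mathcal{D}}\left[ \left( g^{(\mathcal{D})}(x) - \bar{g}(x) \right)^{2} \right] \right]

The inner expectation is over data sets \mathcal{D}, i.e. over the learned hypotheses g^{(\mathcal{D})}, not over the inputs x alone.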
#2
Quote:
... why would the variance be anything but a very small quantity?

The variance does not measure the spread of the inputs; it measures how much the learned hypothesis g^(D) varies around the average hypothesis g_bar as the data set D changes. Each D is a different finite sample, so learning generally produces a different g^(D) each time, and that fluctuation is exactly what the variance captures, no matter how much the data sets "overlap" in distribution. This argument holds for any probability distribution, uniform or not, that is used to generate the different data sets D.
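A quick simulation makes this concrete. Below is a minimal sketch, assuming a toy setup of my own choosing (target f(x) = sin(pi x), hypothesis set h(x) = a x, data sets of N = 2 points each, all illustrative assumptions rather than anything from this thread): every data set is drawn from the same uniform distribution on [-1, 1], yet the learned slopes fluctuate from sample to sample.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (an illustrative assumption, not from this thread):
# target f(x) = sin(pi x), hypothesis set h(x) = a x, N = 2 points per data set.
N, num_datasets = 2, 100000

def fit_slope(x, y):
    # Least-squares fit of h(x) = a x: a = sum(x*y) / sum(x^2).
    return np.dot(x, y) / np.dot(x, x)

slopes = np.empty(num_datasets)
for i in range(num_datasets):
    x = rng.uniform(-1.0, 1.0, size=N)  # every D comes from the same distribution
    slopes[i] = fit_slope(x, np.sin(np.pi * x))

a_bar = slopes.mean()  # g_bar(x) = a_bar * x

# variance = E_x[ E_D[(g^(D)(x) - g_bar(x))^2] ] = Var_D(a) * E_x[x^2],
# and E_x[x^2] = 1/3 for x uniform on [-1, 1].
variance = slopes.var() * (1.0 / 3.0)
print(f"a_bar = {a_bar:.3f}, variance = {variance:.3f}")

With only two points per data set, the fitted slope changes substantially from one D to the next, so the estimated variance comes out well away from zero despite the identical sampling distributions.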
__________________
Where everyone thinks alike, no one thinks very much
#3
Thank you ... now that you explain it that way, it makes perfect sense. (Not sure what I was thinking...)
-Samir