Why would variance be non-zero?
In question 6 on the homework, we are asked to compute the variance across all data sets.
If we sample uniformly from the interval [-1, 1] both for the calculation of g_bar and for each data set (which produces g_D), why would the variance be anything but a very small quantity? In the general case, when the data sets are not drawn from a uniform distribution, a non-zero variance makes sense; but if there is sufficient overlap in the data sets, it seems intuitive that the variance should be close to zero. I ask because my simulation results support this (potentially flawed) reasoning. Please answer the question in general terms -- I don't care about the homework answer; I was merely using it as an example. Thanks for any input. -Samir
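[Editor's note: the following is a minimal simulation sketch, not the homework's actual setup. The target f(x) = sin(pi*x), the hypothesis set h(x) = a*x, and the data-set size N = 2 are illustrative assumptions; only the uniform sampling on [-1, 1] comes from the question above. The point it illustrates: even though every data set is drawn from the same distribution, each small data set yields a noticeably different fitted hypothesis g_D, so the variance of g_D around g_bar need not be small.]

    import numpy as np

    # Illustrative assumptions (NOT necessarily the homework setup):
    # target f(x) = sin(pi * x), hypotheses h(x) = a * x,
    # each data set D holds just N = 2 points drawn uniformly from [-1, 1].
    rng = np.random.default_rng(0)
    N_DATASETS = 10000
    N_POINTS = 2

    def fit_slope(x, y):
        # Least-squares slope for h(x) = a * x (no intercept term).
        return np.dot(x, y) / np.dot(x, x)

    # Fit one hypothesis g_D per data set and record its slope a_D.
    slopes = np.empty(N_DATASETS)
    for i in range(N_DATASETS):
        x = rng.uniform(-1.0, 1.0, size=N_POINTS)
        y = np.sin(np.pi * x)
        slopes[i] = fit_slope(x, y)

    a_bar = slopes.mean()  # g_bar(x) = a_bar * x, the "average" hypothesis

    # variance = E_x E_D[(g_D(x) - g_bar(x))^2]
    #          = E_x[x^2] * Var(a_D) for this hypothesis set
    x_test = rng.uniform(-1.0, 1.0, size=100_000)
    variance = np.mean(x_test ** 2) * slopes.var()
    print(f"a_bar ~ {a_bar:.3f}, variance ~ {variance:.3f}")

With only two points per data set, the fitted slopes a_D spread out considerably around a_bar, which is what keeps the variance term away from zero despite the identical sampling distribution.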
Re: Why would variance be non-zero?
Thank you ... now that you explain it that way, it makes perfect sense. (Not sure what I was thinking...)
-Samir