Unanswered Questions
1,367 questions with no upvoted or accepted answers
13 votes · 0 answers · 14k views
How to normalize data prior to computation of covariance matrix
In all my self-study, I have come across many different ways in which people seem to normalize their data prior to computing the covariance matrix. I am confused as to which ways are 'correct'...
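A minimal sketch of the two most common preprocessing choices, assuming NumPy and a rows-as-observations data matrix (both my own choices here, not the asker's): centering only leaves the covariance matrix unchanged, while standardizing each column effectively turns it into the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data with unequal scales and some cross-correlation.
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.0, 3.0]])

# Centering only: subtract each column's mean. The covariance is unchanged,
# because np.cov centers internally anyway.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Standardizing: also divide each column by its standard deviation.
# The "covariance" of standardized data is the correlation matrix.
Xs = Xc / X.std(axis=0, ddof=1)
corr = np.cov(Xs, rowvar=False)

print(np.allclose(cov, np.cov(X, rowvar=False)))        # True
print(np.allclose(corr, np.corrcoef(X, rowvar=False)))  # True
```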
9 votes · 0 answers · 2k views
Taylor Series and Multivariate Delta Method
I asked this question on https://math.stackexchange.com/ but did not get any answer. Sorry for cross-posting.
I'm trying to understand the delta method for matrices and vectors to find the variance-...
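For reference, a statement of the first-order multivariate delta method in standard notation (not taken from the unanswered question itself): if $\sqrt{n}(\hat\theta - \theta) \xrightarrow{d} N(0,\Sigma)$ and $g$ is differentiable at $\theta$ with nonzero gradient, then

$$\sqrt{n}\bigl(g(\hat\theta) - g(\theta)\bigr) \xrightarrow{d} N\bigl(0,\; \nabla g(\theta)^{\top}\,\Sigma\,\nabla g(\theta)\bigr),$$

which follows from the first-order Taylor expansion $g(\hat\theta) \approx g(\theta) + \nabla g(\theta)^{\top}(\hat\theta - \theta)$.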
8 votes · 0 answers · 473 views
What does the second moment tell us that variance does not?
What does the second moment tell us that variance does not?
I can wrap my brain around what the first moment tells us, and I can wrap my brain around what the variance tells us, but interpreting the ...
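The algebraic link the question is circling (a standard identity, not quoted from the excerpt): the second raw moment differs from the variance, i.e. the second central moment, only by the squared mean,

$$\mathbb{E}[X^2] = \operatorname{Var}(X) + \bigl(\mathbb{E}[X]\bigr)^2,$$

so for a zero-mean variable the two coincide.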
8 votes · 0 answers · 1k views
Applying a variance-stabilizing transform to a fitted function (rather than data)
Outline
I'm working with data corrupted by a mixed Poisson-Gaussian noise model (for example with images gathered in astronomy or electron microscopy), and have been using the generalized Anscombe ...
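For context, one common form of the generalized Anscombe transform for Poisson-Gaussian data (the notation below is mine, with $\gamma$ the gain and $m$, $\sigma^2$ the mean and variance of the Gaussian component; exact parameterizations vary across papers):

$$f(x) = \frac{2}{\gamma}\sqrt{\gamma x + \tfrac{3}{8}\gamma^{2} + \sigma^{2} - \gamma m},$$

which approximately stabilizes the variance of $f(x)$ to 1 and reduces to the classical Anscombe transform $2\sqrt{x + 3/8}$ when $\gamma = 1$ and $\sigma = m = 0$.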
7 votes · 0 answers · 110 views
Keeping track of the variance of a Metropolis-Hastings estimator
Let $(E,\mathcal E,\lambda)$ and $(E',\mathcal E',\lambda')$ be measure spaces, $p,q$ be probability densities on $(E,\mathcal E,\lambda)$, and $\varphi:E'\to E$ be bijective and $(\mathcal E',\...
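Not an answer to the measure-theoretic setup above, but a small sketch of one way to keep a running variance of a functional of Metropolis-Hastings draws online, via Welford's algorithm; the random-walk proposal, toy target, and functional below are my own placeholders, not the asker's construction.

```python
import numpy as np

def welford_mh(logpdf, step, x0, n_iter, f, seed=0):
    """Random-walk Metropolis sampler that tracks the running mean and
    variance of f(X_t) with Welford's online updates."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpdf(x0)
    mean, m2 = 0.0, 0.0
    for t in range(1, n_iter + 1):
        prop = x + step * rng.normal()
        lp_prop = logpdf(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        val = f(x)
        delta = val - mean                          # Welford update
        mean += delta / t
        m2 += delta * (val - mean)
    return mean, m2 / n_iter

# Toy example: standard normal target, estimating E[X^2] (true value 1).
mean, var = welford_mh(lambda x: -0.5 * x**2, 1.0, 0.0, 50_000, lambda x: x**2)
print(mean, var)
# Note: var is the marginal variance of f(X_t) along the chain; the variance of
# the chain average itself also depends on autocorrelation (e.g. via the
# effective sample size).
```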
7 votes · 0 answers · 259 views
Why use a diagonal $\Sigma$ when working with Bayes decision theory?
My professor said in class that for the Bayes decision rule the likelihood is Gaussian, and that in practice we will almost always work with a diagonal $\Sigma$. Why is that? I know that a diagonal $\Sigma$ ...
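For context, a standard fact (not quoted from the asker's course): with a diagonal $\Sigma = \operatorname{diag}(\sigma_1^2,\dots,\sigma_d^2)$, the class-conditional Gaussian factorizes into univariate densities,

$$p(\mathbf{x}\mid\omega) = \prod_{j=1}^{d} \frac{1}{\sqrt{2\pi\sigma_j^{2}}}\exp\!\left(-\frac{(x_j-\mu_j)^2}{2\sigma_j^{2}}\right),$$

so only $d$ variance parameters must be estimated per class instead of the $d(d+1)/2$ entries of a full covariance matrix.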
7 votes · 0 answers · 2k views
Bias Variance tradeoff from a Bayesian perspective
I know the general question about the bias-variance tradeoff has been asked before. I understand the frequentist approach, the concept of model selection, and the impact of bias and variance on the "accuracy" of a ...
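For reference, the frequentist decomposition the question wants to reinterpret, under squared-error loss and standard notation: for a predictor $\hat f$ of $y = f(x) + \varepsilon$ with $\operatorname{Var}(\varepsilon) = \sigma^2$,

$$\mathbb{E}\bigl[(y - \hat f(x))^2\bigr] = \underbrace{\bigl(\mathbb{E}[\hat f(x)] - f(x)\bigr)^2}_{\text{bias}^2} + \underbrace{\operatorname{Var}\bigl(\hat f(x)\bigr)}_{\text{variance}} + \sigma^2 .$$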
7 votes · 0 answers · 201 views
Estimating population size, minimum variance estimators
I am trying to understand what can be proved about minimum-variance estimators. I am a little confused by the Cramér–Rao bound and how to apply it even to really simple examples, or whether it's even the right tool ...
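For reference, the bound in question, stated under the usual regularity conditions: for an unbiased estimator $\hat\theta$ of $\theta$ based on a sample with Fisher information $I(\theta)$,

$$\operatorname{Var}(\hat\theta) \ge \frac{1}{I(\theta)}, \qquad I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\right].$$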
6 votes · 0 answers · 1k views
Calculating a sample size based on the target width of a confidence interval with stratification
I am reviewing a sampling design devised by a colleague and completely fail to understand it, although I am not a novice in statistics (but not a huge expert either). Said colleague is no longer ...
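For orientation, the textbook formulas (not the colleague's design): under simple random sampling, a target confidence-interval half-width $E$ at level $1-\alpha$ gives roughly $n \approx (z_{1-\alpha/2}\,\sigma/E)^2$, and with stratification the total sample is often split by Neyman allocation,

$$n \approx \left(\frac{z_{1-\alpha/2}\,\sigma}{E}\right)^{2}, \qquad n_h = n\,\frac{N_h\,\sigma_h}{\sum_k N_k\,\sigma_k},$$

where $N_h$ and $\sigma_h$ are the size and standard deviation of stratum $h$.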
6 votes · 1 answer · 378 views
Comparing variances of forecast errors
I am forecasting a weekly commodity price series. I use a rolling window for estimating my model, and from each window I make point forecasts for one and two steps ahead.
I want to investigate ...
6 votes · 0 answers · 1k views
Asymmetric confidence intervals on bootstrap estimates
I've performed bootstrapping on my leastsq parameters and now I have a load of data from which I can get the mean and standard deviation for each parameter. Lovely.
But when I look at a histogram of ...
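A small sketch of the usual fix when the bootstrap distribution is skewed, assuming NumPy; `boot_params` is a hypothetical stand-in for the asker's array of bootstrap replicates of one fitted parameter. The idea is to report a percentile-based interval instead of mean ± standard deviation.

```python
import numpy as np

# boot_params: 1-D array of bootstrap replicates for a single fitted parameter.
# Placeholder data here; in practice these come from refitting on resamples.
rng = np.random.default_rng(1)
boot_params = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

# Symmetric summary: can be misleading for a skewed distribution.
print(boot_params.mean(), boot_params.std(ddof=1))

# Percentile 95% interval: asymmetric by construction, following the
# empirical distribution of the replicates.
lo, hi = np.percentile(boot_params, [2.5, 97.5])
print(lo, hi)
```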
5 votes · 0 answers · 521 views
What's the intuition behind the fact that sample mean and sample variance are independent when sampling from a normal population?
Let $X_1, \dotsc, X_n$ be i.i.d. from $N(\mu,\sigma^2)$. Then we know that the sample mean $\bar X\equiv \frac{1}{n}\sum_{i=1}^nX_i$ and the sample variance $S^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\bar X)^2$ are independent. Obviously, they ...
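Not a proof, but a quick simulation (NumPy, with a toy setup of my own) that illustrates the fact: across many normal samples the sample means and sample variances are essentially uncorrelated, whereas for a non-normal population they are not.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 samples of size n = 10 from a normal population.
samples = rng.normal(loc=2.0, scale=3.0, size=(100_000, 10))
means = samples.mean(axis=1)
variances = samples.var(axis=1, ddof=1)

# For a normal population, mean and variance are independent; as a weak check,
# their sample correlation is essentially zero.
print(np.corrcoef(means, variances)[0, 1])

# Contrast: for an exponential population they are dependent, and the
# correlation is clearly positive.
expo = rng.exponential(size=(100_000, 10))
print(np.corrcoef(expo.mean(axis=1), expo.var(axis=1, ddof=1))[0, 1])
```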
5 votes · 0 answers · 814 views
Error bars in repeated k-fold cross-validation
Suppose I want to compute the expectation of the loss $L$ based on repeated K-fold cross-validation (KFCV). Just to be precise, by repeated KFCV I mean the following: I repeat the $K$-fold cross-validation ...
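A minimal sketch of one common (and debated) way to attach error bars, using scikit-learn's `RepeatedKFold`; the estimator and data below are placeholders, and the naive standard error shown ignores the correlation between overlapping folds, which is exactly the subtlety the question is about.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = -cross_val_score(Ridge(), X, y, cv=cv,
                          scoring="neg_mean_squared_error")  # per-fold MSE

mean_loss = scores.mean()
naive_se = scores.std(ddof=1) / np.sqrt(len(scores))  # ignores fold-to-fold correlation
print(mean_loss, naive_se)
```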
5 votes · 0 answers · 205 views
Why is homoscedasticity (homogeneity of variance) important in neural network layers?
I'm studying the famous Xavier initialization paper (Understanding the Difficulty of Training Deep Feedforward Neural Networks (Glorot and Bengio, 2010)) and had a question.
When they explain the ...
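For reference, the variance-matching rule from the Glorot and Bengio paper is $\operatorname{Var}(W) = 2/(n_{\text{in}} + n_{\text{out}})$, usually implemented as $W \sim U(\pm\sqrt{6/(n_{\text{in}}+n_{\text{out}})})$. A small NumPy check (the layer sizes below are my own illustration, not the question's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 512, 256

# Glorot/Xavier uniform initialization: Var(W) = 2 / (n_in + n_out).
limit = np.sqrt(6.0 / (n_in + n_out))
W = rng.uniform(-limit, limit, size=(n_in, n_out))

x = rng.normal(size=(10_000, n_in))   # unit-variance inputs
pre_act = x @ W                       # pre-activations of the layer

# With this scaling the pre-activation variance stays of order one
# (roughly n_in * Var(W) = 2 * n_in / (n_in + n_out) here), rather than
# growing or shrinking layer after layer.
print(W.var(), pre_act.var())
```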
5 votes · 1 answer · 1k views
Relation between bias and R-square
I am trying to understand the relation between bias and the R-squared value in linear regression.
High bias means that the model is underfit. From this I am assuming that the R-squared will be lower.
So my ...
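For reference, the definition that connects the two notions (standard, not from the question):

$$R^{2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2},$$

so an underfit, high-bias model leaves a large residual sum of squares and hence a low in-sample $R^2$; the converse need not hold, since a high in-sample $R^2$ can also come from overfitting.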