Questions tagged [weighted-least-squares]

This tag is for questions relating to weighted least squares, a generalization of ordinary least squares and linear regression in which the covariance matrix of the errors is allowed to differ from the identity matrix.

Weighted least squares (WLS), or weighted linear regression, is an extension of ordinary least squares (OLS) regression. It applies when all the off-diagonal entries of $\Omega$ (the correlation matrix of the residuals) are zero, while the variances of the observations (along the diagonal of the covariance matrix) may be unequal (heteroscedasticity).
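As an illustration (a minimal sketch, assuming NumPy and made-up heteroscedastic data rather than anything from the references below), a WLS fit can be computed directly from the weighted normal equations $X^\top W X\,\hat\beta = X^\top W y$ with $W=\operatorname{diag}(1/\sigma_i^2)$:

```python
import numpy as np

# Hypothetical heteroscedastic data (illustration only)
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5])
sigma2 = rng.uniform(0.5, 3.0, size=n)                        # unequal error variances
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2))

w = 1.0 / sigma2                               # weights = inverse variances
XtW = X.T * w                                  # equivalent to X.T @ np.diag(w), but cheaper
beta_wls = np.linalg.solve(XtW @ X, XtW @ y)   # solves X^T W X beta = X^T W y
print(beta_wls)
```

(The statsmodels package offers the same fit via `sm.WLS(y, X, weights=w)` for those who prefer a packaged implementation.)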

Weighted least squares has several advantages over other methods, including:

$a)~$It’s well suited to extracting maximum information from small data sets.

$b)~$It is the only method that can be used for data points of varying quality.

Disadvantages include:

$a)~$It requires that you know exactly what the weights are. Estimating weights can have unpredictable results, especially when dealing with small samples. Therefore, the technique should only be used when your weight estimates are fairly precise; in practice, precisely known weights are rarely available.

$b)~$Sensitivity to outliers is a problem. A rogue outlier given an inappropriate weight could dramatically skew your results.

For more details, see the following references:

https://www.itl.nist.gov/div898/handbook/pmd/section1/pmd143.htm

https://en.wikipedia.org/wiki/Weighted_least_squares

https://www.statisticshowto.com/weighted-least-squares/

https://online.stat.psu.edu/stat501/lesson/13/13.1

63 questions
5
votes
2 answers

Taylor approximation is not optimal

My professor gave a lecture on an orthogonal-polynomial-based approximation and its advantage over the Taylor series expansion. His statement was: "in a weighted $L_2$ space, the Taylor series expansion is not optimal in the inner product sense, whereas…
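For context, a standard statement of the underlying fact (not quoted from the lecture): in a weighted $L_2$ inner product $\langle f,g\rangle_w=\int f(x)\,g(x)\,w(x)\,dx$, the best polynomial approximation of degree $n$ is the orthogonal projection onto the polynomial subspace,
$$ p_n^* = \sum_{k=0}^{n} \frac{\langle f,\phi_k\rangle_w}{\langle \phi_k,\phi_k\rangle_w}\,\phi_k, $$
where the $\phi_k$ are polynomials orthogonal with respect to $w$; the Taylor polynomial only matches derivatives at a single point and generally does not minimize $\|f-p\|_w$.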
5
votes
2 answers

How can I get the gradient of the normal equation for weighted linear regression?

The normal equation for weighted linear regression looks like this: $$J(\theta) = (X\theta - y)^TW(X\theta - y),$$ where $X\in\Re^{m\times n}$, $\theta\in\Re^{n\times 1}$, $y\in\Re^{m\times 1}$, $W\in\Re^{m\times m}$, and $W$ is a diagonal matrix,…
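A sketch of the answer, expanding the quadratic form and using the symmetry of the diagonal $W$:
$$ J(\theta)=\theta^\top X^\top WX\theta-2y^\top WX\theta+y^\top Wy, \qquad \nabla_\theta J = 2X^\top W(X\theta-y), $$
so setting the gradient to zero yields the weighted normal equations $X^\top WX\,\theta=X^\top Wy$.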
4
votes
1 answer

Right pseudo-inverse and Generalized Least Squares

The left pseudoinverse $(A^TA)^{-1}A^T$ solves the problem $\min \|b-Ax\|^2$, i.e. $x=(A^TA)^{-1}A^Tb$ is the solution to the above problem. There is a well-known property that if we add a precision matrix $\Omega^{-1}$ as the…
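For reference, the generalized (GLS) counterpart the excerpt is leading to, stated as a standard result: with a symmetric positive-definite $\Omega$,
$$ \min_x\,(b-Ax)^\top\Omega^{-1}(b-Ax) \;\Longrightarrow\; x=(A^\top\Omega^{-1}A)^{-1}A^\top\Omega^{-1}b, $$
which reduces to the ordinary left-pseudoinverse solution when $\Omega=I$.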
4
votes
1 answer

Understanding Galerkin method of weighted residuals

I am puzzled by the Galerkin method of weighted residuals. The following is taken from the book A Finite Element Primer for Beginners, chapter 1.1. If I have a one-dimensional differential equation $A(u)=f$ and an approximate…
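For orientation, a textbook statement of the method (not quoted from the book in question): with an approximation $u\approx u_h=\sum_j c_j\phi_j$, the method of weighted residuals requires the residual to be orthogonal to a set of weight functions, and the Galerkin choice takes the weights equal to the basis functions:
$$ \int_\Omega w_i\,\bigl(A(u_h)-f\bigr)\,dx=0,\qquad i=1,\dots,N,\qquad w_i=\phi_i, $$
giving $N$ equations for the $N$ coefficients $c_j$.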
3
votes
0 answers

Best way to remove a local maximum from a piecewise linear function

Let $x_1,\dots,x_K \geq 0$ and $f$ be the piecewise-linear function given by $f(k)=x_k$ for every $1 \leq k \leq K$. Denote by $m$ the number of modes (i.e. local maxima) of $f$. Let's associate with every $x_k$ a weight $\mu_k \geq 0$. Now consider…
3
votes
2 answers

Linear Fit when Data has Uncertainty

I am attempting to find the slope and y-intercept (along with their uncertainty) from a set of data. In this case, I am graphing Gamma Energy (MeV) vs. Peak Centroid (Channel). Here is my data: Gamma Energy (MeV): 1.17, 1.33, 0.032, 0.662, 0.511,…
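A minimal sketch of such a weighted fit (the arrays below are hypothetical stand-ins for the truncated data; note that `np.polyfit` expects weights proportional to $1/\sigma$, not $1/\sigma^2$):

```python
import numpy as np

# Hypothetical stand-ins for the question's data (illustration only):
# x = peak centroid (channel), y = gamma energy (MeV), sigma_y = 1-sigma uncertainty on y
x = np.array([100.0, 210.0, 320.0, 430.0, 540.0])
y = np.array([0.20, 0.45, 0.68, 0.93, 1.17])
sigma_y = np.array([0.02, 0.02, 0.03, 0.03, 0.04])

# Weighted linear fit; cov=True also returns the covariance of the coefficients
coeffs, cov = np.polyfit(x, y, deg=1, w=1.0 / sigma_y, cov=True)
slope, intercept = coeffs
slope_err, intercept_err = np.sqrt(np.diag(cov))
print(f"slope = {slope:.4g} +/- {slope_err:.2g}")
print(f"intercept = {intercept:.4g} +/- {intercept_err:.2g}")
```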
2
votes
0 answers

Weighted Least Squares with infinite weights

I am considering a weighted least square problem with data $X \in \mathbb{R}^{n \times p}$, (diagonal) weight matrix $W \in \mathbb{R}^{n \times n}$ and responses $y \in \mathbb{R}^n$, i.e. finding $$\beta^*= \text{argmin}_{\beta \in \mathbb{R}^p}…
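For intuition, a standard limiting fact (sketched, under the assumption that the constrained problem below is feasible): letting a single weight $w_i\to\infty$ while the others stay fixed turns the corresponding residual into a hard constraint,
$$ \operatorname*{argmin}_{\beta}\;\sum_{j} w_j\,(y_j-x_j^\top\beta)^2 \;\longrightarrow\; \operatorname*{argmin}_{\beta:\;x_i^\top\beta=y_i}\;\sum_{j\neq i} w_j\,(y_j-x_j^\top\beta)^2, $$
so infinite weights can be read as equality-constrained least squares.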
2
votes
0 answers

Noise Covariance Estimation for Linear Regression (Seemingly Unrelated Regressions)

Consider the following linear model \begin{equation} y_t = X_t f_t + \varepsilon_t, \qquad t=1,\cdots, T \end{equation} where $y_t\in\Re^{300\times 1}$ and $X_t\in\Re^{300\times 60}$ are two given sequences for $t=1,\cdots, T$. Error term…
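One standard route (a sketch of feasible GLS, which may or may not be what the asker intends) is to fit each regression by OLS first and estimate the noise covariance from the residuals:
$$ \hat f_t=(X_t^\top X_t)^{-1}X_t^\top y_t,\qquad \hat\varepsilon_t=y_t-X_t\hat f_t,\qquad \hat\Sigma=\frac{1}{T}\sum_{t=1}^{T}\hat\varepsilon_t\hat\varepsilon_t^\top, $$
after which $\hat\Sigma^{-1}$ (likely regularized, since $\hat\Sigma$ is $300\times300$) serves as the weight matrix in a GLS refit.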
2
votes
0 answers

Weighted least squares problem with a quadratic equality constraint

We need to solve the following least-squares problem $$\min_x (Y-Ax)^TW(Y-Ax)$$ $$\text{s.t. }\; x^TA^TAx=1$$ $$c^Tx=0$$ in a closed form, where $Y \in \mathbb{R}^{n\times 1}$, $A \in \mathbb{R}^{n\times n}$, $W \in \mathbb{R}^{n\times n}$, and $c \in…
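A sketch of the first-order (KKT) conditions via a Lagrangian, one route toward a closed form (assuming $W$ symmetric):
$$ \mathcal{L}(x,\lambda,\mu)=(Y-Ax)^\top W(Y-Ax)+\lambda\,(x^\top A^\top Ax-1)+\mu\,c^\top x, $$
$$ -2A^\top W(Y-Ax)+2\lambda A^\top Ax+\mu c=0,\qquad x^\top A^\top Ax=1,\qquad c^\top x=0. $$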
2
votes
1 answer

Proof of the Weighted Least Squares Solution

When calculating the weighted least squares solution, after taking the derivative we obtain the following equation: $X^\top WX\beta=X^\top Wy$, where $X_{n\times m}$ is the data matrix, with $n\geq m$ and $X$ of full rank; $W$ is the weight matrix,…
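Sketching the step being asked about (with $W$ symmetric positive definite and $X$ of full column rank):
$$ \frac{\partial}{\partial\beta}\,(y-X\beta)^\top W(y-X\beta)=-2X^\top Wy+2X^\top WX\beta=0 \;\Longrightarrow\; \hat\beta=(X^\top WX)^{-1}X^\top Wy, $$
where $X^\top WX$ is invertible because $X$ has full column rank and $W\succ0$.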
2
votes
0 answers

Obtaining all solutions of a linear equation by weighted generalized inverse

Consider a linear equation $$ Ax=b,\quad b\in\operatorname{col}(A). $$ The vector $b$ lies in the column space of $A$, so a solution of this linear equation exists. It is known that any solution has the form $$ x = A^\dagger b + (I-A^\dagger…
2
votes
1 answer

Iteratively reweighted least squares for LASSO problem

I'm trying to solve the following simplified problem (1D, with $x>0$): $$ \arg \min_{x} \frac{1}{2} \left\| A x - b \right\|_{2}^{2} + \frac{1}{2} \left\| x \right\|_{1}$$ I need to solve the problem with IRLS and both terms need to have…
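A minimal IRLS sketch for this type of objective (hypothetical `A` and `b`; the $\ell_1$ term is majorized by the quadratic $|x_i|\le x_i^2/\bigl(2|x_i^{(k)}|\bigr)+|x_i^{(k)}|/2$, so each iteration reduces to a weighted ridge solve):

```python
import numpy as np

def irls_lasso(A, b, lam=0.5, n_iter=50, eps=1e-8):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iteratively reweighted least squares.

    Majorizing |x_i| at the current iterate gives the linear system
    (A^T A + lam * diag(1/(|x^(k)| + eps))) x = A^T b at each iteration.
    """
    x = np.ones(A.shape[1])                    # simple initialization
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        D = np.diag(1.0 / (np.abs(x) + eps))   # reweighting of the l1 term
        x = np.linalg.solve(AtA + lam * D, Atb)
    return x

# Hypothetical example data (not from the question)
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -1.5]
b = A @ x_true + 0.01 * rng.normal(size=30)
print(np.round(irls_lasso(A, b, lam=0.5), 3))
```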
2
votes
1 answer

Finding the point on a plane closest to a point in $\mathbb{R}^n$ using the least squares method?

Suppose $S$ is a 2D plane in $\mathbb{R}^3$, i.e. the set of all vectors in $\mathbb{R}^3$ with $ax_1+bx_2=0$ (with $a,b \neq 0$). Let $b=(x_1,x_2,x_3)^T$ be any vector in $\mathbb{R}^3$. How can I use least squares fitting to find the point in…
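As a sketch of the least-squares view (writing $p$ for the given point to avoid the clash with the plane coefficient $b$): pick a basis $B\in\mathbb{R}^{3\times2}$ whose columns span $S$ and solve the least squares problem
$$ \hat\lambda=\operatorname*{argmin}_{\lambda\in\mathbb{R}^2}\|p-B\lambda\|^2=(B^\top B)^{-1}B^\top p,\qquad \hat p=B\hat\lambda=B(B^\top B)^{-1}B^\top p, $$
equivalently $\hat p=p-\dfrac{n^\top p}{n^\top n}\,n$ with plane normal $n=(a,b,0)^\top$.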
2
votes
1 answer

Matrix notation for weighted sum of squares

While going through page 1 of Lecture 24: Weighted and Generalized Least Squares [PDF], I had the following questions. The weighted sum of squares is defined as: $$ \sum_{i = 1}^{n}{w_i(Y_i - X_ib)^2}$$ And this could be written in matrix…
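For reference, the matrix form in question is the standard identity: with $W=\operatorname{diag}(w_1,\dots,w_n)$, $Y$ stacking the $Y_i$, and $X_i$ the $i$-th row of $X$,
$$ \sum_{i} w_i\,(Y_i-X_ib)^2=(Y-Xb)^\top W(Y-Xb). $$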
2
votes
2 answers

The derivative and extremum of a matrix function

$$f(W)=(Ax-b)^TW(Ax-b)=x^TA^TWAx-2b^TWAx+b^TWb$$ where $f(W)$ is a function of $W$, $A$ is a known matrix, and $x$ and $b$ are vectors ($b$ is known). How can I compute $\frac{\partial f}{\partial W}$?
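A sketch of one standard route: write $r=Ax-b$, so $f(W)=r^\top Wr=\sum_{i,j}r_iW_{ij}r_j$, and differentiate entrywise (treating the entries of $W$ as independent):
$$ \frac{\partial f}{\partial W_{ij}}=r_ir_j \;\Longrightarrow\; \frac{\partial f}{\partial W}=rr^\top=(Ax-b)(Ax-b)^\top. $$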