MATLAB Function Reference

**lscov**

Least squares solution in the presence of known covariance

**Syntax**

`x = lscov(A,b)`

`x = lscov(A,b,w)`

`x = lscov(A,b,V)`

`x = lscov(A,b,V,alg)`

`[x,stdx] = lscov(...)`

`[x,stdx,mse] = lscov(...)`

`[x,stdx,mse,S] = lscov(...)`

**Description**

`x = lscov(A,b)` returns the ordinary least squares solution to the linear system of equations `A*x = b`, i.e., `x` is the n-by-1 vector that minimizes the sum of squared errors `(b - A*x)'*(b - A*x)`, where `A` is m-by-n and `b` is m-by-1. `b` can also be an m-by-k matrix, and `lscov` returns one solution for each column of `b`. When `rank(A) < n`, `lscov` sets the maximum possible number of elements of `x` to zero to obtain a "basic solution".
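
For instance, a minimal sketch with made-up numbers (the values of `A` and `b` below are purely illustrative); for a full-rank `A`, the result agrees with the backslash operator:

```
% Overdetermined system: five equations, two unknowns (illustrative data)
A = [1 1; 1 2; 1 3; 1 4; 1 5];
b = [2.1; 3.9; 6.2; 8.1; 9.8];

x  = lscov(A,b);   % ordinary least squares solution
x2 = A\b;          % for full-rank A, backslash gives the same answer
```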

`x = lscov(A,b,w)`, where `w` is a vector of length m of real positive weights, returns the weighted least squares solution to the linear system `A*x = b`, that is, `x` minimizes `(b - A*x)'*diag(w)*(b - A*x)`. `w` typically contains either counts or inverse variances.
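
As a sketch (reusing the illustrative data above, with invented weights):

```
A = [1 1; 1 2; 1 3; 1 4; 1 5];
b = [2.1; 3.9; 6.2; 8.1; 9.8];
w = [1; 1; 1; 1; 0.25];      % e.g., the last observation is less reliable
xw = lscov(A,b,w);           % minimizes (b - A*x)'*diag(w)*(b - A*x)
```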

`x = lscov(A,b,V)`, where `V` is an m-by-m real symmetric positive definite matrix, returns the generalized least squares solution to the linear system `A*x = b` with covariance matrix proportional to `V`, that is, `x` minimizes `(b - A*x)'*inv(V)*(b - A*x)`.
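
A short sketch, again with invented data, where `V` expresses that later observations are noisier:

```
A = [1 1; 1 2; 1 3; 1 4];
b = [2.0; 4.1; 5.9; 8.2];
V = diag([1 1 2 4]);         % symmetric positive definite covariance (up to scale)
x = lscov(A,b,V);            % minimizes (b - A*x)'*inv(V)*(b - A*x)
```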

More generally, `V` can be positive semidefinite, and `lscov` returns `x` that minimizes `e'*e`, subject to `A*x + T*e = b`, where the minimization is over `x` and `e`, and `T*T' = V`. When `V` is semidefinite, this problem has a solution only if `b` is consistent with `A` and `V` (that is, `b` is in the column space of `[A T]`); otherwise `lscov` returns an error.

By default, `lscov` computes the Cholesky decomposition of `V` and, in effect, inverts that factor to transform the problem into ordinary least squares. However, if `lscov` determines that `V` is semidefinite, it uses an orthogonal decomposition algorithm that avoids inverting `V`.

`x = lscov(A,b,V,alg)` specifies the algorithm used to compute `x` when `V` is a matrix. `alg` can have the following values:

- `'chol'` uses the Cholesky decomposition of `V`.
- `'orth'` uses orthogonal decompositions, and is more appropriate when `V` is ill-conditioned or singular, but is computationally more expensive.
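
For example, one might request the orthogonal-decomposition algorithm explicitly when `V` is nearly singular (illustrative data):

```
A = [1 1; 1 2; 1 3; 1 4];
b = [2.0; 4.1; 5.9; 8.2];
V = diag([1 1 1 1e-8]);      % nearly singular covariance
x = lscov(A,b,V,'orth');     % force the orthogonal-decomposition algorithm
```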

`[x,stdx] = lscov(...)` returns the estimated standard errors of `x`. When `A` is rank deficient, `stdx` contains zeros in the elements corresponding to the necessarily zero elements of `x`.

`[x,stdx,mse] = lscov(...)` returns the mean squared error.

`[x,stdx,mse,S] = lscov(...)` returns the estimated covariance matrix of `x`. When `A` is rank deficient, `S` contains zeros in the rows and columns corresponding to the necessarily zero elements of `x`. `lscov` cannot return `S` if it is called with multiple right-hand sides, that is, if `size(B,2) > 1`.
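
A sketch requesting all four outputs with a single right-hand side (illustrative data):

```
A = [1 1; 1 2; 1 3; 1 4; 1 5];
b = [2.1; 3.9; 6.2; 8.1; 9.8];
w = [1; 1; 1; 1; 0.25];
[x,stdx,mse,S] = lscov(A,b,w);   % S is available because size(b,2) == 1
```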

The standard formulas for these quantities, when `A` and `V` are full rank, are

`x = inv(A'*inv(V)*A)*A'*inv(V)*B`

`mse = B'*(inv(V) - inv(V)*A*inv(A'*inv(V)*A)*A'*inv(V))*B./(m-n)`

`S = inv(A'*inv(V)*A)*mse`

`stdx = sqrt(diag(S))`

However, `lscov` uses methods that are faster and more stable, and are applicable to rank deficient cases.
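
As a rough check (with made-up, full-rank data), the outputs can be compared against the formulas above:

```
A = [1 1; 1 2; 1 3; 1 4; 1 5];
b = [2.1; 3.9; 6.2; 8.1; 9.8];
V = diag([1 1 1 2 2]);
[m,n] = size(A);

[x,stdx,mse,S] = lscov(A,b,V);

x0    = inv(A'*inv(V)*A)*A'*inv(V)*b;
mse0  = b'*(inv(V) - inv(V)*A*inv(A'*inv(V)*A)*A'*inv(V))*b./(m-n);
S0    = inv(A'*inv(V)*A)*mse0;
stdx0 = sqrt(diag(S0));
% x-x0, mse-mse0, S-S0, and stdx-stdx0 should all be near zero
```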

`lscov` assumes that the covariance matrix of `B` is known only up to a scale factor. `mse` is an estimate of that unknown scale factor, and `lscov` scales the outputs `S` and `stdx` appropriately. However, if `V` is known to be exactly the covariance matrix of `B`, then that scaling is unnecessary. To get the appropriate estimates in this case, you should rescale `S` and `stdx` by `1/mse` and `sqrt(1/mse)`, respectively.
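
A sketch of that rescaling, assuming the (invented) `V` below really is the exact covariance of `b`:

```
A = [1 1; 1 2; 1 3; 1 4; 1 5];
b = [2.1; 3.9; 6.2; 8.1; 9.8];
V = 0.04*eye(5);                 % assumed exact covariance of b (illustrative)
[x,stdx,mse,S] = lscov(A,b,V);

S_exact    = S/mse;              % rescale S by 1/mse
stdx_exact = stdx*sqrt(1/mse);   % rescale stdx by sqrt(1/mse)
```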

**Algorithm**

The vector `x` minimizes the quantity `(A*x-b)'*inv(V)*(A*x-b)`. The classical linear algebra solution to this problem is

`x = inv(A'*inv(V)*A)*A'*inv(V)*b`

but the `lscov` function instead computes the QR decomposition of `A` and then modifies `Q` by `V`.

**See Also**

The arithmetic operator `\`

**Reference**

[1] Strang, G., *Introduction to Applied Mathematics*, Wellesley-Cambridge,
1986, p. 398.

© 1994-2005 The MathWorks, Inc.