Resources

Vandenberghe Lectures

Notation

| variable | dimension | name |
| --- | --- | --- |
| $\mathbf{x}_i$ | $D \times 1$ | $i$th predictor |
| $\mathbf{y}_i$ | $1 \times 1$ | $i$th state |
| $\alpha_i$ | $1 \times 1$ | $i$th sample weight |
| $\mathbf{w}$ | $D \times 1$ | weights |
| $\mathbf{X}$ | $N \times D$ | all predictors |
| $\mathbf{Y}$ | $N \times 1$ | all states |
| $\mathbf{A}$ | $N \times N$ | diagonal matrix whose diagonal entries are the data point weights $\alpha_i$ |

Weighted Gaussian Linear regression

The log-likelihood of a dataset $\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i, \alpha_i)\}_{i=1}^N$ with weighted samples, modelled by a linear Gaussian function, is given by:

$$L(\mathcal{D};\boldsymbol{\theta}) \triangleq \sum_{i=1}^N \log p(\mathbf{y}_i|\mathbf{x}_i;\boldsymbol{\theta}) \, \alpha_i$$

where $p(\mathbf{y}_i \mid \mathbf{x}_i; \boldsymbol{\theta})$ is a Gaussian probability density function:

$$p(\mathbf{y}_i \mid \mathbf{x}_i; \boldsymbol{\theta}) = \mathcal{N}\left(\mathbf{y}_i \mid \mathbf{w}^\top \mathbf{x}_i,\ \sigma^2\right)$$

with parameters $\boldsymbol{\theta} = (\mathbf{w}, \sigma^2)$.

Expansion of the log-likelihood

First, without considering the weights, we simplify the sum of log-densities:
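
One way to write this out, using the Gaussian form above:

$$\sum_{i=1}^N \log p(\mathbf{y}_i \mid \mathbf{x}_i; \boldsymbol{\theta}) = -\frac{N}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2} \sum_{i=1}^N \left(\mathbf{y}_i - \mathbf{w}^\top \mathbf{x}_i\right)^2$$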

Simplifying with the weights:
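
Weighting each term by $\alpha_i$, as in the definition of $L(\mathcal{D}; \boldsymbol{\theta})$, this becomes:

$$L(\mathcal{D}; \boldsymbol{\theta}) = -\frac{1}{2}\log\left(2\pi\sigma^2\right) \sum_{i=1}^N \alpha_i - \frac{1}{2\sigma^2} \sum_{i=1}^N \alpha_i \left(\mathbf{y}_i - \mathbf{w}^\top \mathbf{x}_i\right)^2$$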

1D Maximum likelihood

Given that we are in the 1D case, $x_i, y_i \in \mathbb{R}$:
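
Written with a scalar weight $w$ and no intercept term (an assumption on my part), the weighted log-likelihood is

$$L(\mathcal{D}; w, \sigma^2) = -\frac{1}{2}\log\left(2\pi\sigma^2\right) \sum_{i=1}^N \alpha_i - \frac{1}{2\sigma^2} \sum_{i=1}^N \alpha_i \left(y_i - w x_i\right)^2$$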

Set the derivatives with respect to the parameters to zero, $\frac{\partial L}{\partial w} = 0$ and $\frac{\partial L}{\partial \sigma^2} = 0$, and solve for $w$ and $\sigma^2$:
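
With the 1D form above, the two conditions are

$$\frac{\partial L}{\partial w} = \frac{1}{\sigma^2} \sum_{i=1}^N \alpha_i x_i \left(y_i - w x_i\right) = 0, \qquad \frac{\partial L}{\partial \sigma^2} = -\frac{1}{2\sigma^2} \sum_{i=1}^N \alpha_i + \frac{1}{2\sigma^4} \sum_{i=1}^N \alpha_i \left(y_i - w x_i\right)^2 = 0$$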

Maximise with respect to the weights
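
Solving the first condition for $w$ gives

$$\hat{w} = \frac{\sum_{i=1}^N \alpha_i x_i y_i}{\sum_{i=1}^N \alpha_i x_i^2}$$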

Maximise with respect to the variance
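
Solving the second condition for $\sigma^2$, with $\hat{w}$ substituted in, gives

$$\hat{\sigma}^2 = \frac{\sum_{i=1}^N \alpha_i \left(y_i - \hat{w} x_i\right)^2}{\sum_{i=1}^N \alpha_i}$$

As a quick numerical sanity check of these two closed-form estimators, here is a minimal NumPy sketch; the function name `weighted_1d_mle` and the synthetic data are illustrative only, not part of the original notes.

```python
import numpy as np

def weighted_1d_mle(x, y, alpha):
    """Weighted ML estimates for the assumed 1D model y ~ N(w * x, sigma^2).

    x, y, alpha are 1-D arrays of equal length; alpha holds the
    non-negative sample weights alpha_i.
    """
    # w_hat = sum(alpha * x * y) / sum(alpha * x^2)
    w_hat = np.sum(alpha * x * y) / np.sum(alpha * x**2)
    # sigma2_hat = sum(alpha * (y - w_hat * x)^2) / sum(alpha)
    sigma2_hat = np.sum(alpha * (y - w_hat * x) ** 2) / np.sum(alpha)
    return w_hat, sigma2_hat

# Synthetic data drawn from the assumed model, with arbitrary positive weights.
rng = np.random.default_rng(0)
N, true_w, true_sigma = 10_000, 2.5, 0.7
x = rng.normal(size=N)
y = true_w * x + rng.normal(scale=true_sigma, size=N)
alpha = rng.uniform(0.1, 1.0, size=N)

w_hat, sigma2_hat = weighted_1d_mle(x, y, alpha)
print(w_hat, np.sqrt(sigma2_hat))  # should be close to 2.5 and 0.7
```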