@@ -100,7 +100,7 @@ y = \boldsymbol{\beta}^T \mathbf{x} + \epsilon
 \end{equation}
 
 where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ for each sample. When the true
-$\boldsymbol{\beta}$ is thought to be sparse (i.e., some subset of the $\beta$
+$\boldsymbol{\beta}$ is thought to be sparse (i.e., some subset of the $\boldsymbol{\beta}$
 are exactly zero), an estimate of $\boldsymbol{\beta}$ can be found by solving a
 constrained optimization problem of the form
 
@@ -124,7 +124,7 @@ intersection (compressive) operations and model estimation through union
 selection profiles that are more robust and parameter estimates that have less bias. This can be
 contrasted with a typical Lasso fit wherein parameter selection and estimation are performed
 simultaneously. The Lasso procedure can lead to selection profiles that are not robust
-to data resampling and estimates that are biased by the penalty on $\beta$. For
+to data resampling and estimates that are biased by the penalty on $\boldsymbol{\beta}$. For
 UoI~Lasso, the procedure is as follows (see Algorithm 1 for detailed pseudocode):
 
 * **Model Selection:** For each $\lambda_j$ in the Lasso path, generate estimates on $N_S$
@@ -148,7 +148,7 @@ and the degree of feature expansion via unions (quantified by $N_E$) can be bala
 prediction accuracy for the response variable $y$.
 
 \begin{algorithm}[t]
-\caption{\textsc{UoI-Lasso}}
+\caption{\textsc{UoI$_\textsc{Lasso}$}}
 \label{alg:uoi}
 \hspace*{\algorithmicindent} \textbf{Input}:
 $X \in \mathbb{R}^{N\times p}$ design matrix \\
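The selection-then-estimation procedure the diff describes (intersect Lasso supports across bootstraps for each $\lambda_j$, then estimate with unpenalized fits and combine across estimation bootstraps) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function name `uoi_lasso_sketch`, the use of scikit-learn estimators, and the choice of picking the best support per bootstrap by held-out squared error are all assumptions for demonstration.

```python
# Illustrative sketch of a UoI-Lasso-style procedure; names and the
# support-scoring rule are assumptions, not taken from the paper's code.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def uoi_lasso_sketch(X, y, lambdas, n_boots_sel=10, n_boots_est=10, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Model selection: for each lambda, intersect the Lasso supports
    # found on n_boots_sel bootstrap resamples (compressive step).
    supports = []
    for lam in lambdas:
        support = np.ones(p, dtype=bool)
        for _ in range(n_boots_sel):
            idx = rng.choice(n, size=n, replace=True)
            coef = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx]).coef_
            support &= coef != 0
        supports.append(support)

    # Model estimation: on each estimation bootstrap, fit unpenalized OLS
    # restricted to each candidate support, keep the support with the best
    # held-out error, then average estimates across bootstraps (expansive
    # union/combination step, reducing the penalty-induced bias).
    estimates = np.zeros((n_boots_est, p))
    for b in range(n_boots_est):
        train = rng.choice(n, size=n, replace=True)
        test = np.setdiff1d(np.arange(n), train)
        best_err, best_beta = np.inf, np.zeros(p)
        for support in supports:
            if not support.any() or test.size == 0:
                continue
            beta = np.zeros(p)
            ols = LinearRegression(fit_intercept=False)
            beta[support] = ols.fit(X[train][:, support], y[train]).coef_
            err = np.mean((y[test] - X[test] @ beta) ** 2)
            if err < best_err:
                best_err, best_beta = err, beta
        estimates[b] = best_beta
    return estimates.mean(axis=0)
```

Because the final coefficients come from unpenalized fits on selected supports, the sketch avoids the shrinkage bias that a single Lasso fit imposes on the retained coefficients, which is the contrast the paragraph above draws.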