In part 4 about LLE, you define the weight matrix $W$ as a matrix in $\mathbb{R}^{k \times n}$. However, $W$ is actually a sparse $n \times n$ matrix where, for a given point $x_i$, every entry $W_{ij}$ for which $x_j$ is not among the $k$ nearest neighbors of $x_i$ is zero.
This confused some of the math for me.
For more clarity, consider checking out this paper's section on LLE. It distinguishes between $W$ and $\tilde{W}$ in a way that may be helpful.
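For concreteness, here is a minimal NumPy sketch of the sparsity pattern I mean. The weights are uniform placeholders (real LLE solves a local least-squares problem for each row); the point is just that $W$ is $n \times n$ with exactly $k$ nonzeros per row:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 3
X = rng.standard_normal((n, 2))  # n points in 2D

# Pairwise squared distances; set the diagonal to inf so a
# point is never counted as its own neighbor.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)

W = np.zeros((n, n))
for i in range(n):
    nbrs = np.argsort(d2[i])[:k]  # indices of the k nearest neighbors of x_i
    # Placeholder uniform weights; actual LLE weights come from a
    # constrained least-squares fit. All other entries W[i, j] stay zero.
    W[i, nbrs] = 1.0 / k

print(W.shape)                      # (n, n), not (k, n)
print((W != 0).sum(axis=1))         # exactly k nonzeros per row
```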