Bare bones of Matrix Completion – III (4 mins read)

This will hopefully be the last post on the matrix completion problem. But since the MBI inequality is widely used, I decided to squeeze in one more application, the community detection problem, in a future post (yay!). So first the matrix completion problem: we will see how a simple algorithm can yield provably accurate results. To that end, since the problem as stated is ill posed, we will first make an assumption on the matrix that we are completing, namely the low rank assumption (we will discuss the case where the matrix being completed is square).

Let X be an n\times n matrix with \text{rank}(X)=r\ll n. The algorithm for completing the missing entries is as follows: 1. Set the missing entries to 0, obtaining a matrix Y; 2. Compute the best rank-r approximation \hat{X} of p^{-1}Y, where p is the probability that an entry is observed. Then we can prove the following theorem.
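In code, the two steps above amount to zero-filling and one truncated SVD. A minimal numpy sketch (the function name and signature are mine, not from the post):

```python
import numpy as np

def complete_matrix(X_obs, mask, r, p):
    """Zero-fill-and-truncate completion, a sketch of the two steps above.

    X_obs : n x n array holding the observed entries (values at
            unobserved positions are ignored)
    mask  : n x n 0/1 array, 1 where the entry was observed
    r     : assumed rank of the underlying matrix
    p     : probability with which each entry was observed
    """
    # Step 1: fill the missing entries with 0, i.e. Y_ij = delta_ij * X_ij
    Y = X_obs * mask
    # Step 2: best rank-r approximation of p^{-1} Y via truncated SVD
    U, s, Vt = np.linalg.svd(Y / p, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

By Eckart–Young, keeping the top r singular triples of p^{-1}Y is exactly the "best rank-r approximation" the algorithm asks for, in both operator and Frobenius norms.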

Theorem: Suppose each entry is observed independently with probability p, i.e., \delta_{ij}\sim\text{Ber}(p) with \mathbb{E}\,\delta_{ij}=p=\frac{m}{n^2}, and fill the unknown entries with 0: Y_{ij}=\delta_{ij}X_{ij}. Let \hat{X} be the best rank-r approximation of p^{-1}Y, let m denote the (expected) number of observed entries, and let \|X\|_{\infty}:=\max_{i,j}|X_{ij}|. Then \mathbb{E}\,\frac{1}{n}\|\hat{X}-X\|_F\leq C\log n\sqrt{\frac{rn}{m}}\|X\|_{\infty}. In essence, matrix completion works if we observe m\approx \log^2(n)\,rn entries.
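To get a feel for the theorem before the proof, here is a quick numerical sanity check (a sketch; the sizes and the random rank-r model are my choices): with m\approx \log^2(n)rn observed entries, the per-entry Frobenius error should sit below the scale \log n\sqrt{rn/m}\,\|X\|_{\infty} predicted by the bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 400, 3
m_target = int(np.log(n) ** 2 * r * n)  # m ~ log^2(n) * r * n observed entries
p = m_target / n ** 2                   # corresponding observation probability

# A rank-r ground truth and a Bernoulli(p) observation pattern
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
delta = (rng.random((n, n)) < p).astype(float)
Y = delta * X                           # zero-filled observations

# Best rank-r approximation of p^{-1} Y
U, s, Vt = np.linalg.svd(Y / p, full_matrices=False)
X_hat = (U[:, :r] * s[:r]) @ Vt[:r]

# Compare the per-entry error with the scale appearing in the theorem
per_entry_err = np.linalg.norm(X_hat - X, "fro") / n
bound_scale = np.log(n) * np.sqrt(r * n / m_target) * np.abs(X).max()
print(per_entry_err, bound_scale)
```

On runs like this the empirical error is comfortably below the theoretical scale, which is what one expects since the bound is worst-case up to the constant C.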

Proof: 1. \|\hat{X}-X\|\leq \|\hat{X}-p^{-1}Y\|+\|p^{-1}Y-X\|. Note that the first term is always at most the second, since \hat{X} is the best rank-r approximation of p^{-1}Y while X itself is a rank-r candidate. So we have that,

\|\hat{X}-X\|\leq \|\hat{X}-p^{-1}Y\|+\|p^{-1}Y-X\|\leq 2\|p^{-1}Y-X\|=\frac{2}{p}\|Y-pX\|. Here Y-pX is the matrix with entries \left(Y-pX\right)_{ij}=\left(\delta_{ij}-p\right)X_{ij}, which are independent with mean 0. Hence, using the bound \mathbb{E}\|Y-pX\|\lesssim\log n\left(\mathbb{E}\max_i\|(Y-pX)_i\|_2+\mathbb{E}\max_j\|(Y-pX)^j\|_2\right), together with \mathbb{E}\|(Y-pX)_i\|_2^2=\mathbb{E}\sum_{j=1}^n\left(\delta_{ij}-p\right)^2X_{ij}^2\leq pn\|X\|_{\infty}^2, we get \mathbb{E}\|Y-pX\|\lesssim \sqrt{pn}\,\|X\|_{\infty}\log n. This further implies that \mathbb{E}\|\hat{X}-X\|\lesssim \sqrt{n/p}\,\|X\|_{\infty}\log n.

2. Now the operator norm bound converts into a Frobenius norm bound since \text{rank}(\hat{X}-X)\leq 2r, which gives \|\hat{X}-X\|_F\leq \sqrt{2r}\,\|\hat{X}-X\|. So we have that \mathbb{E}\|\hat{X}-X\|_F\leq \sqrt{2r}\,\mathbb{E}\|\hat{X}-X\|\lesssim \log n\sqrt{\frac{nr}{p}}\|X\|_{\infty}, and dividing both sides by n gives the result (recall p=m/n^2). \square
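The two deterministic facts used in this last step, namely that a difference of two rank-r matrices has rank at most 2r, and that \|M\|_F\leq\sqrt{\text{rank}(M)}\,\|M\|, can be sanity-checked numerically (a small numpy sketch; the matrices are my own random examples):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 100, 4

# Two rank-r matrices; their difference has rank at most 2r
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
M = A - B
assert np.linalg.matrix_rank(M) <= 2 * r

# ||M||_F <= sqrt(rank(M)) * ||M||, since ||M||_F^2 is a sum of
# rank(M) squared singular values, each at most ||M||^2
fro = np.linalg.norm(M, "fro")
op = np.linalg.norm(M, 2)   # operator (spectral) norm
assert fro <= np.sqrt(np.linalg.matrix_rank(M)) * op + 1e-8
```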

The proof for now consists mostly of equations; I'll try to add more explanation soon.
