Lecture 33: 04/18/2012

Another go at perfect matchings, but this time we want to look at bipartite graphs. The difference is that we don’t want to use Tutte’s theorem to help us. So what can we do?

We may proceed using an idea similar to the Tutte matrix. However, since the graph is bipartite, the Tutte matrix will have many zeroes, so we may simplify our representation of the adjacency matrix.

At the end we saw a neat application: the inverse of our simplified matrix tells us about the matchings in the graph.
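The notes above don’t include code, so here is a rough sketch of the idea as I understand it (my own reconstruction, not Prof. Blum’s; the helper names and the choice of prime are assumptions). For a bipartite graph with n vertices on each side, the simplified matrix, often called the Edmonds matrix, has a variable x_ij in position (i, j) exactly when left vertex i is adjacent to right vertex j, and 0 elsewhere. To test for a perfect matching we substitute random field elements for the variables and check whether the determinant is nonzero.

```python
import random

P = 2_147_483_647  # a large prime; we work over the field F_P

def det_mod_p(mat, p=P):
    """Determinant of a square matrix over F_p via Gaussian elimination."""
    a = [row[:] for row in mat]
    n = len(a)
    det = 1
    for col in range(n):
        # find a row with a nonzero entry in this column to use as the pivot
        pivot = next((r for r in range(col, n) if a[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det = det * a[col][col] % p
        inv = pow(a[col][col], p - 2, p)  # modular inverse (Fermat's little theorem)
        for r in range(col + 1, n):
            factor = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - factor * a[col][c]) % p
    return det % p

def bipartite_has_perfect_matching(n, edges, trials=5):
    """edges: (i, j) pairs meaning left vertex i is adjacent to right vertex j.
    Plugs random field elements into the variable positions and tests det != 0.
    'True' is always correct; 'False' can be wrong with small probability."""
    for _ in range(trials):
        m = [[0] * n for _ in range(n)]
        for i, j in edges:
            m[i][j] = random.randrange(1, P)  # random value for the variable x_ij
        if det_mod_p(m) != 0:
            return True
    return False

# complete bipartite graph on 2 + 2 vertices: has a perfect matching
print(bipartite_has_perfect_matching(2, [(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
```

The inverse trick from the end of lecture isn’t shown here; roughly, when the randomly filled matrix is invertible, the nonzero entries of its inverse flag (with high probability) the edges that can appear in some perfect matching.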

I’m uploading the pictures of the chalkboards, but as a special treat, we get to look at Prof. Blum’s personal notes. If these are helpful, please let me or the professor know.

Lecture 32: 04/16/2012

More Matching. It’s starting to seem like perfect matchings are everywhere in computer science.

Lots of examples with graphs and Tutte matrices. We also looked at what the determinant of a Tutte matrix actually tells you about a graph when you compute it all out. Surprisingly, it tells you more than just whether there is a perfect matching: it will also tell you if there is a Hamiltonian cycle in the graph.

So what? Well now we know that computing a determinant symbolically is at least as hard as determining if a graph has a Hamiltonian cycle, which is NP-complete. That’s hard, let me tell you.

Lecture 31: 04/13/2012

A recap of Wednesday’s material at the beginning. We changed our definition of the degree of a multivariable polynomial slightly to make the proof of the Schwartz-Zippel Theorem a bit simpler.

Let f(x_1, …, x_n) be a polynomial in n variables over some field F.

A deterministic algorithm works like this:
Fix d to be the degree of f.

Choose d+1 values of each x_i for 1 <= i <= n and evaluate f at every combination of those values; f is identically zero if and only if every one of those evaluations is zero.

The randomized test (one variable for now) is much cheaper: choose a set S from F of size k*d for some k > 1, pick x uniformly at random from S, and evaluate f(x). If f is not identically zero, then f(x) = 0 with probability at most d/|S| = 1/k, so a zero evaluation means f is probably identically zero. If we repeat this test a few times using elements selected uniformly at random from S, we get a pretty high probability of obtaining the correct answer.
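Here is a small sketch of the one-variable randomized test under some assumptions of my own: f is treated as a black box we can evaluate, and the set S is just the integers 1 through k*d rather than arbitrary field elements. The function name is made up.

```python
import random

def probably_zero(f, d, k=10, trials=5):
    """Randomized zero test for a univariate polynomial f of degree at most d.
    Each trial picks x uniformly from a set S of size k*d; if f is not
    identically zero, a trial sees f(x) == 0 with probability at most 1/k."""
    S = range(1, k * d + 1)
    return all(f(random.choice(S)) == 0 for _ in range(trials))

# (x - 1)(x - 2) - (x^2 - 3x + 2) is identically zero; x^2 - 4 is not
print(probably_zero(lambda x: (x - 1) * (x - 2) - (x * x - 3 * x + 2), d=2))  # True
print(probably_zero(lambda x: x * x - 4, d=2))  # almost certainly False
```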

If we extend to multivariate polynomials, we still choose random elements from a set S of size k*d from the field F where the polynomial is defined. However, we now evaluate f(x_1, …, x_n) at a tuple chosen uniformly at random from S^n. The Schwartz-Zippel theorem gives the same bound: if f is not identically zero, then it evaluates to 0 at the random tuple with probability at most 1/k.
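The multivariate version of the sketch is nearly identical; again the names are my own, and each coordinate of the test point is drawn independently and uniformly from S.

```python
import random

def probably_zero_multivariate(f, n, d, k=10, trials=5):
    """Schwartz-Zippel style test for f(x_1, ..., x_n) of total degree at most d.
    Evaluates f at a uniform random point of S^n with |S| = k*d; if f is not
    identically zero, a single trial sees 0 with probability at most d/|S| = 1/k."""
    S = range(1, k * d + 1)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n)]
        if f(*point) != 0:
            return False   # a nonzero value is conclusive
    return True            # probably the zero polynomial

# x*y - y*x is identically zero; x*y - 1 is not
print(probably_zero_multivariate(lambda x, y: x * y - y * x, n=2, d=2))  # True
print(probably_zero_multivariate(lambda x, y: x * y - 1, n=2, d=2))      # almost certainly False
```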

Now for another cool application of testing polynomials. Suppose we have a graph G and we want to know if there exists a perfect matching.

A matching is a 1-regular subgraph of G, and a perfect matching is a 1-regular spanning subgraph of G. All this means is that we pair up vertices so that every pair shares an edge and no vertex appears in more than one pair; in a perfect matching, every vertex appears in exactly one pair.

How do we do this? Well, we can look for obvious signs that no perfect matching exists, such as an odd number of vertices. OK, but what if I have an even number of vertices? Try all the matchings! Brute force is always the way to go (just kidding).
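For comparison, here is roughly what "try all the matchings" looks like (a toy of my own, not from lecture): pair the lowest unmatched vertex with each of its unmatched neighbors and recurse. It is correct, but in the worst case it explores exponentially many pairings.

```python
def has_perfect_matching_bruteforce(n, edges):
    """Exponential-time perfect matching check for an n-vertex graph.
    edges: list of (u, v) pairs. Tries every way of pairing the lowest
    unmatched vertex with one of its still-unmatched neighbors."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def solve(unmatched):
        if not unmatched:
            return True
        u = min(unmatched)
        return any(solve(unmatched - {u, v}) for v in adj[u] if v in unmatched)

    return n % 2 == 0 and solve(frozenset(range(n)))

# a triangle has no perfect matching (odd n); a 4-cycle does
print(has_perfect_matching_bruteforce(3, [(0, 1), (1, 2), (0, 2)]))          # False
print(has_perfect_matching_bruteforce(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
```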

There’s a deterministic algorithm that runs in O(m sqrt(n)) time, but can we do better using randomization? Yes!

We assign every edge a variable and form the adjacency matrix of the graph G. However, let’s orient the edges arbitrarily, so each edge gives a 1 in one position and a -1 in the mirrored position, and then replace every 1 or -1 with plus or minus the variable of that edge. The result is the Tutte matrix of G.
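A sketch of that construction (my own code; the sympy dependency and the variable names are assumptions, not from lecture), using "from the smaller index to the larger one" as the arbitrary orientation:

```python
import sympy

def tutte_matrix(n, edges):
    """Symbolic Tutte matrix of an n-vertex graph.
    edges: (i, j) pairs, 0-indexed. Edge {i, j} with i < j gets variable x_i_j,
    placed at position (i, j) and negated at (j, i)."""
    T = sympy.zeros(n, n)
    for i, j in edges:
        i, j = min(i, j), max(i, j)
        x = sympy.Symbol(f"x_{i}_{j}")
        T[i, j] = x    # edge oriented i -> j
        T[j, i] = -x   # skew-symmetric mirror entry
    return T

# 4-cycle 0-1-2-3-0: the determinant is a nonzero polynomial,
# which factors as (x_0_1*x_2_3 + x_0_3*x_1_2)**2
T = tutte_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(T.det())
```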

Now we compute the determinant symbolically: the resulting polynomial is not identically zero if and only if the graph has a perfect matching. However, computing determinants symbolically is hard, so we can accomplish the same thing by substituting random values for the variables and checking whether the result is zero, which is exactly the polynomial identity test from above.
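Putting the two paragraphs above together, here is a hedged sketch of the randomized test for a general graph: fill the Tutte matrix with random values from a large prime field instead of symbols and check whether the determinant is nonzero. The prime, the trial count, and the function name are my own choices.

```python
import random
import sympy

P = 2_147_483_647  # a large prime; substituted values come from the field F_P

def has_perfect_matching(n, edges, trials=5):
    """Randomized perfect-matching test for a general n-vertex graph.
    Builds the Tutte matrix with random field elements in place of the
    variables and checks whether its determinant is nonzero mod P.
    'True' is always correct; 'False' can be wrong with probability
    roughly n/P per trial when a perfect matching does exist."""
    edges = [(min(i, j), max(i, j)) for i, j in edges]
    for _ in range(trials):
        T = sympy.zeros(n, n)
        for i, j in edges:
            r = random.randrange(1, P)
            T[i, j] = r
            T[j, i] = -r
        if T.det() % P != 0:
            return True
    return False

# a path on 4 vertices has a perfect matching; a star on 4 vertices does not
print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)]))  # False
```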