Lecture 23: 03/21/2012

We’re done with Treaps today.

First, we covered some of the intuition as to why the expected time to find an element m in a random treap is O(log n).

The analogy used is that of a dart game. You throw darts at a board numbered 1 to n, and the first dart marks the root of the treap. Then you ignore everything to the left or everything to the right of that dart, depending on whether m lies to the right or left of it, respectively, and repeat the process on what remains until you hit m.

In expectation, you’re throwing out roughly half of the elements with each dart, so the expected time to find m is O(log n).
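To see that halving effect concretely, here is a small simulation of the dart game (a sketch; the board size 1024 and target 500 are arbitrary illustrative choices, not from lecture):

```python
import random

def darts_to_find(n, m):
    """Throw darts uniformly at the remaining range; after each miss,
    discard the side of the dart that does not contain m."""
    lo, hi = 1, n
    darts = 0
    while True:
        darts += 1
        d = random.randint(lo, hi)
        if d == m:
            return darts
        if m < d:
            hi = d - 1   # m is to the left: ignore everything right of d
        else:
            lo = d + 1   # m is to the right: ignore everything left of d

random.seed(0)
avg = sum(darts_to_find(1024, 500) for _ in range(10000)) / 10000
# avg stays a small multiple of ln(1024) ~ 6.9, i.e. O(log n) darts
```

Doubling n should increase the average dart count only by an additive constant, matching the logarithmic bound.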

Formally, we proved the bound by solving a recurrence relation for the expected cost to search for a particular element in the random treap.
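Kozen's derivation sets this up as a recurrence; the closed form it leads to can be sketched as follows (an outline, using the standard fact that key i is an ancestor of key m exactly when i has the highest priority among the keys between them, and writing H_k for the k-th harmonic number):

```latex
% Expected number of nodes on the search path for the key of rank m:
\mathbb{E}[\text{nodes visited}]
  = 1 + \sum_{i \neq m} \Pr[i \text{ is an ancestor of } m]
  = 1 + \sum_{i \neq m} \frac{1}{|i - m| + 1}
  = H_m + H_{n-m+1} - 1
  \le 2 \ln n + 1.
```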

On Friday we’re covering Planar graphs, which will motivate the Planar separator theorem. Make sure to skim through the chapter to get the basic definitions and come up with some really good questions.

Enjoy the beautiful weather!

Lecture 22: 03/19/2012

Welcome back everyone.

Today we reviewed what we talked about during the last lecture. If you recall, the only thing left was for us to prove the O(log n) bound on each of the treap operations, so we started that proof today.

Take a look at the rest of the proof in Kozen and we’ll wrap it up Wednesday, so bring any questions you have to class.

Lecture 21: 03/05/2012

Today’s topic: Treaps (or Tree-heap).

What are they? A cross between binary search trees and heaps. They are binary trees whose nodes are “in-order”, meaning that a node with key k is greater than everything in its left subtree and less than everything in its right subtree.

Furthermore, every node k is assigned a priority p(k) such that p(parent(k)) < p(k) (with the convention that a smaller number means higher priority, so 1 is the highest); consequently, the root has the highest priority.

At first you might think that, given a set of keys S and a priority assignment for S, a treap might not exist. However, you can simply insert the keys into a new treap T in order of priority (highest first), as you would into a normal binary search tree. If the priorities are all unique, then we get a unique treap:

The highest priority key must be the root, then recursively build the unique treaps for the elements less than k and greater than k and attach them to k as the left and right subtrees, respectively.
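A minimal sketch of that recursive construction in Python (the Node class and the example keys/priorities are illustrative, not from lecture):

```python
class Node:
    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

def build(pairs):
    """Build the unique treap for (key, priority) pairs,
    assuming all priorities are distinct (smaller = higher)."""
    if not pairs:
        return None
    k, p = min(pairs, key=lambda kp: kp[1])  # highest-priority key is the root
    node = Node(k, p)
    node.left = build([kp for kp in pairs if kp[0] < k])
    node.right = build([kp for kp in pairs if kp[0] > k])
    return node

# Example: key 3 has the highest priority (1), so it must be the root.
t = build([(5, 2), (3, 1), (8, 3)])
```

Each recursive call peels off the highest-priority key, which is exactly the argument for uniqueness: at every level the root is forced, and the two subproblems are independent.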

How are they like BSTs? insert(x, T), member(x, T), and delete(x, T) all run in expected O(log n) time.

insert(x, T): do a normal insert into T as a leaf as you would for a BST, then randomly generate a priority for x and rotate it up (via tree rotations, not node swaps, so the in-order property is preserved) until the heap property is restored

member(x, T): same as for BST

delete(x, T): look for x as you would for a BST; once we find it, rotate it down to a leaf by repeatedly rotating x with its higher-priority child, then trim it off the tree. We must rotate with the higher-priority child so that we don’t mess up the heap property along the way.

All that’s left is the analysis; note that all of the bounds are in expectation. We saw examples of treaps that are highly unbalanced, but if we average over all possible priority assignments for the same set of keys, the expected depth is O(log n).