*Algorithms + Data Structures = Programs*

The notion of algorithm was first formalized by **Church** and **Turing** in the 1930s (http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis).

Algorithmic (computational) models are increasingly replacing **mathematical models** in scientific inquiry.

Given a set of **N** objects, we can define two operations:

- **Union command** - connect two objects *p* and *q* (create a path between *p* and *q*)
- **Connected query** - are two objects *p* and *q* connected (is there a path from *p* to *q*)?

The *is connected to* relation used by **Union-Find** is an equivalence relation:

- **Reflexive** - *p* is connected to *p* (itself)
- **Symmetric** - if *p* is connected to *q*, then *q* is connected to *p*
- **Transitive** - if *p* is connected to *q* and *q* is connected to *r*, then *p* is connected to *r*

**CONNECTED COMPONENTS** - a maximal set of objects that are mutually connected.

We can implement our data type and the set of operations as a Python class:
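The original listing is not included here; a minimal sketch of such a class, with illustrative method names, could look like this:

```python
class UnionFind:
    """n objects, labeled 0..n-1, supporting union and connected."""

    def __init__(self, n):
        # each object starts out in its own component
        self.id = list(range(n))

    def union(self, p, q):
        """Connect objects p and q (filled in by a concrete implementation)."""
        raise NotImplementedError

    def connected(self, p, q):
        """Is there a path between p and q? (filled in by a concrete implementation)"""
        raise NotImplementedError
```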

An eager approach to implementing the *Quick-Find* algorithm is to use an array of **id**s. The operations we can define on our data structure are:

- **FIND** - *p* and *q* are connected if they have the same **id**
- **UNION** - merge components by changing all entries equal to *id[p]* to *id[q]*

Here is an example of how our data structure looks after performing some **UNION** commands:

index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---|---|---
id | 0 | 1 | 1 | 8 | 8 | 0 | 0 | 1 | 8 | 8

**Quick-Find** algorithm implementation (eager approach):
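The listing itself is missing here; a self-contained sketch of the eager approach described above (class and method names are illustrative) might be:

```python
class QuickFind:
    """Eager union-find: p and q are connected iff id[p] == id[q]."""

    def __init__(self, n):
        # each object starts out in its own component
        self.id = list(range(n))

    def connected(self, p, q):
        # FIND: constant time, a single array comparison
        return self.id[p] == self.id[q]

    def union(self, p, q):
        # UNION: relabel every entry equal to id[p] to id[q] -- scans
        # the whole array, hence linear time per union
        pid, qid = self.id[p], self.id[q]
        if pid == qid:
            return
        self.id = [qid if i == pid else i for i in self.id]
```

For example, after `union(4, 3)` and `union(3, 8)`, objects 3, 4 and 8 all share the id 8, so `connected(4, 8)` is `True`.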

Resulting **Cost Model**:

algorithm | initialize | union | find
---|---|---|---
Quick-Find | N | N | 1

**Defect:** processing **N** union commands on a set of **N** objects takes quadratic time (N^2), since each union may scan the entire array.

**Other approaches and/or improvements to Quick-Find will follow in Part II.**

**Simon**: What happens if they board us?

**Zoë**: If they take the ship, they'll rape us to death, eat our flesh, and sew our skins into their clothing, and if we're very, very lucky, they'll do it in that **order**.

From "*Serenity*", the pilot episode of *Firefly*

**Merge Sort** is a comparison-based sorting algorithm invented by *John von Neumann* in 1945.

Conceptually, a **merge sort** works as follows:

- Divide the unsorted list into n sublists, each containing 1 element (a list of 1 element is considered sorted).
- Repeatedly merge sublists to produce new sublists until there is only 1 sublist remaining. This will be the sorted list.

Needless to say, this algorithm is implemented using a design paradigm called **Divide-and-Conquer**.

The algorithm's complexity is *O(n lg n)* (loglinear). Kirk out!
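The two steps above can be sketched as follows (a top-down variant that returns a new list; the function name is illustrative):

```python
def merge_sort(xs):
    """Sort xs by recursively splitting and merging -- O(n lg n)."""
    if len(xs) <= 1:
        return xs  # a list of 0 or 1 elements is already sorted

    # divide: split the list into two halves and sort each recursively
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])

    # conquer: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these
    merged.extend(right[j:])  # two extends is non-empty
    return merged
```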

**Twice**, adv. Once too often. - Ambrose Bierce, *The Devil's Dictionary*

From Chapter 8 of **Python Algorithms** by *Magnus Lie Hetland*

**Memoization** implies caching the return values of a function, in particular, a recursive one. If a call is made more than once with the same arguments, the result is simply returned from the cache.

The *memo* function can be used as a wrapper when defining our recursive algorithms:
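A dict-based sketch of such a wrapper, together with a naive recursive Fibonacci to decorate (the exact code in the book may differ):

```python
from functools import wraps

def memo(func):
    """Cache func's return values, keyed by its positional arguments."""
    cache = {}

    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)  # compute only on the first call
        return cache[args]
    return wrapper

@memo
def fib(n):
    """Naive recursive Fibonacci; feasible for large n only with the cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the decorator, `fib(100)` makes an exponential number of recursive calls; with it, each `fib(k)` is computed exactly once.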

You should try calling *fib(100)* with and without **memoization**. Kirk out!