Introduction
"Are You Smarter Than a Senior?" is a project inspired by a challenge posed by one of my professors at Carnegie Mellon University: “99% of graduating seniors in computer science can’t even write QuickSort. What are we doing?”
To prove my capabilities, I implemented several fundamental algorithms from scratch in Python, including QuickSort, InsertionSort, BubbleSort, MergeSort, Breadth-First Search (BFS), Depth-First Search (DFS), and Dijkstra's Algorithm. For each algorithm, I provided detailed problem definitions, input/output descriptions, and a formal cost analysis under the Word-RAM model.
What is the Word-RAM Model?
In theoretical computer science, the Word-RAM model is a computational framework used to analyze the time complexity of algorithms. It assumes:
- Words (data units) are of a fixed size, typically matching the architecture of the machine (e.g., 32-bit or 64-bit).
- Basic operations (e.g., addition, subtraction, comparisons) on these words take constant time, \( O(1) \).
- Memory access is also performed in constant time.
This model allows us to evaluate the efficiency of algorithms while abstracting away machine-specific details, focusing instead on logical operations and memory access patterns.
Algorithms Implemented
1. QuickSort
Input: An unsorted array.
Output: A sorted array.
Goal: Divide the array into partitions around a pivot, then recursively sort each partition.
Complexity: Average-case \( O(n \log n) \), worst-case \( O(n^2) \), though choosing a random pivot makes the worst case unlikely in practice.
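For illustration, here is a minimal sketch of the partition-and-recurse idea with a random pivot. The function name and partition details are illustrative, not necessarily the project's exact code:

```python
import random

def quicksort(arr, lo=0, hi=None):
    """Sort arr[lo..hi] in place by partitioning around a random pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    # A random pivot makes the O(n^2) worst case unlikely in practice.
    pivot = arr[random.randint(lo, hi)]
    i, j = lo, hi
    while i <= j:
        while arr[i] < pivot:
            i += 1
        while arr[j] > pivot:
            j -= 1
        if i <= j:
            arr[i], arr[j] = arr[j], arr[i]
            i, j = i + 1, j - 1
    # Recursively sort the two partitions on either side of the crossing point.
    quicksort(arr, lo, j)
    quicksort(arr, i, hi)

data = [5, 2, 9, 1, 7]
quicksort(data)
print(data)  # [1, 2, 5, 7, 9]
```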
2. InsertionSort
Input: An unsorted array.
Output: A sorted array.
Goal: Build a sorted prefix one element at a time, inserting each new element into its correct position by shifting larger elements to the right.
Complexity: Worst-case \( O(n^2) \); best-case \( O(n) \) on an already-sorted array.
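A compact sketch of that shifting strategy (illustrative, not necessarily the project's exact code):

```python
def insertion_sort(arr):
    """Sort arr in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements one slot to the right to make room for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([4, 3, 1, 2]))  # [1, 2, 3, 4]
```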
3. BubbleSort
Input: An unsorted array.
Output: A sorted array.
Goal: Repeatedly swap adjacent elements if they are in the wrong order.
Complexity: Worst-case \( O(n^2) \).
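A minimal sketch of the pass-and-swap idea; the early exit when a pass makes no swaps is a common optimization and an assumption here, not something the project description specifies:

```python
def bubble_sort(arr):
    """Sort arr in place by repeatedly swapping out-of-order adjacent pairs."""
    n = len(arr)
    for pass_end in range(n - 1, 0, -1):
        swapped = False
        for i in range(pass_end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        # If a full pass made no swaps, the array is already sorted.
        if not swapped:
            break
    return arr

print(bubble_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```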
4. MergeSort
Input: An unsorted array.
Output: A sorted array.
Goal: Recursively divide the array into halves, sort each half, then merge them back together.
Complexity: \( O(n \log n) \) in the average and worst case.
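One way the divide-and-merge step might look (an illustrative sketch that returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Return a new sorted list via recursive halving and merging."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([8, 3, 5, 1]))  # [1, 3, 5, 8]
```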
5. BFS and DFS
Input: A graph represented as an adjacency list.
Output: A traversal or search order of the graph nodes.
Goal: BFS explores nodes level by level, while DFS explores as deep as possible before backtracking.
Complexity: \( O(V + E) \), where \( V \) is the number of vertices and \( E \) is the number of edges.
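A minimal sketch of both traversals over an adjacency list; the graph, node names, and iterative DFS formulation are illustrative assumptions:

```python
from collections import deque

def bfs(graph, start):
    """Return nodes in breadth-first (level-by-level) order from start."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(graph, start):
    """Return nodes in depth-first order from start, using an explicit stack."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so they are explored in listed order.
        stack.extend(reversed(graph[node]))
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```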
6. Dijkstra's Algorithm
Input: A weighted graph with non-negative edge weights and a source node.
Output: The shortest-path distances from the source to all other nodes.
Goal: Repeatedly select the unvisited node with the smallest known distance, updating distances of its neighbors.
Complexity: \( O((V + E) \log V) \) with a binary-heap priority queue.
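A minimal sketch using Python's heapq module as the binary-heap priority queue; the example graph and adjacency-list format (node mapped to (neighbor, weight) pairs) are illustrative assumptions:

```python
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source, assuming non-negative edge weights.

    graph: adjacency list mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]  # (distance, node); the heap yields the closest node
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```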
Reflection
Implementing these algorithms strengthened my understanding of computational complexity, data structures, and algorithmic design. By analyzing each under the Word-RAM model, I gained deeper insight into how seemingly simple operations can impact runtime at scale.
Beyond the theory, this project reminds me of my early enthusiasm for problem-solving and my commitment to mastering the fundamentals. It's a proud milestone in my academic journey—one that continues to shape my passion for computer science and software development.