Time and Space Complexity
Big O Notation
Before we can describe the time and space complexity of algorithms, we first need to learn a bit about Big O notation.
Big O Notation is a mathematical notation used to describe the upper bound of an algorithm's time or space complexity. It provides a high-level understanding of the algorithm's efficiency in terms of the growth rate of its resource usage (time or space) as the input size increases. The primary goal is to give a worst-case scenario of how the algorithm performs, which helps in comparing the efficiency of different algorithms.
Asymptotic Behavior: Big O notation describes the asymptotic behavior of an algorithm. It focuses on the term that grows the fastest as the input size increases and ignores constant factors and lower-order terms.
Worst-Case Scenario: Typically, Big O notation expresses the worst-case scenario, ensuring that an algorithm won't perform worse than the specified complexity.
Upper Bound: It gives an upper bound on the time or space required by the algorithm, ensuring that the algorithm will run within the specified limit for large inputs.
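The point about ignoring constant factors can be made concrete with a short sketch (the function names here are illustrative, not from a library). Both functions below grow linearly with the input size, so both are classified as O(n), even though one does roughly three times as much work:

```javascript
// Illustrative sketch: Big O ignores constant factors.
// sumOnePass does about n operations; sumThreePasses does about 3n,
// but both grow linearly with arr.length, so both are O(n).
function sumOnePass(arr) {
  let total = 0;
  for (const x of arr) total += x; // one pass: n iterations
  return total;
}

function sumThreePasses(arr) {
  let total = 0;
  let touches = 0;
  for (const x of arr) { total += x; touches++; } // pass 1: n iterations
  for (const x of arr) touches++;                 // pass 2: n more iterations
  for (const x of arr) touches++;                 // pass 3: n more iterations
  // touches is 3 * arr.length, but the growth rate is still linear
  return total;
}

console.log(sumOnePass([1, 2, 3, 4]));     // Output: 10
console.log(sumThreePasses([1, 2, 3, 4])); // Output: 10
```

Doubling the input roughly doubles the runtime of both functions, and that growth rate, not the constant multiplier, is what Big O captures.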
Origins of Big O Notation
Big O notation was introduced by the German mathematician Paul Bachmann in the late 19th century, in his book "Analytische Zahlentheorie," published in 1894. The notation was later popularized by Edmund Landau in his work on number theory and mathematical analysis. Both Bachmann and Landau used Big O notation to describe the asymptotic behavior of functions, especially in the context of number theory.
In computer science, Big O notation became widely used in the mid-20th century, particularly with the rise of algorithm analysis and computational complexity theory. It provided a way to classify algorithms according to their efficiency and performance, which was crucial as computer scientists sought to understand and optimize algorithms for various applications.
Time Complexity
Time Complexity measures the amount of time an algorithm takes to complete as a function of the length of the input. It is usually expressed using Big O notation, which provides an upper bound on the growth rate of the runtime of the algorithm. Common time complexities include:
O(1): Constant Time
The runtime is constant and does not change with the size of the input.
Example: Accessing a specific element in an array.
function getElement(arr, index) {
  return arr[index];
}

const array = [1, 2, 3, 4, 5];
console.log(getElement(array, 2)); // Output: 3
O(log n): Logarithmic Time
The runtime increases logarithmically with the input size.
Example: Binary search in a sorted array.
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    // Check the middle element, then discard the half
    // that cannot contain the target
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1; // Element not found
}

const sortedArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(binarySearch(sortedArray, 7)); // Output: 6
O(n): Linear Time
The runtime increases linearly with the input size.
Example: Finding the maximum element in an array.
function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}

const numbers = [1, 3, 7, 2, 5];
console.log(findMax(numbers)); // Output: 7
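As with binary search, we can count the comparisons findMax performs to confirm the linear growth (again a sketch; `findMaxSteps` is an illustrative name). The count is always arr.length - 1, so doubling the input doubles the work:

```javascript
// Sketch: findMax with a comparison counter, to show that the
// number of steps grows in direct proportion to the input size.
function findMaxSteps(arr) {
  let max = arr[0];
  let steps = 0;
  for (let i = 1; i < arr.length; i++) {
    steps++; // one comparison per remaining element
    if (arr[i] > max) max = arr[i];
  }
  return steps;
}

console.log(findMaxSteps(new Array(10).fill(0)));   // Output: 9
console.log(findMaxSteps(new Array(1000).fill(0))); // Output: 999
```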