# Time and Space Complexity

## Big O Notation

To understand the notation used to describe the time and space complexity of algorithms, we first need to learn a bit about Big O Notation.

Big O Notation is a mathematical notation used to describe the upper bound of an algorithm's time or space complexity. It provides a high-level understanding of the algorithm's efficiency in terms of the growth rate of its resource usage (time or space) as the input size increases. The primary goal is to give a worst-case scenario of how the algorithm performs, which helps in comparing the efficiency of different algorithms.

• Asymptotic Behavior: Big O notation describes the asymptotic behavior of an algorithm. It focuses on the term that grows fastest as the input size increases and ignores constant factors and lower-order terms.

• Worst-Case Scenario: Typically, Big O notation expresses the worst-case scenario, ensuring that an algorithm won't perform worse than the specified complexity.

• Upper Bound: It gives an upper bound on the time or space required by the algorithm, ensuring that the algorithm will run within the specified limit for large inputs.
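To make "ignores constant factors and lower-order terms" concrete, here is a small sketch (the `countOps` helper below is illustrative, not from any library): it performs n * n + 2n basic operations, and for large n the n * n term dominates, so we classify it as O(n^2).

```javascript
// Counts the basic operations this function performs for input size n.
// Total: n * n + 2 * n. Big O keeps only the dominant term: O(n^2).
function countOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++; // n * n operations
    }
  }
  for (let i = 0; i < n; i++) {
    ops += 2; // 2 * n operations
  }
  return ops;
}

console.log(countOps(10)); // 120 — the 2n term still matters at small n
console.log(countOps(1000)); // 1002000 — the n^2 term dominates
```

Note how the lower-order 2n term is a sixth of the total at n = 10 but a fifth of a percent at n = 1000; this is why Big O discards it.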

### Origins of Big O Notation

Big O notation was introduced by the German mathematician Paul Bachmann in the late 19th century, in his book "Analytische Zahlentheorie," published in 1894. The notation was later popularized by Edmund Landau in his work on number theory and mathematical analysis. Both Bachmann and Landau used Big O notation to describe the asymptotic behavior of functions, especially in the context of number theory.

In computer science, Big O notation became widely used in the mid-20th century, particularly with the rise of algorithm analysis and computational complexity theory. It provided a way to classify algorithms according to their efficiency and performance, which was crucial as computer scientists sought to understand and optimize algorithms for various applications.

## Time Complexity

Time Complexity measures the amount of time an algorithm takes to complete as a function of the length of the input. It is usually expressed using Big O notation, which provides an upper bound on the growth rate of the runtime of the algorithm. Common time complexities include:

### O(1): Constant Time

The runtime is constant and does not change with the size of the input.

Example: Accessing a specific element in an array.

```javascript
function getElement(arr, index) {
  return arr[index];
}

const array = [1, 2, 3, 4, 5];
console.log(getElement(array, 2)); // Output: 3
```

### O(log n): Logarithmic Time

The runtime increases logarithmically with the input size.

Example: Binary search in a sorted array.

```javascript
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    let mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1; // Element not found
}

const sortedArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(binarySearch(sortedArray, 7)); // Output: 6
```

### O(n): Linear Time

The runtime increases linearly with the input size.

Example: Finding the maximum element in an array.

```javascript
function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}

const array = [1, 3, 7, 2, 5];
console.log(findMax(array)); // Output: 7
```

### O(n log n): Linearithmic Time

The runtime grows in proportion to n log n — slightly faster than linear, but far slower than quadratic.

Example: Merge sort

```javascript
function mergeSort(arr) {
  if (arr.length <= 1) {
    return arr;
  }
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}

function merge(left, right) {
  let result = [];
  let leftIndex = 0;
  let rightIndex = 0;
  while (leftIndex < left.length && rightIndex < right.length) {
    if (left[leftIndex] < right[rightIndex]) {
      result.push(left[leftIndex]);
      leftIndex++;
    } else {
      result.push(right[rightIndex]);
      rightIndex++;
    }
  }
  return result.concat(left.slice(leftIndex)).concat(right.slice(rightIndex));
}

const array = [5, 3, 8, 1, 2, 7];
console.log(mergeSort(array)); // Output: [1, 2, 3, 5, 7, 8]
```

### O(n^2): Quadratic Time

The runtime increases quadratically with the input size.

Example: Bubble sort, insertion sort.

```javascript
function bubbleSort(arr) {
  let n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        // Swap
        let temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

const array = [5, 1, 4, 2, 8];
console.log(bubbleSort(array)); // Output: [1, 2, 4, 5, 8]
```

### O(2^n): Exponential Time

The runtime doubles with each additional element in the input.

Example: Generating the power set of a set. The power set of a set S is the set of all subsets of S, including the empty set and S itself.

```javascript
function powerSet(set) {
  const subsets = [];
  function generateSubsets(index, currentSubset) {
    if (index === set.length) {
      subsets.push([...currentSubset]);
      return;
    }
    // Exclude the current element
    generateSubsets(index + 1, currentSubset);
    // Include the current element
    currentSubset.push(set[index]);
    generateSubsets(index + 1, currentSubset);
    currentSubset.pop();
  }
  generateSubsets(0, []);
  return subsets;
}

const set = [1, 2, 3];
console.log(powerSet(set));
// Output: [[], [3], [2], [2, 3], [1], [1, 3], [1, 2], [1, 2, 3]]
```

### O(n!): Factorial Time

The runtime grows factorially with the input size.

Example: Generating all permutations of a list.

```javascript
function permute(str) {
  let results = [];
  if (str.length === 1) {
    results.push(str);
    return results;
  }
  for (let i = 0; i < str.length; i++) {
    let firstChar = str[i];
    let charsLeft = str.substring(0, i) + str.substring(i + 1);
    let innerPermutations = permute(charsLeft);
    for (let j = 0; j < innerPermutations.length; j++) {
      results.push(firstChar + innerPermutations[j]);
    }
  }
  return results;
}

console.log(permute('abc')); // Output: ["abc", "acb", "bac", "bca", "cab", "cba"]
```

## Space Complexity

Space Complexity measures the amount of memory an algorithm uses as a function of the length of the input. It also uses Big O notation. Common space complexities include:

### O(1): Constant Space

The memory usage is constant and does not change with the input size. Example: Using a fixed number of variables.
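A minimal sketch of constant space (the `sum` function below is illustrative, not from the original text): it processes an array of any length using only a fixed number of variables.

```javascript
// O(1) space: only `total` and the loop counter are allocated,
// no matter how large the input array is.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

console.log(sum([1, 2, 3, 4])); // Output: 10
```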

### O(n): Linear Space

The memory usage increases linearly with the input size. Example: Storing an array of size n.
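As a hypothetical illustration of linear space, the function below allocates a result array that grows with the input:

```javascript
// O(n) space: `result` grows to the same length as the input,
// so memory use scales linearly with n.
function doubleAll(arr) {
  const result = [];
  for (const value of arr) {
    result.push(value * 2);
  }
  return result;
}

console.log(doubleAll([1, 2, 3])); // Output: [2, 4, 6]
```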

### O(n^2): Quadratic Space

The memory usage increases quadratically with the input size. Example: Using a 2D array to store a graph.
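A sketch of the adjacency-matrix example (the `adjacencyMatrix` helper is an assumption for illustration): a graph with n vertices stores an n × n grid of entries, even when the graph has few edges.

```javascript
// O(n^2) space: the matrix holds n * n entries regardless of
// how many edges the graph actually has.
function adjacencyMatrix(n, edges) {
  const matrix = Array.from({ length: n }, () => new Array(n).fill(0));
  for (const [u, v] of edges) {
    matrix[u][v] = 1;
    matrix[v][u] = 1; // undirected graph
  }
  return matrix;
}

console.log(adjacencyMatrix(3, [[0, 1], [1, 2]]));
// Output: [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```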

## Analyzing Algorithms

To analyze the time and space complexity of an algorithm, follow these steps:

1. Identify the input size: Determine the size of the input, usually denoted as n.

2. Count the basic operations: Identify the fundamental operations (e.g., comparisons, assignments) and how they scale with the input size.

3. Determine the growth rate: Express the growth rate using Big O notation.
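The three steps can be walked through on a simple linear search (a sketch, not one of the examples above): the input size is n = arr.length, the basic operation is the comparison inside the loop, and that comparison runs at most n times, giving O(n) time and O(1) extra space.

```javascript
// Step 1: input size n = arr.length
// Step 2: basic operation = the comparison arr[i] === target,
//         executed at most n times
// Step 3: growth rate = O(n) time, O(1) extra space
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i;
    }
  }
  return -1;
}

console.log(linearSearch([4, 2, 7, 1], 7)); // Output: 2
```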

## Practical Tips

• Focus on worst-case scenarios: Big O notation typically describes the worst-case scenario.
• Compare different algorithms: Use time and space complexity to compare the efficiency of different algorithms.
• Optimize for common cases: Sometimes, it's important to consider the average case and optimize for the input sizes and distributions that are most common in your application.
