Big O Notation Calculator


    What is Big O Notation?

    Big O notation is a mathematical notation used to describe the time complexity of an algorithm. It expresses an upper bound on the amount of time an algorithm will take to complete in the worst case, as a function of the size of the input.
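    Stated formally (this is the standard textbook definition, not anything specific to this calculator): f(n) is O(g(n)) if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for every n >= n0. In other words, beyond some input size, f never grows faster than a constant multiple of g.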

    In simple terms, Big O notation measures how well an algorithm scales as the size of the input grows. It is an important tool for analyzing the efficiency of algorithms and comparing their performance. The notation is written as the letter "O" followed by a function giving the upper bound on the time complexity, such as O(n) or O(n^2).

    For example, an algorithm that takes O(n) time has a linear time complexity, meaning that its running time grows in direct proportion to the size of the input. An algorithm that takes O(n^2) time has a quadratic time complexity, meaning that its running time grows with the square of the size of the input: doubling the input roughly quadruples the work.
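    To make this concrete, here is a minimal Python sketch contrasting the two. The function names and the membership/duplicate tasks are illustrative examples, not part of any particular library:

    def contains(items, target):
        # O(n): examines each element at most once, so the work grows
        # linearly with len(items)
        for item in items:
            if item == target:
                return True
        return False

    def has_duplicate(items):
        # O(n^2): compares every pair of elements, so the work grows
        # with the square of len(items)
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False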

    Big O notation is widely used in computer science to analyze and compare the efficiency of algorithms and to help developers make informed decisions about which algorithms to use in their programs.

    Common Big O Notations

    • O(1) Constant Time
    • O(log n) Logarithmic Time
    • O(n) Linear Time
    • O(n log n) Log-Linear Time
    • O(n^2) Quadratic Time
    • O(2^n) Exponential Time
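    To illustrate, each class above corresponds to a familiar coding pattern. The Python snippets below are rough sketches of those patterns, not definitive implementations:

    def first_element(items):
        # O(1): one step, regardless of input size
        return items[0]

    def binary_search(sorted_items, target):
        # O(log n): halves the remaining search space on every iteration
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def total(items):
        # O(n): touches every element exactly once
        result = 0
        for item in items:
            result += item
        return result

    # O(n log n): comparison-based sorting, e.g. Python's built-in sorted(items)

    def all_pairs(items):
        # O(n^2): the nested loop visits every ordered pair of elements
        return [(a, b) for a in items for b in items]

    def subsets(items):
        # O(2^n): a list of n elements has 2^n subsets, and this builds all of them
        if not items:
            return [[]]
        rest = subsets(items[1:])
        return rest + [[items[0]] + s for s in rest]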