In big-O analysis, input size is assumed to be *n*. For example, *n* might represent the number of elements in an array. In other problems, *n* may be the number of nodes in a linked list, the number of bits in a data type, or the number of entries in a hash table. After determining what *n* means for the input, you determine how many times the *n* input items are "examined", expressed as a function of *n*. An examination might be an operation such as adding an input value to a running total or comparing it to a constant.

When *n* input items are each examined once, the result is *n* examinations. This is considered O(*n*), usually referred to as *linear time*: the time required to run the algorithm increases linearly with the number of input items.

Big-O analysis yields the *asymptotic* running time: the limit of the running time as *n* gets very large. In this limit you eliminate all but the highest-order term, the term that dominates as *n* grows. For example, O(*n* + 2) is really just O(*n*), because the constant 2 is insignificant compared to *n* when *n* is very large.
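A quick numeric sketch (the specific values of *n* are arbitrary, chosen only for illustration) shows why the lower-order term is dropped: the ratio of *n* + 2 to *n* approaches 1 as *n* grows.

```python
# As n grows, the "+ 2" term becomes negligible,
# which is why O(n + 2) simplifies to O(n).
for n in (10, 1_000, 1_000_000):
    print(n, (n + 2) / n)
# The ratio shrinks toward 1 as n increases.
```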

The fastest possible running time for any run-time analysis is O(1), commonly referred to as *constant running time*. An algorithm with constant running time always takes the same amount of time to execute, regardless of the input size.
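A minimal sketch of constant running time (the function name is an assumption for illustration): indexing into an array-backed list performs one operation no matter how many items the list holds, so it is O(1).

```python
def first_element(values):
    """Indexing a list is O(1): one operation regardless of len(values)."""
    return values[0]

print(first_element([7, 8, 9]))        # 7
print(first_element(list(range(10**6))))  # 0 -- same single step for a million items
```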
