This naive algorithm runs in O(n^2) time in Big O terms.
Is this true? The naive algorithm's outer loop (i) runs n - 1 times, while the inner loop (j) starts at i + 1, so it runs progressively fewer times on each pass.
I'm not a CS major: is this roughly equivalent (for large values of n) to O(n^2), or is it, as it appears, something less?
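For reference, the loop shape I'm describing looks roughly like this (a Python sketch, with a made-up duplicate check standing in for whatever the real comparison is):

```python
def has_duplicate(items):
    """Naive pairwise algorithm: compare every pair of items once."""
    n = len(items)
    for i in range(n - 1):          # outer loop: i = 0 .. n - 2
        for j in range(i + 1, n):   # inner loop starts at i + 1
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True (the two 1s)
```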
Big O is just complexity classes, describing how the number of abstract computations scales (that's the key word) with input size (here, the length of the input list). Generally speaking, if you can express the number of computations analytically as a function of input size, Big O keeps the fastest-growing term and drops all constant factors.
It does not necessarily describe the performance of the algorithm. `20n^2 + 5n` and `2n^2 + 9001n` are both O(n^2).
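A quick sanity check with those same made-up formulas: divide each by n^2 and the ratio settles to a constant, which is all that O(n^2) is claiming.

```python
# Both hypothetical cost functions grow like n^2 once n is large:
# their ratio to n^2 approaches a constant (20 and 2 respectively).
for n in (10, 1_000, 1_000_000):
    f = 20 * n**2 + 5 * n
    g = 2 * n**2 + 9001 * n
    print(n, f / n**2, g / n**2)
```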
Not necessarily true. It does indeed describe the performance of the algorithm. It just compares scenarios with coarser granularity. You can tell from the very start that an O(1) algorithm is expected to outperform an O(N²) alternative.
My algorithms class taught us to think of it not as "describing performance" in an absolute sense, but as "describing how performance changes as the size of the input data increases".
It is not necessarily true that an O(1) algorithm will outperform an O(n^2) alternative on a particular set of data. But it is true that an O(1) algorithm will outperform an O(n^2) alternative as the size of the input data increases.
Yes, that's the definition of asymptotic computational complexity. That's the whole point of these comparisons. It's pointless to compare algorithms when the input size is in the single digits.
You could have an O(N^2) algorithm outperform an O(N) one at a scale of 10,000 (or whatever scale you want to imagine). Big-O notation compares only asymptotic behaviour, and sometimes the lower-order terms and constant factors are overwhelming.
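A toy illustration with invented cost constants (the point is only that the crossover can sit at a fairly large n):

```python
# Hypothetical cost models: a linear algorithm with a huge constant factor
# versus a quadratic algorithm with a tiny one.
def linear_cost(n):
    return 100_000 * n   # O(n), but each step is expensive

def quadratic_cost(n):
    return n * n         # O(n^2), but each step is cheap

for n in (1_000, 10_000, 100_000, 1_000_000):
    winner = "quadratic" if quadratic_cost(n) < linear_cost(n) else "linear"
    print(f"n={n:>9}: {winner} wins")
# The quadratic algorithm is cheaper until the costs cross at n = 100,000.
```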
This sometimes doesn't work out in practice because the scaling involved runs into a limitation your big-O model didn't account for. Typical examples are the size of the machine registers, physical RAM, addressable storage, or transmission speeds.
If your O(1) algorithm takes an hour for any input and the competition is O(n), it may seem like there must be cases where you come out ahead. Then you realise n is the size in bytes of some data in RAM and your competitors can process 4GB per second. You won't be competitive until we're talking about 15TB of data, and then you remember you only have 64GB of RAM.
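The rough arithmetic behind that 15TB figure, assuming exactly 4GB per second and a one-hour fixed cost:

```python
bytes_per_second = 4 * 10**9   # 4GB/s, the assumed throughput
seconds = 3600                 # the fixed one-hour cost
break_even_bytes = bytes_per_second * seconds
print(break_even_bytes / 10**12, "TB")  # ~14.4 TB, i.e. roughly 15TB
```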
Big-O complexities are not useless, but they're a poor summary alone, about as useful as knowing the mean price of items at supermarkets. I guess this supermarket is cheaper? Or it offers more small items? Or it has no high-end items? Or something?
Counter-examples: https://en.m.wikipedia.org/wiki/Galactic_algorithm
All that different complexity classes show is that there is some n for which one outperforms the other. In practice this n can be extremely large.
It's the sum of the numbers from 1 to n - 1, which is n(n-1)/2. This reduces to quadratic complexity because big-O notation ignores all coefficients and any terms that scale more slowly.
You're right that it's not exactly n^2. For the i-th element we perform (n - i - 1) comparisons (indexing from 0). This adds up to a total of (n - 1) * n / 2 comparisons. (see https://en.wikipedia.org/wiki/Triangular_number)
In the end it doesn't make a difference for big-O analysis, because big O describes the behavior as n approaches infinity, where the quadratic term dominates.
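If you want to convince yourself of the count, here's a quick brute-force check of the same double loop:

```python
def count_comparisons(n):
    """Count iterations of the j = i + 1 double loop directly."""
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
    return total

for n in (1, 2, 5, 10, 100):
    assert count_comparisons(n) == n * (n - 1) // 2
print("counts match n * (n - 1) / 2")
```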
The "optimisation" of starting the inner loop at `j = i + 1` is done to avoid testing every pair of objects twice.
[Ed: I should add that it also prevents testing an object against itself.]
The algorithm is still O(n^2) because every pair is checked exactly once.
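A small sketch of the difference, using a made-up four-element list:

```python
items = ["a", "b", "c", "d"]
n = len(items)

# inner loop starting at i + 1: each unordered pair appears exactly once
once = [(items[i], items[j]) for i in range(n) for j in range(i + 1, n)]
# inner loop starting at 0: every ordered pair, including self-comparisons
every = [(items[i], items[j]) for i in range(n) for j in range(n)]

print(len(once))   # 6  == n * (n - 1) / 2
print(len(every))  # 16 == n * n
```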
I don't know if this will make more sense to you, but Big O is like taking limits in calculus.
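For example, take the n(n-1)/2 comparison count from earlier in the thread and divide it by n^2: the limit as n goes to infinity is a plain constant, which is the limit-flavoured way of saying the count grows like n^2. (Spelled out with sympy purely as an illustration.)

```python
import sympy

n = sympy.symbols("n", positive=True)
comparisons = n * (n - 1) / 2                        # exact comparison count
print(sympy.limit(comparisons / n**2, n, sympy.oo))  # 1/2, a finite non-zero constant
```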