|Newer page:||version 5||Last edited on Sunday, August 24, 2003 8:15:36 am||by AristotlePagaltzis|
|Older page:||version 4||Last edited on Sunday, August 24, 2003 12:11:49 am||by StuartYeates|
[BigONotation] is used to describe the complexity of algorithms (aka ComplexityTheory). Big Oh specifically defines the __upper bound__ of the growth of the running time of an algorithm. Big Oh notation is generally written with, well, a big O. For example:

 O(n)

means an algorithm is ''linear'': the amount of processing increases linearly as the size of the input increases. The symbol ''n'' here denotes the size of the input. Hence:

 O(n log n)

means the algorithm is ''log linear''. Quicksort is an example of a log linear algorithm.
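The linear and log-linear growth described above can be sketched by counting operations instead of timing them. This is our own minimal Python illustration (the function names and step counters are not from this page), assuming a simple scan stands in for O(n) and a merge sort stands in for O(n log n):

```python
# Count the "work" done by a linear scan versus a log-linear merge
# sort, to see how O(n) and O(n log n) grow with the input size n.

def linear_scan(items):
    """O(n): looks at each element exactly once; returns (max, steps)."""
    steps = 0
    biggest = None
    for x in items:
        steps += 1
        if biggest is None or x > biggest:
            biggest = x
    return biggest, steps

def merge_sort(items):
    """O(n log n): classic merge sort; returns (sorted list, comparisons)."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, c1 = merge_sort(items[:mid])
    right, c2 = merge_sort(items[mid:])
    merged, comps = [], c1 + c2
    i = j = 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps
```

For an input of 256 elements the scan does 256 steps, while the sort does on the order of 256 × log2(256) = 2048 comparisons: both grow, but the sort grows faster by a logarithmic factor.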
If you write a function that is O(n*n), it is probably going to be quite slow with a large input size. An O(1) function is independent of the input size: it executes the same amount of code no matter how large the input is.
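To make the O(n*n) versus O(1) contrast above concrete, here is a hypothetical sketch of ours (not from this page): a pairwise duplicate search that does quadratic work, next to a function whose work never changes with the input size.

```python
# An O(n*n) duplicate search versus an O(1) lookup, with an explicit
# comparison counter so the quadratic blow-up is visible.

def has_duplicate_quadratic(items):
    """O(n*n): compares every pair; returns (answer, comparisons made)."""
    comparisons = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[i] == items[j]:
                return True, comparisons
    return False, comparisons

def first_element(items):
    """O(1): the same single step regardless of the list's length."""
    return items[0]
```

On a 40-element list with no duplicates the quadratic search makes 40 × 39 / 2 = 780 comparisons; doubling the input roughly quadruples that number, while `first_element` stays at one step.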
[BigONotation] ignores significant, RealWorld efficiency issues, mainly because it ``throws away'' constant factors. Constant factors include things like starting a virtual machine, forking, linking in external libraries ([C]/[C++]) and compiling to intermediate code (as many InterpretedLanguages do). BigONotation is not the only way to describe the complexity of an algorithm: there are also little oh notation and theta notation, but they are rarely, if ever, used. BigONotation is the standard way to describe how a function scales with the input size.
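The thrown-away constant factors above can dominate in practice. A toy sketch of ours (the 1000-unit overhead is an invented stand-in for costs like VM startup or linking, not a figure from this page):

```python
# Cost models for three functions. Big O calls the first two identical
# and the third worse, yet for small n the quadratic one is cheapest.

def cheap_linear(n):
    """O(n) with a small constant factor: n units of work."""
    return n

def expensive_linear(n):
    """Also O(n), but with a (made-up) 1000-unit per-step overhead."""
    return 1000 * n

def quadratic(n):
    """O(n*n) with no overhead at all."""
    return n * n
```

For n = 100, the quadratic function costs 10,000 units against the expensive linear function's 100,000; only past n = 1000 does the asymptotic ranking take over, which is exactly the kind of RealWorld detail BigONotation discards.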
[An informal introduction to O notation]

''Someone might want to merge this page and [BigOhNotation].''