Algorithmic complexity

The Kolmogorov complexity (or algorithmic complexity) of a set of data is the length of the shortest possible description of that data. It is an inverse measure of compressibility: the more space the shortest description needs, the more complex and random the data is. This is also one of the best accounts of randomness so far [1]. As the number of patterns in the data increases, there are more ways to compress it, that is, to describe it briefly, so its Kolmogorov complexity and its randomness decrease. As the number of patterns in the data decreases, there are fewer ways to compress it or describe it briefly, so its Kolmogorov complexity and its randomness increase. In the limit, there is no description of the data that takes less space than the data itself: the data is incompressible. [2]
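The link between patterns and compressibility can be illustrated with an ordinary compressor. Kolmogorov complexity itself is uncomputable, but the output size of a general-purpose compressor such as zlib gives an upper bound on it; the sketch below (the byte strings and sizes are illustrative choices, not part of the definition) shows patterned data shrinking while patternless data does not.

```python
import os
import zlib

# Patterned data: 1000 bytes made of an obvious repetition.
patterned = b"ab" * 500

# Patternless data: 1000 bytes drawn from the OS random source.
random_bytes = os.urandom(1000)

compressed_patterned = zlib.compress(patterned)
compressed_random = zlib.compress(random_bytes)

# The repeating pattern compresses to a tiny fraction of its size;
# the random bytes stay at (or slightly above) their original size,
# since zlib finds no pattern to exploit and adds a small header.
print(len(compressed_patterned))
print(len(compressed_random))
```

This is only a proxy: zlib's output length bounds the true Kolmogorov complexity from above, since a zlib archive plus a decompressor is itself a description of the data.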

More formally, the Kolmogorov complexity C(x) of a set x is the size in bits of the shortest binary program (in a fixed programming language) that prints the set x as its only output. If C(x) is equal to or greater than the size of x in bits, x is incompressible. [3]
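Incompressible strings of every length must exist, by a simple counting argument: there are 2^n binary strings of length n, but only 2^n − 1 binary programs shorter than n bits, so some string of length n has no shorter description. A minimal sketch of the arithmetic, for an illustrative n:

```python
# Counting argument for incompressibility. There are 2**n binary
# strings of length n, but only 2**n - 1 binary programs of length
# strictly less than n (all programs of lengths 0 .. n-1 combined).
# Even if every short program printed a distinct string, at least
# one string of length n would be left without a shorter description.

n = 16  # illustrative length; the argument holds for any n
num_strings = 2 ** n
num_shorter_programs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

print(num_strings)                          # 65536
print(num_shorter_programs)                 # 65535
print(num_strings - num_shorter_programs)   # at least 1 incompressible string
```

The gap of 1 holds for every n, so incompressible strings exist at every length.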

This notion can be used to state many important results in computability theory. The most famous is perhaps Chaitin's incompleteness theorem, a version of Gödel's incompleteness theorem.

References

1. Sipser, M. (1983). "A complexity theoretic approach to randomness". In Proceedings of the 15th ACM Symposium on the Theory of Computing, pages 330–335. ACM, New York.