Machine Learning is the field of study concerned with automated statistical learning and pattern detection by non-biological systems. It can be seen as a classical sub-domain of artificial intelligence that deals specifically with data analysis, modeling and prediction, using knowledge extracted from previous (training) samples. As a multi-disciplinary area, it has borrowed concepts and ideas ranging from pure mathematics to cognitive science in its attempt to describe learning systems exhaustively.
Most common algorithms
When considering the most used algorithms in Machine Learning, several approaches can be taken to describe their subdivisions. One simple taxonomy is based on their most prominent characteristics and implementations, such as the one suggested by Lotte and colleagues:
Firstly, the most widely used distinction is between unsupervised algorithms (k-means clustering, …) and supervised ones (k-NN, Support Vector Machines, Linear Discriminant Analysis, …). While the former spontaneously derive categories purely from the structure of the data, the latter can only distinguish classes they have previously learned through the feeding of correctly labeled examples. A related but distinct axis separates generative classifiers, which model how the data of each class is produced, from discriminative ones, which model only the boundaries between classes (see Ng & Jordan, 2002). This is probably the most prominent division between Machine Learning methods.
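The unsupervised case can be illustrated with a minimal k-means sketch: no labels are given, yet categories emerge from the data's structure alone. The toy 1-D data and initialisation below are illustrative assumptions, not part of any reference implementation.

```python
# Minimal k-means sketch on 1-D data (toy example; real libraries use
# better initialisation and a convergence test rather than fixed iterations).
def kmeans_1d(points, k, iters=20):
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]   # two obvious groups, no labels
centroids, clusters = kmeans_1d(data, k=2)
# centroids converge near 1.0 and 10.07: the two categories were never named,
# only discovered from the data structure.
```

A supervised method, by contrast, would require each point to arrive already tagged with its class before any boundary could be learned.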
These same algorithms can also be seen as static (such as simple Neural Networks like Perceptrons), disregarding the temporal/sequential characteristics of the data, or dynamic (Hidden Markov Models or Recurrent Neural Networks, for instance), able to account for those temporal dynamics and thus to handle time series.
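The dynamic case can be sketched with the forward recursion of a Hidden Markov Model, which scores an observation *sequence* rather than isolated samples. The two-state model and its probabilities below are made-up toy numbers for illustration only.

```python
# Hedged sketch: forward algorithm for a tiny 2-state HMM (toy parameters).
def forward(obs, pi, A, B):
    """Probability of an observation sequence under the model.
    pi[i]: initial state probs, A[i][j]: transition probs, B[i][o]: emission probs."""
    n = len(pi)
    # alpha[i] = P(observations so far, current state = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        # Each step folds the previous state distribution through the
        # transition matrix, then weights by the new emission -- this is
        # exactly the temporal dependence a static model ignores.
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]      # state-transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]      # emission probabilities for symbols 0 and 1
p = forward([0, 1, 0], pi, A, B)  # p ≈ 0.10893
```

Reordering the observations changes the result, whereas a static classifier applied symbol-by-symbol would be blind to the ordering.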
The third and last big difference refers to sensitivity to the variance within the data, that is, how closely the algorithm models the training set - either very tightly, risking overfitting to noise, or more loosely - which in turn influences its ability to generalize to new samples.
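This tight-versus-loose trade-off can be made concrete with 1-D k-nearest-neighbour regression, where k directly controls the fit: k = 1 memorises every training point, noise included, while a larger k averages over neighbours and fits more loosely. The noisy toy data below is assumed for illustration.

```python
# Hedged sketch: fit tightness in k-NN regression on toy 1-D data.
def knn_predict(x, train, k):
    # Average the targets of the k training points closest to x.
    neighbours = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    return sum(y for _, y in neighbours) / k

# Noisy samples of an underlying y = x relationship.
train = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9)]

tight = knn_predict(1, train, k=1)  # k=1 reproduces the noisy label 1.1 exactly
loose = knn_predict(1, train, k=3)  # k=3 averages neighbours: (1.1+0.0+1.9)/3 = 1.0
```

The tight model has zero error on the training set but carries its noise forward; the looser one smooths the noise away, which tends to generalize better to unseen points.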
The use of Machine Learning has been widespread since its formal definition in the 1950s. The ability to explore the structure of data and make predictions based on previous behavior has been used extensively in areas such as market analysis, natural language processing and even brain-computer interfaces. Amazon's title suggestions, for instance, come from a recommender system that models previous purchases and generates likely hypotheses from that data.
Besides the technological advantages of being able to probe large amounts of data and aid research as simple tools, the development and study of machine learning methods has also led to substantial insights into human cognitive organization. At the same time, although limited in its essence, machine learning seems likely to make important contributions to the development of an artificial general intelligence.
Further Reading & References
- Stanford's introduction to Machine Learning
- Ghahramani, Z. (2004). Unsupervised Learning. In: Bousquet, O., von Luxburg, U. & Raetsch, G. (Eds.), Advanced Lectures on Machine Learning. Lecture Notes in Computer Science, 3176, 72-112. Berlin: Springer-Verlag
- Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B. (2007). A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces. Journal of Neural Engineering, 4, 1-13
- Ng, A. & Jordan, M. (2002). On generative versus discriminative classifiers: a comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems
- Rabiner, L. R., (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77, 257–286
- Rubinstein, Y. D., & Hastie T. (1997). Discriminative versus informative learning. Proc. 3rd Int. Conf. on Knowledge Discovery and Data Mining