User:The Transhumanist/Sandbox51
The following outline is provided as an overview of and topical guide to machine learning:
Machine learning – subfield of computer science[1] (more particularly soft computing) that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed".[2] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[3] Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
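Samuel's definition can be made concrete with a minimal sketch (illustrative pure Python; the model and data are invented for the example): rather than hard-coding the rule y = 2x, the program estimates the rule from example observations and then uses it to make predictions on new inputs.

```python
# Instead of explicitly programming the rule, estimate it from a training set.
# Toy model: y ≈ w * x, with w fit by ordinary least squares (no intercept).

def fit_slope(xs, ys):
    """Least-squares estimate of w for the model y = w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(w, x):
    return w * x

# Training set: noisy observations of the underlying rule y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
w = fit_slope(xs, ys)
```

The learned parameter `w` comes out close to 2 even though the value 2 appears nowhere in the program, which is the "without being explicitly programmed" point in miniature.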
What type of thing is machine learning?[edit]
- An academic discipline
- A branch of science
- An applied science
- A subfield of computer science
- A branch of artificial intelligence
- A subfield of soft computing
Branches of machine learning[edit]
Subfields of machine learning[edit]
- Computational learning theory – study of the design and analysis of machine learning algorithms.[4]
- Grammar induction
- Meta learning
Cross-disciplinary fields involving machine learning[edit]
Applications of machine learning[edit]
- Biomedical informatics
- Computer vision
- Customer relationship management – managing a company's interactions with current and potential customers; machine learning is applied to tasks such as lead scoring and churn prediction.
- Data mining
- Email filtering
- Inverted pendulum – balance and equilibrium system; a classic control problem widely used as a benchmark for machine learning and control algorithms.
- Natural language processing
- Pattern recognition
- Recommendation system
- Search engine
Machine learning hardware[edit]
Machine learning tools[edit]
Machine learning frameworks[edit]
Proprietary machine learning frameworks[edit]
- Amazon Machine Learning
- Microsoft Azure Machine Learning Studio
- DistBelief – replaced by TensorFlow
- Microsoft Cognitive Toolkit
Open source machine learning frameworks[edit]
Machine learning libraries[edit]
Machine learning library (list)
Machine learning algorithms[edit]
Types of machine learning algorithms[edit]
- Almeida–Pineda recurrent backpropagation
- ALOPEX
- Backpropagation
- Bootstrap aggregating
- CN2 algorithm
- Constructing skill trees
- Dehaene–Changeux model
- Diffusion map
- Dominance-based rough set approach
- Dynamic time warping
- Error-driven learning
- Evolutionary multimodal optimization
- Expectation–maximization algorithm
- FastICA
- Forward–backward algorithm
- GeneRec
- Genetic Algorithm for Rule Set Production
- Growing self-organizing map
- HEXQ
- Hyper basis function network
- IDistance
- K-nearest neighbors algorithm
- Kernel methods for vector output
- Kernel principal component analysis
- Leabra
- Linde–Buzo–Gray algorithm
- Local outlier factor
- Logic learning machine
- LogitBoost
- Loss functions for classification
- Manifold alignment
- Minimum redundancy feature selection
- Mixture of experts
- Multiple kernel learning
- Non-negative matrix factorization
- Online machine learning
- Out-of-bag error
- Prefrontal cortex basal ganglia working memory
- PVLV
- Q-learning
- Quadratic unconstrained binary optimization
- Query-level feature
- Quickprop
- Radial basis function network
- Randomized weighted majority algorithm
- Reinforcement learning
- Repeated incremental pruning to produce error reduction (RIPPER)
- Rprop
- Rule-based machine learning
- Skill chaining
- Sparse PCA
- State–action–reward–state–action
- Stochastic gradient descent
- Structured kNN
- T-distributed stochastic neighbor embedding
- Temporal difference learning
- Wake-sleep algorithm
- Weighted majority algorithm
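Several entries above, including stochastic gradient descent and backpropagation, are iterative optimization methods. A minimal sketch of stochastic gradient descent (illustrative pure Python; the model, data, and hyperparameters are invented for the example) fitting y = w * x under squared loss:

```python
import random

def sgd_fit(data, lr=0.01, epochs=200, seed=0):
    """Stochastic gradient descent for the model y = w * x.

    Per-example update under squared loss:
        w <- w - lr * d/dw (w*x - y)^2 = w - lr * 2 * (w*x - y) * x
    """
    rng = random.Random(seed)
    examples = list(data)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(examples)          # visit examples in random order
        for x, y in examples:
            w -= lr * 2.0 * (w * x - y) * x
    return w

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # exact targets y = 3x
w = sgd_fit(data)
```

Because each update uses a single example rather than the whole training set, the method scales to datasets far too large to fit in memory, which is why it underlies most large-scale neural network training.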
Machine learning methods[edit]
Machine learning method (list)
- Instance-based algorithm
- K-nearest neighbors algorithm (KNN)
- Learning vector quantization (LVQ)
- Self-organizing map (SOM)
- Regression analysis
- Regularization algorithm
- Classifiers
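The instance-based methods listed above classify a new point by comparing it directly to stored training examples. A minimal k-nearest neighbors sketch (illustrative pure Python; the data and labels are invented for the example):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of ((x1, x2), label) pairs; distance is squared
    Euclidean (the ordering is the same as for true Euclidean distance).
    """
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))

    nearest = sorted(train, key=lambda ex: dist(ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Note that no "fitting" happens at all: the training set itself is the model, which is why such methods are also called lazy learning (listed under supervised learning below).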
Dimensionality reduction[edit]
- Canonical correlation analysis (CCA)
- Factor analysis
- Feature extraction
- Feature selection
- Independent component analysis (ICA)
- Linear discriminant analysis (LDA)
- Multidimensional scaling (MDS)
- Non-negative matrix factorization (NMF)
- Partial least squares regression (PLSR)
- Principal component analysis (PCA)
- Principal component regression (PCR)
- Projection pursuit
- Sammon mapping
- t-distributed stochastic neighbor embedding (t-SNE)
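Principal component analysis, the most common method in this list, projects data onto the directions of greatest variance. For 2-D data the top eigenvector of the covariance matrix has a closed form, which permits a dependency-free sketch (illustrative pure Python; the data are invented for the example, and the eigenvector formula assumes a nonzero cross-covariance):

```python
import math

def pca_first_component(points):
    """First principal component of 2-D data: the unit eigenvector of the
    2x2 covariance matrix belonging to its larger eigenvalue."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance entries (population convention, divide by n).
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace and determinant.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)   # larger eigenvalue
    # Corresponding eigenvector (assumes sxy != 0), normalized to unit length.
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

points = [(-2, -2), (-1, -1), (1, 1), (2, 2)]  # data lying along y = x
component = pca_first_component(points)
```

For data on the line y = x, the recovered component points along (1, 1)/√2, confirming that PCA finds the axis of maximum variance.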
Ensemble learning[edit]
- AdaBoost
- Boosting
- Bootstrap aggregating (Bagging)
- Ensemble averaging – process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model. Frequently an ensemble of models performs better than any individual model, because the various errors of the models "average out."
- Gradient boosted decision tree (GBRT)
- Gradient boosting machine (GBM)
- Random forest
- Stacked Generalization (blending)
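The "average out" idea behind ensemble averaging and bootstrap aggregating can be sketched concretely (illustrative pure Python; the base model, data, and parameters are invented for the example): fit the same simple model on many bootstrap resamples of the data, then average the individual predictions.

```python
import random

def bagged_predict(xs, ys, x_new, n_models=25, seed=0):
    """Bootstrap aggregating (bagging) for the base model y = w * x:
    fit a least-squares slope on each bootstrap resample, then average
    the individual predictions (ensemble averaging)."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        sx = [xs[i] for i in idx]
        sy = [ys[i] for i in idx]
        w = sum(a * b for a, b in zip(sx, sy)) / sum(a * a for a in sx)
        preds.append(w * x_new)
    # The models' individual errors partially cancel in the average.
    return sum(preds) / len(preds)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.2, 3.8, 6.1, 8.3, 9.7]   # noisy observations of y = 2x
p = bagged_predict(xs, ys, 10.0)
```

Each resampled model sees a slightly different dataset, so their errors are partly independent; averaging reduces the variance of the combined predictor without changing its bias.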
Meta learning[edit]
Reinforcement learning[edit]
- Q-learning
- State–action–reward–state–action (SARSA)
- Temporal difference learning (TD)
- Learning Automata
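Q-learning, listed above, learns action values from reward alone, with no model of the environment. A minimal tabular sketch (illustrative pure Python; the corridor environment and hyperparameters are invented for the example):

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: states 0..n_states-1, actions
    0 = left and 1 = right; reaching the rightmost state ends the episode
    with reward 1.  Update rule:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:                  # explore: random action
                a = rng.choice([0, 1])
            else:                                   # exploit: greedy action
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            future = 0.0 if s2 == goal else max(Q[s2])
            Q[s][a] += alpha * (r + gamma * future - Q[s][a])
            s = s2
    return Q

Q = q_learn()
```

After training, the greedy policy (pick the higher-valued action in each state) walks straight to the goal, and the learned values decay geometrically with distance from the reward, reflecting the discount factor gamma.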
Supervised learning[edit]
- AODE
- Association rule learning algorithms
- Case-based reasoning
- Gaussian process regression
- Gene expression programming
- Group method of data handling (GMDH)
- Inductive logic programming
- Instance-based learning
- Lazy learning
- Learning Automata
- Learning Vector Quantization
- Logistic Model Tree
- Minimum message length (decision trees, decision graphs, etc.)
- Probably approximately correct (PAC) learning
- Ripple down rules, a knowledge acquisition methodology
- Symbolic machine learning algorithms
- Support vector machines
- Random forests
- Ensembles of classifiers
- Ordinal classification
- Information fuzzy networks (IFN)
- Conditional Random Field
- ANOVA
- Quadratic classifiers
- k-nearest neighbor
- Boosting
- Bayesian networks
- Hidden Markov models
Artificial neural network[edit]
- Autoencoder
- Backpropagation
- Boltzmann machine
- Convolutional neural network
- Deep learning
- Hopfield network
- Multilayer perceptron
- Perceptron
- Radial basis function network (RBFN)
- Restricted Boltzmann machine
- Recurrent neural network (RNN)
- Self-organizing map (SOM)
- Spiking neural network
Bayesian[edit]
- Bayesian knowledge base
- Naive Bayes
- Gaussian Naive Bayes
- Multinomial Naive Bayes
- Averaged One-Dependence Estimators (AODE)
- Bayesian Belief Network (BBN)
- Bayesian Network (BN)
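Naive Bayes, the simplest method in this family, assumes the features are conditionally independent given the class. A minimal Gaussian naive Bayes sketch (illustrative pure Python; the data, labels, and the small variance floor are invented for the example):

```python
import math

def fit_gnb(data):
    """Gaussian naive Bayes: per class, store the prior and per-feature
    mean and variance; features are modeled as independent Gaussians."""
    model = {}
    n_total = len(data)
    for c in set(label for _, label in data):
        rows = [x for x, label in data if label == c]
        n = len(rows)
        dims = len(rows[0])
        means = [sum(r[d] for r in rows) / n for d in range(dims)]
        # Small floor (1e-9) avoids zero variance on degenerate features.
        var = [sum((r[d] - means[d]) ** 2 for r in rows) / n + 1e-9
               for d in range(dims)]
        model[c] = (n / n_total, means, var)
    return model

def gnb_predict(model, x):
    """Return the class with the highest log-posterior for point x."""
    def log_post(c):
        prior, means, var = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, var))
        return math.log(prior) + ll
    return max(model, key=log_post)

data = [((1.0, 1.2), "a"), ((0.8, 1.0), "a"), ((1.2, 0.9), "a"),
        ((5.0, 5.1), "b"), ((4.8, 5.3), "b"), ((5.2, 4.9), "b")]
model = fit_gnb(data)
```

Working in log space avoids numerical underflow when many feature likelihoods are multiplied, which is the standard implementation trick for this classifier.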
Decision tree algorithms[edit]
- Decision tree
- Classification and regression tree (CART)
- Iterative Dichotomiser 3 (ID3)
- C4.5 algorithm
- C5.0 algorithm
- Chi-squared Automatic Interaction Detection (CHAID)
- Decision stump
- Conditional decision tree
- Random forest
- SLIQ
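The decision stump listed above is the degenerate case of these algorithms: a tree with a single split, often used as the weak learner inside boosting. A minimal sketch (illustrative pure Python; the data and labels are invented for the example):

```python
def fit_stump(points):
    """Decision stump: a one-split decision tree on a single feature.
    Tries every midpoint between adjacent sorted values and keeps the
    threshold (and polarity) with the fewest training errors.
    `points` is a list of (x, label) pairs with labels in {-1, +1}."""
    xs = sorted(x for x, _ in points)
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    best = None
    for t in candidates:
        for sign in (+1, -1):          # which side of the split predicts +1
            errors = sum(1 for x, y in points
                         if (sign if x > t else -sign) != y)
            if best is None or errors < best[0]:
                best = (errors, t, sign)
    return best[1], best[2]

def stump_predict(t, sign, x):
    return sign if x > t else -sign

points = [(1.0, -1), (2.0, -1), (3.0, -1), (6.0, 1), (7.0, 1), (8.0, 1)]
t, sign = fit_stump(points)
```

Full tree learners such as CART and C4.5 apply the same search recursively to each resulting subset, scoring splits with Gini impurity or information gain rather than raw error counts.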
Linear classifier[edit]
- Fisher's linear discriminant
- Linear regression
- Logistic regression
- Multinomial logistic regression
- Naive Bayes classifier
- Perceptron
- Support vector machine
Unsupervised learning[edit]
- Expectation-maximization algorithm
- Vector Quantization
- Generative topographic map
- Information bottleneck method
Artificial neural networks[edit]
Association rule learning[edit]
Hierarchical clustering[edit]
Cluster analysis[edit]
- BIRCH
- DBSCAN
- Expectation-maximization (EM)
- Fuzzy clustering
- Hierarchical Clustering
- K-means clustering
- K-medians
- Mean-shift
- OPTICS algorithm
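K-means, the most widely used method in this list, alternates between assigning points to their nearest centroid and recomputing each centroid as the mean of its cluster. A one-dimensional sketch of Lloyd's algorithm (illustrative pure Python; the data and the deterministic initialization are invented for the example):

```python
def kmeans_1d(xs, k=2, iters=50):
    """Lloyd's algorithm in one dimension: alternate assignment and
    update steps.  Initialization here is deterministic (the k smallest
    points); real implementations use random or k-means++ seeding."""
    centroids = sorted(xs)[:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[j].append(x)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

xs = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
centroids = kmeans_1d(xs)
```

Each iteration can only decrease the within-cluster sum of squared distances, so the loop converges, though possibly to a local optimum, which is why initialization strategy matters in practice.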
Anomaly detection[edit]
Semi-supervised learning[edit]
- Active learning – special case of semi-supervised learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points.[5][6]
- Generative models
- Low-density separation
- Graph-based methods
- Co-training
- Transduction
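The active learning idea described above can be sketched with uncertainty sampling (illustrative pure Python; the one-dimensional threshold classifier, data, and query strategy are invented for the example): fit a model on the labeled pool, then request a label for the unlabeled point the model is least sure about.

```python
def uncertainty_sample(labeled, unlabeled):
    """Active learning by uncertainty sampling with a 1-D threshold
    classifier: the decision boundary is the midpoint between the two
    class means, and the point queried next is the unlabeled point
    closest to that boundary (i.e., the most uncertain prediction).

    `labeled` is a list of (x, label) pairs with labels in {0, 1}.
    """
    mean0 = (sum(x for x, y in labeled if y == 0)
             / sum(1 for _, y in labeled if y == 0))
    mean1 = (sum(x for x, y in labeled if y == 1)
             / sum(1 for _, y in labeled if y == 1))
    boundary = (mean0 + mean1) / 2
    return min(unlabeled, key=lambda x: abs(x - boundary))

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 4.9, 9.5]
query = uncertainty_sample(labeled, unlabeled)
```

Points far from the boundary would add little information if labeled, so spending the labeling budget near the boundary is what lets active learners reach a given accuracy with fewer labels than passive sampling.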
Deep learning[edit]
- Deep belief networks
- Deep Boltzmann machines
- Deep convolutional neural networks
- Deep recurrent neural networks
- Hierarchical temporal memory
- Stacked Auto-Encoders
Other machine learning methods and problems[edit]
- Anomaly detection
- Association rules
- Bias-variance dilemma
- Classification
- Clustering
- Data Pre-processing
- Empirical risk minimization
- Feature engineering
- Feature learning
- Learning to rank
- Occam learning
- Online machine learning
- PAC learning
- Regression
- Reinforcement Learning
- Semi-supervised learning
- Statistical learning
- Structured prediction
- Unsupervised learning
- VC theory
Machine learning research[edit]
History of machine learning[edit]
Machine learning projects[edit]
Machine learning organizations[edit]
Machine learning conferences and workshops[edit]
- Artificial Intelligence and Security (AISec) (co-located workshop with CCS)
- Conference on Neural Information Processing Systems (NIPS)
- ECML PKDD
- International Conference on Machine Learning (ICML)
Machine learning publications[edit]
Books on machine learning[edit]
Machine learning journals[edit]
Persons influential in machine learning[edit]
- Alberto Broggi
- Andrei Knyazev
- Andrew McCallum
- Andrew Ng
- Armin B. Cremers
- Ayanna Howard
- Barney Pell
- Ben Goertzel
- Ben Taskar
- Bernhard Schölkopf
- Brian D. Ripley
- Christopher G. Atkeson
- Corinna Cortes
- Demis Hassabis
- Douglas Lenat
- Eric Xing
- Ernst Dickmanns
- Geoffrey Hinton – co-inventor of the backpropagation and contrastive divergence training algorithms
- Hans-Peter Kriegel
- Hartmut Neven
- Heikki Mannila
- Jacek M. Zurada
- Jaime Carbonell
- Jerome H. Friedman
- John D. Lafferty
- John Platt – invented SMO and Platt scaling
- Julie Beth Lovins
- Jürgen Schmidhuber
- Karl Steinbuch
- Katia Sycara
- Leo Breiman – invented bagging and random forests
- Lise Getoor
- Luca Maria Gambardella
- Léon Bottou
- Marcus Hutter
- Mehryar Mohri
- Michael Collins
- Michael I. Jordan
- Michael L. Littman
- Nando de Freitas
- Ofer Dekel
- Oren Etzioni
- Pedro Domingos
- Peter Flach
- Pierre Baldi
- Pushmeet Kohli
- Ray Kurzweil
- Rayid Ghani
- Ross Quinlan
- Salvatore J. Stolfo
- Sebastian Thrun
- Selmer Bringsjord
- Sepp Hochreiter
- Shane Legg
- Stephen Muggleton
- Steve Omohundro
- Tom M. Mitchell
- Trevor Hastie
- Vasant Honavar
- Vladimir Vapnik – co-inventor of the SVM and VC theory
- Yann LeCun – pioneered convolutional neural networks
- Yasuo Matsuyama
- Yoshua Bengio
- Zoubin Ghahramani
See also[edit]
- Outline of artificial intelligence
- Outline of robotics
- Accuracy paradox
- Action model learning
- Activation function
- Activity recognition
- ADALINE
- Adaptive neuro fuzzy inference system
- Adaptive resonance theory
- Additive smoothing
- Adjusted mutual information
- Aika (software)
- AIVA
- AIXI
- AlchemyAPI
- AlexNet
- Algorithm selection
- Algorithmic inference
- Algorithmic learning theory
- AlphaGo
- AlphaGo Zero
- Alternating decision tree
- Apprenticeship learning
- Competitive learning
- Concept learning
- Decision tree learning
- Distribution learning theory
- Eager learning
- End-to-end reinforcement learning
- Error tolerance (PAC learning)
- Explanation-based learning
- Feature
- GloVe
- Hyperparameter
- IBM Machine Learning Hub
- Inferential theory of learning
- Learning automata
- Learning classifier system
- Learning rule
- Learning with errors
- M-Theory (learning framework)
- Machine learning control
- Machine learning in bioinformatics
- Margin
- Multi-task learning
- Multilinear subspace learning
- Multimodal learning
- Multiple instance learning
- Never-Ending Language Learning
- Offline learning
- Parity learning
- Population-based incremental learning
- Predictive learning
- Preference learning
- Proactive learning
- Proximal gradient methods for learning
- Semantic analysis
- Similarity learning
- Sparse dictionary learning
- Stability (learning theory)
- Statistical learning theory
- Statistical relational learning
- Tanagra
- Transfer learning
- Version space learning
- Waffles
- Weka
- ^ a b "Machine learning". Encyclopædia Britannica. http://www.britannica.com/EBchecked/topic/1116194/machine-learning. This tertiary source reuses information from other sources but does not name them.
- ^ Phil Simon (March 18, 2013). Too Big to Ignore: The Business Case for Big Data. Wiley. p. 89. ISBN 978-1-118-63817-0.
- ^ Ron Kohavi; Foster Provost (1998). "Glossary of terms". Machine Learning. 30: 271–274.
- ^ http://www.learningtheory.org/
- ^ Settles, Burr (2010), "Active Learning Literature Survey" (PDF), Computer Sciences Technical Report 1648. University of Wisconsin–Madison, retrieved 2014-11-18
- ^ Rubens, Neil; Elahi, Mehdi; Sugiyama, Masashi; Kaplan, Dain (2016). "Active Learning in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha (eds.). Recommender Systems Handbook (2 ed.). Springer US. doi:10.1007/978-1-4899-7637-6. ISBN 978-1-4899-7637-6.