
Geoffrey Hinton

Geoffrey Hinton FRS (born December 6, 1947) is a British-born cognitive psychologist and computer scientist most noted for his work on the mathematics and applications of artificial neural networks and their relationship to information theory. Hinton has contributed significantly to the fields of neural computation and cognitive science, and his work in artificial intelligence has improved our understanding of how the human brain functions and, more specifically, how it learns. His contributions include the Boltzmann machine, the backpropagation algorithm, distributed representations, the Helmholtz machine, and the product of experts model.[1] His main current interest is the unsupervised learning of intelligent agents and neural networks. Hinton comes from a family with a long tradition of scientific and mathematical work: he is the great-great-grandson of the logician and philosopher George Boole and the son of the entomologist Howard E. Hinton.

Geoffrey Hinton
Born: 6 December 1947
Known for: Backpropagation, Boltzmann machine, deep learning
Awards: IJCAI Award for Research Excellence, Rumelhart Prize
Fields: Neural computation


Biography

Early Life

Geoffrey Hinton was born on December 6, 1947 in Wimbledon, United Kingdom, to Howard and Margaret Hinton. From the ages of 7 to 17 he attended Clifton College in Bristol, although he disliked it. He first became passionate about psychology at the age of 20, while attending Cambridge University. He describes his main interest as understanding how the brain computes, and more specifically how a large number of neurons can interact and learn to do all the things the human brain does.[2]

Education

Hinton graduated from Cambridge University in 1970 with a Bachelor of Arts in experimental psychology. He went on to obtain a PhD in artificial intelligence from the University of Edinburgh in 1978.[3] During his years at Edinburgh he studied the relaxation methods by which intelligent agents identify a whole object from smaller, ambiguous parts, or "noise". His doctoral thesis is entitled "Relaxation and its role in vision".[4]

Career

Hinton did postdoctoral work at Sussex University and the University of California, San Diego, and spent five years as a faculty member in the Computer Science department at Carnegie Mellon University.[5] During the 1980s, Hinton carried out a great deal of research on parallel computing architectures for neural networks. It had become apparent, prior to much of his work, that the brain does not compute in a purely serial fashion:[6] many aspects of intelligent behaviour cannot feasibly be carried out unless multiple operations take place in many different parts of the brain at the same time. This led to his work on various AI machines. Hinton's goal was to understand how the brain learns. He began in 1983 with the Boltzmann machine and from then on refined this architecture, each time reaching a closer approximation.

The Boltzmann Machine

The Boltzmann machine was co-designed by Geoffrey Hinton and Terry Sejnowski in order to better understand how the human brain works using an artificial neural network. It is a network of neuron-like units connected symmetrically with one another. Each unit decides in a stochastic manner whether to be activated or not (on or off).[7] The binary nature of the units represents a true/false relationship with regard to the information presented to each unit. In essence, the Boltzmann machine is a learning machine. Like a Hopfield network, it is a network of units with an "energy" defined for the network as a whole; unlike the Hopfield network, its binary units operate in a stochastic manner. The global energy in a Boltzmann machine is identical in form to that of a Hopfield network:

E = -\left( \sum_{i<j} w_{ij}\, s_i\, s_j + \sum_i \theta_i\, s_i \right)

Where:

  • w_{ij} is the connection strength between unit j and unit i.
  • s_i is the state, s_i \in \{0, 1\}, of unit i.
  • \theta_i is the threshold of unit i.
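
The following short Python sketch illustrates this energy function and the stochastic on/off decision of a single unit. The array names W, theta and s, the network size, and the temperature parameter T are illustrative assumptions, not anything taken from Hinton's own implementations.

import numpy as np

def energy(W, theta, s):
    """Global energy E = -(sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i)."""
    # The factor 0.5 compensates for counting each symmetric pair (i, j) twice.
    return -(0.5 * s @ W @ s + theta @ s)

def update_unit(W, theta, s, i, rng, T=1.0):
    """Stochastically switch unit i on or off based on its energy gap."""
    gap = W[i] @ s + theta[i]              # energy difference between s_i = 1 and s_i = 0
    p_on = 1.0 / (1.0 + np.exp(-gap / T))  # logistic probability of turning the unit on
    s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Example: three units with random symmetric weights and no self-connections.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
theta = rng.normal(size=3)
s = np.array([1.0, 0.0, 1.0])
print(energy(W, theta, s))
print(update_unit(W, theta, s, i=0, rng=rng))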


There are two kinds of problems that Boltzmann machines are used to solve: search problems and learning problems. In a search problem, the weights on the connections are fixed and are used to represent a cost function; the machine's stochastic dynamics then allow it to sample binary state vectors that have low values of that cost function.[8] In a learning problem, the weights are not fixed: the machine is shown a set of binary data vectors and must learn weights that allow it to generate those vectors with high probability. Unfortunately, learning in a general Boltzmann machine can be quite slow. This can be overcome by restricting the connectivity and simplifying the learning algorithm. The result is the restricted Boltzmann machine, in which there is a single layer of hidden units and no visible-visible or hidden-hidden connections.[9]
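
As a rough illustration of the search setting, the Python sketch below fixes the weights and repeatedly applies stochastic unit updates while lowering a temperature parameter, so the sampled state vectors tend towards low values of the energy, i.e. of the cost function. The weights, sizes and annealing schedule are made up for illustration and do not correspond to any specific problem of Hinton's.

import numpy as np

def energy(W, theta, s):
    return -(0.5 * s @ W @ s + theta @ s)

def gibbs_sweep(W, theta, s, T, rng):
    """One pass of stochastic updates over all units at temperature T."""
    for i in range(len(s)):
        gap = W[i] @ s + theta[i]
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-gap / T)) else 0.0
    return s

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
theta = rng.normal(size=n)
s = rng.integers(0, 2, size=n).astype(float)

# Lowering the temperature makes low-energy (low-cost) configurations ever more likely.
for T in (2.0, 1.0, 0.5, 0.25):
    for _ in range(50):
        s = gibbs_sweep(W, theta, s, T, rng)
print(s, energy(W, theta, s))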

Backpropagation

Short for "backward propagation of errors", backpropagation is a method commonly used to train artificial neural networks (ANNs) and was developed and popularized by Hinton and colleagues. It is a goal-oriented procedure in which the ANN works towards a desired output, learning from many inputs along the way, much as a child learns to recognize a particular category of object (e.g. dogs or cars). The procedure repeatedly adjusts the weights assigned to the connections between nodes so that the network keeps learning.[10]

Backpropagation operates on an ANN with an input layer of units, any number of intermediate (hidden) layers, and an output layer of units. Initially, a stimulus is presented to the nodes in the input layer, which send activation through one or more hidden layers. The nodes in the hidden layers in turn send signals to the output layer, and the pattern of activation in the output layer is the network's response to the stimulus. The difference between the actual and desired outputs is called the "error signal" and is propagated backwards from the output layer.[11] The error signal is used to modify the weights of the connections where needed, so that the next time the network is exposed to that stimulus, the modified weights generate a response closer to the one desired. Through this process the network learns to recognize the objects it is presented with. Simply put, the goal of backpropagation is to find the set of weights that best maps the network's inputs to the correct outputs.

Backpropagation training has been a tremendous contribution to the fields of neural computation and cognitive psychology. It was with backpropagation that the artificial neural network NETtalk was trained to pronounce written English. NETtalk was unique for its time in that it was capable of learning the correct pronunciation of a given word after being supplied with only a few examples.[12]
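
A minimal Python sketch of this procedure is shown below for a tiny network with one hidden layer. The input, target, layer sizes and learning rate are chosen only for illustration, and the code is a generic reconstruction of the algorithm rather than Hinton's own formulation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 0.2])      # input pattern (the stimulus)
t = np.array([1.0, 0.0])            # desired output
W1 = 0.1 * rng.normal(size=(3, 4))  # input -> hidden weights
W2 = 0.1 * rng.normal(size=(4, 2))  # hidden -> output weights
lr = 0.5                            # learning rate

for step in range(1000):
    # Forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Error signal at the output, propagated back through the hidden layer
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Adjust the weights so the next response is closer to the desired one
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

print(np.round(y, 3))  # the output approaches the target [1, 0]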

Contrastive Divergence

Due to the inherent limitations of backpropagation training, such as its slow speed and the fact that convergence is not guaranteed, something further was needed to better explain the way in which the brain learns. Hinton and collaborators therefore invented fast "contrastive divergence" learning algorithms for a machine previously known as Harmonium, which after Hinton's work came to be known as the restricted Boltzmann machine (RBM). RBMs are a constrained version of the original Boltzmann machine in that their neurons must form a bipartite graph:[13] there cannot be any connections between two visible units or between two hidden units. This means there are no recurrent connections within a layer, which makes the networks much more efficient to train. Depending on the task at hand, an RBM can be trained in a supervised manner, monitored by a researcher, or in an unsupervised manner, doing all the learning on its own.
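
The Python sketch below shows one contrastive-divergence (CD-1) update for an RBM. The variable names (W, b_v, b_h, data), the sizes, and the learning rate are assumptions made for illustration; the update follows the usual "data-driven minus reconstruction-driven statistics" form rather than any particular published implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(data, W, b_v, b_h, lr, rng):
    # Positive phase: hidden probabilities and samples driven by the data
    p_h_data = sigmoid(data @ W + b_h)
    h_sample = (rng.random(p_h_data.shape) < p_h_data).astype(float)

    # One Gibbs step: reconstruct the visible units, then re-infer the hiddens
    p_v_recon = sigmoid(h_sample @ W.T + b_v)
    p_h_recon = sigmoid(p_v_recon @ W + b_h)

    # Update: difference between data-driven and reconstruction-driven statistics
    n = data.shape[0]
    W += lr * (data.T @ p_h_data - p_v_recon.T @ p_h_recon) / n
    b_v += lr * (data - p_v_recon).mean(axis=0)
    b_h += lr * (p_h_data - p_h_recon).mean(axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
data = rng.integers(0, 2, size=(10, n_visible)).astype(float)

for epoch in range(100):
    W, b_v, b_h = cd1_step(data, W, b_v, b_h, lr=0.1, rng=rng)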

Deep Learning

The next step towards a better approximation of how the brain learns is a concept called deep learning. Deep learning, in the context of artificial neural networks, is a sub-field of machine learning in which representations form a hierarchy ranging from low-level to high-level concepts.[14] It involves a number of assumptions about the human brain. Firstly, the representations seen by the observer result from the interactions of multiple factors, some conscious and some subconscious; the brain makes generalizations in an attempt to learn about what may not be directly presented to it. Secondly, deep learning assumes that these factors are organized into many layers with varying levels of composition and abstraction.[15]
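
As a purely illustrative sketch of this layered view, each layer in the following Python snippet re-represents the output of the layer beneath it, so the description of the input becomes progressively more abstract and more compact. The layer sizes and weights are arbitrary and are not taken from any of Hinton's models.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
layer_sizes = [784, 500, 250, 30]   # e.g. pixels -> edges -> parts -> higher-level concepts
weights = [0.01 * rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.random(784)                 # a raw, low-level input (e.g. an image)
representations = [x]
for W in weights:
    # Each pass produces a more abstract, lower-dimensional description of the input
    representations.append(sigmoid(representations[-1] @ W))

for i, r in enumerate(representations):
    print(f"layer {i}: {r.shape[0]} features")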

Geometric Transformations

In 2010, Hinton came to the conclusion that the brain learns in a slightly different way than he had previously thought. He realized, through extensive research, that the brain identifies images and objects by using geometric transformations to internally visualize and manipulate the stimulus.[16]

Google Acquisition of DNN Research

In March 2013, Google struck a deal with Geoffrey Hinton to buy his company DNN Research, which he had set up with two of his graduate students, Alex Krizhevsky and Ilya Sutskever.[17] DNN Research was founded within the computer science department of the University of Toronto and specializes in object recognition. Hinton says that he will now divide his time between his work with Google and his work at the university.

Honours and Awards

Geoffrey Hinton has been granted membership of a number of organizations, including the Royal Society, the Royal Society of Canada and the Association for the Advancement of Artificial Intelligence.[18] He was previously president of the Cognitive Science Society and is currently an honorary foreign member of the American Academy of Arts and Sciences. Hinton received the ITAC/NSERC award for contributions to information technology in 1992 and the IEEE Neural Networks Pioneer Award in 1998. He received the first David E. Rumelhart Prize in 2001, the IJCAI Award for Research Excellence in 2005, the NSERC Herzberg Gold Medal (Canada's most prestigious award in science and engineering) in 2010, and the Killam Prize for Engineering in 2012.[19]

See Also

References

  1. ^ "Geoffrey E. Hinton's Biographical Sketch". University of Toronto. Retrieved 19 March 2013.
  2. ^ "NSERC Presents 2 Minutes with Geoffrey Hinton". NSERCTube. Retrieved 19 March 2013.
  3. ^ "Geoffrey E. Hinton's Biographical Sketch". University of Toronto. Retrieved 19 March 2013.
  4. ^ "Publications by Year". University of Toronto. Retrieved 19 March 2013.
  5. ^ "Geoffrey E. Hinton's Biographical Sketch". University of Toronto. Retrieved 19 March 2013.
  6. ^ Hinton, Geoffrey. "Massively Parallel Architectures for AI: NETL, Thistle and Boltzmann Machines" (PDF). Retrieved 20 March 2013.
  7. ^ Friedenberg, Jay (2012). Cognitive Science: An Introduction to the Study of Mind. London, UK: SAGE Publications. p. 197. ISBN 978-1-4129-7761-6.
  8. ^ Hinton, Geoffrey. "Boltzmann Machine". Scholarpedia. Retrieved 20 March 2013.
  9. ^ Hinton, Geoffrey. "Massively Parallel Architectures for AI: NETL, Thistle and Boltzmann Machines" (PDF).
  10. ^ Friedenberg, Jay (2012). Cognitive Science: An Introduction to the Study of Mind. London, UK: SAGE Publications. p. 194. ISBN 978-1-4129-7761-6.
  11. ^ Hinton, Geoffrey (June 1986). "Experiments on Learning by Backpropagation" (PDF). p. 1.
  12. ^ "NETtalk Test". YouTube. Retrieved 20 March 2013.
  13. ^ Hinton, Geoffrey (1993). "Keeping Neural Networks Simple by Minimizing the Description Length of the Weights" (PDF). Retrieved 19 March 2013.
  14. ^ Gulcehre, Caglar. "Deep Learning". Retrieved 19 March 2013.
  15. ^ Hinton, Geoffrey (2006). "A Fast Learning Algorithm for Deep Belief Nets" (PDF). Neural Computation. 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513. Retrieved 20 March 2013.
  16. ^ Bengio, Yoshua. "The Deep Learning Saga". YouTube. Retrieved 18 March 2013.
  17. ^ Tamim (13 March 2013). "Google buys DNN Research, Canada". Qatar Chronicle. Retrieved 19 March 2013.
  18. ^ "Geoffrey E. Hinton's Biographical Sketch". University of Toronto. Retrieved 19 March 2013.
  19. ^ "Geoffrey E. Hinton's Biographical Sketch". The University of Toronto. Retrieved 19 March 2013.
