COGNITION & LANGUAGE
Raymond Mooney: Grounded Language Learning
Thursday, February 7, 2013 | 3:00-4:00 PM | Cordura 100
Grounded Language Learning
Raymond J. Mooney
University of Texas at Austin
Machine learning has become the best approach to building
computational systems that comprehend human language. However, current
systems require a great deal of laboriously constructed
human-annotated training data. Ideally, a computer would be able to
acquire language like a child by being exposed to linguistic input in
the context of a relevant but ambiguous environment, and thereby "ground"
its semantics in perception and action. As a step in this direction,
we have developed systems that learn to sportscast simulated robot
soccer games and to follow navigation instructions in virtual
environments by simply observing sample human linguistic behavior.
This work builds on our earlier work on supervised learning of
semantic parsers that map natural language into a formal meaning
representation. In order to apply such methods to learning from
observation, we have developed techniques that estimate the meaning of
sentences from just their ambiguous perceptual context.
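To make the idea of learning from ambiguous supervision concrete, the following is a minimal illustrative sketch in Python, not the system described in the talk: each invented sentence is paired with several candidate meaning representations drawn from its perceptual context, and a simple EM-style loop learns word-given-meaning probabilities that indicate which candidate each sentence most likely describes. All sentences, meanings, and model details below are assumptions made for illustration.

# Toy sketch of learning from ambiguous supervision (illustrative only).
# Each sentence comes with several candidate meanings from its context;
# the learner is never told which candidate is the intended one.
from collections import defaultdict

# (sentence words, set of candidate meaning representations from the context)
data = [
    (["purple7", "passes", "to", "purple4"], {"pass(p7,p4)", "kick(p7)"}),
    (["purple4", "shoots"],                  {"kick(p4)", "pass(p4,p6)"}),
    (["purple7", "kicks", "the", "ball"],    {"kick(p7)", "turnover(p7)"}),
]

vocab = {w for words, _ in data for w in words}
meanings = {m for _, cands in data for m in cands}

# t[(w, m)] ~ P(word w | meaning m); start uniform over the vocabulary.
t = {(w, m): 1.0 / len(vocab) for w in vocab for m in meanings}

def likelihood(words, m):
    """Probability of the sentence under meaning m (tiny floor for unseen words)."""
    prob = 1.0
    for w in words:
        prob *= t.get((w, m), 1e-6)
    return prob

for _ in range(20):
    counts = defaultdict(float)
    for words, cands in data:
        # E-step: how strongly does each candidate meaning explain the sentence?
        gamma = {m: likelihood(words, m) for m in cands}
        total = sum(gamma.values())
        # Fractional word-meaning co-occurrence counts, weighted by the E-step.
        for m, g in gamma.items():
            for w in words:
                counts[(w, m)] += g / total
    # M-step: renormalize so each meaning's word distribution sums to 1.
    for m in meanings:
        z = sum(counts[(w, m)] for w in vocab) or 1.0
        for w in vocab:
            t[(w, m)] = counts[(w, m)] / z

def best_meaning(words, cands):
    """Pick the candidate meaning that best explains the sentence."""
    return max(cands, key=lambda m: likelihood(words, m))

print(best_meaning(["purple7", "passes"], {"pass(p7,p4)", "kick(p7)"}))

On this toy data, each meaning's word distribution concentrates on the words that reliably co-occur with it, so the final query should resolve to the passing event rather than the kick. The actual systems discussed in the talk are, of course, far richer than this sketch.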
Bio:
Raymond J. Mooney is a Professor in the Department of Computer Science
at the University of Texas at Austin. He received his Ph.D. in 1988
from the University of Illinois at Urbana-Champaign. He is an author
of over 150 published research papers, primarily in the areas of
machine learning and natural language processing. He was the President
of the International Machine Learning Society from 2008 to 2011,
program co-chair for the 2006 AAAI Conference on Artificial
Intelligence, general chair of the 2005 Human Language Technology
Conference and Conference on Empirical Methods in Natural Language
Processing, and co-chair of the 1990 International Conference on
Machine Learning. He is a Fellow of the American Association for
Artificial Intelligence and the Association for Computing Machinery,
and the recipient of best paper awards from the National Conference on
Artificial Intelligence, the SIGKDD International Conference on
Knowledge Discovery and Data Mining, the International Conference on
Machine Learning, and the Annual Meeting of the Association for
Computational Linguistics. His recent research has focused on learning
for natural-language processing, grounding language in perception and
action, statistical relational learning, and active transfer learning.