Embodied Intelligence Through World Models
Abstract
Building intelligent machines carries the potential to advance technology and raise living standards around the world. Despite remarkable progress, today's learning algorithms require a large number of examples to succeed at prediction, generation, or control tasks. Humans, by contrast, accumulate common-sense knowledge by building internal models of how the world around them works. These world models allow them to predict the outcomes of potential actions, reducing the real-world trial and error needed to learn successful behaviors. However, learning world models accurate enough for successful planning has been challenging for computers, especially from large unstructured inputs such as videos. This thesis focuses on the problem of equipping computers with the ability to imagine the future and make decisions based on their predictions. We introduce algorithms for learning world models from unstructured inputs that are accurate enough for successful planning and control. These algorithms outperform previous approaches that lack world models, both by requiring less experience to learn and by achieving higher performance. We develop robust world models that make control algorithms widely applicable, allowing practitioners to use them without tuning hyperparameters. The resulting learning efficiency and robustness allow training robots from scratch and online in the physical world. The presented algorithms have enabled new applications and research in digital and physical environments. They constitute one step on the roadmap of reverse-engineering the cognitive abilities of humans, with the goal of advancing automation and ultimately helping us gain a deeper understanding of ourselves.
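The planning-in-imagination idea described in the abstract can be illustrated with a minimal sketch: an agent imagines the outcomes of candidate action sequences under a world model and executes only the first action of the best sequence, so most trial and error happens in imagination rather than in the real world. The one-dimensional dynamics, reward, and random-shooting planner below are toy assumptions chosen for brevity; they are not the specific algorithms developed in the thesis.

```python
import numpy as np

# Toy 1-D "world model": state is a position, an action nudges it.
# A hand-coded model stands in for a learned one, purely for illustration.
def model_step(state, action):
    """Predict the next state given the current state and an action."""
    return state + 0.1 * action

def reward(state, goal=1.0):
    """Negative distance to a goal position."""
    return -abs(goal - state)

def plan(state, horizon=10, candidates=100, seed=0):
    """Random-shooting planner: imagine rollouts, return the best first action."""
    rng = np.random.default_rng(seed)
    action_seqs = rng.uniform(-1.0, 1.0, size=(candidates, horizon))
    returns = np.zeros(candidates)
    for i, seq in enumerate(action_seqs):
        s = state
        for a in seq:
            s = model_step(s, a)       # imagined transition, no real interaction
            returns[i] += reward(s)
    best = int(np.argmax(returns))
    return action_seqs[best, 0]

# Act in the "real" environment using only the first planned action each step.
state = 0.0
for _ in range(30):
    state = model_step(state, plan(state))
```

After a few dozen steps the state settles near the goal, even though every candidate rollout was evaluated entirely inside the model.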
