Boston, MA, USA | July 2017

Robotics: Science and Systems (RSS 2017) Workshop

New Frontiers for Deep Learning in Robotics


Abstract

In this workshop, a wide range of renowned experts will discuss deep learning techniques at the frontier of research that are not yet widely adopted, discussed, or well known in our community. We carefully selected research topics, such as Bayesian deep learning, generative models, and deep reinforcement learning for planning and navigation, that are highly relevant and potentially groundbreaking for robotic perception, learning, and control. The workshop introduces these techniques to the robotics audience, but also exposes participants from the machine learning community to the real-world problems encountered by robotics researchers who apply deep learning in their research.

This workshop is the successor to the very successful "Deep Learning in Robotics" workshop at last year’s RSS. Our goal is to bring researchers from the machine learning and robotics communities together to discuss and contrast the limits and potentials of new deep learning techniques, as well as to propose directions for future joint research between our communities.

We encourage the community to submit questions for the speakers and the panel before the workshop via web form; we hope this will stimulate an interactive discussion. Question submission URL: https://goo.gl/forms/Eb6v2RHFwg9twzWz2

Programme

Deep Learning Workshop at Robotics: Science and Systems Conference

MIT, Boston, USA

July 15, 2017



09:00
Welcome and Introduction
09:15
Invited Talk: Yann LeCun (Facebook, NYU)
The Challenges of Embodied Deep Learning Systems
09:45
Contributed Papers
Lightning Talks
10:30
Refreshment Break with Poster Session
11:00
Invited Talk: Yarin Gal (University of Cambridge)
Bayesian Deep Learning
11:30
Invited Talk: Josh Tenenbaum and Jiajun Wu (MIT)
Learning and Cognitive Robotics
12:00
Lunch Break
14:00
Invited Talk: David Cox (Harvard)
A Neuroscience Perspective on Deep Learning
14:30
Invited Talk: Chelsea Finn (UC Berkeley)
Learning Representations for Versatile Behavior
15:00
Refreshment Break
15:30
Poster Session
16:00
Invited Talk: Piotr Mirowski (DeepMind)
Learning to Navigate and Plan
16:30
Invited Talk: Aaron Courville (Université de Montréal)
Generative Models
17:00
Panel Discussion
17:30
Concluding Remarks

Contributed Papers

Authors | Title
Balloch and Chernova | An RGBD segmentation model for robot vision learned from synthetic data
Bateux, Marchand, Leitner, Chaumette and Corke | Visual Servoing from Deep Neural Networks
Caley, Lawrence and Hollinger | Deep Networks with Confidence Bounds for Robotic Information Gathering
Chebotar, Hausman, Zhang, Sukhatme, Schaal and Levine | Combining Model-Based and Model-Free Updates for Deep Reinforcement Learning
Gualtieri, ten Pas and Platt | Category Level Pick and Place Using Deep Reinforcement Learning
Jonschkowski, Hafner, Scholz and Riedmiller | PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations
Lambert, Shaban, Liu and Boots | Deep Forward and Inverse Perceptual Models for Tracking and Prediction
Paxton, Raman, Hager and Kobilarov | Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments
Pena, Forembski, Xu and Moloney | Benchmarking of CNNs for Low-Cost, Low-Power Robotics Applications
Rajeswaran, Ghotra, Ravindran and Levine | Ensemble Policy Optimization for Learning Robust Policies
Rajeswaran, Lowrey, Todorov and Kakade | Towards Generalization and Simplicity in Continuous Control
Schaff, Yunis, Chakrabarti and Walter | Jointly Optimizing Placement and Inference for Beacon-based Localization
Sullivan and Lawson | Reactive Ground Vehicle Control via Deep Networks
Sur and Amor | Anticipating Noxious States from Visual Cues using Deep Predictive Models
Viereck, ten Pas, Platt and Saenko | Learning a visuomotor controller for real world robotic grasping using simulated depth images
Xie, Wang, Markham and Trigoni | Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning
Xu, Zhu, Garg, Gao, Fei-Fei and Savarese | Hierarchical Task Generalization with Neural Programs

Sponsors


Osaro
Australian Centre for Robotic Vision




Call for Contributions

The workshop is complemented by contributed research papers, presented in 3-minute lightning talks and an interactive poster session. We invite contributions spanning the areas of deep learning, computer vision, and robotics, and we explicitly encourage the submission of papers describing work in progress or containing preliminary results that the authors wish to discuss with the community. Submissions should follow the usual RSS guidelines for style and length (up to 6 pages). Accepted papers will be published on the workshop website.

Topics

The topics of interest comprise, but are not limited to:
  • scene understanding
  • semi-supervised learning, low-shot learning
  • weakly supervised learning in the presence of noisy and unreliable labels
  • Bayesian deep learning and the importance of uncertainty and reliable confidence measures
  • deep networks as a sensor, sensor fusion with deep networks
  • active learning, incremental learning
  • generative models and their potentials for scene understanding and semi-supervised learning
  • novel weakly supervised or unsupervised training regimes
  • domain adaptation and transfer learning
  • generative models for reinforcement learning
  • inverse reinforcement learning, learning from visual demonstration
  • reinforcement learning for hierarchical tasks, complex tasks, non-Markovian tasks
  • case studies: when does state-of-the-art deep learning fail in robotics?
  • success stories: where did deep learning enable breakthroughs in robotics?
  • utilizing robotic technology to create novel datasets comprising interaction, active vision etc.
  • deep learning for embedded systems or platforms with limited computational power



Paper Submission Deadline

May 15, 2017
May 28, 2017 (extended; anywhere on the planet)
Submit via email to deep-learning@roboticvision.org

Submissions should follow the usual RSS guidelines for style and length (up to 6 pages; double-blind formatting is optional).
 

RSS Workshop Date

July 15, 2017




Recorded Streams (chronological)

MIT was not able to record the presentations, so here are some Twitter live feeds we tried to capture. We apologise for the quality and for some gaps!





Organisers