Call for Papers: International Journal of Robotics Research (IJRR) Special Issue

Limits and Potentials of Deep Learning in Robotics

Deadline: Oct 31, 2016

Robotics: Science and Systems (RSS 2016) Workshop

Are the Sceptics Right?
Limits and Potentials of Deep Learning in Robotics

Ann Arbor, MI, U.S.A. | June 18, 2016


Abstract

In this workshop, a wide range of renowned experts will analyse why deep learning has not yet had the same impact on robotics that it has had on neighbouring research disciplines, most notably computer vision. The workshop will identify the limits and potentials of current deep learning techniques in robotics, and will propose directions for future research to overcome those limits and realize the promising potentials.
Thank you everyone for attending and making this a great discussion! We have updated the webpage with the speakers' slides, and thanks to the RSS local organisers the video recordings are also available online (YouTube Playlist)!

Event Description

Deep learning techniques have revolutionised many aspects of computer vision over the past three years and have been tremendously successful at tasks like object recognition and detection, scene classification, action recognition, and caption generation. Yet while deep learning is thriving in computer vision, it has not yet had nearly the same impact on robotic vision.

Although deep learning techniques have been successfully applied by a few groups to tasks like visually guided robotic grasping and manipulation, they have not yet evolved into mainstream approaches that are generally adopted and applied. Furthermore, many renowned robotics researchers are outspoken sceptics and have questioned the applicability of deep learning techniques to various robotic scenarios in workshops and informal discussions at RSS, ICRA, and other venues over the past year.
In this workshop, world-renowned experts from the robotics, deep learning, and computer vision communities will elaborate on what limits the applicability of deep learning in various robotics scenarios, but also highlight the potential of deep learning where it has been successfully applied in robotics. The talks will outline open research questions that should be tackled by the community to overcome the identified limits. Furthermore, they will identify the key differences in the paradigms underlying typical applications in robotics and other areas where deep learning thrives.

In a panel discussion with the invited experts and the audience, the workshop participants will further refine the proposed directions for future research. The invited speakers and members of the panel are experts in robotics, computer vision, or machine learning. We invited a well-balanced mix of early adopters of deep learning and outspoken sceptics in order to facilitate a critical discussion and to represent various viewpoints and opinions.

The workshop is complemented by contributed research papers that will be presented with lightning talks and in a poster session.

Programme

Deep Learning Workshop at Robotics: Science and Systems Conference 2016

University of Michigan, Ann Arbor, USA

08:30 - 08:35
Welcome and Introduction
08:35 - 09:05
Invited Talks: John Leonard (MIT) and Larry Jackel (North C Technologies)
09:05 - 10:00
Contributed Papers: Lightning Talks
10:00 - 10:30
Refreshment Break with Poster Session
10:30 - 11:00
Invited Talk: Dieter Fox (University of Washington) [SLIDES + VIDEOS, 500 MB]
11:00 - 11:30
Invited Talk: Oliver Brock (TU Berlin) [SLIDES]
11:30 - 12:00
Invited Talk: Pieter Abbeel (UC Berkeley) [SLIDES]
12:00 - 14:00
Lunch Break
14:00 - 14:30
Invited Talk: Walter Scheirer (University of Notre Dame) [SLIDES]
14:30 - 15:00
Invited Talk: Raia Hadsell (Google DeepMind) [SLIDES]
15:00 - 15:30
Invited Talk: Ashutosh Saxena (Cornell and Stanford University) [SLIDES]
15:30 - 16:00
Refreshment Break with Poster Session
16:00 - 16:30
Poster Session
16:30 - 17:00
Panel Discussion
17:00
Concluding Remarks



Contributed Papers

Skydio Best Paper Award:
Peter Ondruska, Julie Dequaire, Dominic Zeng Wang, Ingmar Posner
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
  • Alexander Broad and Brenna Argall: Geometry-Based Region Proposals for Accelerated Image-Based Detection of 3D Objects
  • Arunkumar Byravan and Dieter Fox: SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks
  • Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich: Deep Image Homography Estimation
  • S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, Geoffrey E. Hinton: Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
  • Peter Ondruska, Julie Dequaire, Dominic Zeng Wang, Ingmar Posner: End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
  • Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen: Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection
  • David Moloney, Dexmont Pena, Aubrey Dunne, Alireza Dehghani, Gary Baugh, Sam Caulfield, Kevin Lee, Xiaofan Xu, Maximilian Müller, Remi Gastaud, Ovidiu Vesa: Low-Cost Visually Intelligent Robots with EoT
  • Austin Nicolai, Ryan Skeele, Christopher Eriksen, and Geoffrey A. Hollinger: Deep Learning for Laser Based Odometry Estimation
  • Connor Schenck, Dieter Fox: Detection and Tracking of Liquids with Fully Convolutional Networks
  • Marcus Gualtieri, Andreas ten Pas, Kate Saenko, Robert Platt: Grasp Pose Detection in Dense Clutter Using Deep Learning
  • Abhinav Valada, Gabriel L. Oliveira, Thomas Brox, and Wolfram Burgard: Towards Robust Semantic Segmentation using Deep Fusion
  • Jay Ming Wong, Takeshi Takahashi, Roderic A. Grupen: Self-Supervised Deep Visuomotor Learning from Motor Unit Feedback

Sponsors

Call for Contributions

We invite contributions spanning the areas of deep learning, computer vision, and robotics. The workshop's programme will be complemented by contributed research papers presented with lightning talks and in a poster session. We explicitly encourage the submission of papers describing work in progress or containing preliminary results to discuss with the community. Submissions should follow the usual RSS guidelines for style and length (up to 8 pages). The papers will be reviewed and commented on by the members of the programme committee. Accepted papers will be published on the workshop website.
In addition, we encourage the community to submit questions for the speakers and the panel before the workshop via web form; we hope this will stimulate an interactive discussion. Question submission URL: https://goo.gl/2YHpqE

Topics

The topics of interest for contributed papers comprise, but are not limited to:
  • limits of deep learning for robotics
  • case studies: when does state-of-the-art deep learning fail in robotics?
  • success stories: where did deep learning enable breakthroughs in robotics?
  • fundamental differences between typical computer vision tasks and robotic vision
  • deep learning for perception, action, and control in robotics contexts
  • reliable confidence measures for deep classifiers
  • exploitation of semantic information and prior knowledge for deep learning
  • deep learning in the context of open set classification
  • incremental learning, incorporation of human feedback for classification
  • utilizing robotic technology to create novel datasets comprising interaction, active vision, etc.
  • deep learning for embedded systems or platforms with limited computational power



Paper Submission Deadline

May 8, 2016 (original deadline)
May 22, 2016 (extended; anywhere on the planet)
Submit via e-mail to deep-learning@roboticvision.org

Submission Acceptance Notification: papers will be reviewed and notifications will be sent within a week of submission, by May 29, 2016 at the latest.
 

RSS Workshop Date

June 18, 2016




Organisers