Goodbye ROSCon 2017!
After a gorgeous and enlightening couple of days in Vancouver, we bid farewell to ROSCon 2017. We sold out ROSCon for the third year in a row, with over 475 attendees.
Thanks to everyone for coming and for your support! And thank you to our record-breaking 33 sponsors for the financial support that enabled the conference to grow!
Stay tuned for details on the next event.
We’re posting the slides as they come in from the speakers and we expect to have the videos posted by October 6th. As usual, all of that material is linked below in the program.
- Thank you to everyone who participated in another great ROSCon! Please monitor the usual channels for future announcements.
- Videos and slides have been uploaded and are linked below in the Program.
ROSCon will happen September 21st-22nd, 2017 in Vancouver, Canada!
That’s the Thursday and Friday immediately preceding IROS, which is happening at the same venue, the Vancouver Convention Centre. So if you’re planning your travel to IROS, just add a couple of days to the trip so that you can join us for ROSCon.
Registration for ROSCon 2017 is open:
Note that the early registration deadline is August 1, 2017.
ROSCon 2017 is a chance for ROS developers of all levels, beginner to expert, to spend an extraordinary two days learning from and networking with the ROS community. Get tips and tricks from experts and meet and share ideas with fellow developers.
ROSCon is a developers conference, in the model of PyCon and BoostCon. Following the success of the past five annual ROSCons, this year’s ROSCon will be held in Vancouver, Canada. Similar to previous years, the two-day program will comprise technical talks and tutorials that will introduce you to new tools and libraries, as well as teach you more about the ones you already know. The bulk of the program will be 30-40 minute presentations (some may be longer or shorter).
We aim for ROSCon to represent the entire ROS community, which is global and diverse. Whoever you are, whatever you do, and wherever you do it, if you’re interested in ROS, then we want you to join us at ROSCon. We encourage women, members of minority groups, and members of other under-represented groups to attend ROSCon. We expect all attendees to follow our code of conduct.
The ROSCon planning committee acknowledges that the barriers to attendance for traditionally under-represented groups may be many and varied, and we are striving throughout the planning process to make the event as inclusive and accessible as possible. This year we are proud to continue the ROSCon Diversity Scholarship Program to help make ROSCon 2017 more representative of the global ROS community.
We also welcome suggestions for what else we can do to encourage more participation. Contact us if you have ideas that you’d like to share.
If you don’t want to make a formal presentation, you should still bring your new project or idea to ROSCon and present it in a lightning talk.
There will be an opportunity to give a lightning talk at ROSCon, with one session each day. Talks were 3 minutes in past years, but this may be reduced to 2.5 or 2 minutes this year. To sign up, find the designated person during the morning coffee break and claim a slot to present that afternoon. It’s first come, first served, and it always fills up. As part of the sign-up, you have the option to provide 2-3 presentation slides that will be loaded onto a common laptop. Slides are not required, and there’s no particular format, though they must work on the coordinator’s laptop, so common portable formats such as PDF are recommended. Videos are also OK, but they should be edited to fit inside the time limit and should not rely on their audio. Given the time constraint, we recommend against trying to do a live demo. Last year’s talks can be seen in this blog post, or browse the recordings from past years.
There will also be open space for impromptu hacking sessions and informal presentations.
If you are looking for information on past ROSCons, including past programs, slides and videos of the presentations, see their separate websites.
Important dates to keep in mind for ROSCon 2017.
Call for Proposals circulated
April 24th, 2017
Proposal submission deadline
June 25th, 2017
Proposal acceptance notification
July 11th, 2017
Early registration deadline
August 1st, 2017
Late registration starts
August 31st, 2017
ROSCon 2017 in Vancouver, Canada.
September 21st-22nd, 2017
New this year: Diversity Scholarship Sponsorship
For the first time this year, we are soliciting sponsors to support the ROSCon Diversity Program, which is designed to enable participation in ROSCon by those typically underrepresented in the robotics community and to make the conference a more fulfilling experience for all attendees. To get a sense of the impact of last year’s program, which funded travel and lodging costs for 15 scholars from 10 countries, we encourage you to watch this lightning talk from Ahmed Abdalla and Husam Salih, two students at the University of Khartoum in Sudan who are starting a robotics lab under extremely challenging conditions.
For information on this and all of our sponsorship opportunities, check the prospectus.
ROSCon 2017 will be held at the Vancouver Convention Centre in Vancouver, Canada.
Vancouver has a Trip Planning website to help you get around.
Here’s a map of the location.
If you plan to explore Vancouver, be sure to “Show your Badge” to receive conference attendee discounts at a variety of locations and attractions.
ROSCon 2017 recognizes the value and productivity of partnerships and has teamed up with IROS 2017 to provide discounted rates at nearby hotels. Reservations must be made via the hotel-specific links included below. Discounted rates will be available until the specified cut-off deadline below, or until the room block is full, whichever occurs first.
Vancouver Marriott Pinnacle Downtown – Sold Out
1128 West Hastings Street
Vancouver, BC, V6E 4R5
Distance: 0.2 miles
Room Rate: $229 CAD/night (includes in-room internet access)
Click here to make your reservations online (Wednesday night discount: sold out)
Reservations by phone: +1-800-207-4150 (reference IROS)
Reservation cut off: 31 August 2017 (5:00 pm Pacific)
Fairmont Pacific Rim
Only available starting the 21st.
1038 Canada Place Way
Vancouver, BC, V6C 0B9
Distance: 0.01 miles
Room Rate: $339 CAD/night (includes in-room internet access)
Click here to make your reservation online
Reservations by phone: +1-877-900-5350 (reference IROS)
Reservation cut off: 25 August 2017 (5:00 pm Pacific)
900 Canada Place Way
Vancouver, BC, V6C 3L5
Distance: 0.2 miles
Room Rate: $299 CAD/night (includes in-room internet access)
Click here to make your reservation online
Reservations by phone: +1-800-441-1414 (reference IROS)
Reservation cut off: 24 August 2017 (5:00 pm Pacific)
Fairmont Hotel Vancouver
900 West Georgia Street
Vancouver, BC, V6C 2W6
Room Rate: $249 CAD/night
Click here to make your reservation online
Reservations by phone: +1-866-540-4452 (reference IROS)
Reservation cut off: 24 August 2017 (5:00 pm Pacific)
For those of you who would like to look at other locations, we suggest searching on All The Rooms, which provides access to local accommodations including Airbnbs and hostels.
ROSCon 2017 and the Open Source Robotics Foundation do not guarantee or make warranties for All The Rooms providers and assume no liability by providing this resource.
The Convention Services Team at the Vancouver Convention Centre provided the childcare service listed below as a reference for attendees. ROSCon 2017 and the Open Source Robotics Foundation do not guarantee or make warranties for these providers and assume no liability by providing this reference.
Nannies on Call
Nannies on Call provides a pre-screened nanny service that will come to your hotel room. In addition, they have childcare gear for rent if you decide to travel light.
Fees for on-call bookings in Vancouver: $32 CAD per booking (agency fee) + $16 CAD per hour. Additional fees may apply. Please visit the website to determine additional pricing.
*Rates are quoted as of 23 March and are subject to change. Please confirm with your selected provider.
If you intend to contract childcare services, in addition to contacting the care provider, please email Karly [at] MeetGreen.com. ROSCon is committed to creating an inclusive and accessible conference environment. Providing this information allows the conference to review attendee needs and evaluate where conference resources are deployed.
Weather in Vancouver can vary significantly. Please check the weather and be prepared before you depart.
For historical data, see here.
The current weather is here.
Invitation letters for visas
If you require an invitation from the conference organizers to obtain a visa to enter Canada, please fill out this form. Please include your full name and mailing address (for inclusion in the letter; we’ll email you the signed letter).
Day 1 Event Locations
- Registration: Burrard Foyer, Level 2
- General Session: MR211-214, Level 2
- General Session Overflow Room: MR217-219, Level 2
- Refreshment Breaks, Poster Session, Reception, and Exhibitors: MR220-222, MR223-224, and Ocean Foyer, Level 2
- Lunch: Ballroom C/D, Level 1
Day 2 Event Locations
- Registration: Burrard Foyer, Level 2
- General Session: MR211-214, Level 2
- General Session Overflow Room: MR217-219, Level 2
- Refreshment Breaks and Exhibitors: MR220-222, MR223-224, and Ocean Foyer, Level 2
- Lunch: Ballroom C/D, Level 1
ROSCon 2017 Program
ROSCon 2017 will be a single track conference. Following review and discussion by the Program Committee, the presentations listed below were accepted for presentation at the conference. There will also be Lightning Talks and a poster session, as well as many opportunities to informally talk with other community members.
The program is subject to change.
There is a feed for China as well as a global feed. You need to enter a name and email address to watch, but no registration is required. Watch the presentations at the links below.
Day 1, September 21st
|7:30||Everyone||Registration open||Please arrive early to allow time to collect your badge and conference bag before the presentations start. We expect there to be a queue for registration on the first day.|
|9:00||Brian Gerkey and Tully Foote (Open Robotics)||Opening Remarks||Slides
|9:25||Shinpei Kato (The University of Tokyo)||Autoware: ROS-based OSS for Urban Self-driving Mobility||Autoware is open-source software (OSS) for urban self-driving mobility, empowered by ROS. It provides complete modules for perception, decision making, and control, which enable drive-by-wire vehicles to drive autonomously in public road environments. The current maintainer of Autoware is Tier IV, a Japanese academic startup company comprising professors and students. Automotive makers and suppliers now often use Autoware to build their research and development prototypes of self-driving mobility. Autoware has also been partly ported to ROS2. This talk will be of interest to any researchers, developers, and practitioners who are looking for an open-source solution for self-driving mobility.||Slides
|10:05||Dirk Thomas and Mikael Arguedas (Open Robotics)||The ROS 2 vision for advancing the future of robotics development||Using a concrete use case, this talk will describe the vision of how ROS 2 users will design and implement their autonomous systems from prototype to production. It will highlight features, either available already in ROS 2 or envisioned to become available in the future, and how they can be applied toward building more capable, flexible, and robust robotic systems. While the presentation starts with a simple application, it later utilizes more advanced features like introspection and orchestration capabilities to empower the system for more complex scenarios and harden it to the point of a production-ready system.||Slides
|11:15||Gene Cooperman and Twinkle Jain (Northeastern University)||DMTCP: Fixing the Single Point of Failure of the ROS Master||The ROS master is well-known to be a single point of failure. The DMTCP open-source package for transparent checkpoint-restart was recently extended to support checkpoint-restart for the ROS master. After a failure, the ROS master is rolled back and resumed from the last checkpoint. Checkpoints can be performed as often as every few seconds. The DMTCP plugin model also allows users to add plugins that model and restart their external devices in a state equivalent to that at checkpoint. Finally, we speculate on the potential of DMTCP’s distributed mode to support a global restore with appropriate plugins in the future.||Slides
|11:35||Jaime Martin Losa (eProsima)||ROS2 Fine Tuning||ROS2 has adopted DDS/RTPS as its middleware, increasing the performance and feature set over ROS1. DDS exposes many QoS parameters to adapt the middleware to very different scenarios, allowing easy configuration using XML files. This presentation will show how to set up ROS2 for several interesting scenarios.||Slides
|11:55||Tony B Wang and Jeff Gao (Jinan Tony Robotics)||RoboWare: A “Product Oriented Design” IDE for ROS developers||RoboWare is a development kit specifically designed for ROS. It provides an integrated development environment with general-purpose IDE functions: code editing, building, and debugging. It fully supports ROS, including the creation and management of workspaces, packages, libraries, nodes, msg/srv/action/launch/yaml/urdf files, etc. RoboWare supports “POD (Product Oriented Design)” development: it has a graphical designer for robot hardware architecture, and the design diagram can be automatically exported as a ROS workspace for further development. It also provides a cross-platform GUI development framework with plenty of robot-related controls.||Slides
|12:15||Chris Lalancette (Open Robotics)||SLAM on Turtlebot2 using ROS2||This talk will focus on a larger application written entirely in ROS2. The application is a SLAM system (Google Cartographer) running on a Turtlebot2 using all underlying ROS2 components. The talk will describe the hardware and software setup of the robot, as well as porting and other challenges encountered while developing the application.||Slides
|12:20||Ruffin White (University of California, San Diego) Gianluca Caiazza (Ca’ Foscari University, Venice)||SROS: Current Progress and Developments||Introduced last year was a proof-of-concept implementation of SROS, an addition to the ROS ecosystem to support modern security. This talk will provide an update on developing REPs, with further details on proposed mechanics enabling application-layer security for ROS. This includes hardening APIs via full server/client validation, a standardized policy profile syntax for access control of topics, services, and parameters, and integrated policy profile autogeneration via auditing of security log events. You’ll gain a greater familiarity with SROS, its inner workings, and its direction, enabling you to contribute and provide feedback for the effort to secure robotics subsystems for the future.||Slides
|12:25||Aravind Sundaresan and Leonard Gerard (SRI International)||Secure ROS: Imposing secure communication in a ROS system||Secure ROS is an update to ROS that allows secure communication while keeping the ROS public API intact and allowing user code to be reused without modification. Policies are provided at execution time in a YAML file specifying authorized subscribers and publishers of topics, getters and setters of parameters, and providers and requesters of services. Policies are specified at the IP address level and enforced by Secure ROS. Combined with IPsec for cryptography, Secure ROS provides secure, authenticated, and encrypted ROS communications. Modifications to the ROS code base are restricted to the ROS Master and client libraries (rospy and roscpp).||Slides
|12:30||Georgios Stavrinos and Stasinos Konstantopoulos (NCSR “Demokritos”)||The rostune package: Monitoring systems of distributed ROS nodes||rostune is a tool that helps ROS developers distribute their nodes in the most effective way. It collects and visualizes statistics for topics and nodes, such as CPU usage and network usage. In this talk we are going to present technical details about rostune and a characteristic use case from an on-going project developing a home assistance robot, where processing can be distributed between the robot’s on-board computer and computational units available at the home.||Slides
|13:55||Andreas Fregin, Markus Roth, Markus Braun, Sebastian Krebs, and Fabian Flohr (Daimler AG)||Building a Computer Vision Research Vehicle with ROS||Daimler (Mercedes-Benz) has a long history of research and development on ADAS systems and autonomous driving. Today’s increasingly complex requirements on sensors, algorithms, and fusion put high demands on the underlying software framework. In this talk, the group Pattern Recognition and Cameras of Daimler Research and Development showcases their latest research vehicle. Additionally, a detailed look at an implemented multi-sensor synchronization system is given. Findings and lessons learned, as well as tool modifications and added functionality, will be discussed. The audience will get insights into data handling in the context of high data throughput.||Slides
|14:15||Juraj Kabzan, Miguel De La Iglesia Valls, Huub Hendrikx, Victor Reijgwart, Manuel Dangel, Fabio Meier, Ueli Graf, and Efimia Panagiotaki (ETH Zürich, AMZ)||Autonomous Racing Car for Formula Student Driverless||As AMZ Racing Driverless, we’re competing in the first Formula Student Driverless competition with «flüela», an electric 4WD car with high wheel torque and a lightweight design (0-100km/h in 1.9s), developed by our team in 2015. To race autonomously, the car has been extended with a LiDAR, a self-developed stereo visual-inertial system, an IMU, a GPS and a velocity sensor. We chose to use ROS Indigo on our Master Slave computing system, as it provided a robust, flexible framework to interface the different components of our Autonomous System. Furthermore we made extensive use of its logging capabilities and powerful visualization and simulation tools.||Slides
|14:35||Chris Osterwood (Carnegie Robotics)||How to select a 3D sensor technology||System developers are faced with a new challenge when designing robots – which 3D perception technology to use? There are a wide variety of sensors on the market, which employ modalities including stereo, ToF cameras, LIDAR, and monocular 3D technologies. This talk will include an overview of various 3D sensor modalities, their general capabilities and limitations, a review of our controlled environment and field testing processes, and some surprising characteristics and limitations we’ve uncovered through that testing. There is no perfect sensor, but there is always a sensor which best aligns with application requirements – you just need to find it.||Slides
|14:55||Everyone||Lightning Talks I||Slides: 101 102 104 105 106 107 108 110 112 113 114||
|16:10||Ian Chen and Carlos Agüero (Open Robotics)||Vehicle and city simulation with Gazebo and ROS||Autonomous driving is becoming a popular area of robotics, attracting interest from the research community and industry alike. Open Robotics has received increasing demand for resources to help build vehicle simulations in Gazebo. In this presentation, we will describe our recent efforts on vehicle and city simulation. We have produced a collection of components, including 3D vehicle models, materials and plugins, a Road Network Description File library, and a procedural city generation tool. We will showcase a demo with a ROS interface and rviz visualization, and describe how users can create their own vehicle simulations with these components.||Slides
|16:30||Ian Chen and Louise Poubel (Open Robotics)||Space Robotics Challenge backstage: A glimpse at the challenges of running the competition||Over the past year, hundreds of teams competed in the qualifications for the NASA Space Robotics Challenge, and the top 20 teams competed in the final cloud-based competition. This talk will go over the software and infrastructure used to host the Space Robotics Challenge, which includes the use of ROS, Gazebo and CloudSim. We will also describe some of the technical challenges faced during the competition, including simulation modeling, performance tuning, and cloud deployment.||Slides
|16:50||Juan Camilo Gamboa Higuera, David Paul Meger, and Gregory Dudek (McGill University)||From simulation to the field: Learning to swim with the AQUA robot||In this session we will share our experience and describe our approach to learning-based control. We do this for underwater (marine) environments where we want to approximate some of the hydrodynamic factors in 6 degrees of freedom. Our work addresses “learning to swim” via the automatic synthesis of swimming controllers for the AQUA platform: a six legged autonomous underwater vehicle. First, we will describe our approaches for simulating the underwater dynamics of the AQUA robot. This description includes our modelling choices and the integration into the Gazebo simulator. Second, we will describe the software interfaces we developed, based on the ROS framework, for testing learning algorithms in the simulation environment. Finally, we will show how ROS facilitated the use of our software on physical robots, and discuss the current research that our software has enabled.||Slides
|17:10||Gajamohan Mohanarajah and Dhananjay Sathe (Rapyuta Robotics) Thomas Michael Bohnert (Zurich University of Applied Sciences/ZHAW)||How to accelerate application development using a cloud robotics platform||In this presentation, attendees will gain practical knowledge on how to use a cloud robotics Platform-as-a-Service (PaaS) to significantly accelerate robot application development. Specifically, attendees will learn step by step how a robot application can be developed, remotely deployed, monitored, and debugged using a cloud robotics PaaS. This will be followed by a detailed technical breakdown of rapyuta.io’s internals so attendees may understand the underlying architecture and design. Finally, a case study will compare a commercial robotics application deployment using Amazon’s Infrastructure-as-a-Service to one using the rapyuta.io cloud robotics PaaS to show the benefits and limitations of each approach.||Slides
|17:15||Yvonne Dittrich (IT University of Copenhagen) Gijs van der Hoorn (Delft University of Technology) Andrzej Wasowski (IT University of Copenhagen)||How ROS cares for Quality||As part of the EU H2020 project ROSIN promoting the usage of ROS for industrial applications, we investigate how the ROS community takes care of quality. The goal is to understand quality problems and to address them. We will report our preliminary findings based on a.) analysis of bug reports in ROS packages and ROS based projects; b.) interviews with both junior and core members of the ROS community; and c.) analysis of the ROS wiki and other available resources.||Slides
|17:20||Jordan Allspaw and Carlos J.R. Ibarra Lopez (University of Massachusetts Lowell)||ROS.NET Unity for Multiplatform applications||We introduce ROS.NET, a series of C# projects that allow a managed .NET application to communicate with traditional ROS nodes. We then present a wrapper that allows Unity applications to integrate with ROS. Unity is a game design tool which can be used as a 3D rendering engine and a physics engine. We present two applications combining ROS and Unity: one in the form of a ROS virtual reality engine, usable for robot visualization and control, and another in the form of a Project Tango device driver, which can also be used for visualization and control, and which we plan to augment for 3D scanning and reconstruction.||Slides
|17:25||Perrine Aguiar and Ruben Smits (Intermodalics)||Using Google Tango with ROS||For developers who want to extend their robot with new sensors for indoor positioning and 3D perception, Intermodalics created the Tango ROS Streamer app. This Android app for Tango-compatible devices streams real-time 3D pose estimates from Tango’s visual-inertial odometry (VIO) algorithms, camera images, and point clouds into the ROS ecosystem. The app is freely available on the Play Store and its code is fully open source.||Slides
|17:30||Levi Armstrong (Southwest Research Institute)||Robotic Path Planning for Geometry-Constrained Processes||This proposal covers the development of a framework for the automated generation of efficient tool path plans from 3D geometry for industrial processes such as painting or sanding. The work is organized into three main software modules. The first module analyzes 3D data to extract features salient to the desired process. The second module works on these features to generate tool paths that optimally perform the process on individual features. The final module, sequence planning, determines the optimal ordering for processing the entire part.||Slides
|17:35||Marco Esposito and Salvatore Virga (Technical University of Munich)||easy_handeye: hand-eye calibration for humans||Hand-eye calibration is a “necessary evil” for enabling the interaction between a robot and its environment, including humans. Determining the precise geometric transformation between the coordinate systems of the robot and the utilized camera(s) is as annoying as it is important: without it, errors of multiple centimeters appear already at a meter of distance. easy_handeye is a new ROS package that aims at facilitating the computation and management of hand-eye calibrations, while keeping the library completely generic with respect to hardware and encouraging the user to employ best practices known to date.||Slides
|17:40||Darby Taehoon Lim, Yoonseok Pyo, and Leon Ryuwoon Jung (ROBOTIS)||Introducing OpenManipulator; the full open robot platform||This announcement will introduce OpenManipulator, one of the TurtleBot3 Friends. The previous TurtleBot series performed manipulation through the ‘TurtleBot Arm’; in TurtleBot3, that role is filled by OpenManipulator. The ROS-enabled OpenManipulator is a fully open robot platform consisting of open software, open hardware, and the OpenCR embedded board. It is expected that ROS users will be able to upgrade TurtleBot3 with ease. Our goal is to support most of the functionality needed in a service, academic, research, and educational robot platform through TurtleBot3 and OpenManipulator.||Slides
|17:45||Ilia Baranov and Tony Baltovski (Clearpath Robotics)||Turtlebot Euclid - A better intro to ROS||Turtlebot Euclid is a new project by Clearpath Robotics in partnership with iRobot and Intel. The talk will present a look at the platform and design principles that went into it. This includes learnings from the previous generations of Turtlebots, advances in sensors and computational power, and access to better community software. The Turtlebot Euclid shows a clear commitment by large industrial partners to ROS, with the Euclid module shipping by default with ROS Kinetic. Demonstration of the ease of use for new students from mobile devices and remote usage will also be shown.||Slides
|18:00||Everyone||Reception and Poster Session Sponsored by Fetch Robotics|
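The Secure ROS talk above mentions policies supplied at execution time as a YAML file naming authorized publishers, subscribers, parameter getters/setters, and service providers/requesters at the IP address level. As a purely hypothetical sketch of what such a policy might express (the field names and addresses here are our illustration, not Secure ROS’s actual schema):

```yaml
# Hypothetical policy sketch -- field names and IPs are illustrative only,
# not the actual Secure ROS file format.
topics:
  /cmd_vel:
    publishers:  [10.0.0.5]          # only the planner machine may publish
    subscribers: [10.0.0.7]          # only the base controller may subscribe
parameters:
  /max_speed:
    getters: [10.0.0.5, 10.0.0.7]
    setters: [10.0.0.2]              # only the operator console may change it
services:
  /reset_odometry:
    providers:  [10.0.0.7]
    requesters: [10.0.0.2]
```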
Day 2, September 22nd
|9:00||Martin Pecka (Czech Technical University in Prague) Sergio Caccamo (Kungliga Tekniska högskolan (KTH Stockholm)) Renaud Dube (ETH Zurich) Vladimír Kubelka (Czech Technical University in Prague)||ROS for Search and Rescue Robotics: Tools and Lessons learned during TRADR||Search and rescue robotics is an extremely challenging and broad area of robotics that has recently been experiencing enormous progress. In the last 3 years, the EU project TRADR investigated many aspects of the aforementioned field. With this talk, we would like to share with the ROS community the experience acquired in the development of our system based on advanced use of ROS, in testing and using various hardware, and in dealing with the end-users that compose the human-robot teams during search and rescue missions.||Slides
|9:40||Ingo Lütkebohle (Bosch Corporate Research)||Determinism in ROS – or when things break /sometimes/ and how to fix it…||ROS’s foundational style, the asynchronous, loosely coupled compute graph, is great for re-use and distribution, but there’s a catch: nothing guarantees execution ordering. This means the order in which callbacks and timers are executed can change even when inputs are the same. In many important cases, this leads to different results and – subtly or not so subtly – changes the robot’s behavior. As an example, in the common move_base node, we found reaction times varying between 50 and 200 ms, while pure computation time was only 20 ms. I will show why this happens, and how to address it, both in move_base and in general.||Slides
|10:20||Michael Naderhirn, Mischa Köpf, and Josef Mendler (Kontrol GmbH)||Model-based Design for Safety Critical Controller Design with ROS and Gazebo||This presentation gives an overview of our “Kontrol” development environment for safety-critical controllers using ROS and Gazebo. We first analyze existing standards for safety-critical controllers for different applications and present the results of an extensive industry survey, which concludes that 70-80% of development costs are spent during the serial development phase. To overcome this burden, we present our approach: a model-based development environment which significantly reduces this cost. We show how ROS and Gazebo can be integrated into one development tool. Finally, we demonstrate automatic code generation for ROS and ROS2 nodes using Scilab.||Slides
|11:30||Justin Huang and Maya Cakmak (University of Washington)||Reactive web interfaces with Polymer and ROS||This talk will introduce a set of web components, built on top of Robot Web Tools, that make it easy to build complex, ROS-integrated web applications without writing much code. Using the Polymer library with these components helps to make applications that are accessible and mobile-friendly. We will show how to use these components and show some common web programming patterns. Additionally, we will showcase some complex web applications we have developed with these tools, including a programming by demonstration interface, a web-based version of RViz, and a ROS graph explorer utility.||Slides
|11:50||Juan Ignacio Ubeira and Julián Cerruti (Ekumen)||Developing Android robots||Android devices have become powerful and popular computing systems around the world. Their processing capabilities, along with the hardware integration of sensors, communication peripherals and GPUs make them interesting alternative platforms to build a robot on. Recent incorporation of AR technology in mobile devices like Google Tango improves their perception of the physical environment, opening a new range of possibilities. In this session we will show how to build an Android-powered robot that we call Tangobot - an autonomously navigating robot consisting of a Tango phone and a Kobuki base - as a ready to use starting point to build other Android-powered robots.||Slides
|12:10||Takeshi Ohkawa (Utsunomiya University) Yutaro Ishida (Kyushu Institute of Technology) Yuhei Sugata (Utsunomiya University) Hakaru Tamukoh (Kyushu Institute of Technology)||ROS-Compliant FPGA Component Technology - FPGA installation into ROS||Imagine what happens if you can use an FPGA as a ROS node! FPGAs are known as power-efficient hardware platforms for applications including image recognition processing as well as deep neural networks. However, the cost of integrating FPGAs into robot systems is high. To solve this problem, we propose ROS-compliant FPGA component technology, which makes it easy to integrate FPGA devices into robot systems. Two demonstration systems are exhibited: 1) a ROS-compliant FPGA component for feature extraction from camera images, and 2) a restaurant service robot using an FPGA in a ROS system.||Slides
|13:50||Sebastian Pütz (Osnabrück University) Jorge Santos Simón (Magazino GmbH)||Introducing a highly flexible navigation framework: Move Base Flex||We introduce Move Base Flex (MBF) as a backwards-compatible replacement for move_base. MBF can use existing plugins for move_base, and provides an enhanced version of the same ROS interface. It exposes action servers for planning, controlling and recovering, providing detailed information on the current state and the plugin’s feedback. An external executive logic can use MBF and its actions to perform smart and flexible navigation strategies. Magazino has successfully deployed MBF at customer facilities to control TORU robots in highly dynamic environments. Furthermore, MBF enables the use of other map representations, e.g. meshes.||Slides
|14:10||David V Lu (Locus Robotics)||Fundamentals of Local Planning||This talk explores the fundamental concepts of local planners, and how different assumptions and implementations affect the robot behavior. We also present a modular local planner which breaks the monolithic dwa_local_planner into cleaner independent components that can be loaded as plugins. By mapping the fundamental concepts of local planning to different modules, our planner presents a more gradual learning curve, a higher degree of customizability and increased debugging/introspection power.||Slides
|14:30||Luca Marchionni (PAL Robotics)||How to design ROS-powered robots||At PAL Robotics, we use the ROS framework to design software that is modular, configurable and testable. All of our robots, from small mobile bases to human-sized bipeds, are the result of a process of continual review, adaptation and improvement. In this presentation, we will reveal some of the lessons we’ve learned when developing our software and how to design modular components for our robots. The control software architecture, based on OROCOS and ros_control, will be presented together with the ros_controllers we’re currently using. We will focus, in particular, on Whole Body Control as an efficient redundancy resolution controller that allows operators to generate real-time motions on anthropomorphic robots.||Slides
|14:50||Everyone||Lightning Talks II||Slides: 201 202 203 204 205 206 208 210 211 212 213 214 215||
|16:05||Matt Robinson (Southwest Research Institute) Mirko Bordignon (Fraunhofer IPA) Min Ling Chan (ARTC - A*Star)||ROS-Industrial: private and public funding at work to advance ROS worldwide||ROS-Industrial has matured into a worldwide initiative involving more than 50 organizations across its three regional consortia, including OEMs and primary actors in the manufacturing landscape. This strong private backing is complemented by the support of generous public funding. The talk will illustrate the current lines of action leveraging such funding and aiming to supplement the community-based spirit of ROS with the traits of a “digital industrial platform for robotics”. They include: the establishment of a development team for the Asia Pacific region; training events with a curriculum specifically developed for automation professionals; broader involvement of system integrators; ad-hoc technical developments with mixed funding schemes.||Slides
|16:25||Michael ‘v4hn’ Görner, Philipp Ruppel, and Norman Hendrich (Hamburg University, Group TAMS)||Upgrading MoveIt!||MoveIt! is the main ready-to-use mobile-manipulation framework in ROS, and chances are that you use it to control your robotic arm. This talk will give you a number of reasons to upgrade your MoveIt! setup, and it will point out a number of ways to upgrade your overall MoveIt! experience, both using the framework off-the-shelf and utilizing third-party components. In particular, a new kinematics plugin, bioik, is presented that allows for combinations of various constraint types and drastically improves upon previous plugins in terms of flexibility, quality of approximate solutions, and performance.||Slides
|16:45||Adam Allevato (Open Robotics) Karsten Knese (Bosch)||Using ROS2 for Vision-Based Manipulation with Industrial Robots||This talk describes the development of two ROS 2 manipulation pipelines using industrial robots. Within the latest ROS 2 release, beta2, key features such as namespaces, lifecycle nodes and composition allow a control architecture where controllers can be dynamically loaded (at runtime) and managed externally. ROS 2 multi-node executors also allow us to build a low-overhead 3D computer vision pipeline. We describe how these concepts are leveraged to control a UR5 industrial robot (Linux) and an ABB YuMi robot (Windows). We investigate the methodology, challenges, and best practices involved in developing these systems in ROS 2.||Slides
|17:05||Ugo Cupcic (Shadow Robot)||We built an open sandbox for training robotic hands to grasp things||Ugo Cupcic, Chief Technical Architect of the Shadow Robot Company, will be drawing upon his work developing robotic graspers to discuss public simulation sandboxes for training robotic hands to grasp objects. You’ll learn to: 1) Spawn (and build) a simulation easily in the cloud (or locally - in an OS-agnostic way thanks to Docker) with web interfaces for easy visualisation 2) Use Keras to easily train a model for grasp stability prediction 3) Use the sandbox to reinforce a demonstrated grasp using Bayesian learning and replay it on the real robot||Slides
|17:25||Ryan Gariepy||Closing Remarks||Slides
Call For Proposals
Proposal submission deadline: June 25th, 2017
Presentations on all topics related to ROS are invited. Examples include introducing attendees to a ROS package or library, exploring how to use tools, manipulating sensor data, and building applications for robots.
Women, members of minority groups, and members of other under-represented groups are encouraged to submit presentation proposals to ROSCon.
Proposals will be reviewed by a program committee that will evaluate fit, impact, and balance.
We cannot offer presentations that are not proposed! If there is a topic on which you would like to present, please propose it. If you have an idea for an important topic that you do not want to present yourself, please post it for discussion on ROS Discourse.
All ROS-related work is invited. Topics of interest include:
- Best practices
- New packages
- Robot-specific development
- Robot simulation
- Safety and security
- Embedded systems
- Product development & commercialization
- Research and education
- Enterprise deployment
- Community organization and direction
- Testing, quality, and documentation
- Robotics competitions and collaborations
To get an idea of the content and tone of ROSCon, check out the slides and videos from previous years.
A session proposal must include:
- Presenter (name and affiliation)
- Recommended duration: Short (~20 minutes) or Long (~40 minutes)
- Summary [maximum 100 words]: to be used in advertising the presentation
- Description [maximum 1000 words]: outline, goals (what will the audience learn?), pointers to packages to be discussed
Please be sure to include in your proposal enough information for the program committee to evaluate the importance and impact of your presentation. Links to publicly available resources, including code repositories and demonstration videos, are especially helpful.
- Elizabeth Croft
- Tully Foote
- Roberta Friedman
- Ryan Gariepy
- Brian Gerkey
- Deanna Hood
- Niharika Arora (Fetch Robotics)
- Elizabeth Croft (UBC)
- Denise Eng (Clearpath Robotics)
- Tully Foote (Open Robotics)
- Ryan Gariepy (Clearpath Robotics)
- Brian Gerkey (Open Robotics)
- Deanna Hood (Open Robotics)
- Kaijen Hsiao (Mayfield Robotics)
- Ingo Lütkebohle (Bosch)
- Ian McMahon (Rethink Robotics)
- Allison Thackston (Toyota Research Institute)
ROSCon 2017 Diversity Scholarships
The ROSCon 2017 organizing committee aims for ROSCon to represent the entire ROS community, which is diverse and global. In addition to promoting technology that is open source, we also strive to ensure that our communities themselves are as open and accessible as possible, since we recognize that diversity benefits the ROS ecosystem as a whole.
Whoever you are, whatever you do, and wherever you do it, if you’re interested in ROS, then we want you to join us at ROSCon. To help reduce the financial barriers to conference attendance, the ROSCon organizing committee is offering a number of scholarships to members of traditionally underrepresented groups in the tech community.
For more information please see the announcement on the ROS blog.
ROSCon has been held annually since 2012. If you’d like to know more, we have archives of all the past programs, with recordings of the talks and most of the slides. The archives can be found at the links below.
- ROSCon 2016 Seoul, Korea
- ROSCon 2015 Hamburg, Germany
- ROSCon 2014 Chicago, USA
- ROSCon 2013 Stuttgart, Germany
- ROSCon 2012 St. Paul, USA
Code of Conduct
To ensure a safe environment for everybody, we expect all participants to follow the conference code of conduct.
Presenters are responsible for providing a laptop to connect to the podium. Please format all presentations in a 16:9 aspect ratio for optimal display on the screen and in the video archive.
If you plan to present a lightning talk, please be sure to visit the registration desk in the Burrard Foyer to check in and deliver your presentation before 12:30 pm on the day you plan to present.
Please be sure to format your poster in A0 landscape format. When you arrive onsite, please inform the registration staff of your poster presentation. A ROSCon representative will provide you with materials to affix your poster to the display board and take you to the location of your presentation. We encourage you to put your presentation up as early as possible and leave it up for the duration of the conference.