Our research was featured in the Neue Zürcher Zeitung. ESIM readily provides ground-truth depth and optic flow maps. Systems with such capabilities will have an advantage in remaining resilient to unforeseen changes and deviations from prior assumptions. I am Chief Technology Officer of Duckietown, a robotics education and outreach effort. Simultaneous calibration of odometry and sensor parameters for mobile robots. Learning Control (PDF). We have a new opening in our team for a Drone Research Engineer.
Please use our form to send your comment to us. Our paper titled “The Foldable Drone: …” And my first job was to charge the batteries. December 14: Paper accepted in RA-L. Our work on differential flatness of quadrotor dynamics subject to rotor drag has been accepted for publication in the IEEE Robotics and Automation Letters. Our paper “Rapid Exploration with Multi-Rotors: …” The data also include intensity images, inertial measurements, ground truth from a motion-capture system, synthetic data, as well as an event camera simulator! Custom Projects: From time to time, project supervisors will develop custom student research projects to fit a student’s particular interests or skills.
Help us improve the list by adding more entries! No commercial reproduction, distribution, display, or performance rights in this work are provided. We provide the code of our FAST event-based corner detector.
The same model can be used for learning the dynamics of different sensors (range-finder, camera, field sampler). Robotics and Perception Group. January 9: Our paper on efficient decentralized visual place recognition was accepted to RA-L! Could a “brain in a jar” control an unknown robotic body to which it is connected, and use it to achieve useful tasks, without any prior assumptions about the body’s sensors and actuators?
Event cameras allow predicting the steering angle of a car more robustly and accurately than a standard camera at night and in high-dynamic-range scenes.
We recommend that you carefully review their area of research before you contact them. September 27: Agile Flight through Narrow Gaps!
PhD dissertation: Bootstrapping Vehicles | Andrea Censi’s website
Philipp Foehn, a PhD student in our lab, and Naveen Kuppuswamy, a former visiting researcher, received the Best Student Paper Award Finalist prize at RSS in Boston for our work on trajectory optimization for agile quadrotor maneuvers with cable-suspended payloads.
Our recent work on quadrotor flight through narrow gaps using only onboard sensing and computing is featured on MIT Technology Review.
If you are interested in doing a custom student research project, please email the project supervisor of your choice directly. We have thoroughly tested that the ported model produces output similar to the original Matlab implementation, as well as excellent place recognition performance on KITTI. Our paper “The Foldable Drone: …” Software in the Loop: The source code is released under a GPLv3 licence.
Compmake overview (Part 2). For more details, see the ICRA’14 paper. I’m interested in co-design problems that couple sensing, computation, and actuation in a non-trivial way, especially from the point of view of design minimality and “joint inference and control”.
We are happy to announce the release of the code for recovering the brightness map that caused the events to be triggered. Our flying robot can then use this classifier to guide a ground robot through a disaster area.
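The idea behind event-based brightness recovery can be sketched in a few lines: each event reports that the log-brightness at a pixel changed by (approximately) the contrast threshold, so accumulating signed events yields a brightness-change map. This is a simplified illustration under assumed names and an assumed threshold value, not the released code:

```python
import numpy as np

def integrate_events(events, height, width, contrast=0.1):
    """Accumulate events into a log-brightness-change map.

    Each event (x, y, polarity) nudges its pixel's log-brightness by
    +/- the contrast threshold. `contrast` is an assumed sensor
    parameter; real pipelines also regularize and handle noise.
    """
    log_di = np.zeros((height, width))
    for x, y, pol in events:  # pol is +1 (ON) or -1 (OFF)
        log_di[y, x] += pol * contrast
    return log_di

# Two ON events and one OFF event at pixel (0, 0):
m = integrate_events([(0, 0, +1), (0, 0, +1), (0, 0, -1)], 4, 4)
print(m[0, 0])  # → 0.1
```

In practice the released code solves a regularized reconstruction rather than raw accumulation, but the signed-accumulation step above is the core measurement model.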
Please note that the decision of whether to develop a custom student project is at the full discretion of the project supervisor. July 12: List of Event-based Vision Resources available. We are happy to start the List of Event-based Vision Resources, which contains links to event camera devices as well as papers, videos, code, etc. A demonstration of the capabilities of this controller is shown in this video. Welcome to Titus Cieslewski as a new PhD student in our lab!
The goal of bootstrapping is to create agents that can learn “everything” from scratch, including a torque-to-pixels model of their robotic body.
The 2 km dataset consists of time-synchronized aerial high-resolution images, GPS and IMU sensor data, ground-level street-view images, and ground truth data. May 12: Davide Scaramuzza speaks at the Maker Festival of his home town! Learning diffeomorphism models of robotic sensorimotor cascades.
The lifetime of an event is the time it takes for the moving brightness gradient causing the event to travel a distance of 1 pixel. Computing the exact Fisher Information Matrix (FIM) for pose tracking is hard, because the state comprises the map, which is infinite-dimensional and unknown. More details in our SSRR’14 paper.
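The lifetime definition above follows directly from the local optical flow: if the gradient moves at speed |v| pixels per second, it covers 1 pixel in 1/|v| seconds. A minimal sketch, where the function name and flow values are illustrative assumptions:

```python
import numpy as np

def event_lifetime(flow_x, flow_y):
    """Lifetime of an event: the time for the moving brightness
    gradient to travel one pixel, i.e. 1 px divided by the local
    optical-flow speed (in px/s)."""
    speed = np.hypot(flow_x, flow_y)  # flow magnitude, px/s
    if speed == 0:
        return np.inf  # no apparent motion: the event never expires
    return 1.0 / speed  # seconds

# Example: a gradient moving at 200 px/s lives for 5 ms.
print(event_lifetime(200.0, 0.0))  # → 0.005
```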