Bardienus Pieter Duisterhof

Email: bduister@cmu.edu

Bio

I am a Ph.D. student in the Robotics Institute at Carnegie Mellon University (CMU), advised by Jeffrey Ichnowski. I am currently a research intern in the DUSt3R Group at NAVER Labs Europe, advised by Jérôme Revaud and Vincent Leroy. My interests lie at the intersection of perception and robot manipulation of challenging objects, such as deformable and transparent objects. I am a recipient of the 2023 CMLH Fellowship in Digital Health Innovation.

During my first year at CMU I worked with Sebastian Scherer on geometric camera calibration. Prior to that, I completed my Bachelor's and Master's degrees in Aerospace Engineering at Delft University of Technology in the Netherlands. Advised by Guido de Croon, I studied efficient bio-inspired algorithms for fully autonomous nano drones. In 2019 I was a visiting student in Vijay Janapa Reddi's Edge Computing Lab at Harvard University, where we studied deep reinforcement learning for tiny robots.

I am passionate about creating a future where complex robotic automation is scalable, safe, and beneficial.

Students

I am always looking for collaborators; shoot me an email if you would like to work with me!

Google Scholar / Twitter / LinkedIn / GitHub

News

  • [September, 2024] Cloth-Splatting has been accepted at CoRL 2024!

  • [August, 2024] DeformGS has been accepted at WAFR 2024!

  • [July, 2024] I started my internship at NAVER Labs Europe! Excited to work with Jérôme Revaud, Vincent Leroy, and the rest of the DUSt3R team.

  • [January, 2024] 2 papers accepted at ICRA 2024! See you in Japan 🇯🇵.

  • [November, 2023] Check out our recent work on MD-Splatting, a method for dense tracking and novel view synthesis of cloth 🧣.

  • [July, 2023] Our paper on NeRFs for transparent objects has been accepted for a 🌟spotlight🌟 presentation at the TRICKY Workshop at ICCV 2023.

  • [April, 2023] Thanks to the CMLH Fellowship in Digital Health Innovation for generously supporting my research!

Research

MASt3R-SfM: a Fully-Integrated Solution for Unconstrained Structure-from-Motion
Bardienus P. Duisterhof, Lojze Žust, Philippe Weinzaepfel, Vincent Leroy, Yohann Cabon, Jérôme Revaud

arXiv / code

MASt3R for 1000+ unordered images!

DeformGS: Scene Flow in Highly Deformable Scenes for Deformable Object Manipulation
Bardienus P. Duisterhof, Zhao Mandi, Yunchao Yao, Jia-Wei Liu, Jenny Seidenschwarz, Mike Zheng Shou, Deva Ramanan, Shuran Song, Stan Birchfield, Bowen Wen, Jeffrey Ichnowski
WAFR 2024
project website / arXiv / data / code / X thread

Deformable objects are common in household, industrial, and healthcare settings. Tracking them would unlock many applications in robotics, generative AI, and AR. How? Check out DeformGS: a method for dense 3D tracking and dynamic novel view synthesis of deformable cloth in the real world.

Cloth-Splatting: 3D State Estimation from RGB Supervision for Deformable Objects
Alberta Longhini*, Marcel Büsching*, Bardienus P. Duisterhof, Jens Lundell, Jeffrey Ichnowski, Mårten Björkman, Danica Kragic
CoRL 2024
project website / paper

We present Cloth-Splatting: a method for accurate state estimation of deformable objects from RGB supervision. Cloth-Splatting leverages a graph neural network (GNN) as a prior to improve tracking accuracy and speed up convergence.

DynOMo: Online Point Tracking by Dynamic Online Monocular Gaussian Reconstruction
Jenny Seidenschwarz, Qunjie Zhou, Bardienus P. Duisterhof, Deva Ramanan, Laura Leal-Taixé
Under Review
arXiv

Online 3D tracking can unlock many new applications in robotics, AR, and VR. Most prior work has focused on offline tracking, requiring an entire sequence of posed images. We present DynOMo, a method for simultaneous 3D tracking, 3D reconstruction, novel view synthesis, and pose estimation!

Residual-NeRF: Learning Residual NeRFs for Transparent Object Manipulation
Bardienus P. Duisterhof, Yuemin Mao, Si Heng Teng, Jeffrey Ichnowski
ICRA 2024
project website / arXiv / video / code

In this work, we propose Residual-NeRF, a method to improve depth perception and training speed for transparent objects. Robots often operate repeatedly in the same area, such as a kitchen. By first learning a background NeRF of the scene without the transparent objects to be manipulated, we improve depth perception quality and speed up training.

GSL-Bench: High Fidelity Gas Source Localisation Benchmarking
Hajo H. Erwich, Bardienus P. Duisterhof, Guido de Croon
ICRA 2024
project website / paper / video / code

Gas-source localization is an important task for autonomous robots. We present GSL-Bench, the first standardized benchmark for gas-source localization. GSL-Bench uses NVIDIA Isaac Sim for high visual fidelity and OpenFOAM for realistic gas simulations.

TartanCalib: Iterative Wide-Angle Lens Calibration using Adaptive SubPixel Refinement of AprilTags
Bardienus P. Duisterhof, Yaoyu Hu, Si Heng Teng, Michael Kaess, Sebastian Scherer
project website / arXiv / video / code

In this work we present our methodology for accurate wide-angle camera calibration. Our pipeline generates an intermediate camera model and leverages it to iteratively improve feature detection and, eventually, the camera parameters.

Sniffy Bug: A Fully Autonomous Swarm of Gas-Seeking Nano Quadcopters in Cluttered Environments
Bardienus P. Duisterhof, Shushuai Li, Javier Burgués, Vijay Janapa Reddi, Guido C.H.E. de Croon
IROS 2021
arXiv / video / code

We developed a swarm of tiny autonomous drones that can localize gas sources in unknown, cluttered environments. Bio-inspired AI allows the drones to tackle this complex task without any external infrastructure.

Tiny Robot Learning (tinyRL) for Source Seeking on a Nano Quadcopter
Bardienus P. Duisterhof, Srivatsan Krishnan, Jonathan J. Cruz, Colby R. Banbury, William Fu, Aleksandra Faust, Guido C.H.E. de Croon, Vijay Janapa Reddi
ICRA 2021
paper / video / code

We present fully autonomous source seeking onboard a highly constrained nano quadcopter, contributing application-specific system and observation-feature design to enable onboard inference of a deep-RL policy.

A Tailless Flapping Wing MAV Performing Monocular Visual Servoing Tasks
Diana A. Olejnik, Bardienus P. Duisterhof, Matej Karásek, Kirk Y. W. Scheper, Tom van Dijk, Guido C.H.E. de Croon
Unmanned Systems, Vol. 08, No. 04, pp. 287-294, 2020
paper / video

This paper describes the computer vision and control algorithms used to achieve autonomous flight with a ∼30 g tailless flapping-wing robot, which we used to compete in the indoor micro air vehicle competition at the International Micro Air Vehicle Conference and Competition (IMAV 2018).

Teaching
16-820 at CMU: Advanced Computer Vision
16-720 at CMU: Introduction to Computer Vision
AE2235-I at TU Delft: Aerospace Systems & Control Theory
Media Coverage
Forbes
IEEE Spectrum Video Friday
Robohub
Bitcraze Blog
PiXL Drone Show
Awards
Best Graduate in Engineering, TU Delft, academic year 2020-2021
Best Graduate in Aerospace Engineering, TU Delft, academic year 2020-2021
Innovation Award, IMAV 2018 Autonomous Drone Race

Modified version of template from here