Model-based reinforcement learning for ultrasound-driven autonomous microrobots

Abstract

Reinforcement learning is emerging as a powerful tool for microrobot control, as it enables autonomous navigation in environments where classical control approaches fall short. However, applying reinforcement learning to microrobotics is difficult due to the need for large training datasets, slow convergence on physical systems and poor generalizability across environments. These challenges are amplified in ultrasound-actuated microrobots, which require rapid, precise adjustments in a high-dimensional action space that are often too complex for human operators. Addressing these challenges requires sample-efficient algorithms that adapt from limited data while managing complex physical interactions. To this end, we implemented model-based reinforcement learning for autonomous control of an ultrasound-driven microrobot, which learns from rollouts in recurrent imagined environments. Our non-invasive, AI-controlled microrobot offers precise propulsion and learns efficiently from images in data-scarce settings. After transferring from a pretrained simulation environment, we achieved sample-efficient collision avoidance and channel navigation, reaching a 90% success rate in target navigation across various channels within an hour of fine-tuning. Moreover, our model initially generalized to new environments, succeeding in 50% of tasks and improving to over 90% after 30 min of further training. We further demonstrated real-time manipulation of microrobots in complex vasculatures under both static and flow conditions, underscoring the potential of AI to revolutionize microrobotics in biomedical applications.
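The core technique summarized above is a world model that encodes camera or ultrasound images into a latent state, predicts forward with a recurrent dynamics model, and improves the controller on imagined rollouts rather than on scarce real interactions. The sketch below is a minimal, self-contained illustration of that pattern, using a hand-initialized toy latent model and a random-shooting planner over imagined trajectories; every name, dimension and the choice of planner are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of model-based RL with imagined rollouts in a recurrent
# latent model. All names, sizes and the planner are illustrative
# placeholders, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

LATENT, ACTION, HORIZON = 8, 2, 15  # toy sizes; the real system differs


class RecurrentWorldModel:
    """Toy recurrent latent dynamics: z' = tanh(Wz z + Wa a), r = wr . z'."""

    def __init__(self):
        self.Wz = rng.normal(scale=0.3, size=(LATENT, LATENT))
        self.Wa = rng.normal(scale=0.3, size=(LATENT, ACTION))
        self.wr = rng.normal(scale=0.3, size=LATENT)

    def encode(self, image: np.ndarray) -> np.ndarray:
        # Stand-in for a learned image encoder (e.g. a CNN over frames);
        # here we just squash the first few pixels into a latent vector.
        return np.tanh(image.reshape(-1)[:LATENT])

    def step(self, z: np.ndarray, a: np.ndarray):
        # One recurrent transition in latent space plus a learned reward head.
        z_next = np.tanh(self.Wz @ z + self.Wa @ a)
        reward = float(self.wr @ z_next)
        return z_next, reward


def imagined_return(model: RecurrentWorldModel, z0: np.ndarray,
                    actions: np.ndarray, gamma: float = 0.97) -> float:
    """Roll the world model forward in imagination; no real-environment calls."""
    z, total = z0, 0.0
    for t, a in enumerate(actions):
        z, r = model.step(z, a)
        total += gamma ** t * r
    return total


def plan_action(model: RecurrentWorldModel, image: np.ndarray,
                n_candidates: int = 256) -> np.ndarray:
    """Random-shooting planner: score candidate action sequences in imagination
    and execute only the first action of the best sequence."""
    z0 = model.encode(image)
    candidates = rng.uniform(-1, 1, size=(n_candidates, HORIZON, ACTION))
    returns = [imagined_return(model, z0, seq) for seq in candidates]
    return candidates[int(np.argmax(returns))][0]


if __name__ == "__main__":
    model = RecurrentWorldModel()      # would be trained on logged image data
    frame = rng.normal(size=(16, 16))  # stand-in for one ultrasound frame
    print("chosen action:", plan_action(model, frame))
```

Planning or policy learning on imagined latent trajectories is what makes such an approach sample efficient: each real image seeds many synthetic rollouts, so most decision-making steps cost no physical experiment time, which matches the abstract's emphasis on learning in data-scarce environments.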