Learning to Cooperate in Multi-robot Task Allocation

Project

Project Details

Program
Electrical Engineering
Field of Study
Electrical and Computer Engineering
Division
Computer, Electrical and Mathematical Sciences and Engineering

Project Description

Imagine robots self-organizing into large groups to assist people with physically demanding tasks, leveraging their core capabilities in perception, manipulation, and navigation to interact with the physical world. To complete a given mission as a team, these robots must decide on their own how to allocate and carry out the tasks that make up the mission, and cooperate with one another when a task requires it. We realize this capability in multi-robot systems through a learning-based paradigm, designing computational models that train a large number of robots to work as a team; achieving a high level of autonomy in distributed information processing, decision-making, and collaborative manipulation is crucial. To enable such cooperation among many robots, the project aims to build computational models based on deep reinforcement learning (DRL). These models are designed to enhance the robots' ability to cooperate in carrying out multiple tasks using their perception, manipulation, and navigation skills. The project's further goal is to implement these models on a multi-robot platform and validate the effectiveness of the models through lab experiments.
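As a concrete (and much simplified) illustration of the learning-based paradigm described above, the sketch below trains independent learners on a stateless cooperative task-allocation game, where a shared team reward encourages robots to spread across tasks rather than duplicate effort. This is a minimal stand-in, not the project's actual DRL models; every function name, the reward design, and the hyperparameters are assumptions made for illustration only.

```python
import random

def train_task_allocation(n_robots=3, n_tasks=3, episodes=2000,
                          alpha=0.1, eps=0.2, seed=0):
    """Independent learners in a toy cooperative task-allocation game.

    Each robot keeps its own action-value table over tasks and updates it
    toward a shared team reward (the number of distinct tasks covered),
    which pushes the team to spread across tasks. All names, the reward
    design, and the hyperparameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    # q[i][t]: robot i's estimated value of claiming task t.
    q = [[0.0] * n_tasks for _ in range(n_robots)]
    for _ in range(episodes):
        # Epsilon-greedy action selection, chosen independently per robot.
        actions = [
            rng.randrange(n_tasks) if rng.random() < eps
            else max(range(n_tasks), key=lambda t: q[i][t])
            for i in range(n_robots)
        ]
        # Shared team reward: how many distinct tasks the team covers.
        reward = len(set(actions))
        # Stateless value update (no bootstrap term in this bandit setting).
        for i, a in enumerate(actions):
            q[i][a] += alpha * (reward - q[i][a])
    # Greedy joint allocation after training.
    return [max(range(n_tasks), key=lambda t: q[i][t]) for i in range(n_robots)]
```

In the actual project, the stateless tables would be replaced by deep networks conditioned on each robot's perception, and the toy reward by mission-level objectives; the cooperative structure (decentralized decisions, shared team outcome) is the part this sketch is meant to convey.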

About the Researcher

Shinkyu Park
Assistant Professor, Electrical and Computer Engineering
Computer, Electrical and Mathematical Sciences and Engineering Division

Education Profile

  • Postdoctoral Fellow, Massachusetts Institute of Technology, 2019
  • PhD, University of Maryland College Park, 2015
  • MS, Seoul National University, 2008
  • BS, Kyungpook National University, 2006

Research Interests

Professor Park's research interests are in the general areas of robotics, multi-agent decision making, and feedback control. His most recent research has been in design and control of multi-robot systems and related topics of game theory and feedback control systems, with applications to multi-robot learning and coordination.

Selected Publications

  • S. Park, M. Cap, J. Alonso-Mora, C. Ratti, and D. Rus, "Social Trajectory Planning for Urban Autonomous Surface Vessels," IEEE Transactions on Robotics, 2020.
  • S. Park, K. H. Aschenbach, M. Ahmed, W. Scott, N. E. Leonard, K. Abernathy, G. Marshall, M. Shepard, and N. C. Martins, "Animal-Borne Wireless Network: Remote Imaging of Community Ecology," Journal of Field Robotics, vol. 36, no. 6, pp. 1141-1165, 2019.
  • S. Park, J. S. Shamma, and N. C. Martins, "From Population Games to Payoff Dynamics Models: A Passivity-Based Approach," Tutorial Session at IEEE Conference on Decision and Control (CDC), pp. 6584-6601, 2019.
  • B. Gheneti, S. Park, R. Kelly, D. Meyers, P. Leoni, C. Ratti, and D. Rus, "Trajectory Planning for the Shapeshifting of Autonomous Surface Vessels," 2nd IEEE International Symposium on Multi-Robot and Multi-Agent Systems (MRS '19), pp. 76-82, 2019.
  • S. Park and N. C. Martins, "Design of Distributed LTI Observers for State Omniscience," IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 561-576, 2017.

Desired Project Deliverables

The main objective of this project is to implement multi-agent DRL in a team of mobile manipulators and validate the implementation through lab experiments. Students are expected to collaborate with lab members to explore new ideas and implement them on physical robotic systems. Students should have solid experience working with robotic systems and the Robot Operating System (ROS), as well as confidence in C++/Python programming; experience with reinforcement learning is a plus. The model design and experiment reports are expected to be delivered at the end of the internship program.

Recommended Student Background

Robotics
Machine Learning
Electrical Engineering
Computer Science