Intelligent Human-Robot Collaboration

Sponsors: National Science Foundation

Description

Intelligent human-robot collaboration on the factory floor is crucial to the high productivity and flexibility required by many modern manufacturing companies. However, intelligent, reliable, and safe communication and collaboration between humans and robots remains a major challenge, with a range of scientific and technical issues still to be solved in the areas of sensing, cognition, prediction, and control. The research objective of this project is to develop, build, and validate an intelligent human-robot collaboration framework that enables pervasive sensing, customized cognition, real-time prediction, and intelligent control, thereby ensuring operational safety and production efficiency on the factory floor.

Some of our research results include: 1) dynamic gesture design and feature extraction using the Motion History Image method, 2) multi-view gesture data collection from multiple human subjects using RGB cameras, 3) dynamic gesture recognition for human-robot collaboration using convolutional neural networks, 4) action completeness modeling with background-aware networks for weakly-supervised temporal action localization, 5) action recognition of human workers by discriminative feature pooling and a video segment attention model, and 6) construction of individualized convolutional neural networks for skeletal-data-based action recognition.
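To illustrate the Motion History Image (MHI) idea used in result 1), the sketch below shows the core update rule: pixels that moved in the current frame are set to a maximum intensity, while pixels that did not move decay toward zero, so a single grayscale image encodes both where and how recently motion occurred. This is a minimal, self-contained illustration, not the project's actual pipeline; the function name, decay value, and toy motion masks are all illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, decay=16):
    """Update a Motion History Image with one frame's binary motion mask.

    Pixels that moved in this frame are set to tau (most recent motion);
    pixels that did not move decay by `decay` per frame, floored at zero,
    so older motion fades out gradually.
    """
    return np.where(motion_mask, tau, np.maximum(mhi - decay, 0)).astype(np.float32)

# Toy example: a 4x4 scene where a "gesture" moves one pixel to the right per frame.
frames = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
frames[0][1, 0] = True
frames[1][1, 1] = True
frames[2][1, 2] = True

mhi = np.zeros((4, 4), dtype=np.float32)
for mask in frames:
    mhi = update_mhi(mhi, mask)

# The brightest pixel marks the most recent motion (mhi[1, 2] == 255);
# progressively dimmer pixels trace the earlier path of the gesture.
```

The resulting fading trail is what makes a single MHI a compact feature for a downstream CNN classifier: one image summarizes the temporal evolution of a dynamic gesture.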

Publications

  1. “Dynamic Gesture Design and Recognition for Human-Robot Collaboration with Convolutional Neural Networks,” H. Chen, W. Tao, M. C. Leu, and Z. Yin, Proceedings of the 2020 International Symposium on Flexible Automation (ISFA 2020), Jul. 5-9, 2020, Chicago, IL.
  2. “Action Completeness Modeling with Background Aware Networks for Weakly-Supervised Temporal Action Localization,” M. Moniruzzaman, Z. Yin, Z. He, R. Qin, and M. C. Leu, Proceedings of the ACM Multimedia Conference 2020, Oct. 12-16, 2020, Seattle, WA.
  3. “Design of a Real-Time Human-Robot Collaboration System Operated by Dynamic Gestures,” H. Chen, M. C. Leu, W. Tao, and Z. Yin, Proceedings of the ASME 2020 International Mechanical Engineering Congress and Exposition (IMECE 2020), Nov. 13-19, 2020, Portland, OR.
  4. “Human Action Recognition by Discriminative Feature Pooling and Video Segment Attention Model,” M. Moniruzzaman, Z. Yin, Z. He, R. Qin, and M. C. Leu, IEEE Transactions on Multimedia (under review).
  5. “An Individualized CNN System for Skeletal Data Based Action Recognition in Smart Manufacturing,” M. Al-Amin, R. Qin, M. Moniruzzaman, Z. Yin, W. Tao, and M. C. Leu, Journal of Intelligent Manufacturing (under review).
  6. “Collaborative Forward and Backward Teacher-Student Learning for Long-Term Future Action Anticipation,” M. Moniruzzaman, Z. Yin, Z. He, R. Qin, and M. C. Leu, Proceedings of the CVPR 2021 Conference (Virtual), Jun. 19-25, 2021 (under review).