COBOTICS – Intelligent Cooperating Robots and Humans
This project will address the following problem domains over its 3-year duration:
- Communication between humans and collaborating robots (Cobots) requires simple high-level commands rather than explicit, customized programming.
- This collaboration needs intelligent automation and pattern recognition to enable spatial awareness and a shared understanding of real-world context between robots and humans.
- Current visualizations cannot handle multiple collaborating robots and humans performing real-world tasks using video, audio, and depth input.
- Current machine learning performs poorly on difficult multi-modal data embedding problems.
- Experimental data for multi-modal data/sensor fusion must be generated or simulated to enable scalable and realistic testing.
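As a minimal sketch of the last point, simulated multi-modal sensor streams can be generated and fused into a single embedding for testing. All shapes, noise models, and function names below are illustrative assumptions, not the project's actual data specification:

```python
import numpy as np

def simulate_multimodal_frames(n_frames: int, seed: int = 0):
    """Generate toy synchronized multi-modal sensor readings.

    Shapes and distributions are placeholder assumptions chosen
    only to exercise a fusion pipeline at small scale.
    """
    rng = np.random.default_rng(seed)
    video = rng.random((n_frames, 8, 8, 3))      # low-res RGB frames
    audio = rng.standard_normal((n_frames, 16))  # per-frame audio features
    depth = rng.random((n_frames, 8, 8))         # per-pixel depth maps
    return video, audio, depth

def fuse_frame(video_f, audio_f, depth_f):
    """Naive early fusion: flatten each modality and concatenate
    into one embedding vector (8*8*3 + 16 + 8*8 = 272 values)."""
    return np.concatenate([video_f.ravel(), audio_f.ravel(), depth_f.ravel()])

video, audio, depth = simulate_multimodal_frames(4)
embedding = fuse_frame(video[0], audio[0], depth[0])
```

Such synthetic streams let a fusion pipeline be tested at scale before real video, audio, and depth recordings are available.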
This project will develop solutions that will:
- extend intelligent automation and pattern recognition through fusion of monitoring data for real-world context and IoT/Cobotics data analysis.
- improve the safety and orchestration of multiple collaborating robots and humans with new visualization models, automation infrastructure, policy and constraints management, and decision making.
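To make the policy-and-constraints idea concrete, one simple safety constraint is a minimum separation distance between a cobot and a human. The agent model, distance threshold, and function names here are hypothetical illustrations, not the project's actual policy framework:

```python
from dataclasses import dataclass
import math

@dataclass
class Agent:
    """Hypothetical 2D position of a robot or human on the shop floor."""
    name: str
    x: float
    y: float

def violates_min_separation(robot: Agent, human: Agent,
                            min_dist: float = 1.0) -> bool:
    """Flag a constraint violation when the cobot comes within
    min_dist metres of a human (assumed threshold)."""
    return math.hypot(robot.x - human.x, robot.y - human.y) < min_dist

robot = Agent("cobot-1", 0.0, 0.0)
human = Agent("worker-1", 0.5, 0.5)
violates_min_separation(robot, human)  # distance ~0.71 m < 1.0 m, so True
```

A real orchestration layer would evaluate many such constraints continuously over fused sensor data and feed violations into its decision making.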
Phase 2 of the project will start in August 2018, building on the multi-modal data fusion developed in the first year to address the next level of issues. An additional CVDI-sponsored project on Intelligent Buildings will also start in August, and synergies between the two projects are expected to accelerate a number of research topics.