A new framework allows humans to quickly teach robots what they want them to do.
The framework generates counterfactual explanations that describe what would need to change for the robot to succeed.
Feedback from humans helps fine-tune the robot's training, making the process more efficient.
The method outperforms other techniques at helping robots generalize to new environments.
The framework holds promise for robots assisting the elderly and individuals with disabilities.
Robots often struggle with objects and spaces not encountered during training.
The framework employs imitation learning to teach robots specific tasks.
After determining which elements of a scene matter for the task, the framework generates synthetic training data by altering the visual concepts that do not.
This data augmentation process helps the robot recognize objects irrespective of non-essential features.
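The augmentation idea can be sketched minimally. This is not the authors' implementation: the `augment_nonessential` function and `essential_mask` input are hypothetical names, and the "non-essential visual concept" is simplified to background pixels recolored at random, so the policy sees many variants of the same task-relevant scene:

```python
import numpy as np

def augment_nonessential(image, essential_mask, rng):
    """Create a synthetic variant of `image` in which pixels marked
    non-essential for the task are replaced with a random color,
    while task-relevant pixels are left untouched."""
    variant = image.copy()
    random_color = rng.random(3)          # random RGB in [0, 1)
    variant[~essential_mask] = random_color
    return variant

rng = np.random.default_rng(0)
img = np.zeros((4, 4, 3))                 # toy 4x4 RGB image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # pretend the object is centered
aug = augment_nonessential(img, mask, rng)
```

Training on many such variants encourages the policy to key on the masked-in object rather than incidental features like background color.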
User feedback is collected to identify visual concepts unrelated to the desired action.
Counterfactual explanations help users pinpoint which visual elements can change without affecting the task.
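The feedback step can be modeled, under simplifying assumptions, as filtering candidate visual concepts by a human's judgments; the concept names and the `select_irrelevant_concepts` helper below are illustrative, not from the source:

```python
def select_irrelevant_concepts(candidates, feedback):
    """Keep only the concepts a human marked task-irrelevant;
    these are the ones safe to vary during data augmentation."""
    return [c for c in candidates if feedback.get(c) == "irrelevant"]

candidates = ["mug color", "table texture", "mug position"]
feedback = {
    "mug color": "irrelevant",      # safe to vary
    "table texture": "irrelevant",  # safe to vary
    "mug position": "relevant",     # must be preserved
}
safe_to_vary = select_irrelevant_concepts(candidates, feedback)
# safe_to_vary == ["mug color", "table texture"]
```

Only the approved concepts would then feed the augmentation step, keeping task-critical features fixed.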
Applied in simulations, the framework accelerates robot learning compared to other methods.
Future research will test the framework on real robots and enhance data creation using generative machine-learning models.
The ultimate goal is to enable robots to reason in a semantically meaningful way, similar to humans.
This research is supported by organizations like the National Science Foundation and the MIT-IBM Watson AI Lab.