Interactive Learning and Adaptation Framework
We propose an interactive learning and adaptation framework that integrates Interactive Reinforcement Learning approaches into the adaptation mechanism. Interactive Reinforcement Learning (IRL) is a variant of reinforcement learning that studies how a human can be included in the agent's learning process. Human input can take the form of either feedback or guidance. Learning from Feedback treats the human input as a reinforcement signal delivered after the executed action. Learning from Guidance allows the human to intervene before execution, proposing (corrective) actions in place of the agent's selected action. To our knowledge, IRL methods have not been investigated for adapting an agent to a new environment. We therefore propose integrating them into the adaptation mechanism as policy evaluation mechanisms, used to evaluate and modify a learned policy towards an optimal one in combination with appropriate transfer learning methods.
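The two human-input channels described above can be sketched in a minimal tabular Q-learning loop. This is an illustrative sketch, not the framework's actual implementation: the class name, the epsilon-greedy policy, and the reward-shaping treatment of feedback are all assumptions for the example.

```python
import random


class InteractiveQLearner:
    """Tabular Q-learning agent accepting human feedback and guidance.

    Illustrative sketch only: the reward-shaping scheme and all
    parameter names are assumptions, not the framework's formulation.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def select_action(self, state, guidance=None):
        # Learning from Guidance: a human-proposed action replaces the
        # agent's own choice *before* execution.
        if guidance is not None:
            return guidance
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, env_reward, next_state, feedback=0.0):
        # Learning from Feedback: the human signal arrives *after* the
        # executed action and is added to the environment reward here.
        r = env_reward + feedback
        target = r + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

In this sketch, guidance short-circuits action selection while feedback only shapes the update, mirroring the before/after-execution distinction drawn above.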
Interactive Reinforcement Learning
- Learning from Feedback: We treat feedback as a personalization factor. We propose to use implicit user feedback (e.g., engagement measured through EEG signals) to enable the agent to personalize its policy so as to maximize the user's engagement and, in turn, their performance.
- Learning from Guidance: We treat guidance as a safety factor. An external supervisor can observe the interaction through a user interface and intervene or guide the system when needed. We propose to apply Active Learning methods so that the system learns when it needs guidance, minimizing the supervisor's workload.
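One common Active Learning criterion for deciding when to query the supervisor is uncertainty sampling: ask for guidance only when the agent's action values are nearly tied. The function below is a hedged sketch of that idea; the margin criterion and threshold are assumptions, not the framework's chosen query strategy.

```python
def needs_guidance(q_values, margin_threshold=0.05):
    """Uncertainty-sampling sketch: request supervisor guidance only when
    the margin between the best and second-best action values is small.

    `margin_threshold` is an assumed tunable, not a value from the paper.
    """
    ranked = sorted(q_values, reverse=True)
    margin = ranked[0] - ranked[1]
    return margin < margin_threshold
```

Querying only in low-margin states is one way to keep the supervisor's workload proportional to the agent's actual uncertainty rather than to total interaction time.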
The definition and an initial evaluation of the framework can be found here.