Abstract
Socially Assistive Robots (SARs) have gained significant attention for their ability to interact with and support people in settings such as healthcare, education, and eldercare. However, effective decision-making in human–robot interaction remains a key challenge, particularly in uncertain environments. This paper presents a Markov Decision Process (MDP)-based framework to model emotional behavior and drive management in SARs, enabling real-time decision-making under uncertainty. The framework introduces a novel decomposition approach that separates the full MDP into an emotion-management MDP and a drive-management MDP, reducing both computational complexity and response time. Transition probabilities are modeled using Bayesian networks, and the reward functions are designed to optimize both emotional and motivational engagement. Simulations demonstrate that the decomposed MDP achieves 50% faster human persuasion with over a 99% reduction in computation time compared with the centralized model. The model also adapts dynamically to variations in human attention levels and environmental distractions. Future work will address the partial observability of emotional states to enhance robustness and real-world applicability. These results establish a foundation for more responsive, autonomous socially assistive robots.
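To make the decomposition idea concrete, the sketch below solves a tiny MDP with standard value iteration. Because one sweep of value iteration scales roughly with |S|² · |A|, splitting a product state space of size |S₁| × |S₂| into two separate MDPs of sizes |S₁| and |S₂| can cut computation dramatically, consistent with the speedup the abstract reports. All state names, transition probabilities, and rewards here are invented for illustration and are not taken from the paper.

```python
# Minimal value-iteration solver for a generic MDP, used here to illustrate
# why decomposing a large MDP into smaller sub-MDPs reduces computation.
# The "emotion management" sub-MDP below is a hypothetical example.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Solve an MDP where P[(s, a)] = [(prob, next_state), ...] and
    R[(s, a)] is the immediate reward. Returns the optimal value function."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup over all actions.
            best = max(
                R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical emotion-management sub-MDP: the robot chooses a soothing or
# an exciting interaction style to keep the person in a calm state.
states = ["calm", "agitated"]
actions = ["soothe", "excite"]
P = {
    ("calm", "soothe"): [(0.9, "calm"), (0.1, "agitated")],
    ("calm", "excite"): [(0.6, "calm"), (0.4, "agitated")],
    ("agitated", "soothe"): [(0.7, "calm"), (0.3, "agitated")],
    ("agitated", "excite"): [(0.2, "calm"), (0.8, "agitated")],
}
R = {
    ("calm", "soothe"): 1.0,
    ("calm", "excite"): 0.5,
    ("agitated", "soothe"): 0.2,
    ("agitated", "excite"): -0.5,
}

V = value_iteration(states, actions, P, R)
```

In the paper's setting, a second, analogous solver would handle the drive-management sub-MDP; each runs over its own small state space instead of the joint one.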
|  |  |
|---|---|
| Original language | English |
| Article number | 13 |
| Journal | Intelligent Service Robotics |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2026 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2025.
Keywords
- Emotional interaction
- Human robot interaction
- Markov decision process
- Socially assistive robots
ASJC Scopus subject areas
- Computational Mechanics
- Engineering (miscellaneous)
- Mechanical Engineering
- Artificial Intelligence
Fingerprint
Dive into the research topics of 'Optimizing human–robot emotional interactions with Markov decision process'. Together they form a unique fingerprint.