Computational models for behavior prediction require a deep understanding of human decision-making mechanisms, encompassing both observable actions and latent cognitive states. Traditional behavior prediction models often focus on external actions without considering the internal decision-making mechanisms that shape those behaviors. To address this limitation, we propose a hierarchical framework that explicitly integrates cognitive processes with behavioral dynamics using inverse reinforcement learning. The model captures the interplay between two agents: a cognitive agent that represents latent cognitive states and produces guiding values, and a behavior agent that translates environmental states and guiding values into observable actions.
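To make the hierarchy concrete, the sketch below illustrates one possible reading of the two-agent structure. The class names (CognitiveAgent, BehaviorAgent), the linear guiding-value mapping, the tanh state update, and the softmax action policy are all illustrative assumptions rather than the framework's actual specification; the inverse reinforcement learning step that would recover the reward weights from observed behavior is only indicated in a comment.

```python
import numpy as np

class CognitiveAgent:
    """Maintains a latent cognitive state and emits a scalar guiding value.

    The state update and value mapping here are illustrative placeholders;
    the paper does not specify their functional form.
    """
    def __init__(self, n_latent=4, seed=0):
        rng = np.random.default_rng(seed)
        self.state = rng.normal(size=n_latent)          # latent cognitive state
        self.value_weights = rng.normal(size=n_latent)  # maps state -> guiding value

    def update(self, observation):
        # Hypothetical recurrent-style update of the latent cognitive state.
        self.state = np.tanh(0.9 * self.state + 0.1 * observation.mean())
        return self.state

    def guiding_value(self):
        # Guiding value passed down to the behavior agent.
        return float(self.value_weights @ self.state)


class BehaviorAgent:
    """Translates the environmental state and guiding value into an action."""
    def __init__(self, n_actions=3, n_features=4, seed=1):
        rng = np.random.default_rng(seed)
        # Reward weights; in the full framework these would be recovered
        # from observed behavior via inverse reinforcement learning.
        self.reward_weights = rng.normal(size=(n_actions, n_features))

    def act(self, env_state, guiding_value):
        # Score each action by a linear reward modulated by the guiding value,
        # then choose via a softmax (Boltzmann) policy.
        scores = self.reward_weights @ env_state + guiding_value
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        return int(np.argmax(probs)), probs


# Usage: one interaction step of the hierarchical model.
cognitive = CognitiveAgent()
behavior = BehaviorAgent()
env_state = np.array([0.2, -0.5, 1.0, 0.3])   # toy environmental features
cognitive.update(env_state)
action, probs = behavior.act(env_state, cognitive.guiding_value())
print(action, probs.round(3))
```

The key design point the sketch conveys is the directionality of the hierarchy: the cognitive agent never selects actions itself; it only shapes the behavior agent's reward through the guiding value.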
We apply this framework to the domain of persuasive technologies and behavioral interventions, particularly in the context of promoting prosocial behavior. Through simulations, we explore two key hypotheses about how behavior emerges.
Our simulation results indicate that incorporating the hierarchical structure of human decision-making significantly improves the interpretability of modeled behavior and provides actionable insights for designing personalized interventions.
In this study, we ground these explorations in voluntary work settings.
This study contributes to the field by integrating cognitive mechanisms into computational behavior modeling. We aim to provide a more contextual understanding of behavior, both to enhance the predictive power of behavior models and to inform strategic interventions that are better aligned with individual cognitive states.