ABSTRACT:
As organizations increasingly adopt artificial intelligence (AI) systems to support high-stakes decisions, a pressing concern is whether AI behavior visibly aligns with the organization’s stated values. Anchored in Mayer et al.’s integrative model of trust and psychological contract breach theory, this study investigates how organizational-AI misalignment, defined as a structural inconsistency between the organization’s stated policy (“Talk”) and the AI system’s recommendation (“Walk”), affects users’ integrity-based trust in AI, perceptions of psychological contract breach, and AI recommendation reliance. Using data from 349 employees exposed to a hiring decision scenario, we find that organizational-AI misalignment significantly increases perceptions of psychological contract breach, which in turn reduce integrity-based trust in AI and diminish AI recommendation reliance. We also find that users’ baseline perceptions of AI opacity shape these dynamics: when AI is perceived as less opaque, integrity-based trust in AI is more likely to translate into reliance, but psychological contract breach under these conditions triggers stronger negative responses. Additionally, a post hoc analysis reveals that the nature of the firm’s stated hiring policy (performance-priority vs. diversity-inclusive) moderates the effect of misalignment, with value-laden messaging intensifying user backlash when AI behavior contradicts organizational Talk. Together, these results highlight two levers for fostering appropriate reliance: keeping organizational “Talk” and algorithmic “Walk” visibly consistent, and managing users’ expectations about AI opacity.
Key words and phrases: Organizational-AI misalignment, psychological contract, AI trust, AI reliance, moral integrity, AI opacity, AI recommendation reliance