ABSTRACT:
This paper offers a novel perspective on trust in artificial intelligence (AI) systems, focusing on how users' trust in AI creators transfers to trust in AI systems. Using the agentic information systems (IS) framework, we investigate the role of AI alignment and steerability in trust transference. Through four randomized experiments, we probe three key alignment-related attributes of AI systems: creator-based steerability, user-based steerability, and autonomy. Results indicate that creator-based steerability amplifies trust transference from the AI creator to the AI system, whereas user-based steerability and autonomy diminish it. Our findings suggest that AI alignment efforts should consider the entity with whose goals and values the AI system is to be aligned, and they highlight the need for research that theorizes from a triadic view encompassing the user, the AI system, and its creator. Given the diversity of individual goals and values, we recommend that developers move beyond the prevailing "one-size-fits-all" alignment strategy. Our findings contribute to trust transference theory by delineating the boundary conditions under which trust transference holds or breaks down in the emerging human-AI environment.
Key words and phrases: AI alignment, AI trust, trust transference, creator-based steerability, user-based steerability, AI autonomy, algorithmic decision making, AI ethics