Abstract: In the face of rapidly advancing AI technology, individuals will increasingly rely on AI agents to navigate life's growing complexities, raising critical concerns about maintaining both human agency and autonomy. This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions, and thus losing agency, or having autonomy compromised by externally controlled choice architectures reminiscent of ``nudging'' practices. While the ``nudge'' framework, based on the use of choice-framing to guide individuals toward presumed beneficial outcomes, initially appeared to preserve liberty, at AI-driven scale it threatens to erode autonomy. To counteract this risk, the paper proposes a philosophic turn in AI design: AI should be constructed to facilitate decentralized truth-seeking and open-ended inquiry, mirroring the Socratic method of philosophical dialogue. By promoting individual and collective adaptive learning, such AI systems would empower users to retain control over their judgments, augmenting their agency without undermining their autonomy. The paper concludes by outlining essential features of autonomy-preserving AI systems, sketching a path toward AI that enhances human judgment rather than undermining it.
Abstract: Increases in computational scale and fine-tuning have driven a dramatic improvement in the quality of outputs of large language models (LLMs) such as GPT. Given that both GPT-3 and GPT-4 were trained on large quantities of human-generated text, we might ask to what extent their outputs reflect patterns of human thinking, in both correct and incorrect cases. The Erotetic Theory of Reason (ETR) provides a symbolic generative model of both human success and failure in thinking, across propositional, quantified, and probabilistic reasoning, as well as decision-making. We presented GPT-3, GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a recent book-length presentation of ETR, consisting of experimentally verified data points on human judgment and extrapolated data points predicted by ETR, covering correct inference patterns as well as fallacies and framing effects (the ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3 showed evidence of ETR-predicted outputs for 59% of these examples, rising to 77% in GPT-3.5 and 75% in GPT-4. Remarkably, the production of human-like fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in GPT-4. This suggests that larger and more advanced LLMs may develop a tendency toward more human-like mistakes, as the relevant thought patterns are inherent in human-produced training data. According to ETR, the same fundamental patterns are involved in both successful and unsuccessful ordinary reasoning, so that the "bad" cases could paradoxically be learned from the "good" cases. We further present preliminary evidence that ETR-inspired prompt engineering could reduce instances of these mistakes.
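To make the evaluation protocol described in this abstract concrete, the sketch below shows the general shape of such a benchmark loop: each item is posed to a model and the response is tallied against both the ETR-predicted (human-like) answer and the normatively correct answer. This is a minimal illustration under stated assumptions; the names (query_model, ETR_ITEMS, the item fields) and the paraphrased Wason item are hypothetical and do not reproduce the authors' ETR61 harness or prompts.

```python
# Illustrative evaluation loop for an ETR61-style benchmark (assumed structure,
# not the authors' code): pose each item to a model, then count how often the
# reply matches the ETR-predicted human answer vs. the normatively correct one.
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    prompt: str
    etr_predicted: str  # answer ETR predicts ordinary reasoners will give
    correct: str        # normatively correct answer (may coincide with the above)


# One classic task mentioned in the abstract, paraphrased for illustration only.
ETR_ITEMS = [
    Item(
        name="wason_selection",
        prompt=("Each card has a letter on one side and a number on the other. "
                "Rule: if a card has a vowel on one side, it has an even number "
                "on the other. You see A, K, 4, 7. Which cards must you turn "
                "over to test the rule? Answer with the card labels only."),
        etr_predicted="A, 4",  # typical human (fallacious) selection
        correct="A, 7",
    ),
]


def query_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion client).
    Returns a canned answer so the sketch runs end to end."""
    return "A, 4"


def normalize(answer: str) -> str:
    """Canonicalize a comma-separated list of card labels for comparison."""
    return "".join(sorted(answer.replace(" ", "").upper().split(",")))


def evaluate(items):
    etr_hits = correct_hits = 0
    for item in items:
        reply = normalize(query_model(item.prompt))
        etr_hits += reply == normalize(item.etr_predicted)
        correct_hits += reply == normalize(item.correct)
    n = len(items)
    print(f"ETR-predicted responses: {etr_hits}/{n}; correct responses: {correct_hits}/{n}")


if __name__ == "__main__":
    evaluate(ETR_ITEMS)
```

In practice, query_model would wrap a real model API and the item set would cover all 61 problems; the same loop could then be rerun with ETR-inspired system prompts to test whether fallacious responses decrease, as the abstract's preliminary evidence suggests.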