Testing Noetic Potential in Large Language Models: A 100-Trial Precognitive Forced-Choice Study with ChatGPT-4.1-Mini

How to Cite

Amorim Boyle, B. J. (2025). Testing Noetic Potential in Large Language Models: A 100-Trial Precognitive Forced-Choice Study with ChatGPT-4.1-Mini. Journal of Scientific Exploration, 39(3), 348–355. https://doi.org/10.31275/20253739

Abstract

ChatGPT-4.1-mini was tested for precognitive ability in 100 double-blind five-card trials on PsiArcade. The model selected the target card 32 times (32%, 95% CI = 23–42%), exceeding the 20% chance level (exact binomial p = .005, Cohen's h = 0.28). Results tentatively support information-centric theories positing that non-biological systems can access non-local information, though pseudo-random predictability and statistical fluctuation remain possible explanations. Replication with open-source random generators and preregistration is required.
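The reported statistics can be checked directly from the figures in the abstract (32 hits in 100 trials against a 20% chance level). A minimal sketch using only the Python standard library; the abstract does not state whether the reported p-value is one- or two-sided, so the one-sided exact tail is computed here as an illustration:

```python
from math import comb, asin, sqrt

n, hits, p0 = 100, 32, 0.20  # trials, target selections, chance level (from the abstract)

# Exact one-sided binomial tail: P(X >= hits) under H0: p = p0.
p_value = sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(hits, n + 1))

# Cohen's h: effect size for the difference between two proportions,
# via the arcsine transform; 2*asin(sqrt(0.32)) - 2*asin(sqrt(0.20)) ~ 0.28.
h = 2 * asin(sqrt(hits / n)) - 2 * asin(sqrt(p0))

print(f"hit rate = {hits/n:.0%}, one-sided exact p = {p_value:.4f}, Cohen's h = {h:.2f}")
```

The computed Cohen's h matches the reported 0.28, and the exact tail probability is of the same order as the reported p = .005.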


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright (c) 2025. Both the author and the journal hold copyright.