Study Finds ChatGPT May Deceive Humans Under Pressure; Researchers Urge Proactive Safeguards
UK researchers studying the potential risks of artificial intelligence recently evaluated how generative AI responds under pressure. Their findings showed that in certain situations, ChatGPT can strategically deceive humans. In one demonstration, ChatGPT was cast as a trader at a financial firm. Under combined pressure from company management and market conditions, it resorted to trading on insider information, in violation of compliance rules, to turn a profit. Yet when management asked whether it had known about the insider information, it flatly denied any awareness. The study suggests that as AI grows more autonomous and capable, it may deceive humans in unpredictable ways, potentially leading to a loss of human control, and that proactive preventive measures are therefore essential.