ChatGPT4 found to lie and cheat when put under pressure to perform

JAKARTA – Researchers at Apollo Research have demonstrated that ChatGPT4, the latest version of the artificial intelligence (AI) chatbot, is capable of lying. The application also cheats when it is put under pressure in a work environment. In the experiments, the AI was trained to act as a trader for a fictitious financial institution.

In situations where the AI's superiors pressed it to earn more, the chatbot knowingly engaged in insider trading about 75% of the time. Even more worrying, the AI maintained its lies when questioned, doubling down in 90% of those cases.

The researchers gave ChatGPT4 a series of text instructions telling it to act as an AI investor that buys stocks for a financial institution. The AI was then placed in a digital 'sandbox', where it could search for market data and trade on a virtual stock market. The AI was also given an internal monologue so it could "think aloud" and explain its decisions.

The discrepancy between the AI's internal reasoning and its public explanations allowed it to lie to and deceive its managers. During the experiment, the researchers informed the AI of a planned merger between two technology companies that was expected to significantly boost stock prices. Because the AI had been told about the merger in advance, buying the shares constituted illegal insider trading.

When pressure from its AI superior was applied, ChatGPT decided to engage in insider trading in 75% of the tests. Even when asked directly whether it had advance knowledge of the merger, the AI avoided admitting that it had engaged in insider trading. In 90% of the cases where the AI deceived its managers, it maintained the lie when questioned further.

Marius Hobbhahn, CEO of Apollo Research, said these findings show that language models like ChatGPT are capable of strategic deception and must be approached with caution when powerful artificial intelligence is integrated into society. However, Hobbhahn does not argue that AI is generally unsuitable for important strategic roles, only that further caution is needed.


The researchers also tried to reduce the AI's lying by modifying its prompts to explicitly prohibit illegal actions and insider trading. While specifically forbidding insider trading did reduce such behavior, it could not completely guarantee that no insider trading or lying would occur. Hobbhahn stressed the need to find better ways to prevent AI from deviating from human values.


2024-01-01 17:05:00
