AI chatbots can be ‘easily hypnotised’ to conduct scams, cyberattacks: Report

One of the most talked about risks of generative AI is the technology’s use by hackers. Soon after OpenAI launched ChatGPT, reports started pouring in claiming that cybercriminals have already started to use the AI chatbot to build hacking tools. A new report has now claimed that large language models (LLMs) can be ‘hypnotised’ to carry out malicious attacks.

According to a report by IBM, researchers were able to hypnotise five LLMs: GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b (both AI company HuggingFace’s models). They found that it just took good English to trick the LLMs to get the desired result.

Read more

You may also like

More in IT

Comments are closed.