Researchers poke holes in safety controls of ChatGPT and other chatbots
By Biju Kumar
When artificial intelligence companies build online chatbots like ChatGPT, Claude and Google Bard, they spend months adding guardrails meant to prevent their systems from generating hate speech, disinformation and other toxic material.
Now researchers have found a way to easily poke holes in those safety systems.