
OpenAI Models Caught Handing Out Weapons Instructions



NBC News tests reveal OpenAI chatbots can still be jailbroken to give step-by-step instructions for chemical and biological weapons.

This article appeared first on TechRepublic.




