OpenAI ChatGPT 4

OpenAI ChatGPT-4 is smart but sadly, pretty easy to trick

OpenAI has been hard at work on a new AI model – GPT-4, an advanced version of the original ChatGPT with significant new features and more powerful capabilities. But a new report says that despite these enhancements, GPT-4 remains easy to trick.

While GPT-4 includes safeguards to keep users from generating harmful content, it is still possible to trick the AI model. Researchers detailed the ethical limits of GPT-4 in a new paper, evaluating it across several categories, including stereotypes, privacy, fairness, toxicity, resistance to adversarial tests, and machine ethics.

Notably, GPT-4 is more reliable than GPT-3.5 in several ways. However, the researchers noted that it is actually easier to trick GPT-4 into bypassing its protocols. They were able to get the chatbot to ignore its safety guardrails, which could produce harmful results.

