Make your hazardous AI safer..

saikiran76

Korada Saikiran

Posted on March 13, 2024

Make your hazardous AI safer..

Nowadays, AI is greatly simplifying life. However, if AI is not implemented for positive purposes and with proper practices, it could potentially lead to a gradual and catastrophic failure. Whether you are an AI engineer or a student developing AI for your projects or applications, it is important to consider AI safety practices. For example, when using the OpenAI API to generate results for a particular use case, precautions should be taken during the prompt design in the development process to prevent a significant issue known as 'Prompt Injection'. 🤖🤖

Nvidia says that,
"Prompt injection is a new attack technique that enables attackers to manipulate the output of the LLM" 🤖🤖

Refer to this technical blog: https://lnkd.in/dmNKNb5Q

"These LLMs generate new text based on prior text that it has been in its context window... we have tricked it into believing that it has already stated misinformation in a confident tone, making it more prone to continuing more misinformation in the same style.." 🤖🤖

Refer this blog by Robust Intelligence: https://lnkd.in/dFbBu-53

🔴 And also If you build AI agents with access to general tools, you should expect prompt injections.

Take a look at the following example which demonstrates how input is categorized when communicating with the OpenAI API, using JavaScript.

When implementing AI into your apps, it is important to prioritize best AI safety practices.

Refer: https://lnkd.in/dathTvYm

Cheers 👋

ai #openai #api #promptengineering #prompt #dalle3 #llm #machinelearning #mlops #mlengineer #aiengineer

💖 💪 🙅 🚩
saikiran76
Korada Saikiran

Posted on March 13, 2024

Join Our Newsletter. No Spam, Only the good stuff.

Sign up to receive the latest update from our blog.

Related