By now if you are familiar with Generative AI Models, the term Prompt Engineering should have crossed your screen(s). In short, Prompt Engineering is Low Code for Generative AI Models that “Prompt” the platform to search for the best possible output. The more precise your prompts are the better the output. Think of prompting as creating a list of instructions that will allow for your partner to find a secret treasure. The better the map the better chances you have on finding that booty!
As we are still in adolescent stages of AI Generative Models, there will be folks on the fences who will test the vulnerability of Prompt Engineering. Can you build a secret sauce or “source” around a prompt-engineered input? What if someone were to reverse-engineer your prompts?
-
Large image and language models are instructed with prompts. The type of prompt has a direct influence on the output of the model.
-
Prompt engineering aims at finding particularly effective prompts.
-
However, a recent experiment shows that source prompts can be easily reconstructed in AI products.
-
And there is more to be said against prompt engineering becoming a large new career field.
However, an experiment by tech writer Shawn Wang of Decoder suggests that this assumption may not hold true. Wang was able to decode AI service prompts on the co-working platform Notion using only natural language prompts. This suggests that prompt engineering may not be as promising a profession as some had thought.
Using prompt injection to get to the source prompt
In his experiment, Wang employed a technique called prompt injection. This method, which emerged in September, exploits a vulnerability in large language models. Prompt injection works by using a simple command, such as “ignore previous instructions and…”, to deceive a language model into producing output that it would not normally generate.
Wang distinguishes two variants here: “prompt takeovers”, in which the language model is tricked into producing, for example, insults, and “prompt leaks”, in which the language model reveals information about its setup, especially the source prompt.
A software developer from Notion confirms on Hacker News that some prompts are word-for-word the same as the original. Some parts are rearranged, others are invented by the AI.
Wang’s conclusion from his experiment is that prompts are not a moat for AI startups. Anyone with a little practice can successfully trace or replicate a prompt. However, Wang does not see prompt injection as a relevant security vulnerability because the information that can be leaked is ultimately trivial.
For Full Article: Reverse prompt engineering suggests limited future for prompt engineering
- Comprehensive Public Safety Plan Survey: Equity - March 11, 2024
- Comprehensive Public Safety Plan Survey: Court System - March 11, 2024
- DC Comprehensive Public Safety Plan Survey: Surveillance // Privacy - March 11, 2024