Effective Ways to Master AI Prompts
Irpan, an Indonesian writer, asks:
What are the most effective techniques for mastering AI prompt engineering in tools like ChatGPT, Claude, or Gemini?
My answer:
To effectively master AI prompt engineering with models like ChatGPT or Gemini, four key structures can help you fine-tune for the best output:
Persona: Assigning a persona (a role or profession) to the AI gives it context, enabling it to approach the response from that specific angle.
End Result: For reasoning models, clearly stating the desired end result allows the AI to deduce or reason the most appropriate path to achieve that outcome. This can lead to alternative solutions you might not have considered.
Problem Faced: Explaining the current problem provides context about what you've already attempted and what's preventing you from reaching the end goal. This helps the AI address the issue or suggest a different, more effective approach.
Format: Guiding the AI with a specific output format (e.g., table, narrative, bullet points) and providing an example ensures the response is presented in the desired manner.
These four elements (role, end result, problem, and format) are crucial for effective prompts.
While the complexity of the prompt depends on the question's nature, a general concluding phrase like "How would you go about achieving the end result?" can give the AI freedom to think divergently, especially if your focus is solely on the outcome.
This can uncover solutions you might not have envisioned due to limited personal experience or expertise.
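To make the structure concrete, the four elements plus the open-ended closing question can be combined into a reusable prompt template. Here is a minimal Python sketch; the function name and the example values are illustrative, not a prescribed API:

```python
# Illustrative template assembling the four prompt elements:
# persona, end result, problem faced, and format, followed by
# an open-ended closer that invites divergent reasoning.

def build_prompt(persona, end_result, problem_faced, output_format):
    """Assemble a prompt string from the four structural elements."""
    return (
        f"You are {persona}.\n"
        f"End result I want: {end_result}\n"
        f"Problem I'm facing: {problem_faced}\n"
        f"Format the answer as: {output_format}\n"
        "How would you go about achieving the end result?"
    )

# Hypothetical example values, for illustration only.
prompt = build_prompt(
    persona="a senior data engineer",
    end_result="a nightly ETL job that finishes in under 10 minutes",
    problem_faced="the current job times out on joins over 50M rows",
    output_format="a numbered list of steps, each with a short rationale",
)
print(prompt)
```

The template keeps each element on its own line so the model can clearly separate role, goal, constraints, and output format, while the final question leaves the solution path open.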
It's worth noting that prompt engineering is becoming less critical as AI advances in its ability to understand user intent.
However, these foundational elements will likely remain relevant to guide AI responses, even with the rise of increasingly sophisticated AI agents.
Regardless of prompt quality, remember that AI language models are probability-based predictors.
Even with instructions to avoid certain information or to fact-check, there's always a possibility of error or deviation from the prompt's exact instructions.