Hello everyone, I am Xiaoshui!
Today, I want to share my learning notes from Geek Time's AI course “Everyone is a Prompt Engineer”, focusing on common pitfalls in prompt design.

Now, let me share my own experience with some of these pitfalls from using AI large language models.
As a programmer, I most often use AI to ask programming questions and to have it help me complete coding tasks.
Sometimes, when I ask AI to complete a complex programming task, the results are not ideal: even after repeatedly refining the prompt, the answers still fall short of my expectations, and the output ends up chaotic or simply wrong.
For instance, when I want AI to finish a complex programming task in one go, I tend to write a very long prompt.
The prompt touches on many intertwined issues, and although I provide detailed explanations and examples to help AI understand,
the result is that everything seems to go in one ear and out the other: important information is not retained, many details are overlooked, and the answer falls short of expectations.
Why does this happen?
After taking the Geek Time course, I realized that I had fallen into two common pitfalls in prompt design: having overly high expectations and neglecting details.
I expected too much from AI, wanting it to complete very complex, specialized tasks in one go, while ignoring the fact that AI has its own limitations; it is not omnipotent and cannot solve every problem.
At this stage, it is unrealistic to rely on AI to complete everything for us, especially highly specialized work; it can only serve as an assistant.
Additionally, when a prompt is too long and complex, AI tends to overlook details and fails to keep track of them.
Complex prompts are hard for large language models to parse; the longer the content, the more likely the model is to make mistakes.
As a result, AI misses many important pieces of information in the prompt, and the output is chaotic, off target, or even plain wrong, making it hard to get a reasonable response.
We need to lower our expectations of AI: for a complex problem, break it down into smaller sub-problems and tackle them step by step, letting AI address each question in turn, as in the sketch below; this makes it much easier to get the result we want.
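To make this concrete, here is a minimal sketch of what that step-by-step decomposition might look like in code. It assumes a hypothetical `ask_llm()` helper that wraps whatever chat-completion API you actually use; the sub-task breakdown and function names are just illustrations of the idea, not something taken from the course.

```python
# A minimal sketch of breaking one complex request into smaller prompts.
# ask_llm() is a hypothetical helper that wraps whatever chat-completion
# API you use; replace its body with your own client call.

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat-completion API."""
    raise NotImplementedError("plug in your own LLM client here")


def build_feature_step_by_step(requirement: str) -> str:
    # Step 1: ask only for a high-level design, not the full implementation.
    design = ask_llm(
        f"Outline a high-level design (modules, data flow) for: {requirement}"
    )

    # Step 2: ask for the core data structures, feeding the design back in.
    data_model = ask_llm(
        f"Given this design:\n{design}\n\nDefine the core data structures."
    )

    # Step 3: ask for one function at a time instead of the whole program.
    implementation = ask_llm(
        f"Using these data structures:\n{data_model}\n\n"
        "Implement only the main processing function, with comments."
    )

    # Step 4: ask the model to check its own output against the requirement.
    review = ask_llm(
        f"Check this code against the requirement '{requirement}' "
        f"and list anything missing:\n{implementation}"
    )
    return review
```

The point of this structure is that each call's output becomes the context for the next prompt, so no single prompt has to carry every detail at once.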
That’s my understanding of the pitfalls in prompt design.
Alright, that’s it for today’s article. If you find my article helpful, feel free to share it with your friends.
See you next time!