8 Strategies to Make Your AI Tools More Effective

AI assistants work by taking user instructions (prompts) as input and generating text responses; this is one of the defining characteristics of large language models (LLMs). Because LLMs are user-friendly and broadly applicable, they have quickly become ubiquitous in AI technology and have spread into many disciplines.

However, many people have found that AI sometimes behaves more like an "artificial idiot": answering off topic, contradicting itself, or being ambiguous. Beyond the current level of the technology itself, the quality of your instructions/prompts also plays a role. Well-crafted instructions can produce precise responses and maximize model performance, while poorly written instructions lead to vague, incorrect, or irrelevant answers. This has given rise to a new profession, the AI prompt engineer, with some companies offering high salaries to attract such talent.

So how can individuals use AI more efficiently and obtain the answers they want with greater precision? This article presents 8 effective strategies for writing better instructions/prompts, so that you can gain as much benefit as possible from AI tools.

Although LLMs have complex algorithms and vast training data, they are essentially mathematical models that lack understanding of the world. These tools are designed to predict the probability of words appearing, not to generate truth. They recognize patterns in their training data and predict the next most likely token (a word or word fragment) from statistical probabilities. Text generated in this way relies heavily on the patterns of words present in the training data.

Furthermore, since each predicted token affects the ones that follow, an early error can trigger a vicious cycle of growing inaccuracy. Well-written instructions therefore raise the probability of an accurate prediction at each step and reduce the cumulative effect of errors.
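To make the idea concrete, here is a toy sketch (not how production LLMs actually work): a bigram word model built from a tiny invented corpus that generates text by repeatedly choosing the most probable next word. It illustrates how one poor early prediction steers everything that follows.

```python
from collections import defaultdict, Counter

# Toy next-token prediction: a bigram model over a tiny invented corpus.
corpus = ("well written prompts help the model , "
          "vague prompts confuse the model").split()

# Count which word tends to follow which.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=6):
    """Greedily pick the most frequent next word at each step.
    A wrong or unlucky early choice changes every prediction after it."""
    out = [start]
    for _ in range(length):
        options = next_counts.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("well"))   # follows the dominant pattern in the corpus
print(generate("vague"))  # a different start word leads to a different chain
```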

Another key reason why instructions are important is that LLMs can learn from context. This ability allows the model to temporarily adapt to the instructions it receives, making instructions crucial for conveying contextual information.

Therefore, mastering the art and science of writing effective instructions—sometimes referred to as “prompt engineering”—is key to enhancing LLM capabilities. To achieve optimal results, one must possess both domain-related knowledge and an understanding of the model itself, which requires practice and experience.

Thus, the first suggestion is to experiment with the model. The more you interact with the model, the better you will understand its nuances and how it can help you meet your needs. This article lists several feasible strategies and rules, along with the principles behind them, to help you master this skill.

Guide the Model to Solutions

LLMs lack semantic understanding, making it difficult for them to extrapolate beyond their training data. However, their parameters encode an extensive store of past data (a "memory") from which they can extract information. This memory comes from a combination of the training data ("long-term memory") and the record of instructions and interactions in the current session ("working memory"). The combination of limited extrapolation ability and excellent memory means that LLMs can handle complex tasks effectively when those tasks are broken down into smaller steps.

For example, rather than giving a broad command like “translate text into Chinese,” consider breaking it down into two steps: “first translate literally while maintaining the meaning; then polish the translation to fit Chinese language habits.” Similarly, instead of asking it to write a 1000-word essay directly, it may be better to break the task into sub-tasks, generating an overview, conclusion, and main arguments separately with specific instructions.

Clear, step-by-step instructions will reduce ambiguity and uncertainty, thus generating more accurate responses. Simplifying larger tasks into smaller, step-by-step tasks allows for effective utilization of the LLM’s robust memory and compensates for its limited abstract capabilities through structured guidance.
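As a minimal sketch of this decomposition, the snippet below chains two prompts for the translate-then-polish example. The `call_llm` helper is a hypothetical stand-in for whatever chat-completion client you use; the stub here just echoes the prompt so the script runs on its own.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call.
    Replace the body with your own client; this stub only echoes the prompt."""
    return f"[model output for: {prompt[:60]}...]"

source_text = "The quick brown fox jumps over the lazy dog."

# Step 1: literal translation that preserves the meaning.
draft = call_llm(
    "Translate the following text into Chinese literally, "
    f"preserving the original meaning:\n\n{source_text}"
)

# Step 2: polish the draft so it reads like natural Chinese.
polished = call_llm(
    "Polish this draft translation so it follows natural Chinese phrasing, "
    f"without changing its meaning:\n\n{draft}"
)

print(polished)
```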

Add Relevant Context

LLMs have a much larger "working memory" than humans. To get precise responses that fit the context of your question, it is therefore important to provide relevant context in the input. A well-contextualized question should:

· Include specific content. Root your question in concrete details to guide the LLM toward a more accurate and relevant understanding. Do not just ask it to draft a cover letter; instead, provide the job advertisement and your resume to enrich the relevant context.

· Prioritize information. Base your interactions on relevant factual information. Do not ask the model how to be happy forever; provide it with a peer-reviewed paper and ask questions based on that research.

The purpose of this is not to inundate the LLM with a lot of general knowledge, but to give it details relevant to your question. When instructions contain specific details, the LLM will generate more insightful and nuanced responses.
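One way to assemble such a context-rich prompt is sketched below; the job advertisement and resume strings are invented placeholders that you would replace with your own material.

```python
job_ad = """Senior data analyst. Requirements: SQL, Python, dashboarding,
and three years of experience communicating results to non-technical stakeholders."""

resume = """Four years as a data analyst at a retail company; built SQL pipelines,
Python reporting tools, and monthly KPI dashboards for the leadership team."""

# Embed the concrete documents in the prompt instead of asking in the abstract.
prompt = (
    "Draft a one-page cover letter for the job advertisement below, "
    "drawing only on the experience listed in the resume.\n\n"
    f"JOB ADVERTISEMENT:\n{job_ad}\n\n"
    f"RESUME:\n{resume}"
)

print(prompt)
```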

Instructions Must Be Clear

To reduce uncertainty in model predictions, you must clarify what you want. Do not say “polish this article,” but provide a more precise instruction considering what style you want, your target audience, and whether you have specific needs (such as clarity or conciseness). Therefore, a more specific instruction might be “polish this article to make it clearer and smoother, like a top editor from a leading journal.”

As another example, if you want it to suggest a name, you can clarify your request by adding constraints: “the name must start with a verb, and the implied subject/actor is the user.” Clearly specify the task, goals, focal points, and any constraints. Vague requests will lead to vague responses, while clear instructions help:

· Reduce ambiguity between the instructions and the text to be processed (for example, by using specific tags, characters, or symbols as delimiters).

· Allow the LLM to focus on your specific needs.

· Provide clear conditions to evaluate the model’s accuracy.

Although LLMs are designed for conversational, iterative refinement, stating your purpose with clear guidance up front simplifies the process. You can specify the purpose and constraints to control the direction of the conversation. At the same time, do not describe goals that are not yet clearly defined in too much detail; this may lead the model down the wrong path or cause you to miss unexpected (and possibly better) answers.
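A sketch of a clearly scoped prompt is shown below: the task, audience, and constraints are spelled out, and tags separate the instruction from the text to be processed. The draft paragraph is an invented placeholder.

```python
draft_paragraph = (
    "Our new tool make it easy for researchers to explore they're data quickly."
)

prompt = (
    "Polish the text between the <draft> tags so it is clearer and more concise, "
    "in the style of an editor at a leading journal.\n"
    "Constraints:\n"
    "- Keep the meaning unchanged.\n"
    "- Target audience: researchers who are not native English speakers.\n"
    "- Return only the revised text.\n\n"
    f"<draft>\n{draft_paragraph}\n</draft>"
)

print(prompt)
```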

Request Multiple Options

One of the strengths of LLMs is their vast "long-term memory." To tap into this potential, ask the model for a range of options rather than a single suggestion. You can ask for three analogies to explain a concept, five ideas for an introduction, ten alternatives for the last paragraph, or twenty candidate names for a function, letting the model give you enough material to think freely. Besides requesting multiple options, you can also repeat the same instruction several times to regenerate responses. Regenerating responses introduces more variation and can improve quality. Requesting multiple options and regenerating responses has many benefits, including:

· Encouraging the model to explore multiple avenues, enhancing the creativity and diversity of the model’s output.

· Providing you with a more comprehensive set of options, minimizing the risk of receiving suboptimal or biased suggestions.

· Accelerating iterative optimization.

LLMs are versatile thinking partners: asking them to provide a wide range of options from multiple perspectives can enrich your decision-making process. A wealth of options unlocks the most benefits.
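Both tactics are sketched below, again using a hypothetical `call_llm` stub in place of a real client: one prompt asks for many candidates at once, and a short loop regenerates the same request to collect varied responses.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call; replace with your client."""
    return f"[model output for: {prompt[:60]}...]"

# Tactic 1: ask for many candidates in a single request.
options_prompt = (
    "Suggest twenty candidate names for a function that retries a failed "
    "network request with exponential backoff. Return them as a numbered list."
)
print(call_llm(options_prompt))

# Tactic 2: regenerate the same request several times and keep every variant.
variants = [call_llm(options_prompt) for _ in range(3)]
for i, variant in enumerate(variants, start=1):
    print(f"--- regeneration {i} ---\n{variant}")
```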

Set Roles

The vast training datasets of LLMs mean they can simulate various roles to provide professional feedback or unique perspectives. Instead of seeking general advice or information, consider having the model role-play: have it mimic your typical reader to provide writing feedback; act as a writing coach to help revise manuscripts; impersonate a Tibetan yak knowledgeable about human physiology to explain the effects of high altitudes; or play a wise cheesecake to illustrate a metaphor about cheesecake—possibilities are endless. Setting roles can:

· Provide context for the model’s responses, making them more aligned with your specific needs.

· Facilitate more interactive and engaging conversations with the model.

· Produce more refined and professional information, enhancing the quality of the output.

· Encourage creative approaches to problem-solving, promoting out-of-the-box thinking.

· Provide, through personification, a framework for the model to generate responses from a distinct perspective.

Leveraging the LLM's role-playing capabilities can yield more targeted and contextually appropriate responses.
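Role setting is often expressed with the common system/user chat-message convention, sketched below as plain data. The messages are only printed here; you would pass them to whatever chat client you use.

```python
# Role setting via the widely used system/user chat-message convention.
messages = [
    {
        "role": "system",
        "content": (
            "You are an experienced writing coach for early-career researchers. "
            "Give blunt, specific feedback on structure and clarity, and suggest "
            "one concrete rewrite for each paragraph."
        ),
    },
    {
        "role": "user",
        "content": "Please review the opening paragraph of my manuscript: ...",
    },
]

# Inspect the messages before handing them to a chat-completion client.
for message in messages:
    print(f"[{message['role']}] {message['content']}\n")
```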

Provide Examples, Don’t Just Explain

LLMs excel at few-shot learning (learning from examples). A particularly effective strategy is therefore to convey your intent through concrete examples. Do not vaguely say "chart these data"; consider providing an example: "draw a bar chart for these data, similar to the chart in the attached paper." Just as when describing a desired haircut to a barber, a picture is worth a thousand words. Whether it is a piece of code when programming or a sentence when writing, an explicitly written example is an invaluable guide for the model. Providing a set of usable references serves many purposes:

· Clarify context, allowing the LLM to better grasp the nuances of your request.

· Reduce the number of iterations needed to achieve the desired output.

· Provide standards for evaluating the model's output.

Examples serve as a roadmap for the LLM, guiding it to generate responses that align with your expectations. Therefore, consider including illustrative examples in your instructions to improve performance.
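A few-shot prompt can be assembled from a handful of input/output pairs, as in the sketch below; the example pairs are invented for illustration.

```python
# Few-shot prompting: show the model worked examples of the transformation you want.
examples = [
    ("The experiment failed due to contamination of the samples.",
     "Sample contamination caused the experiment to fail."),
    ("It is possible that the results may potentially indicate a trend.",
     "The results may indicate a trend."),
]

task = "Rewrite each sentence to be shorter and more direct."
new_input = "There is a chance that the data could perhaps support our hypothesis."

shots = "\n\n".join(f"Input: {src}\nOutput: {dst}" for src, dst in examples)
prompt = f"{task}\n\n{shots}\n\nInput: {new_input}\nOutput:"

print(prompt)
```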

Specify Desired Output Format

LLMs often tend to be verbose. Clearly stating the format you want—emphasis, reading level, tone, etc.—helps limit possible outcomes and improve relevance. For example, instead of saying “summarize key conclusions,” you can specify the output format: “summarize key conclusions and list them, using language understandable to high school students.” Pre-declaring the format can also provide clear standards for evaluating the LLM’s performance.
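A sketch of a format-constrained prompt is below, using the summary example from this section; asking for a machine-readable format such as JSON works the same way.

```python
article_excerpt = "..."  # paste the text you want summarized here

prompt = (
    "Summarize the key conclusions of the text below.\n"
    "Format requirements:\n"
    "- A bulleted list of at most five points.\n"
    "- Language a high-school student can understand.\n"
    "- No more than 20 words per bullet.\n\n"
    f"TEXT:\n{article_excerpt}"
)

print(prompt)
```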

Experiment, Experiment, and Experiment

There are no hard and fast rules for which instructions are most effective. A slight adjustment can lead to significant or even unexpected differences. Consider the following examples:

· In a series of reasoning problems, simply adding "think step by step" to the instructions can markedly improve performance.

· LLMs can respond to emotional cues. Many users have found that adding phrases like "take a deep breath—this is important for my career" or "I will tip $200 for a good answer" can enhance the quality of responses.

· Research has also shown that adding "identify the key concepts in the problem and provide a tutorial" or "think of three related but different questions" can improve performance on complex programming tasks; this is a form of analogical prompting.

· While LLMs may struggle with complex computations posed directly, they excel at generating usable computer code to solve the same problems (e.g., "write a Python program to solve this problem").

These examples demonstrate that LLMs are highly sensitive to instructions. Therefore, you must experiment, experiment, and experiment! Using LLMs efficiently requires continuous, creative experimentation. You might also consider:

· Adjusting wording, length, detail, and constraints.

· Adjusting different examples, contexts, and instructions.

· Trying both conversational and precisely descriptive instructions.

· Testing the same instructions on different LLMs.

Treat instructions as hypotheses to be tested, and use the information gained from their results to iterate. Not every attempt will succeed, but each will yield new insights. Keep at it, and optimal results will emerge.
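One lightweight way to run such experiments is to loop over a few prompt variants for the same task and compare the responses side by side, as sketched below with the same hypothetical `call_llm` stub standing in for a real client.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call; replace with your client."""
    return f"[model output for: {prompt[:60]}...]"

task_text = "Explain why overfitting happens in machine learning."

# Treat each variant as a hypothesis about which phrasing works best.
variants = {
    "plain": task_text,
    "step_by_step": task_text + " Think step by step.",
    "role": "You are a statistics teacher. " + task_text,
    "with_format": task_text + " Answer in three short bullet points.",
}

results = {name: call_llm(prompt) for name, prompt in variants.items()}

# Review the outputs side by side and note which phrasing worked best.
for name, response in results.items():
    print(f"=== {name} ===\n{response}\n")
```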


Disclaimer: Source: "Nature Portfolio". Typesetting: Zhang Yangyang; Proofreading: Guo Jing; Review: Yang Qi. This article and the images within are copyrighted by their original authors and sources; the content reflects the authors' views and does not represent this public account's endorsement of those views or responsibility for their accuracy. The reprint is for non-commercial sharing; if there are any copyright issues, please contact us immediately and we will modify or delete the related articles to protect your rights.


