Best Practices for Prompt Engineering: A Practical Guide
Ten practical rules for designing reliable, repeatable prompts for large language models, with clear explanations and copy-pasteable examples.
*Note: This article was originally written on Medium. I don't write there anymore.*
Prompt engineering is a craft: clear goals, careful constraints, and iterative refinement turn ordinary prompts into reliable, repeatable instructions. Below are ten practical best practices used by practitioners and engineers when designing prompts for large language models. Each section includes a short explanation and a concrete example you can copy, paste, and adapt.
1. Start simple and refine through iteration
Why: Begin with the smallest instruction that produces an acceptable result. A minimal prompt reveals the model’s baseline behavior; subsequent iterations add constraints to correct errors.
Example — Minimal
Prompt:
Summarize the following paragraph in one sentence: "[input paragraph here]"
Example — Improved
Prompt:
You are an editor. Summarize the following paragraph in one sentence that emphasizes the main claim and outcome. Do not exceed 20 words. "[input paragraph here]"
2. State instructions clearly and explicitly
Why: Defining a persona and an objective reduces ambiguity about style, depth, and assumptions the model should adopt.
Example
Prompt:
You are an experienced data scientist. Explain the difference between precision and recall to a junior engineer in three bullet points, each with a short example.
Expected output:
Three bullets: concise definition + one-line example for each metric.
3. Be specific and detailed in your requirements
Why: Vague instructions lead to inconsistent outputs and hallucinations. State length limits, structure, tone, and audience level to ensure repeatability.
Example
Prompt:
Write a formal email (150–200 words) to the product manager requesting a one-week deployment delay. Include two concrete risks and one mitigation strategy. Use a professional tone.

4. Provide examples or templates for the desired output
Why: When you require a specific structure (JSON, table, checklist), give a template or one to three examples so the model reproduces the schema exactly.
Example
Prompt:
Produce a JSON object with keys: "title", "summary", "tags".
Example:
{"title":"...", "summary":"...", "tags":["tag1","tag2"]}
Now generate JSON for the attached article text: [article text]
Expected output:
One valid JSON object matching the schema.
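If the JSON feeds a pipeline, it pays to validate the reply before using it. A minimal sketch in Python, assuming `reply` holds the raw model output:

```python
import json

def validate_reply(reply: str) -> dict:
    """Check that a model reply matches the schema requested above."""
    obj = json.loads(reply)  # raises ValueError if the reply is not valid JSON
    if set(obj) != {"title", "summary", "tags"}:
        raise ValueError(f"unexpected keys: {sorted(obj)}")
    if not isinstance(obj["tags"], list):
        raise ValueError("'tags' must be a list")
    return obj
```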
5. Avoid negative instructions; state what you want instead
Why: Telling the model what to do is more reliable than telling it what not to do. If you must forbid something, restate the desired alternative.
Less effective
Prompt:
Don't use buzzwords or clichés in this product description.
More effective
Prompt:
Write a product description using concrete features and measurable benefits. Use plain language; avoid phrases such as "game-changer" or "world-class."
6. Progress from simple to complex (zero-shot to few-shot)
Why: Start with zero-shot to observe default behavior. If the task is complex or requires a particular style, supply one to five examples (few-shot) that the model should emulate.
Example — Zero-shot
Prompt:
Convert the following bullet list into a short persuasive paragraph: [bullets]
Example — Few-shot
Prompt:
Example 1: Bullets: [A] Output: [rewritten paragraph A]
Example 2: Bullets: [B] Output: [rewritten paragraph B]
Now convert: [new bullets]
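In code, a few-shot prompt is just concatenated worked examples. A minimal sketch, where `examples` is a list of (bullets, paragraph) pairs you supply:

```python
def build_few_shot_prompt(examples, new_bullets):
    """Assemble a few-shot prompt from worked (bullets, paragraph) pairs."""
    parts = [
        f"Example {i}: Bullets: {bullets} Output: {paragraph}"
        for i, (bullets, paragraph) in enumerate(examples, start=1)
    ]
    parts.append(f"Now convert: {new_bullets}")
    return "\n".join(parts)
```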
7. Reduce unnecessary fluff and keep prompts concise
Why: Extra, irrelevant information increases the chance of contradictions and hallucinations. Provide only what the model needs to complete the task.
Inefficient
Prompt:
I want you to act like someone who’s read dozens of textbooks on marketing. Write something persuasive but not too long and avoid being salesy.
Efficient
Prompt:
Write two persuasive lines promoting a time-management app. Tone: professional, restrained. Avoid superlatives.
8. Use leading cues and structure for code generation
Why: For code generation, state the language, function signature, input/output types, and testing expectations. Leading cues such as “Implement”, “Return”, and “Function signature:” reduce ambiguity.
Example
Prompt:
Implement a Python function def merge_intervals(intervals: List[List[int]]) -> List[List[int]]: that merges overlapping intervals. Provide only the function definition and a short docstring. Include one doctest example.
Expected output:
A single function implementation with a docstring and a doctest.
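For reference, one output that would satisfy this prompt looks roughly like the sketch below (the implementation details are illustrative, not the only correct answer):

```python
from typing import List

def merge_intervals(intervals: List[List[int]]) -> List[List[int]]:
    """Merge overlapping intervals and return them sorted by start.

    >>> merge_intervals([[1, 3], [2, 6], [8, 10]])
    [[1, 6], [8, 10]]
    """
    merged: List[List[int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap: extend previous
        else:
            merged.append([start, end])
    return merged
```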
9. Define clear acceptance criteria and constraints
Why: Specify measurable success conditions (word limits, schema validation, performance constraints). This enables automated checks and reduces iteration.
Example
Prompt:
Summarize this research paper in at most 120 words. Include: (1) research question, (2) method, (3) key result. Do not include citations or verbatim quotes.
Expected output:
A ≤120-word summary with three labeled sentences: Research question, Method, Result.
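Because the criteria are measurable, the check can be automated. A minimal sketch, assuming `summary` holds the model's reply:

```python
def check_summary(summary: str) -> list:
    """Return acceptance failures for the prompt above (empty list = pass)."""
    failures = []
    if len(summary.split()) > 120:
        failures.append("exceeds 120 words")
    for label in ("Research question", "Method", "Result"):
        if label not in summary:
            failures.append(f"missing label: {label}")
    return failures
```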
10. Establish voice, audience, and contextual assumptions
Why: Consistent voice and assumptions yield consistent outputs. State the target audience and any prior knowledge the model should assume.
Example
Prompt:
You are a senior backend engineer writing for mid-level engineers. Explain how to design an idempotent payment endpoint in five short steps. Tone: concise and technical.

Quick checklist for production prompts
- Persona, audience, and objective stated
- Output structure specified, with a template or example
- Length, tone, and format constraints given
- Instructions phrased positively (say what to do, not what to avoid)
- Measurable acceptance criteria defined

Workflow tips for reliability
- Start zero-shot; add few-shot examples only when the default output falls short.
- Change one constraint per iteration so you can tell which edit fixed which error.
- Validate structured outputs against your acceptance criteria before using them downstream.
Closing note
Effective prompts are engineered, not improvised. The more you iterate, test, and refine, the more predictable your AI integration becomes.