Prompt Engineering Techniques
Introduction
Prompt engineering is the art and science of crafting inputs that guide Large Language Models (LLMs) to produce desired outputs. While anyone can write a prompt, effective prompt engineering requires an understanding of LLM behavior, configuration tuning, and iterative testing.
Based on Google's 2024 whitepaper, this guide breaks down strategies, parameters, and real-world examples to help you get the most from any LLM.
LLM Configuration Essentials
| Setting | Description | Tips |
|---|---|---|
| Temperature | Controls randomness in the output | Use 0 for deterministic answers, 0.9+ for creativity |
| Top-K | Sample only from the K most likely tokens | Lower K for focus, higher K for diversity |
| Top-P | Sample from the top tokens within cumulative probability P | 0.9–0.95 is a good balance |
| Token limit | Controls the maximum length of the output | Impacts cost and clarity |
Recommended defaults: temperature=0.2, top-P=0.95, top-K=30.
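These settings are normally passed through the model provider's SDK. Below is a minimal sketch assuming the google-generativeai Python SDK; the model name and API key are placeholders, and parameter names vary slightly between providers.

```python
import google.generativeai as genai

# Placeholder API key and illustrative model name.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    'Translate this to French: "Where is the nearest restaurant?"',
    generation_config=genai.types.GenerationConfig(
        temperature=0.2,       # low randomness for a mostly deterministic answer
        top_p=0.95,            # sample from tokens covering 95% cumulative probability
        top_k=30,              # restrict sampling to the 30 most likely tokens
        max_output_tokens=256, # token limit: caps output length and cost
    ),
)
print(response.text)
```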
Prompting Techniques (with Examples)
Zero-Shot Prompting
Use: Simple tasks where the model can generalize well.
Prompt:
Translate this to French: "Where is the nearest restaurant?"
One-Shot Prompting
Use: When the model needs guidance on format or tone.
Prompt:
Example:
Input: "What is the capital of France?"
Output: "The capital of France is Paris."
Now, answer this:
Input: "What is the capital of Japan?"
Few-Shot Prompting
Use: Tasks with variability; adds consistency by showing patterns.
Prompt:
Q: What is 5 + 3?
A: 8
Q: What is 12 - 4?
A: 8
Q: What is 9 + 6?
A:
System Prompting
Use: Guide output format, tone, or persona via system-level instruction.
Prompt:
You are a JSON API assistant. Always respond in valid JSON format.
User: "Tell me the current weather in London"
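In chat-style APIs, the system prompt is usually supplied separately from the user message. A minimal sketch, again assuming the google-generativeai SDK (the system_instruction parameter is available in recent SDK versions; the model name is illustrative):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# System-level instruction is kept separate from the user's message.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a JSON API assistant. Always respond in valid JSON format.",
)

response = model.generate_content("Tell me the current weather in London")
print(response.text)  # expected to be a JSON string, per the system instruction
```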
Role Prompting
Use: Assign a personality or function to steer response style.
Prompt:
You are a helpful personal finance advisor.
What's a good way to save for retirement in your 30s?
Contextual Prompting
Use: Provide real data or background to ground the answer.
Prompt:
Here's the company's 2023 HR policy: [insert excerpt]
Based on this policy, can an employee carry over unused vacation days to next year?
Advanced Prompting Strategies
Step-Back Prompting
Prompt:
Let's first reflect on the broader question:
"What factors should we consider before choosing a new CRM platform?"
Now, given those, which platform is best for a mid-sized SaaS startup?
Chain-of-Thought (CoT)
Prompt:
Q: Jane has 5 apples. She buys 7 more, then gives 3 to her friend. How many apples does she have now?
A: Let's think step by step...
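The reasoning trace the prompt is designed to elicit is: 5 + 7 = 12 apples, then 12 - 3 = 9, so the final answer is 9.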
Self-Consistency
Approach:
- Run the same prompt multiple times.
- Use majority voting to find the most consistent answer.
Prompt (run 3x):
What's the next number in this sequence: 2, 4, 6, 8, ?
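A minimal sketch of the sample-and-vote loop, assuming a hypothetical generate() helper that calls your model with a non-zero temperature:

```python
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical helper: call your LLM with temperature > 0 and return its answer."""
    raise NotImplementedError

def self_consistent_answer(prompt: str, samples: int = 3) -> str:
    # Sample several independent completions, then keep the most common answer.
    answers = [generate(prompt).strip() for _ in range(samples)]
    most_common, _count = Counter(answers).most_common(1)[0]
    return most_common

# Usage:
# self_consistent_answer("What's the next number in this sequence: 2, 4, 6, 8, ?")
```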
Tree-of-Thought (ToT)
Use: Let the model explore multiple branches of reasoning.
Prompt:
You are solving a puzzle. First, generate 3 different strategies to solve it. Then evaluate which one is most effective and explain why.
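One way to operationalize this is a simple generate-then-evaluate loop over candidate strategies. The sketch below reuses the same hypothetical generate() helper; the prompts and branch count are illustrative, not a fixed recipe.

```python
def generate(prompt: str) -> str:
    """Hypothetical helper: one LLM call returning plain text."""
    raise NotImplementedError

def tree_of_thought(problem: str, branches: int = 3) -> str:
    # 1. Branch: propose several distinct strategies.
    strategies = [
        generate(f"Propose strategy #{i + 1} for solving this puzzle:\n{problem}")
        for i in range(branches)
    ]
    # 2. Evaluate: ask the model to judge the candidates and solve with the best one.
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(strategies))
    return generate(
        f"Puzzle:\n{problem}\n\nCandidate strategies:\n{numbered}\n\n"
        "Which strategy is most effective? Explain why, then solve the puzzle with it."
    )
```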
ReAct (Reason + Act)
Use: Combine reasoning with tool use.
Prompt:
User: What's the weather in Tokyo right now?
Assistant Thought: I need to look up the weather using the weather API.
[Call: GET https://api.weather.com/tokyo]
Action: Retrieve weather info
Observation: It's 22°C and sunny
Answer: The weather in Tokyo is currently 22Β°C with clear skies.
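The thought/action/observation cycle can be driven by a small loop around the model and a tool registry. The sketch below is purely illustrative: the get_weather tool and the "Action:"/"Answer:" parsing convention are assumptions, not a real API.

```python
def generate(prompt: str) -> str:
    """Hypothetical helper: one LLM call returning plain text."""
    raise NotImplementedError

def get_weather(city: str) -> str:
    """Hypothetical tool; in practice this would call a real weather API."""
    raise NotImplementedError

TOOLS = {"get_weather": get_weather}

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask the model for its next step: either a tool call or a final answer.
        step = generate(
            transcript
            + "Respond with either 'Action: <tool> <input>' or 'Answer: <final answer>'."
        )
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action:"):
            _, tool_name, tool_input = step.split(maxsplit=2)
            observation = TOOLS[tool_name](tool_input)
            transcript += f"Observation: {observation}\n"  # feed the result back in
    return "No answer within the step limit."
```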
Prompting for Code Tasks
Prompt engineering also works well with LLMs such as Gemini or Claude for code tasks like:
- Writing Bash scripts
- Explaining code
- Refactoring
- Translation (e.g., Python to JavaScript)
Prompt Example:
Convert the following Python list comprehension to a standard for loop:
[ x**2 for x in range(10) if x % 2 == 0 ]
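For reference, the standard for loop the model is expected to produce is equivalent to:

```python
# Equivalent standard for loop for: [x**2 for x in range(10) if x % 2 == 0]
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x**2)
# squares == [0, 4, 16, 36, 64]
```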
Best Practices Summary
- Be clear, concise, and direct
- Use examples where helpful
- Keep the format structured
- Test and iterate
- Abstract common prompts with variables or templates (see the sketch after this list)
- Stay aligned with model safety and bias guidelines
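For the templating point above, a minimal sketch using plain Python string formatting; the template text and variable names are illustrative:

```python
# Reusable prompt template; fill in the variables per request.
TRANSLATION_PROMPT = 'Translate this to {language}: "{text}"'

prompt = TRANSLATION_PROMPT.format(
    language="French",
    text="Where is the nearest restaurant?",
)
# prompt == 'Translate this to French: "Where is the nearest restaurant?"'
```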
Final Thoughts
Prompt engineering is your superpower when working with LLMs. It's part design, part trial-and-error, and part understanding the model's training behavior. With the right strategy, even complex workflows become simple, reusable, and reliable.