AI & Token Efficiency

Prompts engineered to get perfect results in one shot — fewer tokens, less cost

3 free prompts · 5 premium prompts
One-Shot Code Request
Get production-ready code first try — zero back-and-forth
coding · one-shot · token efficiency
You are a [LANGUAGE] expert. Write [WHAT YOU NEED] with zero fluff.

Requirements (complete list — make no assumptions):
- Input: [DESCRIBE INPUT]
- Output: [DESCRIBE EXACT OUTPUT FORMAT]
- Constraints: [performance / security / compatibility requirements]
- Error handling: [how to handle edge cases]
- Do NOT include: explanations, comments unless critical, alternative approaches

Code only. If clarification is needed, ask exactly one question, then stop.

[PASTE ANY RELEVANT EXISTING CODE OR CONTEXT HERE]
Minimal-Token Debug Request
Pinpoint bugs without wasting tokens on context AI doesn't need
debugging · minimal · token efficiency
Debug this. Give me: fix + one-line reason. Nothing else unless I ask.

Error: [EXACT ERROR MESSAGE — copy it, don't paraphrase]
File: [FILENAME:LINE NUMBER]

Broken code (only the relevant part, max 50 lines):
```
[PASTE CODE]
```

Expected: [ONE SENTENCE — what should happen]
Actual: [ONE SENTENCE — what is happening instead]
Already tried: [BULLET LIST ONLY]
Batch Tasks in One Message
Combine 5 tasks into 1 prompt — pay for shared context once, not five times
batching · efficiency · cost saving
Complete all tasks below in one response. Number each output to match its task.

TASK 1: [DESCRIBE TASK 1]
Format: [exactly how you want the output]
---
TASK 2: [DESCRIBE TASK 2]
Format: [exactly how you want the output]
---
TASK 3: [DESCRIBE TASK 3]
Format: [exactly how you want the output]
---
TASK 4: [DESCRIBE TASK 4]
Format: [exactly how you want the output]
---
TASK 5: [DESCRIBE TASK 5]
Format: [exactly how you want the output]

Rules:
- No preamble before each answer
- No summary after each answer
- Use "---" as separator between outputs
- If a task is unclear, complete your best interpretation and flag it in [brackets]
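The separator rule above is what makes a batched reply machine-readable. A minimal Python sketch of splitting such a reply back into per-task outputs (the sample reply text is illustrative, not real model output):

```python
# Split a batched model reply on the "---" separator the prompt enforces.
def split_batched_reply(reply: str) -> list[str]:
    """Return one trimmed string per task, in the order they appeared."""
    parts = [part.strip() for part in reply.split("---")]
    return [part for part in parts if part]  # drop empty fragments

reply = """1. First summary
---
2. def add(a, b): return a + b
---
3. [unclear: interpreted as a haiku]"""

outputs = split_batched_reply(reply)  # three numbered outputs
```

If the model ignores the separator rule, the split silently returns one big chunk, so a length check against the expected task count is a cheap sanity test.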
Write a Perfect System Prompt
A system prompt that makes every reply precise and cheap
PREMIUM
system prompt · AI setup · prompt engineering
Write a system prompt for an AI assistant specialising in [ROLE / DOMAIN].

Use case: [what tasks will it perform daily]
Users: [who will interact with it]

The system prompt MUST enforce:
1. **Response format** — exact structure for every reply (define it precisely)
2. **Tone** — define it with 3 specific adjectives and one example sentence
3. **Always include** — mandatory elements in every response
4. **Never include** — no disclaimers / no "as an AI" / no unsolicited alternatives / no padding
5. **Ambiguity rule** — exactly what to do when the request is unclear
6. **Length calibration** — when to be brief (1-3 lines) vs detailed (structured sections)
7. **Output efficiency** — respond only to what was asked; do not add context the user didn't request

Target length: under 300 tokens. Every word must earn its place.

After the system prompt: rate it 1-10 for token efficiency and explain the score.
Exact JSON Output Every Time
Get structured data in your exact schema — no reformatting ever
PREMIUM
JSON · structured output · extraction
Extract/generate data from the source below. Output ONLY valid JSON — no markdown fences, no explanation, no preamble.

Source: [PASTE TEXT, DOCUMENT, OR DESCRIPTION TO EXTRACT FROM]

Required schema (fill in your field names and types):
{
  "[field_1]": "string",
  "[field_2]": "number",
  "[field_3]": ["array", "of", "strings"],
  "[nested]": { "[sub_field]": "boolean" }
}

Rules:
- null for missing values (never omit a field)
- Arrays: [] if empty, never null
- Strings: trimmed, no extra whitespace
- Dates: ISO 8601 (YYYY-MM-DD)
- Numbers: actual numbers, not strings
- If a field is ambiguous, add "_confidence": 0.0-1.0 beside it
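Rules like these are only worth writing if you enforce them on your side too. A minimal Python sketch that checks two of them, every field present and arrays never null; the field names here are illustrative, not part of the prompt:

```python
import json

# Hypothetical schema for illustration: field names are made up.
SCHEMA_FIELDS = ("title", "count", "tags")

def validate(raw: str) -> dict:
    """Parse a model reply and enforce two schema rules from the prompt."""
    data = json.loads(raw)  # raises if the model added prose or fences
    for field in SCHEMA_FIELDS:
        if field not in data:
            raise ValueError(f"missing field: {field}")
    if data["tags"] is None:
        raise ValueError("arrays must be [] when empty, never null")
    return data

record = validate('{"title": "Q3 report", "count": 12, "tags": []}')
```

Because `json.loads` fails loudly on any preamble, this doubles as a check that the "no markdown fences, no explanation" rule was obeyed.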
Compress Context to Fit Your Window
Shrink documentation or notes by 80% — keeping all the important parts
PREMIUM
context compression · token saving · long documents
Compress the following content into a dense context block I can paste into future prompts. Target: under [TOKEN LIMIT: 500 / 1000 / 2000] tokens.

Content to compress:
[PASTE: documentation / codebase notes / long conversation / research / meeting notes]

Compression rules:
1. KEEP: specific names, numbers, decisions, error messages, constraints, code signatures
2. REMOVE: pleasantries, examples that restate the rule, motivational framing, redundant definitions
3. MERGE: similar points into one line
4. FORMAT: terse bullets + inline code snippets; short headers for sections

Start the output with this header:
[COMPRESSED CONTEXT — paste this above your question]
Then the compressed content.

After: state original word count → compressed word count → % reduction.
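The final reporting step is simple arithmetic you can verify yourself. A minimal Python sketch, using word counts as a rough stand-in for token counts:

```python
# Compute the "original -> compressed -> % reduction" figures the prompt
# asks the model to report. Word counts approximate token counts here.
def reduction(original: str, compressed: str) -> tuple[int, int, float]:
    orig, comp = len(original.split()), len(compressed.split())
    return orig, comp, round(100 * (1 - comp / orig), 1)

counts = reduction("a b c d e f g h i j", "a b")  # -> (10, 2, 80.0)
```

Running the model's claimed figures through a check like this catches the common failure where the "compressed" block is barely shorter than the original.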
Build a Prompt Chain
Break complex tasks into linked steps — cheaper and more accurate than one giant prompt
PREMIUM
prompt chaining · complex tasks · AI engineering
Design a prompt chain to accomplish: [DESCRIBE YOUR END GOAL]

A prompt chain breaks one complex task into 3-6 focused steps. Each step's output feeds the next. This is 40-60% cheaper than one massive prompt and produces far better results.

For each step, define:

STEP [N]:
- Purpose: [what this step does]
- Input: [what it receives — from user or previous step output]
- Instruction: [the exact prompt for this step — one clear action only]
- Output: [exact format — what gets passed forward]
- Token estimate: [rough]

Requirements:
- Max [NUMBER] steps
- Flag steps that can run in parallel
- Mark the most expensive step — suggest where to save tokens there
- Each step must be useful even if a later step fails

End with:
- Total estimated token cost for the chain
- vs estimated cost of doing it in one prompt
- Why the chain produces better results here

Task: [YOUR COMPLEX TASK IN DETAIL]
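The "each step's output feeds the next" structure is a plain loop once the chain is designed. A minimal Python sketch, where `call_model` is a stand-in stub for your actual API client, not a real library function:

```python
# Stub standing in for a real model API call, so the chain logic is runnable.
def call_model(prompt: str) -> str:
    return f"<output of: {prompt[:30]}>"

def run_chain(task: str, steps: list[str]) -> str:
    """Run each step's instruction over the previous step's output."""
    context = task
    for instruction in steps:
        context = call_model(f"{instruction}\n\nInput:\n{context}")
    return context

result = run_chain(
    "summarise the incident report",
    ["Extract the key facts as bullets",
     "Group the bullets by theme",
     "Write a 5-line executive summary"],
)
```

In a real client you would also log each intermediate output, which is what makes steps individually useful when a later one fails.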
Audit & Slash Your Prompt Tokens
Cut any prompt by 50%+ without losing output quality
PREMIUM
token audit · cost reduction · prompt optimization
Audit this prompt and rewrite it to use the fewest tokens possible without degrading the output.

Original prompt:
[PASTE YOUR PROMPT]

Analyse each section for:
1. **Already-known context** — what does the model know without being told?
2. **Verbose phrasing** — where 20 words replace 4
3. **Redundant constraints** — rules that are implied by other rules
4. **Unnecessary examples** — examples that add tokens but not understanding
5. **Output padding triggers** — phrases that make the model write filler

Output:
1. Optimised prompt (target: 50%+ shorter)
2. Token estimate: [ORIGINAL TOKENS] → [OPTIMISED TOKENS] → [MONEY SAVED per 1000 calls]
3. Annotated cut list: what was removed and why it's safe to remove
4. One thing you should NEVER cut from this type of prompt

Efficiency score for original: [1-10] with explanation.

Note on real savings: at $15/M input tokens (Claude Opus), cutting 200 prompt tokens per call saves $30 per 10,000 calls. At scale, this matters.
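The savings arithmetic is worth having as a one-liner you can rerun with your own model's rates. A minimal Python sketch, assuming Claude Opus input pricing of $15 per million tokens:

```python
# Dollar savings from trimming prompt (input) tokens across many calls.
def savings(tokens_cut_per_call: int, calls: int, usd_per_million: float) -> float:
    return tokens_cut_per_call * calls * usd_per_million / 1_000_000

saved = savings(200, 10_000, 15.0)  # -> 30.0 USD per 10,000 calls
```

Swap in your provider's actual per-million input rate; output-token savings from shorter replies compound on top of this.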