Skip to main content
Even experienced HoopAI users make prompt mistakes that silently degrade their AI agent’s performance. This guide covers the 10 most common mistakes we see, with before-and-after examples you can apply to your own prompts immediately. If you are new to prompt engineering, start with Prompt engineering 101 to learn the foundational framework before diving into these fixes.

Mistake 1: Prompt too vague

The most common mistake is writing a prompt that is too general. Vague prompts produce vague responses.
You are a helpful assistant for our business. Answer customer
questions and help them out.
Problem: The agent does not know your business name, what you sell, your tone, or what “help them out” means. It will generate generic, off-brand responses.

Mistake 2: Prompt too long and unfocused

The opposite extreme — a prompt that tries to cover every possible scenario in exhaustive detail. Overly long prompts dilute the most important instructions and can introduce contradictions.
[2,000+ words covering every possible scenario, repeating
instructions in different ways, including detailed product
specifications, full FAQ lists, company history, employee
bios, and holiday schedules]
Problem: The agent struggles to prioritize. Critical rules get buried. Contradictory instructions emerge.

Mistake 3: No escalation rules

Without clear escalation rules, your agent will try to handle every situation on its own — including ones it should not.
Answer all customer questions about our products and services.
Be as helpful as possible.
Problem: When a customer is furious, has a legal complaint, or asks something the bot cannot answer, it keeps trying instead of handing off. This makes things worse.

Mistake 4: No personality defined

An agent without a defined personality defaults to a generic, robotic tone that feels impersonal.
You are a customer service bot. Answer questions accurately.
Problem: Responses feel cold and mechanical. Customers disengage quickly.

Mistake 5: Contradictory instructions

When prompts grow organically over time, contradictions creep in. The agent receives conflicting instructions and behaves unpredictably.
Keep responses brief — one sentence maximum.
...
[later in the prompt]
...
Provide detailed, thorough answers to every question. Include
all relevant information the customer might need.
Problem: The agent cannot follow both rules simultaneously. It picks one inconsistently, leading to erratic behavior.

Mistake 6: Missing edge cases

Your prompt handles the happy path but falls apart when something unexpected happens.
When a customer wants to book an appointment, collect their
name, preferred date, and service type, then confirm.
Problem: What happens when the requested date is unavailable? When the customer gives an invalid date? When they change their mind? When they want to book for someone else?

Mistake 7: Not using the knowledge base

Some users try to pack all their business information directly into the prompt. This leads to overly long prompts and makes information hard to update.
Our services and pricing:
- Basic cleaning: $150
- Deep cleaning: $275
- Crown: $800-$1,200
- Root canal: $600-$900
- Whitening: $350
[... 50 more items ...]

Our hours:
Monday: 8 AM - 5 PM
Tuesday: 8 AM - 5 PM
[... etc ...]

Our insurance partners:
[... long list ...]
Problem: The prompt becomes massive. Updating a single price requires editing the entire prompt. The agent’s behavioral instructions get buried.

Mistake 8: Overly rigid responses

Scripting every response word-for-word makes the agent sound like a phone tree, not a conversational assistant.
When the customer says hello, respond with exactly:
"Hello! Welcome to ABC Company. How may I assist you today?
I can help with appointments, billing, or general questions."

When the customer asks about hours, respond with exactly:
"Our hours are Monday through Friday, 9 AM to 5 PM, and
Saturday, 10 AM to 2 PM. Is there anything else I can help
you with?"
Problem: Every interaction sounds scripted and robotic. The agent cannot adapt to conversational context or follow up naturally.

Mistake 9: No testing before going live

Launching a prompt without testing leads to embarrassing or damaging interactions with real customers.
[Writes prompt] -> [Sets bot to Auto-Pilot] -> [Goes live]
Problem: The first people to test your prompt are your real customers. Mistakes are discovered the hard way — through bad reviews, lost leads, or confused customers.

Mistake 10: Ignoring channel differences

A prompt that works well for web chat may perform poorly over SMS or voice. Each channel has different constraints and user expectations.
Provide detailed, thorough responses to every question. Include
links to relevant pages on our website. Use bullet points and
formatting to make responses easy to scan.
Problem: This works for web chat but is terrible for SMS (messages get split, links are hard to click, bullet points render poorly) and impossible for voice (cannot share links or use formatting).

Troubleshooting checklist

When your AI agent is not performing well, work through this checklist to identify the issue:
  • Check: Does the prompt include knowledge boundaries? (“Only answer using your knowledge base.”)
  • Check: Is the relevant information in the Knowledge Base?
  • Check: Are there conflicting facts between the prompt and the knowledge base?
  • Fix: Add an explicit rule: “If you do not know the answer, say so. Never guess.”
  • Check: Does the prompt define a personality, name, and tone?
  • Check: Are responses scripted word-for-word instead of guided?
  • Check: Are there example conversations showing the desired style?
  • Fix: Add 2-3 example conversations that demonstrate the ideal tone.
  • Check: Are escalation triggers clearly defined?
  • Check: Is the handoff action configured in bot settings?
  • Check: Does the agent have a script for the transfer message?
  • Fix: Add specific escalation triggers and test each one.
  • Check: Does the prompt include response length guidelines?
  • Check: Are examples showing concise responses?
  • Check: Is the prompt itself too long (possibly causing the agent to mirror verbosity)?
  • Fix: Add a rule like “Keep responses to 1-2 sentences unless more detail is needed.”
  • Check: Are there channel-specific guidelines in the prompt?
  • Check: Is the agent trying to share links, long lists, or formatted content via SMS?
  • Fix: Add SMS-specific rules for brevity and formatting.

Next steps

Last modified on March 5, 2026