Agent Prompting 101

Why prompts matter in AI Agents

Prompting is the primary controller for how a voice agent behaves. It defines the agent’s role, tone, boundaries, and how it handles real-world situations like confusion, edge cases, and tool usage. When you’re aiming for stable results at scale, the difference is almost always in the prompt: clearer instructions, better examples, and tighter constraints.

FlowbotAI’s realtime voice engine uses a modern Llama-based language model by default. If you’re new to Llama-style instruction tuning, Meta’s Llama prompting resources are a helpful reference point while you build your first drafts.

Info
No hidden “default prompt” (important)
FlowbotAI does not silently append a hidden system prompt behind the scenes. That means you should provide a complete prompt every time, including context, rules, and constraints you want the agent to follow. This keeps behavior transparent, auditable, and fully under your control.


Write like text. Deliver like voice.

Even though your agent speaks over voice, the underlying model is still a text model at its core. So write prompts the same way you would for a high-quality text assistant, then layer in voice-specific guidance: shorter responses, no visual formatting, and no action-based roleplay.

A simple “voice-first” header like the one below works well for most use cases.

Notes
Example: Voice-first system header
You are [Name], a friendly AI [customer service agent / helper / etc].
You're interacting with the user over voice, so speak casually.
Keep your responses short and to the point, much like someone would in dialogue.
Since this is a voice conversation, do not use lists, bullets, emojis, or other things that do not translate to voice. In addition, do not use stage directions or otherwise engage in action-based roleplay (e.g., "(pauses)", "*laughs*").


Four habits that keep prompts reliable

If you’re building prompts for production (not just demos), these habits reduce variability and make agents easier to tune:
  1. Start small, then add structure. Begin with a few paragraphs describing the agent’s job, tone, and boundaries. Test a few real conversations, identify where it breaks, then add rules or examples only where needed.
  2. Be explicit and literal. Llama-style models tend to follow instructions very literally. If you want a specific behavior, state it directly and unambiguously. If you want step-by-step flows, break them into clear, concise instructions.
  3. Teach with examples. After describing the desired behavior, include a couple of realistic examples (good and bad). Models learn patterns quickly when you show what “right” looks like.
  4. Iterate using real failure modes. Prompting is an iterative process. Review transcripts from test calls, watch for repeat mistakes, and update the prompt to handle those cases—especially edge conditions and tool usage.


Copy-ready prompt patterns for common voice challenges

Below are practical patterns you can paste into your FlowbotAI agent prompt and adapt for your use case. They’re designed to improve call quality and reduce “surprise behavior” in real deployments.

Tool use: make tool calling predictable

Tools are how a FlowbotAI agent interacts with external systems (for example: looking up records, creating tickets, or retrieving knowledge). To get reliable tool usage, your prompt should clearly explain when a tool must be used and what the agent should do with the tool results.
  1. Write strong tool definitions: The full tool definition is visible to the model. Use clear names and descriptions, explain what each input means, and describe the intended outcome. Vague tool descriptions lead to vague tool usage.
  2. Add “when to use it” context: Give the agent explicit guidance on the trigger conditions for each tool. If a tool is required before replying, say so. If the tool can fail or return no results, tell the agent how to respond and what clarifying question to ask next.
Notes
Example: Clarifying when a tool is required
You have access to an address book that contains personnel information.
If someone asks for information for a particular person, you MUST use the lookUpAddressBook tool to find that information before replying.
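For illustration, the same guidance can be captured in the tool definition itself. The sketch below expresses the lookUpAddressBook tool from the example above as a Python dict in a JSON-schema style; the exact schema format FlowbotAI expects may differ, so treat this as a shape to aim for rather than a literal API.

```python
# A sketch of a descriptive tool definition in a JSON-schema style.
# Note how the description states both what the tool does and when
# the agent must use it -- vague descriptions lead to vague usage.
look_up_address_book = {
    "name": "lookUpAddressBook",
    "description": (
        "Look up a person's contact details in the company address book. "
        "You MUST call this before answering any question about a specific person."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "person_name": {
                "type": "string",
                "description": "Full name of the person to look up, as spoken by the caller.",
            },
        },
        "required": ["person_name"],
    },
}
```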


Numbers: optimize output for text-to-speech

Text-to-speech engines can stumble on long numbers (account IDs, codes, phone numbers). A simple fix is to ask the agent to speak digits individually in a consistent, voice-friendly format.

Notes
Example: Speaking numbers clearly
Output account numbers, codes, or phone numbers as individual digits, separated by hyphens (e.g., 1234 -> "1-2-3-4"). For decimals, say "point" and then each digit (e.g., 3.14 -> "three point one four").
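If you also pre-process text on your side of the pipeline (for example, before injecting a record into the prompt), the same rule translates directly into code. A minimal sketch; these helper names are our own, not part of any FlowbotAI API:

```python
def spell_digits(number: str) -> str:
    """Render a digit string as hyphen-separated digits, e.g. "1234" -> "1-2-3-4"."""
    return "-".join(ch for ch in number if ch.isdigit())

def spell_decimal(number: str) -> str:
    """Render a decimal as spoken words, e.g. "3.14" -> "three point one four"."""
    words = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
             "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}
    return " ".join(words[ch] if ch.isdigit() else "point" for ch in number)
```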


Dates and times: speak them consistently

Dates and times benefit from explicit rules so they are spoken clearly and consistently. This is especially important for appointments, billing cycles, and time-sensitive support flows.

Notes
Example: Reading out dates and times
Output dates as their spoken components (e.g., 12/25/2022 -> "December twenty-fifth, twenty twenty-two"). For times, output "10:00 AM" as "10 AM". Read years naturally (e.g., 2024 -> "twenty twenty-four").
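As with numbers, you can normalize dates and times in code before they ever reach the model or the TTS engine. The sketch below covers two of the rules above (expanding the month name and dropping ":00" from on-the-hour times); full number-to-words handling would need more work or a dedicated library:

```python
import re
from datetime import datetime

def speak_month_day(date_str: str) -> str:
    """Expand an MM/DD/YYYY date to "Month D YYYY", e.g. "12/25/2022" -> "December 25 2022"."""
    d = datetime.strptime(date_str, "%m/%d/%Y")
    return f"{d.strftime('%B')} {d.day} {d.year}"

def normalize_time(time_str: str) -> str:
    """Drop ":00" from on-the-hour times, e.g. "10:00 AM" -> "10 AM"."""
    return re.sub(r":00\b", "", time_str)
```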


Staying on-task: reduce jailbreak attempts

Some callers will try to push the agent outside its intended scope. No prompt can prevent every attempt, but you can reduce risk by making the agent’s job and boundaries explicit and by instructing it to politely redirect.

Notes
Example: Polite refusal + redirect
Your only job is to [primary job of your agent]. If someone asks you a question that is not related to [the thing you're asking the model to do], politely decline and redirect the conversation back to the task at hand.


Natural pacing: add short pauses when it helps

If you’d like the agent to sound slower and easier to follow, you can ask it to inject brief pauses. A lightweight technique that works well is to use ellipses between thoughts, especially when the topic is complex.

Notes
Example: Speaking with pauses
You want to speak slowly and clearly, so you must inject pauses between sentences.
Do this by emitting "..." at the end of a sentence but before any final punctuation (e.g., "Wow, that's really interesting... can you tell me a bit more about that...?").
You should do this more when the topic is complex or requires special attention.


Step-by-step support: guide callers one action at a time

In support and troubleshooting scenarios, callers often do better when instructions are delivered one step at a time. You can teach this behavior by giving the agent an explicit example of the desired interaction pattern.

Notes
Example: User asks for help changing their password
- You will call the "searchArticle" tool
- Response from tool: {"content": "1. Click \"Forgot Password\" on the login screen 2. Enter your email address and click \"Submit\" 3. Check your email for the reset link 4. Click the link and enter your new password 5. Log in with your new password"}
- You will then use this information and proceed step-by-step with the user like this:
  * agent: "There are a few steps we need to go through."
  * agent: "The first step is: click on Forgot Password on the login screen. Let me know when you're there."
  * user: "OK, done."
  * agent: "Great. Next, enter your email address and click Submit."
  * user: "Got it."
  * agent: "Now check your email for the reset link."
  * user: "Uh huh."
- Repeat in this manner until you complete the entire process.
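If your tool returns the whole procedure as one string, as in the example above, you can also split it into individual steps in code before handing it to the agent, rather than relying on the model to segment it. A minimal sketch; the regex assumes steps are prefixed with "1. ", "2. ", and so on:

```python
import re

# Example payload in the same shape as the searchArticle response above.
content = ('1. Click "Forgot Password" on the login screen '
           '2. Enter your email address and click "Submit" '
           '3. Check your email for the reset link')

# Split on "N. " step markers and drop the empty leading chunk.
steps = [s.strip() for s in re.split(r"\s*\d+\.\s+", content) if s.strip()]
```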


Quick production checklist

  1. State the agent’s job in one sentence (what it does and does not do).
  2. Define the voice style: concise, conversational, no visual formatting, no roleplay actions.
  3. List hard rules (privacy, safety, compliance) in plain language.
  4. Add tool-use rules: when to call tools and how to summarize tool results to callers.
  5. Include at least 2-3 realistic examples that match the calls you expect to receive.


    • Related Articles

    • Agent Reaction After Tool Calls
    • Knowledge (RAG) Overview
    • Noise & VAD Overview
    • Zapier Workflow Integration
    • Explore Built-in Tools