Overview
An AI Agent combines natural-language understanding, voice synthesis, and workflow automation. Each agent is fully configurable through four main tabs inside the Studio:
- Voice & Guidelines – Define how the agent greets callers, which voice it uses, and the system guidelines that steer its behaviour.
- Knowledge Base – Grant the agent access to company knowledge at runtime.
- Actions – Orchestrate API calls before, during, and after a call so the agent can fetch data, carry out tasks, and hand off results.
- Settings – Fine-tune transcription with custom keywords and other preferences.
1 · Voice & Guidelines Tab
| Configuration | Why it matters |
|---|---|
| Name | Clear, descriptive names make it easy to identify the agent in dashboards and analytics. |
| Initial Message | The very first sentence your agent says on answering the phone. You can include variables (e.g. {{firstName}}) extracted from Pre-Call Actions. |
| TTS Voice | Choose any text-to-speech voice that matches your brand personality. |
| Guidelines | System-level instructions written in markdown. Set the agent’s role, tone, and constraints. Use variables to reference data fetched in Pre-Call Actions. |
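To make the variable syntax concrete, here is a minimal sketch of how a templated Initial Message could resolve against Pre-Call variables. It is illustrative only (the Studio performs this substitution for you at runtime), and the variable names and values shown are assumptions.

```python
import re

# Hypothetical values produced by a Pre-Call Action such as FetchContact.
variables = {"firstName": "Dana", "accountStatus": "active"}

def render(template: str, values: dict) -> str:
    """Replace {{name}} placeholders with their Pre-Call variable values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values.get(m.group(1), "")), template)

initial_message = "Hi {{firstName}}, thanks for calling! How can I help you today?"
print(render(initial_message, variables))
# -> Hi Dana, thanks for calling! How can I help you today?
```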
Need to reshape variable values before you drop them into a prompt? Create a Data Transformer and pick it from the variable popper wherever you use {{variables}}.
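As a rough illustration of what a Data Transformer might do, the sketch below reshapes two hypothetical variable values into spoken-friendly text before they are dropped into a prompt. The function names, input formats, and outputs are assumptions, not the platform's built-in transformers.

```python
from datetime import datetime

def speakable_account_status(value: str) -> str:
    """Hypothetical transformer: turn a raw CRM code like 'ACCT_PAST_DUE'
    into a phrase the agent can say naturally."""
    return value.removeprefix("ACCT_").replace("_", " ").lower()

def speakable_date(value: str) -> str:
    """Hypothetical transformer: reshape an ISO date into a spoken-friendly form."""
    return datetime.fromisoformat(value).strftime("%B %d, %Y")

print(speakable_account_status("ACCT_PAST_DUE"))  # -> past due
print(speakable_date("2025-07-01"))               # -> July 01, 2025
```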
2 · Knowledge Base Tab
Link up to two Knowledge Groups that the agent can search during a call to answer callers' queries. The AI automatically retrieves relevant passages at runtime, enabling precise, up-to-date responses without hard-coding content in prompts.
3 · Actions Tab
Actions let the agent talk to your backend systems via HTTPS. They come in three flavours:
3.1 · Pre-Call Actions
- Run before the agent speaks.
- Common use case: FetchContact – pull CRM details and store them in variables like {{firstName}} and {{accountStatus}}.
- Variables created here are available to personalize the Initial Message and Guidelines.
- Variable values persist for the lifetime of the interaction and can be used in During-Call or Post-Call Actions.
You can configure multiple Pre-Call Actions; they execute sequentially in the order they were created, letting you chain API calls and build up context before the conversation starts.
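For orientation, here is a minimal sketch of a backend endpoint that a FetchContact Pre-Call Action could call over HTTPS. The route, the callerPhone request field, and the response fields are assumptions; whatever JSON your endpoint returns is what you map to variables such as {{firstName}} and {{accountStatus}} in the Studio.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory lookup standing in for a real CRM query.
CRM = {"+15551234567": {"firstName": "Dana", "accountStatus": "active"}}

@app.post("/fetch-contact")
def fetch_contact():
    """Return contact details for the caller so the Pre-Call Action can
    store them in variables (field names here are illustrative)."""
    caller = request.get_json(force=True).get("callerPhone", "")
    contact = CRM.get(caller, {"firstName": "there", "accountStatus": "unknown"})
    return jsonify(contact)
```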
3.2 · During-Call Actions
- Triggered on demand when the caller asks for something (e.g. “CheckOrderStatus”).
- Provide comprehensive documentation for each action, including specific trigger conditions and use cases. Detailed context helps the AI model accurately determine when and how to execute the action.
- Define parameters to specify the data requirements for action execution. Each parameter can be configured with:
- Name
- Description (use this to provide format requirements and context to help the AI collect data correctly)
- Type (String, Number, or List)
- Required/Optional flag
- Parameters themselves are optional; configure zero or more as the action needs.
- You can define filler messages that the agent speaks while waiting for the API response using the Say node in the flow builder.
- External API calls can be executed through the flow’s API Request Node.
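As a hedged sketch rather than a prescribed contract, the endpoint below shows the kind of backend a CheckOrderStatus During-Call Action's API Request Node might hit. The orderId field mirrors a parameter the agent collects from the caller; the route and response shape are assumptions to adapt to your own systems.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical order store standing in for a real order-management system.
ORDERS = {"A1234": {"status": "shipped", "eta": "two business days"}}

@app.post("/check-order-status")
def check_order_status():
    """Look up an order by the 'orderId' parameter the agent collected."""
    order_id = request.get_json(force=True).get("orderId", "")
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"found": False})
    return jsonify({"found": True, **order})
```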
3.3 · Post-Call Actions
Executed when the conversation ends, whether the caller hangs up, self-serves successfully, or escalates to a human. Typical tasks include:
- Generating a call summary and disposition
- Saving the transcript to your CRM or data warehouse
- Kicking off downstream workflows (e.g. ticket creation)
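To illustrate the receiving side, here is a minimal sketch of an endpoint a Post-Call Action could call once the call ends. The payload fields (summary, disposition, transcript) and the stubbed integrations are assumptions; substitute whatever your CRM, warehouse, or ticketing system expects.

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/call-completed")
def call_completed():
    """Receive end-of-call data and fan it out to downstream systems."""
    payload = request.get_json(force=True)
    save_transcript(payload.get("transcript", []))   # CRM / data warehouse (stub)
    if payload.get("disposition") == "escalated":
        create_ticket(payload.get("summary", ""))    # downstream workflow (stub)
    return {"ok": True}

def save_transcript(transcript):
    print(f"Stored {len(transcript)} transcript turns")

def create_ticket(summary):
    print(f"Created ticket: {summary}")
```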
4 · Settings Tab
Use this tab to enhance speech-to-text accuracy:
- Transcription Keywords – Add product names, acronyms, or industry-specific terms (e.g. medical jargon) that the transcription engine might otherwise miss.
Autosave & Draft Safety
- Changes to the Name, Initial Message, Guidelines, voice selection/settings, handoff dispositions, transfer messaging, call end message, and advanced settings save automatically within a second or two after you pause typing.
- A toast in the bottom-right (for example “Voice updated”) confirms each successful sync. If something fails, you will see an error toast so you can retry.
Live Testing Console & DTMF Keypad
1. Click Test Your Agent to start the test call.
2. Use the on-screen keypad to send digits (0–9, *, #) while the call is live.
3. Watch the transcript and duration panes to confirm the agent’s behaviour.
DTMF presses behave like caller speech: they barge in to stop filler audio, appear in the transcript, and can trigger flows that expect keypad input.
Import, Export & Duplicate Agents
Duplicate
- Open the actions menu (...) on an agent card and choose Duplicate to create a Copy of {Agent Name} with the same actions, flows, variables, dispositions, linked knowledge groups, and data transformers.
- Rename the clone and adjust it without touching the original agent.
Export
- From the same menu select Export to download a JSON bundle that includes the agent, actions, flow definitions, knowledge links, dispositions, data transformers, and usage mappings.
- Filenames include the export timestamp so you can keep versioned backups.
Import
- Click Import above the table and upload a bundle that was exported from SquawkVoice Studio.
- The importer recreates flows, reconnects knowledge groups (by name), restores data transformers (deduplicating when possible), and reports progress via toast notifications.
Best Practices
- Start simple. Launch with a lightweight set of actions and expand iteratively.
- Keep guidelines concise. The shorter the system prompt, the faster the agent responds.
- Monitor analytics. Use the built-in dashboard to track resolution rates, average call duration, and customer sentiment.
- Iterate on feedback. Real-world conversations reveal edge cases you can cover with new actions or guideline tweaks.