# Conversational agents solution blueprint

Create your digital workforce by integrating the Twilio platform with your conversational AI agents. This workforce can act on your behalf across any channel, keeping every digital interaction relevant, consistent, and personalized.

Twilio Agentic Connect (TAC) serves as the intelligent middleware that bridges your preferred AI agent with Twilio's global communications fabric. You maintain full control over your business logic while Twilio handles the complex communication plumbing. This blueprint provides programmable building blocks for orchestration, memory, and intelligence so you can build production-ready agents.

## What you can build

Twilio conversational agents support autonomous and semi-autonomous workflows that handle customer requests from initiation to resolution.

* **Multi-Channel Support Agents**: Infuse your AI agents with conversational context from customer engagements across Voice, SMS, and WhatsApp. This prevents context resets when a customer switches channels.
* **Context-Aware Personalization**: Enrich AI agents with customer memory and enterprise knowledge from Memory. This allows agents to recognize returning customers, recall preferences, and ground answers in actual company policies.
* **Intelligent Self-Service**: Automate routine tasks like scheduling, password resets, and triage. By using TAC's universal tool system, your agent can call external functions to resolve issues directly within the conversation.
* **Seamless Handoffs**: Empower AI agents to escalate to human agents with full context preservation. This passes the complete history and profile data to the human agent.
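The "universal tool system" mentioned above follows the tool-calling pattern common to most LLM providers: each capability is described by a JSON-Schema-style definition the model can see, plus a handler the middleware invokes. The sketch below illustrates that pattern in plain Python; the tool name, schema fields, and registry are illustrative assumptions, not the actual TAC API.

```python
from typing import Any, Callable

# Tool definition in the JSON-Schema style most LLM tool-calling APIs accept.
# The "schedule_callback" tool is a hypothetical example, not a TAC built-in.
SCHEDULE_CALLBACK_TOOL = {
    "name": "schedule_callback",
    "description": "Schedule a phone callback for the customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "phone": {"type": "string", "description": "E.164 phone number"},
            "time": {"type": "string", "description": "ISO 8601 timestamp"},
        },
        "required": ["phone", "time"],
    },
}

# Simple registry mapping tool names to Python handlers.
TOOL_HANDLERS: dict[str, Callable[..., Any]] = {}

def register_tool(schema: dict, handler: Callable[..., Any]) -> None:
    """Make a tool available for the LLM to call by name."""
    TOOL_HANDLERS[schema["name"]] = handler

def dispatch_tool_call(name: str, arguments: dict) -> Any:
    """Route a tool call emitted by the LLM to its registered handler."""
    return TOOL_HANDLERS[name](**arguments)

# Example handler: in production this would call a real scheduling service.
def schedule_callback(phone: str, time: str) -> dict:
    return {"status": "scheduled", "phone": phone, "time": time}

register_tool(SCHEDULE_CALLBACK_TOOL, schedule_callback)
result = dispatch_tool_call(
    "schedule_callback",
    {"phone": "+15551234567", "time": "2025-01-01T10:00:00Z"},
)
```

Keeping the schema and handler together lets the same tool definition be surfaced to any LLM provider while the middleware owns execution, which is the essence of a universal tool system.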

## Twilio components

Build out your production-ready conversational agent using four native Twilio components:

* **Twilio Agentic Connect (TAC)**: A Python and TypeScript SDK that serves as the middleware layer. It manages the conversation lifecycle, handles WebSocket protocols for voice, and provides universal tool definitions for LLMs.
* **Conversations** (Conversation orchestrator): The omnichannel backbone that unifies CPaaS events into identity-aware conversations. It handles *passive hydration*, converting existing traffic into structured records.
* **Memory** (Profiles, memory, and knowledge): The contextual substrate providing durable profiles and semantic search over past conversations and business documents.
* **Conversation Intelligence**: The intelligence layer that runs real-time *language operators* and generates actionable insights and signals from every conversation. Conversation Intelligence includes conversation insights, which aggregates performance data, outcomes, and trends across all agent interactions.

### How it works

In this architecture, your AI agent connects to the Twilio platform through TAC: Conversations orchestrates the flow while Conversation Intelligence analyzes it. The reasoning itself runs on your infrastructure, outside the Twilio platform.

```mermaid
flowchart LR
  subgraph CustInfra[Customer Infrastructure]
      direction TB
      LLM[Customer<br/>LLM Application]
      TAC[Twilio<br/>Agentic<br/>Connect]
  end
  subgraph LeftSide[ ]
      direction TB
      User((End User))
      CustInfra
  end
  subgraph CH[Channels]
      direction TB
      Voice[Voice<br/>Conversation Relay]
      Msg[Messaging]
      WA[WhatsApp]
  end
  subgraph IntelData[Intelligence and Data]
      direction LR
      CINTEL[Conversation<br/>Intelligence]
      Memory[Conversation Memory<br/>Memory]
      KB[Knowledge Base]
  end
  subgraph TwilioPlatform[Twilio Platform]
      direction LR
      Conversations2[Conversation Orchestrator<br/>Conversations]
      CH
      IntelData
  end
  LLM <-->|Enriched context, reasoning calls, tool calls| TAC
  TAC -->|Send or receive conversation| Conversations2
  TAC -->|Recall customer memory| Memory
  CINTEL -->|Fetch knowledge<br/>for operators| KB
  TAC -->|Fetch knowledge| KB
  Conversations2 -->|Create outbound requests| CH
  Conversations2 -->|Exhaust events| CH
  User <-->|Input or response| CH
  CINTEL -->|Create or<br/>lookup profiles| Memory
  CINTEL -->|Recall memory<br/>for operators| Memory
  CINTEL -->|Execute extraction<br/>and output| Memory
  TAC <-->|Send or receive utterance| Voice

  LLM:::blue
  User:::circleNode
  Msg:::red
  Voice:::red
  CINTEL:::red
  Memory:::red
  KB:::red
  TAC:::red
  WA:::red
  Conversations2:::red
  LeftSide:::invisible
  classDef red fill:#f22f46,stroke:#d20031,color:white,rx:5,ry:5,font-size:1.3rem
  classDef blue fill:#87cdff,stroke:#51a9e3,stroke-width:2px,color:#000,rx:5,ry:5,font-size:1.3rem
  classDef circleNode fill:#6addb2,stroke:#333,stroke-width:2px,color:#000,rx:50,ry:50,font-size:1.5rem
  classDef invisible fill:transparent,stroke:none
  style CustInfra fill:#f8f8f8,stroke:#51a9e3,stroke-width:1px,color:#51a9e3
  style TwilioPlatform fill:#f8f8f8,stroke:#f22f46,stroke-width:2px,color:#f22f46
  linkStyle default curve:linear,stroke-width:3px
```

1. **Conversation Initialization and Orchestration**
   * TAC connects to Conversations to receive real-time conversation events.
   * Conversations manages channel-specific protocols and identifies the user through Memory profiles.
   * For the Voice channel, TAC connects directly to Conversation Relay.
2. **Context Enrichment** (Bridge)
   * Before contacting your LLM, TAC pulls *traits* (user details) and *observations* (history) from Memory.
   * Memory bundles this data into a standardized format, ensuring your model has *enriched context* immediately.
3. **Reasoning Loop** (Customer infrastructure)
   * TAC sends this enriched prompt to your customer LLM and business logic.
   * Your LLM processes the request, applying your specific prompt engineering and business rules, and either returns a response or calls a tool (like "schedule a callback") before returning the result to TAC.
4. **Execution and Delivery**
   * TAC converts your LLM response into the correct channel format (like TwiML for voice) and sends it to Conversations or Conversation Relay for delivery.
   * At the same time, Conversation Intelligence analyzes the interaction in the background and updates Memory. This makes richer context available to your LLM on the next turn.
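The four steps above can be sketched as a single conversational turn. Every name here (`handle_turn`, the fake `Memory` and `LLM` objects, the tool registry) is a hypothetical placeholder showing the control flow, not the real TAC, Conversations, or Memory APIs.

```python
def handle_turn(user_message, memory, llm, tools):
    """One turn through the loop: enrich, reason, optionally call a tool, deliver."""
    # Steps 1-2: pull traits and observations before contacting the LLM.
    context = memory.recall(user_message)
    # Step 3: the customer-owned LLM may answer directly or request a tool call.
    action = llm.respond(context=context, message=user_message)
    if action["type"] == "tool_call":
        tool_result = tools[action["name"]](**action["arguments"])
        action = llm.respond(context=context, message=user_message,
                             tool_result=tool_result)
    # Step 4: return the final text for channel formatting and delivery.
    return action["text"]

# Minimal fakes so the flow is runnable end to end.
class FakeMemory:
    def recall(self, message):
        return {"traits": {"name": "Ada"}, "observations": []}

class FakeLLM:
    def respond(self, context, message, tool_result=None):
        if tool_result is None and "callback" in message:
            return {"type": "tool_call", "name": "schedule_callback",
                    "arguments": {"time": "10:00"}}
        return {"type": "text", "text": "Your callback is booked for 10:00."}

tools = {"schedule_callback": lambda time: {"status": "ok", "time": time}}
reply = handle_turn("Please schedule a callback", FakeMemory(), FakeLLM(), tools)
```

Note that the tool result is fed back into a second reasoning call, so the model composes the final reply from the tool's output rather than the middleware guessing at wording.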
