# Add TAC to your agent

This guide walks you through connecting your AI agent to Twilio's platform using the TAC SDK. You'll configure channels, handle incoming messages, enrich your prompts with Conversation Memory, search knowledge bases, and stream Voice responses, all wired together in a single integration flow.

If you haven't tried TAC yet, start with the [Quickstart](/docs/conversations/agent-connect/quickstart) to get a working application first. To add a single capability to an existing TAC application, see the how-to guides in the sidebar.

## Prerequisites

Before you begin, make sure you have:

* A Twilio account with API credentials (Account SID, Auth Token, API Key, and API Secret)
* A Twilio phone number configured for SMS and/or Voice
* A [Conversation Configuration](/docs/conversations/orchestrator) created through the Console or REST API
* Python 3.10+ or Node.js 22.13.0+

## Install the SDK

## Python

```bash
pip install twilio-agent-connect[server]
```

## TypeScript

```bash
npm install twilio-agent-connect
```

## Configure environment variables

Create a `.env` file in your project root with your Twilio credentials.

The same variables apply to both the Python and TypeScript SDKs:

```text
TWILIO_ACCOUNT_SID=your_account_sid
TWILIO_AUTH_TOKEN=your_auth_token
TWILIO_API_KEY=your_api_key_sid
TWILIO_API_SECRET=your_api_key_secret
TWILIO_PHONE_NUMBER=+1234567890
TWILIO_CONVERSATION_CONFIGURATION_ID=your_configuration_id
TWILIO_VOICE_PUBLIC_DOMAIN=your-ngrok-domain.ngrok-free.app
```

For Voice, set `TWILIO_VOICE_PUBLIC_DOMAIN` to your ngrok domain **without** the `https://` prefix.
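Before initializing TAC, it can help to fail fast with a clear message when a variable is missing. This is a small standalone sketch using only the standard library; `TACConfig.from_env()` presumably performs its own validation, and `TWILIO_VOICE_PUBLIC_DOMAIN` is omitted because it's only needed for Voice.

```python
import os

# The variables this guide's examples expect.
REQUIRED_VARS = [
    "TWILIO_ACCOUNT_SID",
    "TWILIO_AUTH_TOKEN",
    "TWILIO_API_KEY",
    "TWILIO_API_SECRET",
    "TWILIO_PHONE_NUMBER",
    "TWILIO_CONVERSATION_CONFIGURATION_ID",
]

def check_env() -> list[str]:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

missing = check_env()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```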

## Initialize TAC

Create a TAC instance that loads configuration from your environment variables.

## Python

```python
from dotenv import load_dotenv
from tac import TAC, TACConfig

load_dotenv()
tac = TAC(config=TACConfig.from_env())
```

## TypeScript

```typescript
import 'dotenv/config';
import { TAC, TACConfig } from 'twilio-agent-connect';

const tac = await TAC.create({ config: TACConfig.fromEnv() });
```

## Add channels

Configure a channel for each communication method you want to support. Set `memory_mode` to `"always"` (Python) or `memoryMode` to `"always"` (TypeScript) if you want TAC to retrieve memory context with each incoming message. All channels share the same `on_message_ready` (`onMessageReady` in TypeScript) callback, so your agent handles every channel with a single function.

## Python

```python
from tac.channels.voice import VoiceChannel, VoiceChannelConfig
from tac.channels.sms import SMSChannel, SMSChannelConfig
from tac.channels.whatsapp import WhatsAppChannel, WhatsAppChannelConfig
from tac.channels.rcs import RCSChannel, RCSChannelConfig

voice_channel = VoiceChannel(tac, config=VoiceChannelConfig(memory_mode="always"))
sms_channel = SMSChannel(tac, config=SMSChannelConfig(memory_mode="always"))
whatsapp_channel = WhatsAppChannel(tac, config=WhatsAppChannelConfig(memory_mode="always"))
rcs_channel = RCSChannel(tac, config=RCSChannelConfig(memory_mode="always"))
```

## TypeScript

```typescript
import { VoiceChannel, SMSChannel, WhatsAppChannel, RCSChannel } from 'twilio-agent-connect';

const voiceChannel = new VoiceChannel(tac, { memoryMode: 'always' });
const smsChannel = new SMSChannel(tac, { memoryMode: 'always' });
const whatsAppChannel = new WhatsAppChannel(tac, { memoryMode: 'always' });
const rcsChannel = new RCSChannel(tac, { memoryMode: 'always' });

tac.registerChannel(voiceChannel);
tac.registerChannel(smsChannel);
tac.registerChannel(whatsAppChannel);
tac.registerChannel(rcsChannel);
```

For more details on channel configuration, see [Channels](/docs/conversations/agent-connect/channels).

## Handle incoming messages

Register a callback that TAC calls when a user message is ready for processing. The callback receives the user's message, a `ConversationSession` with context, and an optional memory response. Return the response string and TAC automatically sends it through the correct channel.

## Python

```python
from tac.models.session import ConversationSession
from tac.models.tac import TACMemoryResponse

conversation_history: dict[str, list] = {}

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    # Process with your LLM (see following sections)
    llm_response = await generate_response(conversation_history[conv_id])

    conversation_history[conv_id].append({"role": "assistant", "content": llm_response})

    return llm_response

tac.on_message_ready(handle_message_ready)
```

## TypeScript

```typescript
const conversationHistory: Record<string, Array<{ role: string; content: string }>> = {};

tac.onMessageReady(async ({ conversationId, message, memory, session, channel }) => {
  const convId = conversationId as string;

  if (!conversationHistory[convId]) {
    conversationHistory[convId] = [];
  }

  conversationHistory[convId].push({ role: 'user', content: message });

  // Process with your LLM (see following sections)
  const llmResponse = await generateResponse(conversationHistory[convId]);

  conversationHistory[convId].push({ role: 'assistant', content: llmResponse });

  return llmResponse;
});
```
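The in-memory `conversation_history` dict in these examples grows without bound and is lost on restart. For anything beyond a demo, persist history externally and cap what you send to the LLM. A minimal trimming sketch (the helper name is mine, not part of the SDK):

```python
def trim_history(history: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep only the most recent messages so prompts stay within token budgets.

    A production agent would count tokens rather than messages; this simple
    sketch keeps the last `max_messages` entries.
    """
    if len(history) <= max_messages:
        return history
    return history[-max_messages:]

# Call before each LLM request, e.g.:
# messages = trim_history(conversation_history[conv_id])
```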

## Enhance your system prompt with Conversation Memory

Define your agent's role and behavior through a system prompt. You can combine a static prompt with dynamic memory context.

## Python

The Python SDK provides `MemoryPromptBuilder` to format memory data into a prompt string:

```python
from tac.adapters.prompt_builder import MemoryPromptBuilder

SYSTEM_PROMPT = "You are a helpful customer service agent. Be concise and friendly."

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    # Build memory context from profile traits and conversation history
    memory_context = MemoryPromptBuilder.build(
        memory_response=memory,
        context=context,
    )

    # Combine your prompt with memory context
    if memory_context:
        system_prompt = f"{SYSTEM_PROMPT}\n\n{memory_context}"
    else:
        system_prompt = SYSTEM_PROMPT

    # Pass to your LLM as the system message
    messages = [
        {"role": "system", "content": system_prompt},
        *conversation_history[conv_id],
    ]

    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )

    llm_response = response.choices[0].message.content
    conversation_history[conv_id].append({"role": "assistant", "content": llm_response})

    return llm_response
```

## TypeScript

The TypeScript SDK provides `MemoryPromptBuilder` to format memory data into a prompt string:

```typescript
import { MemoryPromptBuilder } from 'twilio-agent-connect';

const SYSTEM_PROMPT = 'You are a helpful customer service agent. Be concise and friendly.';

tac.onMessageReady(async ({ conversationId, message, memory, session }) => {
  const convId = conversationId as string;

  if (!conversationHistory[convId]) {
    conversationHistory[convId] = [];
  }

  conversationHistory[convId].push({ role: 'user', content: message });

  // Build memory context from profile traits and conversation history
  const memoryContext = MemoryPromptBuilder.build(memory, session);
  const systemContent = memoryContext
    ? `${SYSTEM_PROMPT}\n\n${memoryContext}`
    : SYSTEM_PROMPT;

  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: systemContent },
      ...conversationHistory[convId],
    ],
  });

  const llmResponse = response.choices[0]?.message?.content ?? '';
  conversationHistory[convId].push({ role: 'assistant', content: llmResponse });

  return llmResponse;
});
```

This approach works with any LLM provider. For OpenAI specifically, the Python SDK provides a built-in adapter that injects memory context into your messages automatically.

## Python

```python
from openai import AsyncOpenAI
from tac.adapters.openai import with_tac_memory

openai_client = AsyncOpenAI()

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    # Wrap the OpenAI client — memory context is added to your messages before each LLM call
    client = with_tac_memory(openai_client, memory, context)

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation_history[context.conversation_id],
    )

    llm_response = response.choices[0].message.content

    return llm_response
```

## TypeScript

The TypeScript SDK doesn't include a built-in OpenAI adapter. Use `MemoryPromptBuilder` from the previous section to build the memory context manually.

```typescript
import OpenAI from 'openai';
import { MemoryPromptBuilder } from 'twilio-agent-connect';

const openai = new OpenAI();

tac.onMessageReady(async ({ conversationId, message, memory, session }) => {
  const convId = conversationId as string;
  const memoryContext = MemoryPromptBuilder.build(memory, session);

  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: memoryContext
        ? `${SYSTEM_PROMPT}\n\n${memoryContext}`
        : SYSTEM_PROMPT },
      ...conversationHistory[convId],
    ],
  });

  const llmResponse = response.choices[0]?.message?.content ?? '';

  return llmResponse;
});
```

## Search knowledge bases

Use [Enterprise Knowledge](/docs/conversations/knowledge) bases to give your agent grounded responses.

## Python

```python
from tac.tools import create_knowledge_tool

knowledge_tool = await create_knowledge_tool(
    knowledge_client=tac.knowledge_client,
    knowledge_base_id="know_knowledgebase_xxxxx",
)

# Add to your LLM's tool list alongside any custom tools you've defined
# (check_order_status is assumed to be defined elsewhere)
tools = [check_order_status, knowledge_tool]
```

## TypeScript

```typescript
import { createKnowledgeSearchToolAsync } from 'twilio-agent-connect';

const knowledgeClient = tac.getKnowledgeClient();
const knowledgeTool = await createKnowledgeSearchToolAsync(
  knowledgeClient,
  'know_knowledgebase_xxxxx'
);

// Add to your LLM's tool list alongside any custom tools you've defined
// (checkOrderStatus is assumed to be defined elsewhere)
const tools = [checkOrderStatus, knowledgeTool];
```
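How these tools get invoked depends on your LLM SDK. With OpenAI function calling, for example, a response may contain `tool_calls` that you dispatch by name and whose results you feed back to the model. The dispatch step can be sketched as a plain Python helper; the names here are illustrative, and how you derive the handler map from TAC's tool objects depends on the SDK's tool interface:

```python
import json

def dispatch_tool_call(name: str, arguments_json: str, handlers: dict) -> str:
    """Look up a tool handler by name and invoke it with parsed JSON arguments."""
    if name not in handlers:
        return json.dumps({"error": f"unknown tool: {name}"})
    args = json.loads(arguments_json)
    return json.dumps(handlers[name](**args))

# Hypothetical custom tool handler, for illustration only
def check_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

handlers = {"check_order_status": check_order_status}
result = dispatch_tool_call("check_order_status", '{"order_id": "A123"}', handlers)
```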

## Stream Voice responses

The previous examples wait for the full LLM response before sending it to the caller. For Voice conversations, you can stream tokens from your LLM as they're generated so the caller hears the response sooner. Pass an async generator to `send_response` instead of a complete string.

## Python

```python
from collections.abc import AsyncGenerator

async def handle_message_ready(
    message: str,
    context: ConversationSession,
    memory: TACMemoryResponse | None,
) -> str | None:
    conv_id = context.conversation_id

    if conv_id not in conversation_history:
        conversation_history[conv_id] = []

    conversation_history[conv_id].append({"role": "user", "content": message})

    async def stream_tokens() -> AsyncGenerator[str, None]:
        response_tokens = []

        stream = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history[conv_id],
            stream=True,
        )

        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                token = chunk.choices[0].delta.content
                response_tokens.append(token)
                yield token

        full_response = "".join(response_tokens)
        conversation_history[conv_id].append({"role": "assistant", "content": full_response})

    if context.channel == "voice":
        await voice_channel.send_response(conv_id, stream_tokens())
    elif context.channel == "sms":
        llm_response = await generate_response(conversation_history[conv_id])
        conversation_history[conv_id].append({"role": "assistant", "content": llm_response})
        await sms_channel.send_response(conv_id, llm_response)
    # Handle the other messaging channels (WhatsApp, RCS) the same way as SMS

tac.on_message_ready(handle_message_ready)
```

## TypeScript

The TypeScript SDK provides a dedicated `sendStreamingResponse` method on the Voice channel. It accepts an `AsyncIterable<string>` and returns the accumulated full response.

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

tac.onMessageReady(async ({ conversationId, message, channel, abortSignal }) => {
  const convId = conversationId as string;

  if (!conversationHistory[convId]) {
    conversationHistory[convId] = [];
  }

  conversationHistory[convId].push({ role: 'user', content: message });

  if (channel === 'voice') {
    if (abortSignal?.aborted) return;

    const stream = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: conversationHistory[convId],
      stream: true,
    });

    async function* tokenStream() {
      for await (const chunk of stream) {
        if (chunk.choices?.[0]?.delta?.content) {
          yield chunk.choices[0].delta.content;
        }
      }
    }

    const fullResponse = await voiceChannel.sendStreamingResponse(
      conversationId,
      tokenStream(),
      abortSignal !== undefined ? { signal: abortSignal } : undefined
    );

    conversationHistory[convId].push({ role: 'assistant', content: fullResponse });
  } else if (channel === 'sms') {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: conversationHistory[convId],
    });

    const llmResponse = response.choices[0]?.message?.content ?? '';
    conversationHistory[convId].push({ role: 'assistant', content: llmResponse });
    await smsChannel.sendResponse(conversationId, llmResponse);
  }
});
```
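Both examples above accumulate tokens while yielding them, so the full response can be appended to history once the stream ends. That accumulate-while-yielding pattern can be exercised in isolation, without an LLM or a live call; the helper names below are illustrative, not part of the SDK:

```python
import asyncio
from collections.abc import AsyncGenerator, AsyncIterable

async def accumulate(source: AsyncIterable[str], sink: list[str]) -> AsyncGenerator[str, None]:
    """Yield each token unchanged while also appending it to `sink`."""
    async for token in source:
        sink.append(token)
        yield token

async def demo() -> tuple[str, str]:
    # Stand-in for an LLM token stream
    async def fake_llm_stream():
        for token in ["Hel", "lo ", "there"]:
            yield token

    stored: list[str] = []
    received = []
    # A voice channel would consume this generator token by token;
    # here we just collect what it would receive.
    async for token in accumulate(fake_llm_stream(), stored):
        received.append(token)
    return "".join(received), "".join(stored)

spoken, stored = asyncio.run(demo())
```

The caller hears every token as it arrives, and the accumulated copy is available afterward for the history append.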

## Start the server

TAC includes a built-in server that sets up webhook and WebSocket routes for your configured channels.

## Python

```python
from tac.server import TACFastAPIServer

if __name__ == "__main__":
    server = TACFastAPIServer(
        tac=tac,
        voice_channel=voice_channel,
        messaging_channels=[sms_channel, whatsapp_channel, rcs_channel],
    )
    server.start()
```

The server registers routes based on the channels you configure:

**Voice routes** (when `voice_channel` is provided):

* `POST /twiml` — Incoming call handler
* `WebSocket /ws` — Streaming via Conversation Relay
* `POST /conversation-relay-callback` — Completion callback

**Messaging route** (when `messaging_channels` is provided):

* `POST /webhook` — Webhook for SMS and other messaging channels

**Conversation Intelligence route** (when `cintel_webhook_path` is set on `TACServerConfig`):

* `POST <cintel_webhook_path>` — Conversation Intelligence events (for example, `/ci-webhook`). Disabled by default.
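The TAC server wires up these routes for you and may validate webhook signatures internally. If you ever need to verify Twilio's `X-Twilio-Signature` header yourself, the documented scheme can be reproduced with the standard library alone (the `twilio` package's `RequestValidator` implements the same algorithm):

```python
import base64
import hashlib
import hmac

def compute_twilio_signature(auth_token: str, url: str, params: dict[str, str]) -> str:
    """Reproduce Twilio's X-Twilio-Signature: the full webhook URL plus each
    POST parameter (key immediately followed by value, sorted by key),
    HMAC-SHA1 signed with your auth token and base64-encoded."""
    payload = url + "".join(k + v for k, v in sorted(params.items()))
    digest = hmac.new(auth_token.encode(), payload.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def is_valid_request(auth_token: str, url: str, params: dict[str, str], signature: str) -> bool:
    expected = compute_twilio_signature(auth_token, url, params)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, signature)
```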

## TypeScript

```typescript
import { TACServer } from 'twilio-agent-connect';

const server = new TACServer(tac, {
  host: '0.0.0.0',
  port: 8000,
});

server.start().then(() => {
  console.log('TAC server started');
});
```

## Next steps

* [Escalate to a human agent](/docs/conversations/agent-connect/escalate-to-human-agent): Transfer conversations to human agents through a Twilio Studio Flow.
* [Channels](/docs/conversations/agent-connect/channels): Learn more about channel configuration and routing.
* [Troubleshooting](/docs/conversations/agent-connect/troubleshooting): Common issues and solutions.
