> ## Documentation Index
> Fetch the complete documentation index at: https://docs.famulor.io/llms.txt
> Use this file to discover all available pages before exploring further.

<AgentInstructions>

## Submitting Feedback

If you encounter incorrect, outdated, or confusing documentation on this page, submit feedback:

POST https://docs.famulor.io/feedback

```json
{
  "path": "/en/api-reference/assistants/create",
  "feedback": "Description of the issue"
}
```

Only submit feedback when you have something specific and actionable to report.

</AgentInstructions>

# Create Assistants

> Create a new AI assistant with specified configuration

This endpoint allows you to create a new AI assistant with comprehensive configuration options.

## Engine Modes

The API supports three engine modes with different capabilities:

| Mode         | Description                           | Required Fields       |
| ------------ | ------------------------------------- | --------------------- |
| `pipeline`   | Classic STT → LLM → TTS pipeline      | `llm_model_id`        |
| `multimodal` | Real-time multimodal AI               | `multimodal_model_id` |
| `dualplex`   | Multimodal “Brain” + custom TTS voice | `multimodal_model_id` |

## Request Body

### Required Core Fields

<ParamField body="name" type="string" required>
  The name of the assistant (max. 255 characters)
</ParamField>

<ParamField body="voice_id" type="integer" required>
  The voice ID for the assistant. Use the endpoint [Retrieve Voices](/en/api-reference/assistants/voices) with the query parameter `mode` to get compatible voices for your engine mode.
</ParamField>

<ParamField body="language_id" type="integer" required>
  The language ID for the assistant. Use the endpoint [Retrieve Languages](/en/api-reference/assistants/languages) to get available languages.
</ParamField>

<ParamField body="type" type="string" required>
  The assistant type. Options: `inbound`, `outbound`
</ParamField>

<ParamField body="mode" type="string" required>
  The engine mode. Options: `pipeline`, `multimodal`, `dualplex`
</ParamField>

<ParamField body="timezone" type="string" required>
  The time zone of the assistant (e.g., "Europe/Berlin", "America/New\_York")
</ParamField>

<ParamField body="initial_message" type="string" required>
  The first message the assistant speaks at the start of the call (max. 200 characters)
</ParamField>

<ParamField body="system_prompt" type="string" required>
  The system prompt that defines the assistant’s behavior and personality
</ParamField>

### Mode-Specific Fields

<ParamField body="llm_model_id" type="integer">
  The LLM model ID. **Required for mode `pipeline`.**

  Use the endpoint [Retrieve Models](/en/api-reference/assistants/models) to get available models.
</ParamField>

<ParamField body="multimodal_model_id" type="integer">
  The multimodal model ID. **Required for modes `multimodal` and `dualplex`.**

  Use the endpoint [Retrieve Models](/en/api-reference/assistants/models) to get available multimodal models.
</ParamField>

<ParamField body="chat_llm_fallback_id" type="integer">
  Fallback LLM model ID for tool calls in `multimodal`/`dualplex`. Optional.
</ParamField>

<ParamField body="turn_detection_threshold" type="number">
  Sensitivity of turn detection in `multimodal`/`dualplex` (0-1). Default: auto
</ParamField>

### Secondary Languages

<ParamField body="secondary_language_ids" type="integer[]">
  Array of additional language IDs that the assistant can speak. The assistant automatically recognizes the language and switches accordingly.

  ```json
  "secondary_language_ids": [2, 3, 4]
  ```
</ParamField>

### Knowledgebase Settings

<ParamField body="knowledgebase_id" type="integer">
  The knowledgebase ID to attach to this assistant
</ParamField>

<ParamField body="knowledgebase_mode" type="string">
  How the knowledgebase is used. Options:

  * `function_call` - The AI calls a function to search (required for `multimodal`/`dualplex`)
  * `prompt` - Knowledge is injected into the prompt (only `pipeline`)
</ParamField>

### Phone Number

<ParamField body="phone_number_id" type="integer">
  The ID of a phone number to assign to the assistant. Must belong to your account.

  <Warning>
    For `inbound` assistants, the phone number must not be a caller ID type and must not already be assigned to another `inbound` assistant.
  </Warning>
</ParamField>

### Custom Mid-Call Tools

<ParamField body="tool_ids" type="integer[]">
  Array of IDs for custom mid-call tools to attach. Each tool must belong to your account.

  ```json
  "tool_ids": [1, 5, 12]
  ```
</ParamField>

### Built-in Tools

<ParamField body="tools" type="array">
  Array of activated built-in tools. Each tool has a `type` field and tool-specific fields.

  <Expandable title="Tool Types">
    **call\_transfer** - Transfer the call to another phone number

    * `phone_number` (required): Destination number (e.g., "+1234567890")
    * `description`: When to transfer
    * `custom`: If true, the AI can dynamically determine the transfer number
    * `timezone`: Time zone for availabilities
    * `warm_transfer`: Whether the AI announces the transfer to the customer before handing off (default: `false`)
    * `warm_transfer_message`: Prompt for what the AI should say before the transfer (e.g., "Inform the customer the call is being transferred.")

    **warm\_call\_transfer** - Warm transfer with supervisor briefing

    * `supervisor_phone` (required): Phone number for the warm transfer (e.g., "+14155552001"). When `custom_sip` is enabled, provide a SIP address or internal extension instead.
    * `outbound_phone_id` (required): ID of the phone number used to call the supervisor. See [Retrieve Phone Numbers](/en/api-reference/assistants/phone-numbers).
    * `description` (required): When the AI should trigger the warm transfer (e.g., "Transfer to a human agent if the customer wants to speak to a real person.")
    * `custom_sip`: Custom SIP address or internal extension instead of phone number (default: `false`)
    * `caller_id_mode`: Which number the supervisor sees. Options: `outbound_number` (default), `customer_number`, `custom`
    * `custom_caller_id`: Custom number for the supervisor, only with `caller_id_mode: custom`.
    * `hold_music`: Hold music. Options: `hold_music` (default), `none`
    * `hold_music_volume`: Hold music volume 0–100 (default: `80`)
    * `hold_message`: Announcement to the caller before hold (default: "Please wait while I connect you to an agent.")
    * `summary_instructions`: Instructions for how the AI should brief the supervisor (default: a 2-3 sentence summary of who is calling, why, and why a human is needed)
    * `briefing_initial_message`: AI’s initial message to the supervisor (default: "Hello! I have a caller who needs assistance. May I briefly explain the situation?")
    * `connected_message`: Message to the caller after connection with the supervisor (default: "You are now connected to an agent.")

    **end\_call** - Programmatically end the call

    * `description`: When the AI should end the call

    **dtmf\_input** - Send DTMF tones (keypad input)

    * `description`: When to use DTMF (e.g., IVR navigation)

    **collect\_keypad** - Collect keypad input from the caller

    * `timeout`: Wait time in seconds, 1–30 (default: 5)
    * `stop_key`: Key to end input. Options: `#` (default), `*`

    **calendar\_integration** - Schedule appointments via Cal.com

    * `calcom_api_key` (required): Your Cal.com API key
    * `calcom_event_slug` (required): Event type slug from Cal.com
    * `calcom_team_slug`: Team slug if the event belongs to a Cal.com team
    * `calcom_endpoint`: API region. Options: `us` (default – `https://api.cal.com`), `eu` (`https://api.cal.eu`), `custom` (uses `calcom_custom_endpoint`)
    * `calcom_custom_endpoint`: Custom Cal.com API URL. Only for `calcom_endpoint: custom` (e.g., `https://my-calcom-instance.com`).
    * `calcom_booking_fields`: Array of custom booking fields. Each field: `slug`, `type`, `label`, optionally `required`, `options` for select.
    * `description`: When to offer appointment booking
  </Expandable>

  ```json
  "tools": [
    {
      "type": "call_transfer",
      "phone_number": "+1234567890",
      "description": "Transfer when customer requests human support"
    },
    {
      "type": "warm_call_transfer",
      "supervisor_phone": "+1234567891",
      "outbound_phone_id": 7,
      "description": "Transfer to a human agent if the customer wants to speak to a real person.",
      "custom_sip": false,
      "caller_id_mode": "outbound_number",
      "hold_music": "hold_music",
      "hold_music_volume": 80,
      "hold_message": "Please wait while I connect you to an agent.",
      "summary_instructions": "Briefly from your perspective: Who is calling, why, why a human is needed. 2–3 sentences.",
      "briefing_initial_message": "Hello! I have a caller who needs assistance. May I briefly explain the situation?",
      "connected_message": "You are now connected to an agent."
    },
    {
      "type": "collect_keypad",
      "timeout": 5,
      "stop_key": "#"
    },
    {
      "type": "end_call",
      "description": "End call when customer confirms satisfaction"
    }
  ]
  ```
</ParamField>
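
The required fields per built-in tool type can also be checked before sending. The mapping below is a hypothetical client-side sketch derived from the list above; only fields documented as required are enforced, and the API's own validation remains authoritative:

```python
# Required fields per built-in tool type, as documented above.
REQUIRED_TOOL_FIELDS = {
    "call_transfer": ["phone_number"],
    "warm_call_transfer": ["supervisor_phone", "outbound_phone_id", "description"],
    "end_call": [],
    "dtmf_input": [],
    "collect_keypad": [],
    "calendar_integration": ["calcom_api_key", "calcom_event_slug"],
}

def invalid_tools(tools: list[dict]) -> list[str]:
    """Return human-readable problems found in a `tools` array (empty list = OK)."""
    problems = []
    for i, tool in enumerate(tools):
        ttype = tool.get("type")
        if ttype not in REQUIRED_TOOL_FIELDS:
            problems.append(f"tools[{i}]: unknown type {ttype!r}")
            continue
        for field in REQUIRED_TOOL_FIELDS[ttype]:
            if field not in tool:
                problems.append(f"tools[{i}] ({ttype}): missing {field!r}")
    return problems
```

For instance, a `call_transfer` entry without `phone_number` yields one problem string, while a bare `{"type": "end_call"}` passes.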

### Voice and TTS Settings

<ParamField body="tts_emotion_enabled" type="boolean" default="true">
  Whether emotional text-to-speech synthesis is enabled
</ParamField>

<ParamField body="voice_stability" type="number" default="0.70">
  Voice stability (0-1). Higher = more consistent
</ParamField>

<ParamField body="voice_similarity" type="number" default="0.50">
  Voice similarity (0-1). Higher = closer to the original
</ParamField>

<ParamField body="speech_speed" type="number" default="1.00">
  Speech speed multiplier (0.7-1.2)
</ParamField>

<ParamField body="llm_temperature" type="number" default="0.10">
  LLM temperature (0-1). Lower = more deterministic
</ParamField>

<ParamField body="synthesizer_provider_id" type="integer">
  Custom TTS provider ID. If not set, selected automatically based on language. See [Retrieve Synthesizer Providers](/en/api-reference/assistants/synthesizer-providers).
</ParamField>

<ParamField body="transcriber_provider_id" type="integer">
  Custom STT provider ID. If not set, selected automatically based on language. Only for `pipeline`. See [Retrieve Transcriber Providers](/en/api-reference/assistants/transcriber-providers).
</ParamField>

### Call Behavior Settings

<ParamField body="allow_interruptions" type="boolean" default="true">
  Whether interruptions from the caller are allowed.

  <Warning>Cannot be disabled for `multimodal` and `dualplex`.</Warning>
</ParamField>

<ParamField body="fillers" type="boolean" default="false">
  Whether filler audio should be used during processing (e.g., "uh", "just a moment").

  <Warning>Only available in `pipeline` mode.</Warning>
</ParamField>

<ParamField body="filler_config" type="object">
  Custom filler profiles per category. If not specified, language-dependent defaults are used. Each category is an array of short phrases.

  * `positive`: Fillers for affirmative responses (e.g., "Great!", "Perfect!")
  * `negative`: Fillers for negative/neutral responses (e.g., "Hmm.", "Mhm.")
  * `question`: Fillers while processing a question (e.g., "Good question.", "One moment.")
  * `neutral`: Fillers for neutral acknowledgments (e.g., "Okay.", "Understood.")

  ```json
  "filler_config": {
    "positive": ["Great!", "Perfect!", "Very good!"],
    "negative": ["Hmm.", "Understood.", "Okay."],
    "question": ["Good question.", "One moment.", "Let me check."],
    "neutral": ["Okay.", "Understood.", "Noted."]
  }
  ```
</ParamField>

<ParamField body="record" type="boolean" default="false">
  Whether the call should be recorded
</ParamField>

<ParamField body="enable_noise_cancellation" type="boolean" default="true">
  Whether noise cancellation should be enabled
</ParamField>

<ParamField body="wait_for_customer" type="boolean" default="false">
  If true, the assistant waits for the customer to speak first
</ParamField>

### Timing Settings

<ParamField body="max_duration" type="integer" default="600">
  Maximum call duration in seconds (20-1200)
</ParamField>

<ParamField body="max_silence_duration" type="integer" default="40">
  Maximum silence duration until re-engagement in seconds (1-360)
</ParamField>

<ParamField body="max_initial_silence_duration" type="integer">
  Maximum silence directly after call start before termination (1-120 seconds). Optional.
</ParamField>

<ParamField body="ringing_time" type="integer" default="30">
  Maximum ringing time before canceling (1-60 seconds)
</ParamField>

### Re-Engagement Settings

<ParamField body="reengagement_interval" type="integer" default="30">
  Re-engagement interval in seconds (7-600)
</ParamField>

<ParamField body="reengagement_prompt" type="string">
  Custom prompt for re-engagement messages (max. 1000 characters)

  Example: `"Are you still there? Do you have any other questions?"`
</ParamField>

### Voicemail Settings

<ParamField body="end_call_on_voicemail" type="boolean" default="true">
  Whether to end the call if voicemail is detected
</ParamField>

<ParamField body="voice_mail_message" type="string">
  Message to leave on voicemail (max. 1000 characters)
</ParamField>

### Endpoint Detection

<ParamField body="endpoint_type" type="string" default="vad">
  Voice activity detection type. Options: `vad`, `ai`
</ParamField>

<ParamField body="endpoint_sensitivity" type="number" default="0.5">
  Endpoint sensitivity (0-5)
</ParamField>

<ParamField body="interrupt_sensitivity" type="number" default="0.5">
  Interrupt sensitivity (0-5)
</ParamField>

<ParamField body="min_interrupt_words" type="integer">
  Minimum number of words before interruption is allowed (0-10). Set to enable.
</ParamField>

### Ambient Sound

<ParamField body="ambient_sound" type="string">
  Background ambient sound. Options: `off`, `office`, `city`, `forest`, `crowded_room`, `cafe`, `nature`
</ParamField>

<ParamField body="ambient_sound_volume" type="number" default="0.5">
  Ambient sound volume (0-1)
</ParamField>

### Webhook Configuration

<ParamField body="is_webhook_active" type="boolean" default="false">
  Whether webhook notifications are enabled
</ParamField>

<ParamField body="webhook_url" type="string">
  The webhook URL for post-call notifications. **Required if `is_webhook_active` is true.**
</ParamField>

<ParamField body="send_webhook_only_on_completed" type="boolean" default="true">
  Whether to send webhooks only for completed calls (not for failed/no-answer)
</ParamField>

<ParamField body="include_recording_in_webhook" type="boolean" default="true">
  Whether to include the recording URL in the webhook payload
</ParamField>

### Post-Call Evaluation

<ParamField body="post_call_evaluation" type="boolean" default="true">
  Whether AI post-call evaluation is enabled
</ParamField>

<ParamField body="post_call_schema" type="array">
  Schema definition for post-call data extraction

  <Expandable title="post_call_schema Properties">
    <ParamField body="name" type="string" required>
      Field name (3-16 characters, lowercase, alphanumeric and underscores only)
    </ParamField>

    <ParamField body="type" type="string" required>
      Data type. Options: `string`, `number`, `bool`
    </ParamField>

    <ParamField body="description" type="string" required>
      Description of what this field represents (3-255 characters)
    </ParamField>
  </Expandable>

  ```json
  "post_call_schema": [
    {"name": "status", "type": "bool", "description": "Whether the call objective was met"},
    {"name": "summary", "type": "string", "description": "Brief summary of the call"}
  ]
  ```
</ParamField>
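
The constraints on each schema entry (name format, type whitelist, description length) can be validated locally before the request goes out. This is a hypothetical helper sketch mirroring the documented rules, not an official validator:

```python
import re

# Documented constraints: name 3-16 chars, lowercase alphanumerics and
# underscores; type one of string/number/bool; description 3-255 chars.
NAME_RE = re.compile(r"^[a-z0-9_]{3,16}$")
ALLOWED_TYPES = {"string", "number", "bool"}

def schema_entry_ok(entry: dict) -> bool:
    """Return True if a post_call_schema entry satisfies the documented constraints."""
    return (
        bool(NAME_RE.fullmatch(entry.get("name", "")))
        and entry.get("type") in ALLOWED_TYPES
        and 3 <= len(entry.get("description", "")) <= 255
    )
```

With this check, `{"name": "Status", ...}` fails (uppercase) and `{"type": "text", ...}` fails (not in the type whitelist), matching the server-side validation errors you would otherwise get back.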

### Variables

<ParamField body="variables" type="object">
  Key-value pairs of custom variables that can be used in prompts via `{{variable_name}}`

  ```json
  "variables": {
    "company_name": "Acme GmbH",
    "product": "Premium Widget",
    "support_email": "support@acme.com"
  }
  ```
</ParamField>
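
The `{{variable_name}}` substitution above can be illustrated with a short sketch. This mirrors the documented placeholder syntax for local experimentation only; the platform performs the actual substitution server-side:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values; leave unknown placeholders intact."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

variables = {"company_name": "Acme GmbH", "product": "Premium Widget"}
print(render("You work for {{company_name}} selling {{product}}.", variables))
# → You work for Acme GmbH selling Premium Widget.
```

Note that whether the platform leaves unknown placeholders intact or strips them is an assumption here; test against your account's behavior.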

### Conversation-Ended Settings

<ParamField body="conversation_inactivity_timeout" type="integer" default="30">
  Minutes of chat inactivity before the conversation is considered ended (1–1440)
</ParamField>

<ParamField body="conversation_ended_retrigger" type="boolean" default="false">
  Whether the conversation can be restarted after it has ended due to inactivity
</ParamField>

<ParamField body="conversation_ended_webhook_url" type="string">
  Webhook URL invoked when a chat conversation ends due to inactivity. Separate from the call webhook.
</ParamField>

***

## Example Requests

### `pipeline` Mode Assistant

```json
{
  "name": "Sales Assistant",
  "voice_id": 1,
  "language_id": 1,
  "type": "outbound",
  "mode": "pipeline",
  "timezone": "Europe/Berlin",
  "initial_message": "Hello! How can I assist you today?",
  "system_prompt": "You are a professional sales assistant...",
  "llm_model_id": 2,
  "secondary_language_ids": [2, 3],
  "knowledgebase_id": 1,
  "knowledgebase_mode": "prompt",
  "fillers": true,
  "filler_config": {
    "positive": ["Great!", "Perfect!", "Very good!"],
    "negative": ["Hmm.", "Understood."],
    "question": ["Good question.", "One moment."],
    "neutral": ["Okay.", "Noted.", "Understood."]
  },
  "tool_ids": [1, 5],
  "tools": [
    {"type": "end_call", "description": "End call when the customer is satisfied"},
    {"type": "call_transfer", "phone_number": "+1234567890", "description": "Transfer to support"},
    {
      "type": "warm_call_transfer",
      "supervisor_phone": "+1234567891",
      "outbound_phone_id": 7,
      "description": "Transfer to a human agent if the customer wants to speak to a real person.",
      "custom_sip": false,
      "caller_id_mode": "outbound_number",
      "hold_music": "hold_music",
      "hold_music_volume": 80,
      "hold_message": "Please wait while I connect you to an agent.",
      "summary_instructions": "Briefly from your perspective: Who is calling, why, why a human is needed. 2–3 sentences.",
      "briefing_initial_message": "Hello! I have a caller who needs assistance. May I briefly explain the situation?",
      "connected_message": "You are now connected to an agent."
    },
    {"type": "collect_keypad", "timeout": 5, "stop_key": "#"}
  ],
  "reengagement_interval": 20,
  "reengagement_prompt": "Are you still there?"
}
```

### `multimodal` Mode Assistant

```json
{
  "name": "Support Bot",
  "voice_id": 41,
  "language_id": 1,
  "type": "inbound",
  "mode": "multimodal",
  "timezone": "America/New_York",
  "initial_message": "Hi! Welcome to support.",
  "system_prompt": "You are a helpful support agent...",
  "multimodal_model_id": 1,
  "chat_llm_fallback_id": 2,
  "turn_detection_threshold": 0.7,
  "knowledgebase_id": 1,
  "knowledgebase_mode": "function_call",
  "tts_emotion_enabled": false
}
```

### `dualplex` Mode Assistant

```json
{
  "name": "Premium Agent",
  "voice_id": 1,
  "language_id": 2,
  "type": "outbound",
  "mode": "dualplex",
  "timezone": "Europe/Berlin",
  "initial_message": "Good day!",
  "system_prompt": "You are a professional assistant...",
  "multimodal_model_id": 4,
  "chat_llm_fallback_id": 2,
  "secondary_language_ids": [1, 3],
  "knowledgebase_id": 1,
  "knowledgebase_mode": "function_call",
  "ambient_sound": "office",
  "ambient_sound_volume": 0.3
}
```
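
For orientation, a payload like the ones above could be sent with a minimal Python sketch. The base URL here is a deliberate placeholder, and the endpoint path and Bearer authentication scheme are assumptions based on this page; substitute your real credentials and consult the authentication documentation:

```python
import json
from urllib.request import Request

API_BASE = "https://example.invalid"  # placeholder: substitute the real API base URL
API_KEY = "YOUR_API_KEY"              # placeholder: substitute your API key

payload = {
    "name": "Sales Assistant",
    "voice_id": 1,
    "language_id": 1,
    "type": "outbound",
    "mode": "pipeline",
    "timezone": "Europe/Berlin",
    "initial_message": "Hello! How can I assist you today?",
    "system_prompt": "You are a professional sales assistant...",
    "llm_model_id": 2,
}

req = Request(
    f"{API_BASE}/assistants",  # assumed endpoint path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Send with urllib.request.urlopen(req) once API_BASE and API_KEY are real.
```

A successful call returns the 201 response shown below, with the new assistant's `id` and status `inactive`.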

***

## Response

<ResponseField name="message" type="string">
  Success message confirming the creation of the assistant
</ResponseField>

<ResponseField name="data" type="object">
  <Expandable title="properties">
    <ResponseField name="id" type="integer">
      The unique ID of the created assistant
    </ResponseField>

    <ResponseField name="name" type="string">
      The name of the assistant
    </ResponseField>

    <ResponseField name="status" type="string">
      The current status (`inactive` for newly created assistants)
    </ResponseField>

    <ResponseField name="type" type="string">
      The type (`inbound` or `outbound`)
    </ResponseField>

    <ResponseField name="mode" type="string">
      The engine mode (`pipeline`, `multimodal`, or `dualplex`)
    </ResponseField>
  </Expandable>
</ResponseField>

<ResponseExample>
  ```json 201 Success Response
  {
    "message": "Assistant created successfully",
    "data": {
      "id": 789,
      "name": "Sales Assistant",
      "status": "inactive",
      "type": "outbound",
      "mode": "pipeline"
    }
  }
  ```

  ```json 422 Validation Error
  {
    "message": "Validation failed",
    "errors": {
      "name": ["The name field is required."],
      "voice_id": ["The selected voice is not compatible with the chosen engine type."],
      "knowledgebase_mode": ["Only function_call mode is available for multimodal assistants."]
    }
  }
  ```
</ResponseExample>

***

## Notes

* All required fields must be provided for successful creation
* Use the endpoint [Retrieve Voices](/en/api-reference/assistants/voices) with the query parameter `mode` to obtain compatible voices
* For `multimodal`/`dualplex`, `knowledgebase_mode` must be set to `function_call`
* For `multimodal`/`dualplex`, `allow_interruptions` is always enabled
* `fillers` is only available in `pipeline` mode
* New assistants are created with status `inactive` by default
