Overview

Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.

The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.

Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project (a brief curl sketch of the project and dataset calls follows the list):

  1. Project Management: Create and manage projects that define IP characteristics:
  • Create and Retrieve Projects: Use the /projects endpoints to create a new project or retrieve existing projects that belong to your organization.
  • Define IP Type: Specify the IP type (e.g., multi_object_set, defined_character, stylized_scene) and medium (currently illustration, with photography coming soon).
  • Manage Project Details: Use the /projects/{id} endpoints to update or delete specific projects.
  2. Dataset Management: Organize and refine datasets within your projects:
  • Create and Retrieve Datasets: Use the /datasets endpoints to create new datasets or retrieve existing ones.
  • Generate an Advanced Caption Prefix (For stylized_scene IP type)
    • If the IP type is stylized_scene, it is recommended to generate an advanced prefix before uploading images.
    • Use /tailored-gen/generate_prefix to generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available).
    • Update the dataset with the generated prefix using /datasets/{dataset_id} before proceeding with image uploads.
  • Upload and Manage Images: Use the /datasets/{dataset_id}/images endpoints to upload images and manage their captions.
  • Clone Datasets: Create variations of existing datasets using the clone functionality.
  3. Model Management: Train and optimize tailored models based on your datasets:
  • Create and Retrieve Models: Use the /models endpoints to create new models or list existing ones.
  • Choose Training Version: Select between "light" (for fast generation and structure reference compatibility) or "max" (for superior prompt alignment and enhanced learning capabilities).
  • Monitor and Control: Manage the model lifecycle, including training start/stop and status monitoring.
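
The sketch below illustrates the project and dataset management calls with curl. The endpoints and base URL are those listed in this section, but the request body field names (project_name, ip_type, ip_medium, project_id, name) are illustrative assumptions; refer to the individual endpoint schemas for the authoritative fields.

# Create a project (body field names are illustrative assumptions)
curl -X POST 'https://engine.prod.bria-api.com/v1/projects' \
  -H 'Content-Type: application/json' \
  -H 'api_token: <your_api_token>' \
  -d '{ "project_name": "brand_characters", "ip_type": "defined_character", "ip_medium": "illustration" }'

# List the projects that belong to your organization
curl -X GET 'https://engine.prod.bria-api.com/v1/projects' \
  -H 'api_token: <your_api_token>'

# Create a dataset within the project (body field names are illustrative assumptions)
curl -X POST 'https://engine.prod.bria-api.com/v1/datasets' \
  -H 'Content-Type: application/json' \
  -H 'api_token: <your_api_token>' \
  -d '{ "project_id": "<project_id>", "name": "brand_characters_v1" }'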

Training Process

To train a tailored model (an end-to-end curl sketch follows these steps):

  1. Create a Project: Use the /projects endpoint to define your IP type and medium.
  2. Create a Dataset: Use the /datasets endpoint to create a dataset within your project.
  3. Generate an Advanced Caption Prefix (For stylized_scene IP type only):
  • Before uploading images, call /tailored-gen/generate_prefix, sampling 1-6 images from the input images provided for training (preferably 6 if available).
  • Update the dataset with the generated prefix using /datasets/{dataset_id}.
  4. Upload Images: Upload images using the /datasets/{dataset_id}/images endpoint (minimum resolution: 1024x1024px).
  5. Prepare Dataset: Review auto-generated captions and update the dataset status to 'completed'.
  6. Create Model: Use the /models endpoint to create a model, selecting either the "light" or "max" training version.
  7. Start Training: Initiate training via the /models/{id}/start_training endpoint. Training typically takes 1-3 hours.
  8. Monitor Progress: Check the training status using the /models/{id} endpoint until training is 'Completed'.
  9. Generate Images: Once trained, your model can be used in multiple ways:
  • Use /text-to-image/tailored/{model_id} for text-to-image generation.
  • Use /text-to-vector/tailored/{model_id} for generating illustrative vector graphics.
  • Use /reimagine/tailored/{model_id} for structure-based generation.
  • Access through the Bria platform interface.
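
As referenced above, the following end-to-end curl sketch walks through these steps in order. The endpoints are those listed in this section; the HTTP verbs used for dataset updates and all request body field names (caption_prefix, file, status, dataset_id, training_version, and so on) are assumptions made for illustration, not the authoritative schemas.

API='https://engine.prod.bria-api.com/v1'
TOKEN='<your_api_token>'

# Steps 1-2: create a project and a dataset (body fields are illustrative assumptions)
curl -X POST "$API/projects" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "project_name": "scene_style", "ip_type": "stylized_scene", "ip_medium": "illustration" }'
curl -X POST "$API/datasets" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "project_id": "<project_id>", "name": "scene_style_v1" }'

# Step 3 (stylized_scene only): generate a caption prefix from 1-6 sample images,
# then store it on the dataset (verb and field names are assumptions)
curl -X POST "$API/tailored-gen/generate_prefix" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "images": ["<base64_sample_1>", "<base64_sample_2>"] }'
curl -X PUT "$API/datasets/<dataset_id>" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "caption_prefix": "<generated_prefix>" }'

# Step 4: upload training images (minimum resolution 1024x1024px; field name is an assumption)
curl -X POST "$API/datasets/<dataset_id>/images" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d "{ \"file\": \"$(base64 -w 0 image_01.png)\" }"

# Step 5: after reviewing the auto-generated captions, mark the dataset as completed
curl -X PUT "$API/datasets/<dataset_id>" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "status": "completed" }'

# Steps 6-7: create a model on the dataset and start training
curl -X POST "$API/models" -H 'Content-Type: application/json' -H "api_token: $TOKEN" \
  -d '{ "dataset_id": "<dataset_id>", "name": "scene_style_light", "training_version": "light" }'
curl -X POST "$API/models/<model_id>/start_training" -H "api_token: $TOKEN"

# Step 8: poll the model until its status is Completed (training typically takes 1-3 hours)
curl -X GET "$API/models/<model_id>" -H "api_token: $TOKEN"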

Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.

Restyle Portraits

The Restyle Portrait feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.

Restyle Portrait is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.

To use Restyle Portrait:

  • Reference Image: Provide a clear portrait image that meets the recommended guidelines.

  • Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.

Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
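
For example, a request along these lines uses the parameters documented in the endpoint reference further below; the image URL and model ID are placeholders:

curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait' \
  -H 'Content-Type: application/json' \
  -H 'api_token: <your_api_token>' \
  -d '{
    "id_image_url": "https://example.com/portrait.jpg",
    "tailored_model_id": "<model_id>",
    "tailored_model_influence": 0.9,
    "id_strength": 0.7,
    "sync": true
  }'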

Guidance Methods

Some of the APIs below support various guidance methods to provide greater control over generation. These methods enable you to guide the generation using not only a textual prompt, but also visuals.

The following APIs support guidance methods:

  • /text-to-image/tailored
  • /text-to-vector/tailored

ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.

  • controlnet_canny: Uses edge information from the input image to guide generation based on structural outlines.
  • controlnet_depth: Derives depth information to influence spatial arrangement in the generated image.
  • controlnet_recoloring: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
  • controlnet_color_grid: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.

You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its impact on the generation.

When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.

To use ControlNets, include the following parameters in your request (a request sketch follows the examples below):

  • guidance_method_X: Specify the guidance method (where X is 1 or 2). If guidance_method_2 is used, guidance_method_1 must also be used; to use only one method, use guidance_method_1.
  • guidance_method_X_scale: Set the impact of the guidance (0.0 to 1.0).
  • guidance_method_X_image_file: Provide the base64-encoded input image.
Examples:
  • ControlNet Canny: prompt "An exotic colorful shell on the beach", scale 1.0
  • ControlNet Depth: prompt "A dog, exploring an alien planet", scale 0.8
  • ControlNet Recoloring: prompt "A vibrant photo of a woman", scale 1.0
  • ControlNet Color Grid: prompt "A dynamic fantasy illustration of an erupting volcano", scale 0.7
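
Putting the parameters above together, the sketch below combines two ControlNet methods on the tailored text-to-image route. The model ID and image files are placeholders, both guidance images are assumed to share the same aspect ratio, and the base64 -w 0 invocation assumes GNU coreutils (use base64 -i on macOS):

curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-image/tailored/<model_id>' \
  -H 'Content-Type: application/json' \
  -H 'api_token: <your_api_token>' \
  -d "{
    \"prompt\": \"An exotic colorful shell on the beach\",
    \"guidance_method_1\": \"controlnet_canny\",
    \"guidance_method_1_scale\": 1.0,
    \"guidance_method_1_image_file\": \"$(base64 -w 0 structure.png)\",
    \"guidance_method_2\": \"controlnet_color_grid\",
    \"guidance_method_2_scale\": 0.7,
    \"guidance_method_2_image_file\": \"$(base64 -w 0 palette.png)\"
  }"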

Image Prompt Adapter:

This method offers two modes:

  • regular: Uses the image’s content, style elements, and color palette to guide generation.
  • style_only: Uses the image’s high-level style elements and color palette to influence the generated output.

To use Image Prompt Adapter as guidance, include the following parameters in your request (a request sketch follows the examples below):

  • image_prompt_mode: Specify how the input image influences the generation.
  • image_prompt_scale: Set the impact of the provided image on the generated result (0.0 to 1.0).
  • image_prompt_file: Provide the base64-encoded image file to be used as guidance.

or

  • image_prompt_urls: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
Examples:
  • Image Prompt Adapter: prompt "A drawing of a lion laid on a table.", mode regular, scale 0.85
  • Image Prompt Adapter: prompt "A drawing of a bird.", mode style_only, scale 1.0
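
A corresponding sketch using publicly accessible reference images via image_prompt_urls; the model ID and URL are placeholders:

curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-image/tailored/<model_id>' \
  -H 'Content-Type: application/json' \
  -H 'api_token: <your_api_token>' \
  -d '{
    "prompt": "A drawing of a bird.",
    "image_prompt_mode": "style_only",
    "image_prompt_scale": 0.85,
    "image_prompt_urls": ["https://example.com/style-reference.png"]
  }'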
Servers
https://engine.prod.bria-api.com/v1/

Endpoints

Operations

Generate Image - Tailored model

Request

This route allows you to generate images using a Tailored Model. Tailored models are trained on a visual IP (illustrations, photos, vectors) to faithfully reproduce specific IP elements or guidelines. You can train an engine through our Console or implement training on your platform via API.

This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content and filters out unsafe generated images. The first blocked input image will fail the entire request.

Path
model_idstringrequired

The model id of the tailored model you would like to use in the request.

Headers
api_tokenstringrequired
Bodyapplication/jsonrequired
promptstring

The prompt you would like to use to generate images. Bria currently supports prompts in English only, excluding special characters.

num_resultsinteger[ 1 .. 4 ]

How many images you would like to generate. This parameter is optional. When fast=false, only num_results values of 1 or 2 are supported.

Default 4
aspect_ratiostring

The aspect ratio of the image. When a ControlNet is being used, the aspect ratio is defined by the guidance image and this parameter is ignored.

Default "1:1"
Enum"1:1""2:3""3:2""3:4""4:3""4:5""5:4""9:16""16:9"
syncboolean

Determines the response mode. When true, responses are synchronous. With false, responses are asynchronous, immediately providing URLs for images that are generated in the background. Use polling for the URLs to retrieve images once ready.

Default false
fastboolean

Determines the generation mode. When true, generation uses fast mode, which provides the best balance between speed and quality. When false, regular mode is used. At the moment, tailored models trained using the 'Max' training version do not support fast generation.

Default true
seedinteger

You can choose whether your generated result is random or reproducible. To recreate the same result in the future, reuse the seed value from the response together with the same prompt, model type, and model version. This parameter is optional; exclude it if you do not need to recreate your results.

steps_numinteger[ 4 .. 20 ]

The number of iterations the model goes through to refine the generated image. This parameter is optional. When fast=false, the default value is 30, the minimum is 20 and the maximum is 50.

Default 8
model_influencenumber(float)[ 0 .. 1.5 ]

The influence of the structure reference on the generated image. This parameter is optional. Higher value means more adherence to the reference structure.

Default 1
include_generation_prefixboolean

When true, the model's generation prefix is automatically prepended to your prompt to maintain consistency with the training data, while false allows you to override the training prefix and write the complete prompt yourself, including any preferred prefix text.

Default true
faces_refinerboolean

When set to true, automatically detects and refines generated human faces for enhanced realism and detail.

Default false
content_moderationboolean

When enabled, applies content moderation to both input visuals and generated outputs.

For input images:

  • Processing stops at the first image that fails moderation
  • Returns a 422 error with details about which parameter failed

For synchronous requests (sync=true):

  • If all generated images fail moderation, returns a 422 error
  • If some images pass and others fail, returns a 200 response with successful generations and "blocked" objects for failed ones

For asynchronous requests (sync=false):

  • Failed images are replaced with zero-byte files at their placeholder URLs
  • Successful images are stored at their original placeholder URLs
Default false
guidance_method_1string

Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.

Enum"controlnet_canny""controlnet_depth""controlnet_recoloring""controlnet_color_grid"
guidance_method_1_scalenumber(float)[ 0 .. 1 ]

The impact of the guidance.

Default 1
guidance_method_1_image_filestring

The image that should be used as guidance, in base64 format, with the method defined in guidance_method_1. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must be of the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_1 is selected, an image must be provided.

guidance_method_2string

Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.

Enum"controlnet_canny""controlnet_depth""controlnet_recoloring""controlnet_color_grid"
guidance_method_2_scalenumber(float)[ 0 .. 1 ]

The impact of the guidance.

Default 1
guidance_method_2_image_filestring

The image that should be used as guidance, in base64 format, with the method defined in guidance_method_2. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must be of the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_2 is selected, an image must be provided.

image_prompt_modestring
  • regular: Uses the image’s content, style elements, and color palette to guide generation.
  • style_only: Uses the image’s high-level style elements and color palette to influence the generated output. At the moment, tailored models trained using the 'Max' training version do not support image prompt guidance.
Default "regular"
Enum"regular""style_only"
image_prompt_filestring

The image file to be used as guidance, in base64 format. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. This image can be of any aspect ratio, even when it is not aligned with the one defined in the 'aspect_ratio' parameter or by visuals provided to the ControlNets.

image_prompt_urlsArray of strings(uri)

A list of URLs of images that should be used as guidance. The images can be of different aspect ratios. Accepted formats are jpeg, jpg, png, webp. The URLs should point to accessible, publicly available images.

image_prompt_scalenumber(float)[ 0 .. 1 ]

The impact of the provided image on the generated results. A value between 0.0 (no impact) and 1.0 (full impact).

Default 1
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-image/tailored/{model_id}' \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "prompt": "a book",
    "num_results": 2,
    "sync": true
  }'

Responses

Successful operation

Bodyapplication/json
resultArray of objects or objects

This array contains multiple objects (based on the number specified in num_results), and each object represents a single image or a blocked result.

One of:
result[].​seedinteger

To recreate this result, reuse the same prompt along with the seed from this response in a new request.

result[].​urlsstring

This is the URL where the generated image can be found. It will take a few seconds for the image to become available via this URL if sync=false.

result[].​uuidstring
Response
application/json
{ "result": [ { "urls": "https://storage.server/generate_image/some_uuid/seed_111111.png", "seed": 111111, "uuid": "some_uuid_111111" }, { "urls": "https://storage.server/generate_image/some_uuid/seed_222222.png", "seed": 222222, "uuid": "some_uuid_222222" } ] }

Generate Vector Graphics - Tailored (Beta)

Request

Description

This route allows you to generate vector graphics using a Tailored Model. Tailored Models are trained on your visual IP (illustrations, photos, vectors) to preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs. For a detailed description of tailored model functionality, please refer to the /text-to-image/tailored/{model_id} route documentation. Note that text-to-vector is compatible with tailored models in the illustrative domain.

This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content and filters out unsafe generated images. The first blocked input image will fail the entire request.

Path
model_idstringrequired

The model id of the tailored model you would like to use in the request.

Headers
api_tokenstringrequired
Bodyapplication/jsonrequired
promptstring

The prompt you would like to use to generate images. Bria currently supports prompts in English only, excluding special characters.

num_resultsinteger[ 1 .. 4 ]

How many images you would like to generate. This parameter is optional. When fast=false, only num_results values of 1 or 2 are supported.

Default 4
aspect_ratiostring

The aspect ratio of the image. When a ControlNet is being used, the aspect ratio is defined by the guidance image and this parameter is ignored.

Default "1:1"
Enum"1:1""2:3""3:2""3:4""4:3""4:5""5:4""9:16""16:9"
syncboolean

Determines the response mode. When true, responses are synchronous. When false, responses are asynchronous, immediately providing URLs for images that are generated in the background. Use polling on the URLs to retrieve images once ready. This parameter is optional. When fast=false, it is recommended to use sync=false.

Default true
fastboolean

Determines the generation mode. When true, generation uses fast mode, which provides the best balance between speed and quality. When false, regular mode is used. At the moment, tailored models trained using the 'Max' training version do not support fast generation.

Default true
seedinteger

You can choose whether your generated result is random or reproducible. To recreate the same result in the future, reuse the seed value from the response together with the same prompt, model type, and model version. This parameter is optional; exclude it if you do not need to recreate your results.

steps_numinteger[ 4 .. 20 ]

The number of iterations the model goes through to refine the generated image. This parameter is optional. When fast=false, the default value is 30, the minimum is 20 and the maximum is 50.

Default 8
model_influencenumber(float)[ 0 .. 1.5 ]

The influence of the tailored model on the generation. Only relevant if tailored_model_id is provided. This parameter is optional. Higher value gives more weight to the tailored model.

Default 1
include_generation_prefixboolean

When true, the model's generation prefix is automatically prepended to your prompt to maintain consistency with the training data, while false allows you to override the training prefix and write the complete prompt yourself, including any preferred prefix text.

Default true
content_moderationboolean

When enabled, applies content moderation to both input visuals and generated outputs.

For input images:

  • Processing stops at the first image that fails moderation
  • Returns a 422 error with details about which parameter failed

For synchronous requests (sync=true):

  • If all generated images fail moderation, returns a 422 error
  • If some images pass and others fail, returns a 200 response with successful generations and "blocked" objects for failed ones

For asynchronous requests (sync=false):

  • Failed images are replaced with zero-byte files at their placeholder URLs
  • Successful images are stored at their original placeholder URLs
Default false
guidance_method_1string

Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.

Enum"controlnet_canny""controlnet_depth""controlnet_recoloring""controlnet_color_grid"
guidance_method_1_scalenumber(float)[ 0 .. 1 ]

The impact of the guidance.

Default 1
guidance_method_1_image_filestring

The image that should be used as guidance, in base64 format, with the method defined in guidance_method_1. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must be of the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_1 is selected, an image must be provided.

guidance_method_2string

Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.

Enum"controlnet_canny""controlnet_depth""controlnet_recoloring""controlnet_color_grid"
guidance_method_2_scalenumber(float)[ 0 .. 1 ]

The impact of the guidance.

Default 1
guidance_method_2_image_filestring

The image that should be used as guidance, in base64 format, with the method defined in guidance_method_2. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must be of the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_2 is selected, an image must be provided.

image_prompt_modestring
  • regular: Uses the image’s content, style elements, and color palette to guide generation.
  • style_only: Uses the image’s high-level style elements and color palette to influence the generated output. At the moment, tailored models trained using the 'Max' training version do not support image prompt guidance.
Default "regular"
Enum"regular""style_only"
image_prompt_filestring

The image file to be used as guidance, in base64 format. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. This image can be of any aspect ratio, even when it is not aligned with the one defined in the 'aspect_ratio' parameter or by visuals provided to the ControlNets.

image_prompt_urlsArray of strings(uri)

A list of URLs of images that should be used as guidance. The images can be of different aspect ratios. Accepted formats are jpeg, jpg, png, webp. The URLs should point to accessible, publicly available images.

image_prompt_scalenumber(float)[ 0 .. 1 ]

The impact of the provided image on the generated results. A value between 0.0 (no impact) and 1.0 (full impact).

Default 1
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-vector/tailored/{model_id}' \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "prompt": "a book",
    "num_results": 2,
    "sync": true
  }'

Responses

Successful operation

Bodyapplication/json
resultArray of objects or objects

This array contains multiple objects (based on the number specified in num_results), and each object represents a single image or a blocked result.

One of:
result[].​seedinteger

To recreate this result, reuse the same prompt along with the seed from this response in a new request.

result[].​urlsstring

This is the URL where the generated image can be found. It will take a few seconds for the image to become available via this URL if sync=false.

result[].​uuidstring
Response
application/json
{ "result": [ { "urls": "https://storage.server/generate_image/some_uuid/seed_111111.png", "seed": 111111, "uuid": "some_uuid_111111" }, { "urls": "https://storage.server/generate_image/some_uuid/seed_222222.png", "seed": 222222, "uuid": "some_uuid_222222" } ] }

Restyle Portrait

Request

This endpoint lets you change the style of a portrait while preserving the person’s identity. It works by using a reference image of the person along with a trained tailored model.

This capability is specifically designed for portraits that capture the subject from the torso up, with a recommended face size of at least 500×500 pixels. Images that do not meet these guidelines may produce inconsistent results.

Headers
api_tokenstringrequired
Bodyapplication/jsonrequired
id_image_urlstring(uri)

The URL of the ID reference image. If both id_image_url and id_image_file are provided, id_image_url will be used. Accepted formats: jpeg, jpg, png, webp.

id_image_filestring(binary)

The image file containing the ID reference. This parameter is used if id_image_url is not provided. Accepted formats: jpeg, jpg, png, webp.

tailored_model_idstring

The ID of the tailored model to use for generation.

tailored_model_influencenumber(float)[ 0 .. 1.2 ]

The influence of the tailored model on the generated image.

Default 0.9
id_strengthnumber(float)[ 0 .. 1 ]

Strength of the instant ID, i.e., how strongly the identity from the reference image is preserved.

Default 0.7
syncboolean

Determines the response mode. When true, the request is synchronous.

Default true
curl -i -X POST \
  https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "id_image_url": "http://example.com",
    "id_image_file": "string",
    "tailored_model_id": "string",
    "tailored_model_influence": 0.9,
    "id_strength": 0.7,
    "sync": true
  }'

Responses

Successful operation

Bodyapplication/json
resultArray of objects

List of generated images or blocked results.

result[].​image_urlstring(uri)

URL where the generated image is stored.

Response
application/json
{ "result": [ { "image_url": "http://example.com" } ] }