Overview

Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.

The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through the Bria Console or implement training directly via API.

Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

  1. Project Management: Create and manage projects that define IP characteristics:
  • Create and Retrieve Projects: Use the /projects endpoints to create a new project or retrieve existing projects that belong to your organization.
  • Define IP Type: Specify the IP type (e.g., multi_object_set, defined_character, stylized_scene) and medium.
  • Manage Project Details: Use the /projects/{id} endpoints to update or delete specific projects.
  2. Dataset Management: Organize and refine datasets within your projects:
  • Create and Retrieve Datasets: Use the /datasets endpoints to create new datasets or retrieve existing ones.
  • Generate an Advanced Caption Prefix (for the stylized_scene, defined_character, and object_variants IP types):
    • If the IP type is stylized_scene, defined_character, or object_variants, it is recommended to generate an advanced prefix before uploading images.
    • Use /tailored-gen/generate_prefix to generate a structured caption prefix from 1-6 sample images drawn from the input images provided for training (preferably 6 if available).
    • Update the dataset with the generated prefix using /datasets/{dataset_id} before proceeding with image uploads (see the curl sketch after this list).
  • Upload and Manage Images: Use the /datasets/{dataset_id}/images endpoints to upload images and manage their captions.
  • Clone Datasets: Create variations of existing datasets using the clone functionality.
  3. Model Management: Train and optimize tailored models based on your datasets:
  • Create and Retrieve Models: Use the /models endpoints to create new models or list existing ones.
  • Choose Training Version: Select between "light" (for fast generation and structure reference compatibility) or "max" (for superior prompt alignment and enhanced learning capabilities).
  • Monitor and Control: Manage the model lifecycle, including training start/stop and status monitoring.
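
The prefix-generation step referenced above might look like the following curl sketch. The request body for /tailored-gen/generate_prefix is illustrative: the "images" field name and the base64 format are assumptions, not confirmed parameters.

# Generate a structured caption prefix from sample training images
# (the "images" field name and base64 format are assumptions for illustration).
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/tailored-gen/generate_prefix' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "images": ["<base64_image_1>", "<base64_image_2>", "<base64_image_3>"]
  }'

# Store the returned prefix on the dataset before uploading images.
curl -i -X PUT \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{ "caption_prefix": "<generated_prefix>" }'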

Training Process

To train a tailored model (an end-to-end curl sketch follows these steps):

  1. Create a Project: Use the /projects endpoint to define your IP type and medium.
  2. Create a Dataset: Use the /datasets endpoint to create a dataset within your project.
  3. Generate an Advanced Caption Prefix (for the stylized_scene, defined_character, and object_variants IP types):
  • Before uploading images, call /tailored-gen/generate_prefix, sampling 1-6 images from the input images provided for training (preferably 6 if available).
  • Update the dataset with the generated prefix using /datasets/{dataset_id}.
  4. Upload Images: Upload images using the /datasets/{dataset_id}/images endpoint (minimum resolution: 1024x1024px).
  5. Prepare Dataset: Review the auto-generated captions and update the dataset status to 'completed'.
  6. Create Model: Use the /models endpoint to create a model, selecting either the "light" or "max" training version.
  7. Start Training: Initiate training via the /models/{id}/start_training endpoint. Training typically takes 1-3 hours.
  8. Monitor Progress: Check the training status using the /models/{id} endpoint until training is 'Completed'.
  9. Generate Images: Once trained, your model can be used in multiple ways:
  • Use /text-to-image/tailored/{model_id} for text-to-image generation.
  • Use /text-to-vector/tailored/{model_id} for generating illustrative vector graphics.
  • Use /reimagine/tailored/{model_id} for structure-based generation.
  • Access through the Bria platform interface.
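
Taken together, the flow can be scripted end to end. The sketch below is illustrative only: the create paths and all request-body field names (project_name, ip_type, medium, name, dataset_id, training_version) are assumptions based on the endpoint names above, not confirmed schemas.

# End-to-end sketch of the training flow; body fields and create paths are assumed.
BASE='https://engine.prod.bria-api.com/v1/tailored-gen'
TOKEN='<API_TOKEN>'

# 1. Create a project defining the IP type and medium (assume it returns id 123)
curl -i -X POST "$BASE/projects" \
  -H "api_token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"project_name": "lora character", "ip_type": "defined_character", "medium": "illustration"}'

# 2. Create a dataset in project 123 (assume it returns id 456)
curl -i -X POST "$BASE/projects/123/datasets" \
  -H "api_token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"name": "dataset v1"}'

# 3. Upload an image of at least 1024x1024px (multipart field name assumed)
curl -i -X POST "$BASE/datasets/456/images" \
  -H "api_token: $TOKEN" \
  -F 'file=@lora_standing.png'

# 4. After reviewing the auto-generated captions, mark the dataset completed
curl -i -X PUT "$BASE/datasets/456" \
  -H "api_token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"status": "completed"}'

# 5. Create a model on the dataset, choosing "light" or "max" (assume id 789)
curl -i -X POST "$BASE/models" \
  -H "api_token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"dataset_id": 456, "name": "model v1", "training_version": "max"}'

# 6. Start training (typically 1-3 hours), then poll until 'Completed'
curl -i -X POST "$BASE/models/789/start_training" -H "api_token: $TOKEN"
curl -i "$BASE/models/789" -H "api_token: $TOKEN"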

Alternatively, manage and train tailored models through Bria's user-friendly Console.

Reimagine - Structure Reference

The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.

It produces visuals that preserve the structure of the reference image while applying the specific characteristics defined by your tailored model. Access this capability via the /reimagine/tailored/{model_id} endpoint.
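
A minimal request sketch, assuming illustrative body fields (prompt and structure_image_file are assumptions, not confirmed parameter names):

# Structure-guided generation with a tailored model
# (body field names are assumptions for illustration).
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/reimagine/tailored/{model_id}' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "the character waving on a beach at sunset",
    "structure_image_file": "<base64_reference_image>"
  }'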

Restyle Portraits

The Restyle Portrait feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.

Restyle Portrait is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.

To use Restyle Portrait:

  • Reference Image: Provide a clear portrait image that meets the recommended guidelines.

  • Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.

Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
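
A minimal request sketch, assuming illustrative body fields (tailored_model_id and image_file are assumptions, not confirmed parameter names):

# Restyle a portrait while preserving the subject's identity
# (body field names are assumptions for illustration).
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "tailored_model_id": 789,
    "image_file": "<base64_portrait_image>"
  }'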

Guidance Methods

Some of the APIs below support guidance methods that provide greater control over generation. These methods let you guide generation not only with a textual prompt but also with visuals.

The following APIs support guidance methods:

  • /text-to-image/tailored
  • /text-to-vector/tailored

ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.

  • controlnet_canny: Uses edge information from the input image to guide generation based on structural outlines.
  • controlnet_depth: Derives depth information to influence spatial arrangement in the generated image.
  • controlnet_recoloring: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
  • controlnet_color_grid: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.

You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its influence on the generation.

When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.

To use ControlNets, include the following parameters in your request:

  • guidance_method_X: Specify the guidance method (where X is 1 or 2). If guidance_method_2 is used, guidance_method_1 must also be used. To use only one method, use guidance_method_1.
  • guidance_method_X_scale: Set the impact of the guidance (0.0 to 1.0).
  • guidance_method_X_image_file: Provide the base64-encoded input image.
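
For example, combining two ControlNets in one request might look like this (the guidance_method_* parameters are documented above; the prompt field is assumed to match the endpoint's standard text parameter):

# Canny edges steer structure; a color grid steers the palette.
# Both guidance images must share the same aspect ratio.
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-image/tailored/{model_id}' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "an exotic colorful shell on the beach",
    "guidance_method_1": "controlnet_canny",
    "guidance_method_1_scale": 1.0,
    "guidance_method_1_image_file": "<base64_image>",
    "guidance_method_2": "controlnet_color_grid",
    "guidance_method_2_scale": 0.7,
    "guidance_method_2_image_file": "<base64_image>"
  }'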
The examples below pair each method with a prompt and scale (the accompanying input, guidance, and output images are omitted here):

Guidance Method          Prompt                                                   Scale
ControlNet Canny         An exotic colorful shell on the beach                    1.0
ControlNet Depth         A dog, exploring an alien planet                         0.8
ControlNet Recoloring    A vibrant photo of a woman                               1.0
ControlNet Color Grid    A dynamic fantasy illustration of an erupting volcano    0.7

Image Prompt Adapter:

This method offers two modes:

  • regular: Uses the image’s content, style elements, and color palette to guide generation.
  • style_only: Uses the image’s high-level style elements and color palette to influence the generated output.

To use Image Prompt Adapter as guidance, include the following parameters in your request:

  • image_prompt_mode: Specify how the input image influences the generation.
  • image_prompt_scale: Set the impact of the provided image on the generated result (0.0 to 1.0).
  • image_prompt_file: Provide the base64-encoded image file to be used as guidance.

or

  • image_prompt_urls: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
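
For instance, a style-only guidance request might look like this (the image_prompt_* parameters are documented above; the prompt field and the reference URL are illustrative):

# Guide generation with the high-level style of a reference image.
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/text-to-image/tailored/{model_id}' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "a drawing of a bird",
    "image_prompt_mode": "style_only",
    "image_prompt_scale": 1.0,
    "image_prompt_urls": ["https://example.com/style_reference.png"]
  }'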
The examples below pair each mode with a prompt and scale (the accompanying guidance and output images are omitted here):

Guidance Method         Prompt                                  Mode          Scale
Image Prompt Adapter    A drawing of a lion laid on a table.    regular       0.85
Image Prompt Adapter    A drawing of a bird.                    style_only    1
Servers
https://engine.prod.bria-api.com/v1/

Generation Endpoints

Training Endpoints

Get Datasets by Project

Request

Retrieve all datasets for a specific project

Path
project_id (string, required)

Project ID

Query
include_models (boolean)

If true, a list of model objects using the dataset will be included in the response under the parameter 'models'.

Default: false

include_models_ids (boolean)

If true, a list of model IDs using the dataset will be included in the response under the parameter 'model_ids'.

Default: false

Headers
api_token (string, required)
curl -i -X GET \
  'https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}/datasets?include_models=false&include_models_ids=false' \
  -H 'api_token: string'

Responses

Successfully retrieved datasets

Body (application/json): Array [

id (integer)

Unique identifier for the dataset

project_id (integer)

Associated project ID

name (string)

Dataset name

caption_prefix (string)

Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is created automatically but can be modified; the same prefix is later used as the default generation prefix during image generation.

images_count (integer)

Number of images in the dataset

status (string)

Status of the dataset

Enum: "draft", "completed"

captions_update_status (string)

Status of the captions update process

Enum: "empty", "in_progress", "completed"

models (Array of objects)

List of model objects using this dataset. Only included when include_models=true.

model_ids (Array of strings)

List of model IDs using this dataset. Only included when include_models_ids=true.

created_at (string, date-time)

Timestamp when the dataset was created

updated_at (string, date-time)

Timestamp when the dataset was last updated

]
Response
application/json
[ { "id": 456, "project_id": 123, "name": "dataset v1", "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,", "images_count": 15, "status": "completed", "captions_update_status": "empty", "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T14:30:00Z" }, { "id": 457, "project_id": 123, "name": "dataset v2", "caption_prefix": "An illustration of a character named Max, a male character with spiky black hair,", "images_count": 8, "status": "draft", "captions_update_status": "empty", "created_at": "2024-05-27T09:00:00Z", "updated_at": "2024-05-27T09:00:00Z" } ]

Get Dataset by ID

Request

Retrieve a specific dataset

Path
dataset_id (string, required)

Dataset ID

Headers
api_token (string, required)
curl -i -X GET \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}' \
  -H 'api_token: string'

Responses

Successfully retrieved dataset

Body (application/json)

id (integer)

Unique identifier for the dataset

project_id (integer)

Associated project ID

name (string)

Dataset name

caption_prefix (string)

Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is created automatically but can be modified; the same prefix is later used as the default generation prefix during image generation.

status (string)

Status of the dataset

Enum: "draft", "completed"

captions_update_status (string)

Status of the captions update process

Enum: "empty", "in_progress", "completed"

images_count (integer)

Number of images in the dataset

images (Array of objects)

Array of images in the dataset

images[].id (integer)

Unique identifier for the image

images[].dataset_id (integer)

ID of the dataset this image belongs to

images[].caption (string)

Once an image is uploaded, a caption is generated automatically. The caption is a natural continuation of the caption_prefix.

images[].caption_source (string)

Source of the caption

Enum: "automatic", "manual"

images[].image_name (string)

Name of the image

images[].image_url (string)

URL of the original image file

images[].thumbnail_url (string)

URL of the image thumbnail

images[].created_at (string, date-time)

Timestamp when the image was created

images[].updated_at (string, date-time)

Timestamp when the image was last updated

created_at (string, date-time)

Timestamp when the dataset was created

updated_at (string, date-time)

Timestamp when the dataset was last updated

Response
application/json
{ "id": 456, "project_id": 123, "name": "dataset v1", "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,", "status": "completed", "captions_update_status": "empty", "images_count": 2, "images": [ { "id": 789, "dataset_id": 456, "caption": "standing in a confident pose wearing a blue dress", "caption_source": "automatic", "image_name": "lora_standing.png", "image_url": "https://api.example.com/files/lora_standing.png", "thumbnail_url": "https://api.example.com/files/lora_standing_thumb.png", "created_at": "2024-05-26T12:30:00Z", "updated_at": "2024-05-26T12:30:00Z" }, { "id": 790, "dataset_id": 456, "caption": "sitting on a chair with a gentle smile", "caption_source": "automatic", "image_name": "lora_sitting.png", "image_url": "https://api.example.com/files/lora_sitting.png", "thumbnail_url": "https://api.example.com/files/lora_sitting_thumb.png", "created_at": "2024-05-26T12:45:00Z", "updated_at": "2024-05-26T12:45:00Z" } ], "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T14:30:00Z" }

Update Dataset

Request

Update a dataset.

To use a dataset in model training, its status must be set to completed.

Once a dataset status is changed to completed:

  • Images cannot be added or removed
  • Image captions cannot be edited
  • Caption prefix cannot be modified

Updating the Caption Prefix
If the caption prefix needs to be changed, update it here first, then use the Regenerate All Captions endpoint to refresh all captions with the new prefix. If you want to generate an advanced new caption prefix, use the /tailored-gen/generate_prefix endpoint before updating the dataset.
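
A sketch of that two-step flow, assuming the regeneration route takes no request body (only the PUT /datasets/{dataset_id}/images/ path is taken from this page):

# 1. Update the caption prefix while the dataset is still in draft
curl -i -X PUT \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}' \
  -H 'api_token: <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{ "caption_prefix": "An illustration of a character named Lora," }'

# 2. Regenerate all captions so they continue the new prefix
curl -i -X PUT \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}/images/' \
  -H 'api_token: <API_TOKEN>'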

To create a new version of a dataset, it is recommended to use the Clone Dataset As Draft route.

Constraints:

  • Cannot update caption_prefix if dataset status is completed
  • Dataset must have at least 1 image to be marked as completed
Path
dataset_id (string, required)

Dataset ID

Headers
api_token (string, required)

Body (application/json, required)

name (string)

New dataset name (optional)

caption_prefix (string)

New caption prefix (optional). Cannot be updated if the dataset status is completed. If you update the caption prefix, it is crucial to regenerate all captions using the endpoint PUT /datasets/{dataset_id}/images/. Use /tailored-gen/generate_prefix to generate an advanced prefix.

status (string)

Dataset status (optional). Can be set to completed to enable usage in model training.

Enum: "draft", "completed"
curl -i -X PUT \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}' \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "status": "completed"
  }'

Responses

Dataset successfully updated

Body (application/json)

id (integer)

Unique identifier for the dataset

project_id (integer)

Associated project ID

name (string)

Dataset name

caption_prefix (string)

Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is created automatically but can be modified; the same prefix is later used as the default generation prefix during image generation.

status (string)

Status of the dataset

Enum: "draft", "completed"

captions_update_status (string)

Status of the captions update process

Enum: "empty", "in_progress", "completed"

created_at (string, date-time)

Timestamp when the dataset was created

updated_at (string, date-time)

Timestamp when the dataset was last updated

Response
application/json
{ "id": 456, "project_id": 123, "name": "dataset v1", "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,", "status": "completed", "captions_update_status": "empty", "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T15:30:00Z" }