Overview
Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.
The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.
Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.
The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:
- Project Management: Create and manage projects that define IP characteristics:
  - Create and Retrieve Projects: Use the `/projects` endpoints to create a new project or retrieve existing projects that belong to your organization.
  - Define IP Type: Specify the IP type (e.g., `multi_object_set`, `defined_character`, `stylized_scene`) and medium.
  - Manage Project Details: Use the `/projects/{id}` endpoints to update or delete specific projects.
- Dataset Management: Organize and refine datasets within your projects:
  - Create and Retrieve Datasets: Use the `/datasets` endpoints to create new datasets or retrieve existing ones.
  - Generate an Advanced Caption Prefix (for the `stylized_scene`, `defined_character`, and `object_variants` IP types): For these IP types, it is recommended to generate an advanced prefix before uploading images. Use `/tailored-gen/generate_prefix` to generate a structured caption prefix from 1-6 sample images drawn from the training images (preferably 6 if available), then update the dataset with the generated prefix using `/datasets/{dataset_id}` before proceeding with image uploads.
  - Upload and Manage Images: Use the `/datasets/{dataset_id}/images` endpoints to upload images and manage their captions.
  - Clone Datasets: Create variations of existing datasets using the clone functionality.
- Model Management: Train and optimize tailored models based on your datasets:
  - Create and Retrieve Models: Use the `/models` endpoints to create new models or list existing ones.
  - Choose Training Version: Select between "light" (for fast generation and structure-reference compatibility) or "max" (for superior prompt alignment and enhanced learning capabilities).
  - Monitor and Control: Manage the model lifecycle, including starting and stopping training and monitoring status.
To train a tailored model:
- Create a Project: Use the `/projects` endpoint to define your IP type and medium.
- Create a Dataset: Use the `/datasets` endpoint to create a dataset within your project.
- Generate an Advanced Caption Prefix (for the `stylized_scene`, `defined_character`, and `object_variants` IP types): Before uploading images, call `/tailored-gen/generate_prefix` with 1-6 images sampled from the training images (preferably 6 if available), then update the dataset with the generated prefix using `/datasets/{dataset_id}`.
- Upload Images: Upload images using the `/datasets/{dataset_id}/images` endpoint (minimum resolution: 1024x1024px).
- Prepare Dataset: Review the auto-generated captions and update the dataset status to 'completed'.
- Create Model: Use the `/models` endpoint to create a model, selecting either the "light" or "max" training version.
- Start Training: Initiate training via the `/models/{id}/start_training` endpoint. Training typically takes 1-3 hours.
- Monitor Progress: Check the training status using the `/models/{id}` endpoint until training is 'Completed'.
- Generate Images: Once trained, your model can be used in multiple ways:
  - Use `/text-to-image/tailored/{model_id}` for text-to-image generation.
  - Use `/text-to-vector/tailored/{model_id}` for generating illustrative vector graphics.
  - Use `/reimagine/tailored/{model_id}` for structure-based generation.
  - Use `/tailored-gen/restyle_portrait` for human portrait-based generation.
  - Access the model through the Bria platform interface.
Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.
The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.
It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model. Access this capability via the /reimagine endpoint.

The Reimagine - Portrait Reference feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.
Reimagine - Portrait Reference is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Reimagine - Portrait Reference:
- Reference Image: Provide a clear portrait image that meets the recommended guidelines.
- Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.
Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
Model Compatibility Note: This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.
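The resolution guideline above can be checked client-side before submitting a portrait. This helper is a local convenience sketch, not part of the API; how you measure the face region (e.g., with a face detector) is up to your pipeline.

```python
MIN_FACE_PX = 500  # recommended minimum face resolution for restyle_portrait

def portrait_meets_guidelines(face_width, face_height):
    """Return True when the detected face region meets the 500x500px guideline.

    Portraits below this threshold may produce inconsistent or
    unsatisfactory results, per the documentation above.
    """
    return face_width >= MIN_FACE_PX and face_height >= MIN_FACE_PX
```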
Some of the APIs below support guidance methods that provide greater control over generation. These methods let you guide the generation using not only a textual prompt but also visual inputs.
The following APIs support guidance methods:
/text-to-image/tailored
/text-to-vector/tailored
ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.
- controlnet_canny: Uses edge information from the input image to guide generation based on structural outlines.
- controlnet_depth: Derives depth information to influence spatial arrangement in the generated image.
- controlnet_recoloring: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
- controlnet_color_grid: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.
You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its influence on the generated result.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.
To use ControlNets, include the following parameters in your request:
- `guidance_method_X`: Specify the guidance method (where X is 1 or 2). If `guidance_method_2` is used, `guidance_method_1` must also be used. To use only one method, use `guidance_method_1`.
- `guidance_method_X_scale`: Set the impact of the guidance (0.0 to 1.0).
- `guidance_method_X_image_file`: Provide the base64-encoded input image.
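The parameter rules above (one or two methods, method 2 only alongside method 1, scales in [0.0, 1.0], matching aspect ratios across input images) can be enforced client-side before a request is sent. This validator is a sketch, not part of the API; it only assembles the documented parameter names.

```python
import base64

CONTROLNETS = {"controlnet_canny", "controlnet_depth",
               "controlnet_recoloring", "controlnet_color_grid"}

def build_guidance_params(methods):
    """Build guidance_method_* request parameters.

    methods: list of (name, scale, image_bytes, (width, height)) tuples.
    """
    if not 1 <= len(methods) <= 2:
        raise ValueError("specify one or two guidance methods")
    # All input images must share the same aspect ratio.
    ratios = {round(w / h, 3) for _, _, _, (w, h) in methods}
    if len(ratios) > 1:
        raise ValueError("all guidance images must have the same aspect ratio")
    params = {}
    # enumerate(start=1) guarantees guidance_method_1 is filled before _2.
    for i, (name, scale, image_bytes, _) in enumerate(methods, start=1):
        if name not in CONTROLNETS:
            raise ValueError(f"unknown guidance method: {name}")
        if not 0.0 <= scale <= 1.0:
            raise ValueError("scale must be between 0.0 and 1.0")
        params[f"guidance_method_{i}"] = name
        params[f"guidance_method_{i}_scale"] = scale
        params[f"guidance_method_{i}_image_file"] = (
            base64.b64encode(image_bytes).decode())
    return params
```

The resulting dictionary can be merged into the JSON body of a `/text-to-image/tailored` or `/text-to-vector/tailored` request.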
(Each example pairs an input image and a guidance image with the generated output; the images are omitted here.)
Guidance Method | Prompt | Scale |
---|---|---|
ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
ControlNet Depth | A dog, exploring an alien planet | 0.8 |
ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |
Image Prompt Adapter:
This method offers two modes:
- regular: Uses the image’s content, style elements, and color palette to guide generation.
- style_only: Uses the image’s high-level style elements and color palette to influence the generated output.
To use Image Prompt Adapter as guidance, include the following parameters in your request:
- `image_prompt_mode`: Specify how the input image influences the generation.
- `image_prompt_scale`: Set the impact of the provided image on the generated result (0.0 to 1.0).
- `image_prompt_file`: Provide the base64-encoded image file to be used as guidance, or
- `image_prompt_urls`: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
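The mode, scale, and file-or-URLs rules above can likewise be assembled and validated locally. This builder is a sketch; it uses only the parameter names documented above and assumes exactly one of the two image sources should be provided.

```python
import base64

def build_image_prompt_params(mode, scale, image_bytes=None, image_urls=None):
    """Build Image Prompt Adapter parameters from either a file or URLs."""
    if mode not in ("regular", "style_only"):
        raise ValueError("mode must be 'regular' or 'style_only'")
    if not 0.0 <= scale <= 1.0:
        raise ValueError("scale must be between 0.0 and 1.0")
    if (image_bytes is None) == (image_urls is None):
        raise ValueError("provide exactly one of image_bytes or image_urls")
    params = {"image_prompt_mode": mode, "image_prompt_scale": scale}
    if image_bytes is not None:
        params["image_prompt_file"] = base64.b64encode(image_bytes).decode()
    else:
        params["image_prompt_urls"] = list(image_urls)
    return params
```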
(Guidance and output images are omitted here.)
Guidance Method | Prompt | Mode | Scale |
---|---|---|---|
Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
Image Prompt Adapter | A drawing of a bird. | style_only | 1.0 |
https://engine.prod.bria-api.com/v1/
Request
Create a new dataset.
Constraints:
- Dataset must have at least 1 image to be completed
- Maximum of 200 images per dataset
When creating a dataset, a default caption prefix is created in all cases.
Generating an Advanced Caption Prefix Before Uploading Images
When creating a dataset with the `stylized_scene`, `defined_character`, or `object_variants` IP types, it is recommended to generate an advanced caption prefix before uploading images.
To do this, use the /tailored-gen/generate_prefix endpoint, send up to 6 images, and update the dataset with the received prefix using the Update Dataset endpoint. Once the prefix is updated, proceed with uploading images.
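Selecting the sample images for `/tailored-gen/generate_prefix` can be done with a small helper. This is a sketch: the IP-type gate and the 1-6 sample limit come from the documentation above, while how you order or choose the candidate images is left to your pipeline.

```python
PREFIX_IP_TYPES = {"stylized_scene", "defined_character", "object_variants"}

def select_prefix_samples(ip_type, training_images):
    """Pick 1-6 sample images (preferably 6) for generate_prefix."""
    if ip_type not in PREFIX_IP_TYPES:
        raise ValueError(
            "advanced prefix generation applies only to stylized_scene, "
            "defined_character, and object_variants IP types")
    if not training_images:
        raise ValueError("at least one training image is required")
    # Send up to 6 images; fewer is allowed when fewer are available.
    return training_images[:6]
```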
Image Preprocessing
Uploaded images will be automatically resized so that the shortest side is 1024 pixels while maintaining the aspect ratio.
Then, a centered 1024x1024 crop will be applied. The final cropped image will be saved for training.
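The resize-and-crop behavior described above can be reproduced locally to preview what the trainer will see. This sketch computes only the geometry (Pillow-style left/top/right/bottom box coordinates are an assumption); apply it with your image library of choice.

```python
def resized_dimensions(width, height, shortest=1024):
    """Scale so the shortest side becomes `shortest`, keeping aspect ratio."""
    scale = shortest / min(width, height)
    return round(width * scale), round(height * scale)

def center_crop_box(width, height, size=1024):
    """Left, top, right, bottom of a centered size x size crop."""
    left = (width - size) // 2
    top = (height - size) // 2
    return (left, top, left + size, top + size)
```

For example, a 2048x1536 upload is resized to 1365x1024, then cropped to the centered 1024x1024 region.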
https://engine.prod.bria-api.com/v1/tailored-gen/datasets
curl -i -X POST \
https://engine.prod.bria-api.com/v1/tailored-gen/datasets \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"project_id": 123,
"name": "dataset v1"
}'
Dataset successfully created
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified; the same prefix is later used as the default generation prefix during image generation.
{
  "id": 456,
  "project_id": 123,
  "name": "dataset v1",
  "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
  "status": "draft",
  "captions_update_status": "empty",
  "created_at": "2024-05-26T12:00:00Z",
  "updated_at": "2024-05-26T12:00:00Z"
}
https://engine.prod.bria-api.com/v1/tailored-gen/datasets
curl -i -X GET \
https://engine.prod.bria-api.com/v1/tailored-gen/datasets \
-H 'api_token: string'
Successfully retrieved datasets
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified; the same prefix is later used as the default generation prefix during image generation.
[
  {
    "id": 456,
    "project_id": 123,
    "name": "dataset v1",
    "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
    "status": "completed",
    "captions_update_status": "empty",
    "created_at": "2024-05-26T12:00:00Z",
    "updated_at": "2024-05-26T14:30:00Z"
  },
  {
    "id": 457,
    "project_id": 124,
    "name": "dataset v2",
    "caption_prefix": "An illustration of a character named Max, a male character with spiky black hair,",
    "status": "draft",
    "captions_update_status": "empty",
    "created_at": "2024-05-27T09:00:00Z",
    "updated_at": "2024-05-27T09:00:00Z"
  }
]
https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}/datasets
curl -i -X GET \
'https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}/datasets?include_models=false&include_models_ids=false' \
-H 'api_token: string'
Successfully retrieved datasets
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified; the same prefix is later used as the default generation prefix during image generation.
List of model objects using this dataset. Only included when include_models=true
List of model IDs using this dataset. Only included when include_models_ids=true
[
  {
    "id": 456,
    "project_id": 123,
    "name": "dataset v1",
    "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
    "images_count": 15,
    "status": "completed",
    "captions_update_status": "empty",
    "created_at": "2024-05-26T12:00:00Z",
    "updated_at": "2024-05-26T14:30:00Z"
  },
  {
    "id": 457,
    "project_id": 123,
    "name": "dataset v2",
    "caption_prefix": "An illustration of a character named Max, a male character with spiky black hair,",
    "images_count": 8,
    "status": "draft",
    "captions_update_status": "empty",
    "created_at": "2024-05-27T09:00:00Z",
    "updated_at": "2024-05-27T09:00:00Z"
  }
]