Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.
The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.
Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.
The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

- /projects — create a new project or retrieve existing projects that belong to your organization.
- /projects/{id} — update or delete specific projects.
- /datasets — create new datasets or retrieve existing ones. For the 'stylized_scene', 'defined_character' and 'object_variants' IP types, it is recommended to generate an advanced prefix before uploading images: use /tailored-gen/generate_prefix to generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available), then update the dataset via /datasets/{dataset_id} before proceeding with image uploads.
- /datasets/{dataset_id}/images — upload images and manage their captions.
- /models — create new models or list existing ones.

To train a tailored model:

1. Use the /projects endpoint to define your IP type and medium.
2. Use the /datasets endpoint to create a dataset within your project.
3. (For the 'stylized_scene', 'defined_character' or 'object_variants' IP types): call /tailored-gen/generate_prefix, sampling 1-6 images from the input images provided for training (preferably 6 if available), then update the dataset via /datasets/{dataset_id}.
4. Upload your training images via the /datasets/{dataset_id}/images endpoint (minimum resolution: 1024x1024px).
5. Use the /models endpoint to create a model, selecting either the "light" or "max" training version.
6. Start training via the /models/{id}/start_training endpoint. Training typically takes 1-3 hours.
7. Poll the /models/{id} endpoint until training is 'Completed'.
8. Generate with your tailored model:
   - /text-to-image/tailored/{model_id} for text-to-image generation.
   - /text-to-vector/tailored/{model_id} for generating illustrative vector graphics.
   - /reimagine/tailored/{model_id} for structure-based generation.

Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.
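The training workflow above can be sketched in Python. The endpoint paths and the 'Completed' status come from this document; the `post`/`get` transport callables and the payload field names (`ip_type`, `medium`, `project_id`, `dataset_id`, `training_version`) are illustrative assumptions, not the official schema:

```python
import time

def train_tailored_model(post, get, poll_interval=60):
    """Drive the training lifecycle end to end.

    post(path, payload) and get(path) are caller-supplied transports,
    e.g. thin wrappers over an HTTP client that add the api_token header.
    Payload field names here are illustrative, not the official schema.
    """
    # 1. Create a project defining the IP type and medium.
    project = post("/projects", {"ip_type": "stylized_scene", "medium": "illustration"})
    # 2. Create a dataset within the project.
    dataset = post("/datasets", {"project_id": project["id"]})
    # 3. Upload training images (minimum resolution: 1024x1024px).
    post(f"/datasets/{dataset['id']}/images", {"images": ["<base64 image>"]})
    # 4. Create a model, selecting the "light" or "max" training version.
    model = post("/models", {"dataset_id": dataset["id"], "training_version": "light"})
    # 5. Start training, then poll until status is 'Completed'
    #    (training typically takes 1-3 hours).
    post(f"/models/{model['id']}/start_training", {})
    while get(f"/models/{model['id']}")["status"] != "Completed":
        time.sleep(poll_interval)
    return model["id"]
```

Injecting the transport keeps the lifecycle logic independent of any particular HTTP client.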
The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.
It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model. Access this capability via the /reimagine endpoint.
The Restyle Portrait feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.
Restyle Portrait is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Restyle Portrait:
Reference Image: Provide a clear portrait image that meets the recommended guidelines.
Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.
Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
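As a rough illustration, the request body for this endpoint can be assembled as follows. The parameter names match the request sample shown later in this section; the helper function itself is hypothetical:

```python
import base64
import json

def build_restyle_request(image_bytes, tailored_model_id,
                          tailored_model_influence=0.9, id_strength=0.7):
    """Build the JSON body for /tailored-gen/restyle_portrait,
    sending the portrait as a base64-encoded id_image_file."""
    return json.dumps({
        "id_image_file": base64.b64encode(image_bytes).decode("ascii"),
        "tailored_model_id": tailored_model_id,
        "tailored_model_influence": tailored_model_influence,
        "id_strength": id_strength,
        "sync": True,
    })
```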
Some of the APIs below support various guidance methods that provide greater control over generation. These methods let you guide the generation with visuals in addition to a textual prompt.
The following APIs support guidance methods:
/text-to-image/tailored
/text-to-vector/tailored
ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.
You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter to determine its impact on the generation inference.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.
To use ControlNets, include the following parameters in your request:
- guidance_method_X: Specify the guidance method (where X is 1 or 2). If guidance_method_2 is used, guidance_method_1 must also be used. To use only one method, use guidance_method_1.
- guidance_method_X_scale: Set the impact of the guidance (0.0 to 1.0).
- guidance_method_X_image_file: Provide the base64-encoded input image.

Guidance Method | Prompt | Scale |
---|---|---|
ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
ControlNet Depth | A dog, exploring an alien planet | 0.8 |
ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |
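A minimal sketch of attaching ControlNet guidance to a request payload, enforcing the pairing and scale rules above. The helper and the method name strings (e.g. "controlnet_canny") are illustrative assumptions:

```python
import base64

def add_controlnets(payload, methods):
    """Attach up to two ControlNet guidance methods to a request payload.

    methods is a list of (name, image_bytes, scale) tuples. Numbering the
    entries from 1 automatically satisfies the rule that guidance_method_2
    may only appear together with guidance_method_1.
    """
    if not 1 <= len(methods) <= 2:
        raise ValueError("specify one or two guidance methods")
    for i, (name, image_bytes, scale) in enumerate(methods, start=1):
        if not 0.0 <= scale <= 1.0:
            raise ValueError("scale must be between 0.0 and 1.0")
        payload[f"guidance_method_{i}"] = name
        payload[f"guidance_method_{i}_scale"] = scale
        payload[f"guidance_method_{i}_image_file"] = (
            base64.b64encode(image_bytes).decode("ascii"))
    return payload
```

Remember that when two methods are used, both guidance images must share the same aspect ratio.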
Image Prompt Adapter:
This method offers two modes, regular and style_only, described under the image_prompt_mode parameter.
To use Image Prompt Adapter as guidance, include the following parameters in your request:
- image_prompt_mode: Specify how the input image influences the generation.
- image_prompt_scale: Set the impact of the provided image on the generated result (0.0 to 1.0).
- image_prompt_file: Provide the base64-encoded image file to be used as guidance, or
- image_prompt_urls: Provide a list of URLs pointing to publicly accessible images to be used as guidance.

Guidance Method | Prompt | Mode | Scale |
---|---|---|---|
Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
Image Prompt Adapter | A drawing of a bird. | style | 1.0 |
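The Image Prompt Adapter parameters above can be assembled the same way. This helper is a hypothetical sketch that enforces the either-file-or-URLs choice:

```python
import base64

def add_image_prompt(payload, mode, scale, image_bytes=None, urls=None):
    """Attach Image Prompt Adapter guidance to a request payload.

    Exactly one of image_bytes (sent as image_prompt_file) or urls
    (sent as image_prompt_urls) must be given.
    """
    if (image_bytes is None) == (urls is None):
        raise ValueError("provide either image_bytes or urls, not both")
    payload["image_prompt_mode"] = mode    # "regular" or "style_only"
    payload["image_prompt_scale"] = scale  # 0.0 to 1.0
    if image_bytes is not None:
        payload["image_prompt_file"] = base64.b64encode(image_bytes).decode("ascii")
    else:
        payload["image_prompt_urls"] = list(urls)
    return payload
```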
https://engine.prod.bria-api.com/v1/
This route allows you to generate images using a Tailored Model. Tailored models are trained on a visual IP (illustrations, photos, vectors) to faithfully reproduce specific IP elements or guidelines. You can train an engine through our Console or implement training on your platform via API.
This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content, and filters out unsafe generated images - the first blocked input image will fail the entire request.
The prompt you would like to use to generate images. Bria currently supports prompts in English only, excluding special characters.
How many images you would like to generate. This parameter is optional. When fast=false, only num_results 1, 2 are supported.
The aspect ratio of the image. When a ControlNet is being used, the aspect ratio is defined by the guidance image and this parameter is ignored.
Determines the response mode. When true, responses are synchronous. With false, responses are asynchronous, immediately providing URLs for images that are generated in the background. Use polling for the URLs to retrieve images once ready.
Determines the generation mode. When true, the generation will utilize the fast mode, which provides the best balance between speed and quality. When false, the regular mode will be utilized. At the moment, tailored models trained using the 'Max' training version do not support fast generation.
You can choose whether you want your generated result to be random or predictable. You can recreate the same result in the future by using the seed value of a result from the response with the prompt, model type and model version. You can exclude this parameter if you are not interested in recreating your results. This parameter is optional.
Specify elements that you didn't ask for in the prompt but are being generated and that you would like to exclude. This parameter is optional. Bria currently supports prompts in English only. This parameter is only relevant when fast is set to false.
The number of iterations the model goes through to refine the generated image. This parameter is optional. When fast=false, the default value is 30, the minimum is 20 and the maximum is 50.
Determines how closely the generated image should adhere to the input text description. This parameter is optional and only relevant when fast is set to false.
The influence of the structure reference on the generated image. This parameter is optional. Higher value means more adherence to the reference structure.
When true, the model's generation prefix is automatically prepended to your prompt to maintain consistency with the training data, while false allows you to override the training prefix and write the complete prompt yourself, including any preferred prefix text.
When set to true, automatically detects and refines generated human faces for enhanced realism and detail.
When enabled (default: true), the input prompt is scanned for NSFW or ethically restricted terms before image generation. If the prompt violates Bria's ethical guidelines, the request will be rejected with a 408 error.
When enabled, applies content moderation to both input visuals and generated outputs.
For input images:
For synchronous requests (sync=true):
For asynchronous requests (sync=false):
Flags prompts with potential IP content. If detected, a warning will be included in the response.
Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.
The image that should be used as guidance, in base64 format, with the method defined in guidance_method_1. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must share the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_1 is selected, an image must be provided.
Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.
The image that should be used as guidance, in base64 format, with the method defined in guidance_method_2. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must share the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_2 is selected, an image must be provided.
regular: Uses the image's content, style elements, and color palette to guide generation.
style_only: Uses the image's high-level style elements and color palette to influence the generated output. At the moment, tailored models trained using the 'Max' training version do not support image prompt guidance.

The image file to be used as guidance, in base64 format. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. This image can be of any aspect ratio, even when it is not aligned with the one defined in the 'aspect_ratio' parameter or by visuals provided to the ControlNets.
A list of URLs of images that should be used as guidance. The images can be of different aspect ratios. Accepted formats are jpeg, jpg, png, webp. The URLs should point to accessible, publicly available images.
https://engine.prod.bria-api.com/v1/text-to-image/tailored/{model_id}
curl -i -X POST \
'https://engine.prod.bria-api.com/v1/text-to-image/tailored/{model_id}' \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"prompt": "a book",
"num_results": 2,
"sync": true
}'
Successful operation.
There are multiple objects in this array (based on the amount specified in num_results) and each object represents a single image or a blocked result.
To recreate the same result again, reuse the prompt from your request together with the seed from the response.
This is the URL where the generated image can be found. It will take a few seconds for the image to become available via this URL if sync=false.
{
  "result": [
    { "urls": "https://storage.server/generate_image/some_uuid/seed_111111.png", "seed": 111111, "uuid": "some_uuid_111111" },
    { "urls": "https://storage.server/generate_image/some_uuid/seed_222222.png", "seed": 222222, "uuid": "some_uuid_222222" }
  ]
}
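Two small hypothetical helpers illustrating the notes above: recreating a result from its seed, and polling the returned URLs when sync=false. The `is_ready` check (e.g. an HTTP HEAD request against each URL) is supplied by the caller:

```python
import time

def reproduce_request(original_request, result_item):
    """Recreate a result: reuse the original prompt (and model) together
    with the seed from the corresponding response item."""
    request = dict(original_request)
    request["seed"] = result_item["seed"]
    return request

def wait_for_urls(response, is_ready, poll_interval=1.0, timeout=60.0):
    """With sync=false, URLs are returned before the images exist.
    Poll each URL with the caller-supplied is_ready(url) check until
    every image is available, then return all URLs."""
    deadline = time.monotonic() + timeout
    pending = [item["urls"] for item in response["result"]]
    while pending:
        pending = [u for u in pending if not is_ready(u)]
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"{len(pending)} images not ready")
            time.sleep(poll_interval)
    return [item["urls"] for item in response["result"]]
```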
Description
This route allows you to generate vector graphics using a Tailored Model. Tailored models are trained on your visual IP (illustrations, photos, vectors) to preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs. For a detailed description of tailored model functionality, please refer to the /text-to-image/tailored/{model_id} route documentation.

*Text-to-vector is compatible with tailored models in the illustrative domain.

This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content and filters out unsafe generated images; the first blocked input image will fail the entire request.
The prompt you would like to use to generate images. Bria currently supports prompts in English only, excluding special characters.
How many images you would like to generate. This parameter is optional. When fast=false, only num_results 1, 2 are supported.
The aspect ratio of the image. When a ControlNet is being used, the aspect ratio is defined by the guidance image and this parameter is ignored.
Determines the response mode. When true, responses are synchronous. When false, responses are asynchronous, immediately providing URLs for images that are generated in the background. Use polling on the URLs to retrieve images once ready. This parameter is optional. When fast=false, it is recommended to use sync=false.
Determines the generation mode. When true, the generation will utilize the fast mode, which provides the best balance between speed and quality. When false, the regular mode will be utilized. At the moment, tailored models trained using the 'Max' training version do not support fast generation.
You can choose whether you want your generated result to be random or predictable. You can recreate the same result in the future by using the seed value of a result from the response with the prompt, model type and model version. You can exclude this parameter if you are not interested in recreating your results. This parameter is optional.
Specify elements that you didn't ask for in the prompt but are being generated and that you would like to exclude. This parameter is optional. Bria currently supports prompts in English only. This parameter is only relevant when fast is set to false.
The number of iterations the model goes through to refine the generated image. This parameter is optional. When fast=false, the default value is 30, the minimum is 20 and the maximum is 50.
Determines how closely the generated image should adhere to the input text description. This parameter is optional and only relevant when fast is set to false.
The influence of the tailored model on the generation. Only relevant if tailored_model_id is provided. This parameter is optional. Higher value gives more weight to the tailored model.
When true, the model's generation prefix is automatically prepended to your prompt to maintain consistency with the training data, while false allows you to override the training prefix and write the complete prompt yourself, including any preferred prefix text.
When enabled (default: true), the input prompt is scanned for NSFW or ethically restricted terms before image generation. If the prompt violates Bria's ethical guidelines, the request will be rejected with a 408 error.
When enabled, applies content moderation to both input visuals and generated outputs.
For input images:
For synchronous requests (sync=true):
For asynchronous requests (sync=false):
Flags prompts with potential IP content. If detected, a warning will be included in the response.
Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.
The image that should be used as guidance, in base64 format, with the method defined in guidance_method_1. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must share the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_1 is selected, an image must be provided.
Which guidance type you would like to include in the generation. Up to 2 guidance methods can be combined during a single inference. This parameter is optional.
The image that should be used as guidance, in base64 format, with the method defined in guidance_method_2. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. If more than one guidance method is used, all guidance images must share the same aspect ratio, which will be the aspect ratio of the generated results. If guidance_method_2 is selected, an image must be provided.
regular: Uses the image's content, style elements, and color palette to guide generation.
style_only: Uses the image's high-level style elements and color palette to influence the generated output. At the moment, tailored models trained using the 'Max' training version do not support image prompt guidance.

The image file to be used as guidance, in base64 format. Accepted formats are jpeg, jpg, png, webp. Maximum file size 12MB. This image can be of any aspect ratio, even when it is not aligned with the one defined in the 'aspect_ratio' parameter or by visuals provided to the ControlNets.
A list of URLs of images that should be used as guidance. The images can be of different aspect ratios. Accepted formats are jpeg, jpg, png, webp. The URLs should point to accessible, publicly available images.
https://engine.prod.bria-api.com/v1/text-to-vector/tailored/{model_id}
curl -i -X POST \
'https://engine.prod.bria-api.com/v1/text-to-vector/tailored/{model_id}' \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"prompt": "a book",
"num_results": 2,
"sync": true
}'
Successful operation.
There are multiple objects in this array (based on the amount specified in num_results) and each object represents a single image or a blocked result.
To recreate the same result again, reuse the prompt from your request together with the seed from the response.
This is the URL where the generated image can be found. It will take a few seconds for the image to become available via this URL if sync=false.
{
  "result": [
    { "urls": "https://storage.server/generate_image/some_uuid/seed_111111.png", "seed": 111111, "uuid": "some_uuid_111111" },
    { "urls": "https://storage.server/generate_image/some_uuid/seed_222222.png", "seed": 222222, "uuid": "some_uuid_222222" }
  ]
}
This endpoint lets you change the style of a portrait while preserving the subject's identity. It works by using a reference image of the person along with a trained tailored model.
This capability is specifically designed for portraits that capture the subject from the torso up, with a recommended face size of at least 500×500 pixels. Images that do not meet these guidelines may produce inconsistent results.
The URL of the ID reference image. If both id_image_url and id_image_file are provided, id_image_url will be used. Accepted formats: jpeg, jpg, png, webp.

The image file containing the ID reference. This parameter is used if id_image_url is not provided. Accepted formats: jpeg, jpg, png, webp.
The influence of the tailored model on the generated image.
The number of iterations the model goes through to refine the generated image. This parameter is optional.
When enabled, applies content moderation to both input visuals and generated outputs.
For input images:
For synchronous requests (sync=true):
For asynchronous requests (sync=false):
https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait
curl -i -X POST \
https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"id_image_url": "http://example.com",
"id_image_file": "string",
"tailored_model_id": "string",
"tailored_model_influence": 0.9,
"steps_num": 12,
"content_moderation": false,
"id_strength": 0.7,
"sync": true
}'
{
  "result": [
    { "image_url": "http://example.com" }
  ]
}