Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.
The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.
Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.
The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

- `/projects`: create a new project or retrieve existing projects that belong to your organization.
- `/projects/{id}`: update or delete specific projects.
- `/datasets`: create new datasets or retrieve existing ones.
- `/tailored-gen/generate_prefix`: for the `stylized_scene`, `defined_character`, and `object_variants` IP types, generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available). It is recommended to generate this advanced prefix and set it on the dataset via `/datasets/{dataset_id}` before proceeding with image uploads.
- `/datasets/{dataset_id}/images`: upload images and manage their captions.
- `/models`: create new models or list existing ones.

To train a tailored model:

1. Create a project using the `/projects` endpoint to define your IP type and medium.
2. Create a dataset within your project using the `/datasets` endpoint.
3. Recommended for the `stylized_scene`, `defined_character`, and `object_variants` IP types: generate an advanced prefix with `/tailored-gen/generate_prefix`, sampling 1-6 images from the input images provided for training (preferably 6 if available), then update the dataset via `/datasets/{dataset_id}`.
4. Upload images using the `/datasets/{dataset_id}/images` endpoint (minimum resolution: 1024x1024px).
5. Create a model using the `/models` endpoint, selecting either the "light" or "max" training version.
6. Start training via the `/models/{id}/start_training` endpoint. Training typically takes 1-3 hours.
7. Poll the `/models/{id}` endpoint until the training status is 'Completed'.
8. Generate with your tailored model using:
   - `/text-to-image/tailored/{model_id}` for text-to-image generation.
   - `/text-to-vector/tailored/{model_id}` for generating illustrative vector graphics.
   - `/reimagine/tailored/{model_id}` for structure-based generation.

Alternatively, manage and train tailored models through Bria's user-friendly Console. Get started here.
The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.
It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model. Access this capability via the /reimagine endpoint.
The Restyle Portrait feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.
Restyle Portrait is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Restyle Portrait:
Reference Image: Provide a clear portrait image that meets the recommended guidelines.
Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.
Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
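A request body for this endpoint might be assembled as below. The endpoint path comes from this document; the payload field names (`image_file`, `tailored_model_id`) are illustrative assumptions, not confirmed parameter names.

```python
import base64

RESTYLE_URL = "https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait"

def build_restyle_payload(portrait_bytes, model_id):
    """Pair a base64-encoded portrait with a tailored model id.

    Field names are assumptions for illustration only.
    """
    return {
        "image_file": base64.b64encode(portrait_bytes).decode(),
        "tailored_model_id": model_id,
    }
```

Remember that the portrait should show the subject from the torso upward, with a face resolution of at least 500×500 pixels.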
Some of the APIs below support various guidance methods that provide greater control over generation. These methods enable you to guide the generation using not only a textual prompt but also visual inputs.
The following APIs support guidance methods:
/text-to-image/tailored
/text-to-vector/tailored
ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.
You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its influence on the generated result.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.
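The aspect-ratio rule above can be checked client-side before sending a request. This small helper is a convenience sketch, not part of the Bria API; it compares (width, height) pairs with a tolerance for rounding.

```python
from math import isclose

def same_aspect_ratio(sizes, tol=1e-3):
    """Return True if all (width, height) pairs share one aspect ratio."""
    ratios = [w / h for w, h in sizes]
    return all(isclose(r, ratios[0], rel_tol=tol) for r in ratios)
```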
To use ControlNets, include the following parameters in your request:
- `guidance_method_X`: Specify the guidance method (where X is 1 or 2). If `guidance_method_2` is used, `guidance_method_1` must also be used. To use only one method, use `guidance_method_1`.
- `guidance_method_X_scale`: Set the impact of the guidance (0.0 to 1.0).
- `guidance_method_X_image_file`: Provide the base64-encoded input image.

| Guidance Method | Prompt | Scale |
|---|---|---|
| ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
| ControlNet Depth | A dog, exploring an alien planet | 0.8 |
| ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
| ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |

*(Example input, guidance, and output images omitted.)*
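The parameter rules above can be encoded in a small payload builder. The `guidance_method_X` parameter names are documented above; the `prompt` field and the guidance method value strings (e.g. `"controlnet_canny"`) are assumptions for illustration.

```python
def build_controlnet_params(prompt, guidances):
    """Assemble ControlNet guidance parameters.

    guidances: list of (method, scale, base64_image) tuples, at most two.
    Enforces the documented rules: guidance_method_2 only alongside
    guidance_method_1, and scales within [0.0, 1.0].
    """
    if len(guidances) > 2:
        raise ValueError("at most two ControlNet guidance methods")
    params = {"prompt": prompt}
    for i, (method, scale, image_b64) in enumerate(guidances, start=1):
        if not 0.0 <= scale <= 1.0:
            raise ValueError("scale must be between 0.0 and 1.0")
        params[f"guidance_method_{i}"] = method
        params[f"guidance_method_{i}_scale"] = scale
        params[f"guidance_method_{i}_image_file"] = image_b64
    return params
```

Because the loop numbers methods from 1, supplying a single guidance tuple always populates `guidance_method_1`, satisfying the rule that method 2 never appears alone.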
Image Prompt Adapter:
This method offers two modes, regular and style, as shown in the examples below.
To use Image Prompt Adapter as guidance, include the following parameters in your request:
- `image_prompt_mode`: Specify how the input image influences the generation.
- `image_prompt_scale`: Set the impact of the provided image on the generated result (0.0 to 1.0).
- `image_prompt_file`: Provide the base64-encoded image file to be used as guidance, or
- `image_prompt_urls`: Provide a list of URLs pointing to publicly accessible images to be used as guidance.

| Guidance Method | Prompt | Mode | Scale |
|---|---|---|---|
| Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
| Image Prompt Adapter | A drawing of a bird. | style | 1.0 |

*(Example guidance and output images omitted.)*
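A minimal builder for these parameters might look as follows. The parameter names are listed above and the mode values `regular` and `style` come from the example table; this sketch uses the URL variant (`image_prompt_urls`) rather than `image_prompt_file`, and the `prompt` field is an assumption.

```python
def build_image_prompt_params(prompt, mode, scale, urls):
    """Assemble Image Prompt Adapter guidance parameters (URL variant)."""
    if mode not in ("regular", "style"):
        raise ValueError("mode must be 'regular' or 'style'")
    if not 0.0 <= scale <= 1.0:
        raise ValueError("scale must be between 0.0 and 1.0")
    return {
        "prompt": prompt,
        "image_prompt_mode": mode,
        "image_prompt_scale": scale,
        "image_prompt_urls": urls,
    }
```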
https://engine.prod.bria-api.com/v1/
Create a new project within the organization. A project encompasses all models trained and datasets created for the IP defined in the project.
The following IP types are supported:
- **Defined Character**: A specific character that maintains consistent identity and unique traits while being reproduced in different poses, situations, and actions. Medium: photography or illustration.
- **Stylized Scene**: Complete environments or scenes created with a consistent visual style, look, and feel. Medium: photography or illustration.
- **Multi-Object Set**: A collection of different objects sharing a common style, design language, or color scheme. Objects are typically isolated on solid backgrounds.
- **Object Variants**: Multiple variations of the same object type, maintaining consistent style and structure while showing different interpretations. Objects are typically isolated on solid backgrounds.
- **Icons**: A collection of cohesive, small-scale illustrations or symbols designed to represent concepts, actions, or objects in interfaces and applications. Maintains a consistent visual style across the set.
- **Character Variants**: Multiple characters sharing the same fundamental structure, style, and color palette, allowing creation of new characters that fit within the established design system.
`ip_name`: Required only for the defined_character IP type. The name of the character (1-3 words, e.g., "Lora", "Captain Smith"). This name is incorporated into the automatically created caption prefix and generation prefix, and is used consistently during training and generation.

`ip_description`: Required only for the defined_character and object_variants IP types. A short phrase (up to 6 words) describing only the most crucial distinguishing features of your character (e.g., "a female character with purple hair"). Keep it brief; the model learns additional details from the training images. This description is incorporated into the automatically created caption prefix and generation prefix used during training and generation.

`ip_type`: Type of the IP (required).
https://engine.prod.bria-api.com/v1/tailored-gen/projects
curl -i -X POST \
https://engine.prod.bria-api.com/v1/tailored-gen/projects \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"project_name": "Branded Character",
"ip_name": "Adventure Series Characters",
"ip_description": "A set of adventure game characters with unique personalities",
"ip_medium": "illustration",
"ip_type": "defined_character"
}'
{ "id": 123, "project_name": "Branded Character", "project_description": "", "ip_name": "Lora", "ip_description": "A female character with purple hair", "ip_medium": "illustration", "ip_type": "defined_character", "status": "active", "created_at": "2024-05-26T12:00:00Z" }
https://engine.prod.bria-api.com/v1/tailored-gen/projects
curl -i -X GET \
https://engine.prod.bria-api.com/v1/tailored-gen/projects \
-H 'api_token: string'
[ { "id": 123, "project_name": "Branded Character", "project_description": "", "ip_name": "Lora", "ip_description": "A female character with purple hair", "ip_medium": "illustration", "ip_type": "defined_character", "status": "active", "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T14:30:00Z" }, { "id": 124, "project_name": "Branded icons", "project_description": "", "ip_name": "", "ip_description": "", "ip_medium": "illustration", "ip_type": "icons", "status": "active", "created_at": "2024-05-27T09:00:00Z", "updated_at": "2024-05-27T10:15:00Z" } ]
https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}
curl -i -X GET \
'https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}' \
-H 'api_token: string'
{ "id": 123, "project_name": "Branded Character", "project_description": "", "ip_name": "Lora", "ip_description": "A female character with purple hair", "ip_medium": "illustration", "ip_type": "defined_character", "status": "active", "created_at": "2024-05-26T12:00:00Z" }