Overview

Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.

The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.

Advanced Customization and Access:
As part of Bria’s Source Code & Weights product, developers seeking deeper customization can access Bria’s source-available GenAI models via Hugging Face.
This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

  1. Project Management: Create and manage projects that define IP characteristics:
  • Create and Retrieve Projects: Use the /projects endpoints to create a new project or retrieve existing projects that belong to your organization.
  • Define IP Type: Specify the IP type (e.g., multi_object_set, defined_character, stylized_scene) and medium (currently illustration, with photography coming soon).
  • Manage Project Details: Use the /projects/{id} endpoints to update or delete specific projects.
  2. Dataset Management: Organize and refine datasets within your projects:
  • Create and Retrieve Datasets: Use the /datasets endpoints to create new datasets or retrieve existing ones.
  • Generate an Advanced Caption Prefix (For stylized_scene IP type)
    • If the IP type is stylized_scene, it is recommended to generate an advanced prefix before uploading images.
    • Use /tailored-gen/generate_prefix to generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available).
    • Update the dataset with the generated prefix using /datasets/{dataset_id} before proceeding with image uploads.
  • Upload and Manage Images: Use the /datasets/{dataset_id}/images endpoints to upload images and manage their captions.
  • Clone Datasets: Create variations of existing datasets using the clone functionality.
  3. Model Management: Train and optimize tailored models based on your datasets:
  • Create and Retrieve Models: Use the /models endpoints to create new models or list existing ones.
  • Choose Training Version: Select between "light" (for fast generation and structure reference compatibility) or "max" (for superior prompt alignment and enhanced learning capabilities).
  • Monitor and Control: Manage the model lifecycle, including training start/stop and status monitoring.

Training Process

To train a tailored model:

  1. Create a Project: Use the /projects endpoint to define your IP type and medium.
  2. Create a Dataset: Use the /datasets endpoint to create a dataset within your project.
  3. Generate an Advanced Caption Prefix (For stylized_scene IP type only):
  • Before uploading images, call /tailored-gen/generate_prefix, sampling 1-6 images from the input images provided for training (preferably 6 if available).
  • Update the dataset with the generated prefix using /datasets/{dataset_id}.
  4. Upload Images: Upload images using the /datasets/{dataset_id}/images endpoint (minimum resolution: 1024x1024px).
  5. Prepare Dataset: Review auto-generated captions and update the dataset status to 'completed'.
  6. Create Model: Use the /models endpoint to create a model, selecting either the "light" or "max" training version.
  7. Start Training: Initiate training via the /models/{id}/start_training endpoint. Training typically takes 1-3 hours.
  8. Monitor Progress: Check the training status using the /models/{id} endpoint until training is 'Completed'.
  9. Generate Images: Once trained, your model can be used in multiple ways:
  • Use /text-to-image/tailored/{model_id} for text-to-image generation.
  • Use /text-to-vector/tailored/{model_id} for generating illustrative vector graphics.
  • Use /reimagine/tailored/{model_id} for structure-based generation.
  • Access through the Bria platform interface.
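The numbered steps above can be sketched end to end as a minimal client. This is a sketch under assumptions: the `api` helper, field names such as `project_id` and `training_version`, and the exact response shapes are illustrative, not a confirmed SDK.

```python
import time

def train_tailored_model(api, image_files):
    """Walk the documented training lifecycle end to end.

    `api` is any callable api(method, path, json=None, files=None) that adds
    the api_token header and returns the decoded JSON response (hypothetical
    helper; request/response field names below are assumptions).
    """
    # 1. Create a project defining the IP type and medium.
    project = api("POST", "/projects", json={
        "project_name": "Branded Character",
        "ip_medium": "illustration",
        "ip_type": "defined_character",
        "ip_name": "Lora",
        "ip_description": "a female character with purple hair",
    })

    # 2. Create a dataset within the project.
    dataset = api("POST", "/datasets", json={"project_id": project["id"]})

    # 3. (stylized_scene only) Generate and store an advanced caption prefix
    #    before uploading images; skipped here for defined_character.

    # 4. Upload images (minimum resolution 1024x1024 px).
    for f in image_files:
        api("POST", f"/datasets/{dataset['id']}/images", files={"file": f})

    # 5. After reviewing auto-generated captions, mark the dataset completed.
    api("PUT", f"/datasets/{dataset['id']}", json={"status": "completed"})

    # 6. Create a model, choosing the "light" or "max" training version.
    model = api("POST", "/models", json={
        "dataset_id": dataset["id"], "training_version": "max",
    })

    # 7. Start training (typically 1-3 hours), then 8. poll until completed.
    api("POST", f"/models/{model['id']}/start_training")
    while api("GET", f"/models/{model['id']}")["status"] != "Completed":
        time.sleep(60)
    return model["id"]
```

Once this returns, the model ID can be used with the tailored generation endpoints listed in step 9.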

Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.

Restyle Portraits

The Restyle Portrait feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.

Restyle Portrait is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.

To use Restyle Portrait:

  • Reference Image: Provide a clear portrait image that meets the recommended guidelines.

  • Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.

Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.

Guidance Methods

Some of the APIs below support guidance methods that give you greater control over generation. These methods let you guide the output with visual inputs in addition to the textual prompt.

The following APIs support guidance methods:

  • /text-to-image/tailored
  • /text-to-vector/tailored

ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.

  • controlnet_canny: Uses edge information from the input image to guide generation based on structural outlines.
  • controlnet_depth: Derives depth information to influence spatial arrangement in the generated image.
  • controlnet_recoloring: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
  • controlnet_color_grid: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.

You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its impact on the generated result.

When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.

To use ControlNets, include the following parameters in your request:

  • guidance_method_X: Specify the guidance method (where X is 1 or 2). If guidance_method_2 is used, guidance_method_1 must also be used; to use only one method, use guidance_method_1.
  • guidance_method_X_scale: Set the impact of the guidance (0.0 to 1.0).
  • guidance_method_X_image_file: Provide the base64-encoded input image.
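Putting these parameters together, a request body combining one or two ControlNets might be assembled as follows. This is a minimal sketch: the helper names and the `prompt` field placement are illustrative, and only the guidance parameters come from the documentation above.

```python
import base64

def encode_image(path):
    """Base64-encode an image file as the guidance parameters expect."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def controlnet_payload(prompt, methods):
    """Build guidance parameters from [(method, scale, b64_image), ...].

    Up to two methods are allowed, and guidance_method_1 must be present
    before guidance_method_2.
    """
    if not 1 <= len(methods) <= 2:
        raise ValueError("specify one or two guidance methods")
    payload = {"prompt": prompt}
    for i, (method, scale, image_b64) in enumerate(methods, start=1):
        payload[f"guidance_method_{i}"] = method
        payload[f"guidance_method_{i}_scale"] = scale          # 0.0 .. 1.0
        payload[f"guidance_method_{i}_image_file"] = image_b64
    return payload
```

For example, `controlnet_payload("An exotic colorful shell on the beach", [("controlnet_canny", 1.0, encode_image("shell.jpg"))])` produces a body ready to merge into a /text-to-image/tailored request. Remember that with two methods, both input images must share the same aspect ratio.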
Example results (input, guidance, and output images omitted from this text version):

  • ControlNet Canny: prompt "An exotic colorful shell on the beach", scale 1.0
  • ControlNet Depth: prompt "A dog, exploring an alien planet", scale 0.8
  • ControlNet Recoloring: prompt "A vibrant photo of a woman", scale 1.0
  • ControlNet Color Grid: prompt "A dynamic fantasy illustration of an erupting volcano", scale 0.7

Image Prompt Adapter:

This method offers two modes:

  • regular: Uses the image’s content, style elements, and color palette to guide generation.
  • style_only: Uses the image’s high-level style elements and color palette to influence the generated output.

To use Image Prompt Adapter as guidance, include the following parameters in your request:

  • image_prompt_mode: Specify how the input image influences the generation.
  • image_prompt_scale: Set the impact of the provided image on the generated result (0.0 to 1.0).
  • image_prompt_file: Provide the base64-encoded image file to be used as guidance.

or

  • image_prompt_urls: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
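A corresponding request body can be built from the parameters above. This sketch assumes the documented parameter names; the validation logic and `prompt` field placement are illustrative.

```python
def image_prompt_payload(prompt, mode, scale, image_b64=None, image_urls=None):
    """Build Image Prompt Adapter parameters; exactly one image source.

    Provide either a base64-encoded file (image_prompt_file) or a list of
    publicly accessible URLs (image_prompt_urls), not both.
    """
    if mode not in ("regular", "style_only"):
        raise ValueError("mode must be 'regular' or 'style_only'")
    if (image_b64 is None) == (image_urls is None):
        raise ValueError("provide image_prompt_file or image_prompt_urls")
    payload = {
        "prompt": prompt,
        "image_prompt_mode": mode,
        "image_prompt_scale": scale,  # 0.0 .. 1.0
    }
    if image_b64 is not None:
        payload["image_prompt_file"] = image_b64
    else:
        payload["image_prompt_urls"] = list(image_urls)
    return payload
```

For example, `image_prompt_payload("A drawing of a lion laid on a table.", "regular", 0.85, image_urls=["https://example.com/ref.png"])` (placeholder URL) yields a body ready to merge into a supported generation request.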
Example results (guidance and output images omitted from this text version):

  • Image Prompt Adapter: prompt "A drawing of a lion laid on a table.", mode regular, scale 0.85
  • Image Prompt Adapter: prompt "A drawing of a bird.", mode style_only, scale 1.0
Servers
https://engine.prod.bria-api.com/v1/

Endpoints


Restyle Portrait

Request

This endpoint lets you change the style of a portrait while preserving the person’s identity. It works by using a reference image of the person along with a trained tailored model.

This capability is specifically designed for portraits that capture the subject from the torso up, with a recommended face size of at least 500×500 pixels. Images that do not meet these guidelines may produce inconsistent results.

Headers

api_token (string, required)

Body (application/json, required)

id_image_url (string, URI)

The URL of the ID reference image. If both id_image_url and id_image_file are provided, id_image_url will be used. Accepted formats: jpeg, jpg, png, webp.

id_image_file (string, binary)

The image file containing the ID reference. This parameter is used if id_image_url is not provided. Accepted formats: jpeg, jpg, png, webp.

tailored_model_id (string)

The ID of the tailored model to use for generation.

tailored_model_influence (number, float, 0 to 1.2)

The influence of the tailored model on the generated image. Default: 0.9

id_strength (number, float, 0 to 1)

Strength of the instant ID. Default: 0.7

sync (boolean)

Determines the response mode. When true, the request is synchronous. Default: true
curl -i -X POST \
  https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "id_image_url": "http://example.com",
    "id_image_file": "string",
    "tailored_model_id": "string",
    "tailored_model_influence": 0.9,
    "id_strength": 0.7,
    "sync": true
  }'

Responses

Successful operation

Body (application/json)

result (array of objects)

List of generated images or blocked results.

result[].image_url (string, URI)

URL where the generated image is stored.

Response
application/json
{ "result": [ { "image_url": "http://example.com" } ] }
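The curl sample above can also be issued from Python with only the standard library. A minimal sketch: the helper names are illustrative, and only the endpoint URL, header, body fields, and response shape come from this reference.

```python
import json

ENDPOINT = "https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait"

def build_restyle_request(api_token, id_image_url, tailored_model_id,
                          influence=0.9, id_strength=0.7, sync=True):
    """Assemble the URL, JSON body, and headers for /restyle_portrait."""
    body = json.dumps({
        "id_image_url": id_image_url,
        "tailored_model_id": tailored_model_id,
        "tailored_model_influence": influence,  # 0 .. 1.2, default 0.9
        "id_strength": id_strength,             # 0 .. 1, default 0.7
        "sync": sync,
    }).encode("utf-8")
    headers = {"Content-Type": "application/json", "api_token": api_token}
    return ENDPOINT, body, headers

def image_urls(response):
    """Extract image URLs from a successful response body."""
    return [item["image_url"] for item in response.get("result", [])]
```

Pass the three values to `urllib.request.Request` (or any HTTP client) to send the POST, then feed the decoded JSON to `image_urls`.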

Create Project

Request

Create a new project within the organization. A project encompasses all models trained and datasets created for the IP defined in the project.

The following IP types are supported:

Multi-Object Set: A collection of different objects sharing a common style, design language, or color scheme. Objects are typically isolated on solid backgrounds. [Image: multi-object set showing different objects with consistent style]

Object Variants: Multiple variations of the same object type, maintaining consistent style and structure while showing different interpretations. Objects are typically isolated on solid backgrounds. [Image: object variants showing different versions of the same object]

Icons: A collection of cohesive, small-scale illustrations or symbols designed to represent concepts, actions, or objects in interfaces and applications. Maintains consistent visual style across the set. [Image: icon set with consistent design language]

Defined Character: A specific character that maintains consistent identity and unique traits while being reproduced in different poses, situations, and actions. [Image: defined character in different poses]

Character Variants: Multiple characters sharing the same fundamental structure, style, and color palette, allowing creation of new characters that fit within the established design system. [Image: character variants showing different characters with consistent style]

Stylized Scene: Complete environments or scenes created with a consistent visual style, look, and feel. [Image: stylized scene showing cohesive environment]

Headers

api_token (string, required)

Body (application/json, required)

project_name (string)

Name of the project (required)

project_description (string)

Description of the project (optional)

ip_name (string)

Required only for defined_character IP type. The name of the character (1-3 words, e.g., "Lora", "Captain Smith"). This name will be incorporated into the automatically created caption prefix and generation prefix, used consistently during training and generation.

ip_description (string)

Required only for defined_character and object_variants IP types. A short phrase (up to 6 words) describing only the most crucial distinguishing features of your character (e.g., "a female character with purple hair"). Keep it brief as the model will learn additional details from the training images. This description will be incorporated into the automatically created caption prefix and generation prefix, used during training and generation.

ip_medium (string)

Medium of the IP (required). Enum: "photography", "illustration"

ip_type (string)

Type of the IP (required). Enum: "multi_object_set", "object_variants", "icons", "defined_character", "character_variants", "stylized_scene", "other"

  • multi_object_set: Multiple distinct objects that share a mutual style, design language, or color scheme. These objects are often isolated on a solid background. This is currently valid only when ip_medium = illustration.
  • object_variants: Variations of the same object type, designed with consistent style, structure, and coloring, showcasing different interpretations. These objects are often isolated on a solid background. This is currently valid only when ip_medium = illustration.
  • icons: A collection of small, visually distinct illustrations, such as symbols or graphical elements, designed with a cohesive style and used to represent concepts, actions, or objects in interfaces, applications, or visual communication materials. This is valid only when ip_medium = illustration.
  • defined_character: A specific predefined character or person that can be reproduced consistently in different situations, poses, or actions, preserving their identity and unique traits.
  • character_variants: Multiple characters sharing the same structure, style, and color palette, with the ability to create new characters that adhere to these shared characteristics while introducing unique elements. This is currently valid only when ip_medium = illustration.
  • stylized_scene: A complete scene or environment, such as a gaming background or a series of photos with a shared color palette, created with a cohesive style, look, and feel.
  • other: For IP types that don't fit into the above categories
curl -i -X POST \
  https://engine.prod.bria-api.com/v1/tailored-gen/projects \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "project_name": "Branded Character",
    "ip_name": "Adventure Series Characters",
    "ip_description": "A set of adventure game characters with unique personalities",
    "ip_medium": "illustration",
    "ip_type": "defined_character"
  }'
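The per-type field requirements above can be checked client-side before submitting. This is a hypothetical validator sketched from the documented rules; it is not part of any Bria SDK.

```python
IP_TYPES = {"multi_object_set", "object_variants", "icons",
            "defined_character", "character_variants", "stylized_scene",
            "other"}

# Types documented as currently valid only with ip_medium = illustration.
ILLUSTRATION_ONLY = {"multi_object_set", "object_variants", "icons",
                     "character_variants"}

def validate_project(body):
    """Return a list of problems with a Create Project body (empty if OK)."""
    problems = []
    if not body.get("project_name"):
        problems.append("project_name is required")
    if body.get("ip_medium") not in ("photography", "illustration"):
        problems.append("ip_medium must be photography or illustration")
    ip_type = body.get("ip_type")
    if ip_type not in IP_TYPES:
        problems.append("unknown ip_type")
    if ip_type in ILLUSTRATION_ONLY and body.get("ip_medium") != "illustration":
        problems.append(f"{ip_type} currently requires ip_medium=illustration")
    if ip_type == "defined_character" and not body.get("ip_name"):
        problems.append("ip_name is required for defined_character")
    if (ip_type in ("defined_character", "object_variants")
            and not body.get("ip_description")):
        problems.append("ip_description is required for this ip_type")
    return problems
```

Running it on the curl sample body above returns an empty list, confirming the request is well-formed before it is sent.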

Responses

Project successfully created

Body (application/json)

id (integer)

Unique identifier for the project

project_name (string)

Name of the project

project_description (string)

Description of the project

ip_name (string)

Name of the IP

ip_description (string)

Description of the IP

ip_medium (string)

Medium of the IP

ip_type (string)

Type of the IP

status (string)

Status of the project. Value: "active"

created_at (string, date-time)

Timestamp when the project was created

Response
application/json
{ "id": 123, "project_name": "Branded Character", "project_description": "", "ip_name": "Lora", "ip_description": "A female character with purple hair", "ip_medium": "illustration", "ip_type": "defined_character", "status": "active", "created_at": "2024-05-26T12:00:00Z" }

Get Projects

Request

Retrieve all projects within the organization. If there are no projects, returns an empty array.

Headers

api_token (string, required)
curl -i -X GET \
  https://engine.prod.bria-api.com/v1/tailored-gen/projects \
  -H 'api_token: string'

Responses

Successfully retrieved projects

Body (application/json, array of objects)

id (integer)

Unique identifier for the project

project_name (string)

Name of the project

project_description (string)

Description of the project

ip_name (string)

Name of the IP

ip_description (string)

Description of the IP

ip_medium (string)

Medium of the IP. Enum: "photography", "illustration"

ip_type (string)

Type of the IP. Enum: "multi_object_set", "object_variants", "defined_object", "icons", "defined_character", "character_variants", "stylized_scene", "other"

status (string)

Status of the project. Value: "active"

created_at (string, date-time)

Timestamp when the project was created

updated_at (string, date-time)

Timestamp when the project was last updated
Response
application/json
[ { "id": 123, "project_name": "Branded Character", "project_description": "", "ip_name": "Lora", "ip_description": "A female character with purple hair", "ip_medium": "illustration", "ip_type": "defined_character", "status": "active", "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T14:30:00Z" }, { "id": 124, "project_name": "Branded icons", "project_description": "", "ip_name": "", "ip_description": "", "ip_medium": "illustration", "ip_type": "icons", "status": "active", "created_at": "2024-05-27T09:00:00Z", "updated_at": "2024-05-27T10:15:00Z" } ]
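Given the array shape shown above, filtering the organization's projects client-side is straightforward. A small sketch (the helper name is illustrative):

```python
def projects_by_type(projects, ip_type):
    """Filter a Get Projects response by ip_type, newest first.

    `projects` is the decoded JSON array returned by GET /projects;
    created_at is an ISO 8601 timestamp, so string sort order matches
    chronological order.
    """
    matches = [p for p in projects if p.get("ip_type") == ip_type]
    return sorted(matches, key=lambda p: p["created_at"], reverse=True)
```

Applied to the sample response above, `projects_by_type(response, "icons")` returns only the "Branded icons" project.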