
Overview

Tailored Generation generates visuals (photos, illustrations, and vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.

The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.

Fully Automated Training Mode: Bria supports users in training high-quality fine-tuned models without guesswork. Based on the selected IP type and dataset, Bria automatically selects the right training parameters, so the user only needs to spend time curating their dataset.

Advanced Customization and Access: Bria offers two types of advanced training customization: Expert training mode and source code & weights.

  • Expert training mode is for LoRA fine-tuning experts and provides the ability to fine-tune training parameters and upload larger training datasets.
  • Source code & weights is for developers seeking deeper customization and access to Bria’s source-available GenAI models via Hugging Face.

All methods allow full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

  1. Project Management: Create and manage projects that define IP characteristics:
  • Create and Retrieve Projects: Use the /projects endpoints to create a new project or retrieve existing projects that belong to your organization.
  • Define IP Type: Specify the IP type (e.g., multi_object_set, defined_character, stylized_scene) and medium.
  • Manage Project Details: Use the /projects/{id} endpoints to update or delete specific projects.
  2. Dataset Management: Organize and refine datasets within your projects:
  • Create and Retrieve Datasets: Use the /datasets endpoints to create new datasets or retrieve existing ones.
  • Generate a Visual Schema (FIBO Models)
    • Required for fibo training versions
    • Use /tailored-gen/generate_visual_schema to create a structured visual schema using 5-10 sample images.
  • Generate Caption Prefix (Legacy Models)
    • Use /tailored-gen/generate_prefix to create a text-based prefix for legacy training versions.
  • Refine Structured Data
    • Use /tailored-gen/refine_structured_prompt to iterate on your Visual Schema or Image Captions using natural language instructions.
    • Example: You can send your generated schema with the instruction "Character's name is Lucy" to improve the training metadata programmatically.
  • Upload and Manage Images:
    • Basic upload: Use /datasets/{dataset_id}/images to upload up to 200 images individually.
    • Bulk upload: Use /datasets/{dataset_id}/images/bulk to upload zip files with >200 high-quality images (Advanced).
  • Clone Datasets: Create variations of existing datasets using the clone functionality.
  3. Model Management: Train and optimize tailored models based on your datasets:
  • Create and Retrieve Models: Use the /models endpoints to create new models or list existing ones.
  • Choose training mode: Select between Fully automated mode (automatic training based on Bria's recipes) and Expert mode (for training parameter tweaking).
  • Choose Training version: Select "Fibo" for best results.
  • Monitor and Control: Manage the model lifecycle, including training start/stop, status monitoring, and version control over the training parameters.
  4. Generation Capabilities:
  • Image Generation: Use v2/image/generate/tailored (FIBO) or v1/text-to-image/tailored (Legacy).
  • Structured Prompting: Use v2/structured_prompt/generate/tailored to create structured prompts via VLM before generation.
  • Video Generation: Use /video/generate/tailored/image-to-video to animate tailored images.

Training Process

To train a tailored model:

  1. Create a Project: Use the /projects endpoint to define your IP type and medium.
  2. Create a Dataset: Use the /datasets endpoint to create a dataset within your project.
  3. Define Visual Identity:
    • Step A (Generate): Call /tailored-gen/generate_visual_schema, sampling 5-10 images from your input set.
    • Step B (Refine - Optional): Call /tailored-gen/refine_structured_prompt with the generated schema and instructions to tweak the definitions (e.g., "Remove references to blue background").
    • Step C (Apply): Update the dataset with the final schema using /datasets/{dataset_id}.
  4. Upload Images: Upload images using the /datasets/{dataset_id}/images or /datasets/{dataset_id}/images/bulk endpoints (minimum resolution: 1024x1024px).
  5. Prepare Dataset: Review auto-generated captions (you can also use refine_structured_prompt to fix specific image captions) and update the dataset status to 'completed'.
  6. Create Model: Use the /models endpoint to create a model, which requires a training mode and version.
  7. Start Training: Initiate training via the /models/{id}/start_training endpoint. Training typically takes 4-6 hours.
  8. Monitor Progress: Check the training status using the /models/{id} endpoint until training is 'Completed'.
  9. Generate Images:
  • Use v2/image/generate/tailored for text-to-image generation.
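Steps 7-8 above amount to a poll loop. A minimal sketch, with polling factored out so it can be driven by any fetch function (the `poll_until` helper is illustrative; the 'Completed' status string is from this reference):

```python
import time


def poll_until(fetch, done, interval_s: float = 30.0, timeout_s: float = 8 * 3600) -> dict:
    """Call fetch() until done(result) is truthy or the timeout elapses.

    fetch: zero-argument callable returning the latest resource state as a
    dict, e.g. a GET on /models/{id}. Training typically takes 4-6 hours,
    so the default timeout is generous.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        state = fetch()
        if done(state):
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError("resource did not reach the desired state in time")
        time.sleep(interval_s)


# Usage: wait for training to finish before generating, assuming a
# get_model(model_id) helper that GETs /models/{id}:
# model = poll_until(lambda: get_model(model_id),
#                    lambda m: m["status"] == "Completed")
```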

Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.

Servers
https://engine.prod.bria-api.com/v2
https://engine.prod.bria-api.com/v1

Project

Manage your projects

Operations

Dataset

Manage training datasets

Operations

Upload Image files

Request

Upload a new image to a dataset.

Image Requirements:

  • Recommended minimum resolution: 1024x1024 pixels for best quality
    • By default, smaller images (down to 256x256) will be automatically upscaled to meet this threshold (increase_resolution=true)
    • To strictly enforce the 1024x1024 minimum, set increase_resolution=false
  • Supported formats: jpg, jpeg, png, webp
  • Preferably use original high-quality assets

Dataset Guidelines:

  • Recommended: 5-50 images for optimal results when using Max/Fibo training version, 15-100 for optimal results when using Light training version
  • Maximum supported: 200 images
  • Ensure consistency in style, structure, and visual elements
  • Balance diversity in content (poses, scenes, objects) while maintaining consistency in key elements (style, colors, theme)
  • Note: Larger datasets may introduce more variety, which can reduce overall consistency
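The size guidelines and hard limits above can be encoded as a pre-upload check. A minimal sketch, assuming the numbers in this reference; the `check_dataset_size` function name and the warning/error strings are illustrative:

```python
def check_dataset_size(num_images: int, training_version: str) -> list[str]:
    """Return issues for a planned dataset, per the documented guidelines.

    Recommended ranges: 5-50 images for Max/Fibo, 15-100 for Light.
    Hard limits: at least 5 and at most 200 images.
    """
    issues = []
    if num_images < 5:
        issues.append("error: dataset must have at least 5 images")
    if num_images > 200:
        issues.append("error: dataset cannot exceed 200 images")
    low, high = (15, 100) if training_version == "light" else (5, 50)
    if issues == [] and not (low <= num_images <= high):
        issues.append(f"warning: {low}-{high} images recommended for "
                      f"'{training_version}'; larger sets may reduce consistency")
    return issues
```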

For optimal training (especially for characters/objects):

  • Subject should occupy most of the image area
  • Minimize unnecessary margins around the subject
  • Transparent backgrounds will be converted to black
  • For character datasets: include diverse poses, environments, attires, and interactions

Captions and Generation (Legacy models):

  • Each image receives an automatic caption that continues from the dataset's caption prefix
  • Default caption prefix is recommended for initial training
  • Captions can be modified to include domain-specific terms
  • Both captions and prefix influence training and future generations
  • Focus on essential elements rather than extensive details

Constraints:

  • Available only for the "basic" upload type; use images/bulk for advanced dataset uploads
  • Dataset must have at least 5 images
  • Dataset cannot exceed 200 images
  • Cannot upload to a completed dataset

This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content; the first blocked input image fails the entire request.

Path

  dataset_id (integer, required)
  Dataset ID

Headers

  api_token (string, required)

Body (application/json, required)

  file (string, binary)
  Image file to upload (required if image_url is not provided).

  image_url (string)
  URL of the image to upload (required if file is not provided).

  image_name (string)
  Custom name for the image (optional).

  increase_resolution (boolean, default: true)
  When enabled, input images smaller than 1024x1024 pixels but larger than 256x256 pixels are automatically upscaled to meet the minimum requirement.
    • If true: images must be at least 256x256 pixels; upscaling is applied.
    • If false: images must be at least 1024x1024 pixels; no upscaling is applied.

  content_moderation (boolean, default: false)
  When enabled, applies content moderation to input visuals. Processing stops at the first input image that fails moderation, and a 422 error is returned with details about which parameter failed.
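The minimum-size rule described for increase_resolution can be stated as a one-line check. A sketch; the `image_accepted` function name is illustrative, while the 256/1024 thresholds come from this reference:

```python
def image_accepted(width: int, height: int, increase_resolution: bool = True) -> bool:
    """Minimum-size rule for uploads: 256x256 when upscaling is enabled, else 1024x1024."""
    minimum = 256 if increase_resolution else 1024
    return width >= minimum and height >= minimum
```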
curl -i -X POST \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}/images' \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "image_url": "https://example.com/images/character_pose.jpg",
    "image_name": "character_standing_pose.jpg"
  }'

Responses

Image successfully uploaded

Body (application/json)

  id (integer)
  Unique identifier for the image.

  dataset_id (integer)
  ID of the dataset this image belongs to.

  caption (string or null)
  The generated caption. Null if uploaded to a FIBO dataset without a schema.

  caption_source (string or null)
  Source of the caption. Returns 'pending' if caption generation was skipped.
  Enum: "automatic", "manual", "pending"

  image_name (string)
  Name of the image.

  file (string)
  File of the original image in base64 format. Either file or image_url should be provided, not both.

  image_url (string)
  URL of the original image file.

  thumbnail_url (string)
  URL of the image thumbnail.

  created_at (string, date-time)
  Timestamp when the image was created.

  updated_at (string, date-time)
  Timestamp when the image was last updated.

Response
application/json
{ "id": 789, "dataset_id": 456, "caption": "standing in a confident pose wearing a blue dress", "caption_source": "automatic", "image_name": "lora_standing.png", "image_url": "https://api.example.com/files/lora_standing.png", "thumbnail_url": "https://api.example.com/files/lora_standing_thumb.png", "created_at": "2024-05-26T12:30:00Z", "updated_at": "2024-05-26T12:30:00Z" }

Get Images

Request

Retrieve all images in a specific dataset. If there are no images, returns an empty array.

Path

  dataset_id (integer, required)
  Dataset ID

Headers

  api_token (string, required)
curl -i -X GET \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}/images' \
  -H 'api_token: string'

Responses

Successfully retrieved images

Body (application/json): array of objects, each with:

  id (integer)
  Unique identifier for the image.

  dataset_id (integer)
  ID of the dataset this image belongs to.

  caption (string)
  Caption describing the image.

  caption_source (string)
  Source of the caption. The 'unknown' value only appears for images that were uploaded using an old version of Tailored Generation.
  Enum: "automatic", "manual", "unknown"

  image_name (string)
  Name of the image.

  image_url (string)
  URL of the original image file.

  thumbnail_url (string)
  URL of the image thumbnail.

  created_at (string, date-time)
  Timestamp when the image was created.

  updated_at (string, date-time)
  Timestamp when the image was last updated.
Response
application/json
[ { "id": 789, "dataset_id": 456, "caption": "standing in a confident pose wearing a blue dress", "caption_source": "automatic", "image_name": "lora_standing.png", "image_url": "https://api.example.com/files/lora_standing.png", "thumbnail_url": "https://api.example.com/files/lora_standing_thumb.png", "created_at": "2024-05-26T12:30:00Z", "updated_at": "2024-05-26T12:30:00Z" }, { "id": 790, "dataset_id": 456, "caption": "sitting on a chair with a gentle smile", "caption_source": "manual", "image_name": "lora_sitting.png", "image_url": "https://api.example.com/files/lora_sitting.png", "thumbnail_url": "https://api.example.com/files/lora_sitting_thumb.png", "created_at": "2024-05-26T12:45:00Z", "updated_at": "2024-05-26T13:15:00Z" } ]

Regenerate All Captions

Request

Regenerate captions for all images in a dataset. This is important after updating the visual schema or caption_prefix: regenerating all captions keeps them fully compatible with the new schema or prefix.

This is an asynchronous operation. After calling this endpoint, poll Get Dataset by ID until the captions_update_status changes to 'completed'.
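Since caption regeneration is asynchronous, the recommended polling can be sketched as below. The `fetch_dataset` parameter stands in for a Get Dataset by ID call; the `wait_for_captions` helper and its defaults are illustrative, while the captions_update_status field and its 'completed' value come from this reference:

```python
import time


def wait_for_captions(fetch_dataset, interval_s: float = 5.0, timeout_s: float = 600.0) -> dict:
    """Poll a Get-Dataset-by-ID callable until captions_update_status is 'completed'."""
    deadline = time.monotonic() + timeout_s
    while True:
        dataset = fetch_dataset()
        if dataset.get("captions_update_status") == "completed":
            return dataset
        if time.monotonic() >= deadline:
            raise TimeoutError("caption regeneration did not complete in time")
        time.sleep(interval_s)
```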

Path

  dataset_id (integer, required)
  Dataset ID

Headers

  api_token (string, required)
curl -i -X PUT \
  'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}/images' \
  -H 'api_token: string'

Responses

Caption regeneration process started

Body (application/json)

  id (integer)
  Unique identifier for the dataset.

  project_id (integer)
  Associated project ID.

  name (string)
  Dataset name.

  training_version (string)
  The foundation model version this dataset targets (e.g., 'fibo', 'max').
  Enum: "max", "light", "3.2", "2.3", "fibo"

  caption_prefix (string or null)
  Text prepended to captions.
    • For training_version = max/light/3.2/2.3: required string.
    • For training_version = fibo: null.

  status (string)
  Status of the dataset.
  Enum: "draft", "completed"

  captions_update_status (string)
  Status of the captions update process.
  Enum: "empty", "in_progress", "completed"

  created_at (string, date-time)
  Timestamp when the dataset was created.

  updated_at (string, date-time)
  Timestamp when the dataset was last updated.

Response
application/json
{ "id": 456, "project_id": 123, "name": "dataset v1", "training_version": "3.2", "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,", "status": "draft", "captions_update_status": "in_progress", "created_at": "2024-05-26T12:00:00Z", "updated_at": "2024-05-26T15:45:00Z" }

Model

Manage and train models

Operations

Image Generation V2

Generation using FIBO models

Operations

Image Generation (Legacy)

Generation using Legacy models

Operations

Video Generation

Image-to-Video capabilities

Operations