
Overview

Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.

The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.

Fully Automated Training Mode: Bria supports training high-quality fine-tuned models without guesswork. Based on the selected IP type and dataset, Bria automatically selects the right training parameters, so the user only needs to spend time curating their dataset.

Advanced Customization and Access: Bria offers two types of advanced training customization: Expert training mode and Source-code & Weights.

  • Expert training mode is for LoRA fine-tuning experts and provides the ability to adjust training parameters and upload larger training datasets.
  • Source-code & Weights is for developers seeking deeper customization and access to Bria’s source-available GenAI models via Hugging Face.

All methods allow full control over fine-tuning, pipeline creation, and integration into proprietary workflows—empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

  1. Project Management: Create and manage projects that define IP characteristics:
  • Create and Retrieve Projects: Use the /projects endpoints to create a new project or retrieve existing projects that belong to your organization.
  • Define IP Type: Specify the IP type (e.g., multi_object_set, defined_character, stylized_scene) and medium.
  • Manage Project Details: Use the /projects/{id} endpoints to update or delete specific projects.
  2. Dataset Management: Organize and refine datasets within your projects:
  • Create and Retrieve Datasets: Use the /datasets endpoints to create new datasets or retrieve existing ones.
  • Generate a Visual Schema (FIBO Models)
    • Required for fibo training versions
    • Use /tailored-gen/generate_visual_schema to create a structured visual schema using 5-10 sample images.
  • Generate Caption Prefix (Legacy Models)
    • Use /tailored-gen/generate_prefix to create a text-based prefix for legacy training versions.
  • Refine Structured Data
    • Use /tailored-gen/refine_structured_prompt to iterate on your Visual Schema or Image Captions using natural language instructions.
    • Example: You can send your generated schema with the instruction "Character's name is Lucy" to improve the training metadata programmatically.
  • Upload and Manage Images:
    • Basic upload: Use /datasets/{dataset_id}/images to upload up to 200 images individually.
    • Bulk upload: Use /datasets/{dataset_id}/images/bulk to upload zip files with >200 high-quality images (Advanced).
  • Clone Datasets: Create variations of existing datasets using the clone functionality.
  3. Model Management: Train and optimize tailored models based on your datasets:
  • Create and Retrieve Models: Use the /models endpoints to create new models or list existing ones.
  • Choose training mode: Select between Fully automated mode (automatic training based on Bria's recipes) and Expert mode (for training parameter tweaking).
  • Choose Training version: Select "Fibo" for best results.
  • Monitor and Control: Manage the model lifecycle, including training start/stop, status monitoring, and version control over the training parameters.
  4. Generation Capabilities:
  • Image Generation: Use v2/image/generate/tailored (FIBO) or v1/text-to-image/tailored (Legacy).
  • Structured Prompting: Use v2/structured_prompt/generate/tailored to create structured prompts via VLM before generation.
  • Video Generation: Use /video/generate/tailored/image-to-video to animate tailored images.

Training Process

To train a tailored model:

  1. Create a Project: Use the /projects endpoint to define your IP type and medium.
  2. Create a Dataset: Use the /datasets endpoint to create a dataset within your project.
  3. Define Visual Identity:
    • Step A (Generate): Call /tailored-gen/generate_visual_schema, sampling 5-10 images from your input set.
    • Step B (Refine - Optional): Call /tailored-gen/refine_structured_prompt with the generated schema and instructions to tweak the definitions (e.g., "Remove references to blue background").
    • Step C (Apply): Update the dataset with the final schema using /datasets/{dataset_id}.
  4. Upload Images: Upload images using the /datasets/{dataset_id}/images or /datasets/{dataset_id}/images/bulk endpoints (minimum resolution: 1024x1024px).
  5. Prepare Dataset: Review auto-generated captions (you can also use refine_structured_prompt to fix specific image captions) and update the dataset status to 'completed'.
  6. Create Model: Use the /models endpoint to create a model, which requires a training mode and version.
  7. Start Training: Initiate training via the /models/{id}/start_training endpoint. Training typically takes 4-6 hours.
  8. Monitor Progress: Check the training status using the /models/{id} endpoint until training is 'Completed'.
  9. Generate Images:
  • Use v2/image/generate/tailored for text-to-image generation.
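Step 8 above is a polling loop. A minimal sketch, assuming the model endpoint exposes a training status string worded like the states described above ("Completed" etc.; the exact response schema and failure states are assumptions):

```python
import time

def wait_for_training(get_status, poll_seconds: float = 300, max_polls: int = 120):
    """Poll until training reports 'Completed'.

    `get_status` is any callable returning the model's current training
    status, e.g. a function that GETs /models/{id} and extracts the status
    field (field name and values are assumptions based on the steps above).
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "Completed":
            return status
        if status in ("Failed", "Stopped"):
            raise RuntimeError(f"training ended with status: {status}")
        time.sleep(poll_seconds)
    raise TimeoutError("training did not complete within the polling budget")
```

With training typically taking 4-6 hours, a 5-minute poll interval (the default above) keeps request volume low while still detecting completion promptly.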

Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here.

Servers
https://engine.prod.bria-api.com/v2
https://engine.prod.bria-api.com/v1

Project

Manage your projects


Dataset

Manage training datasets


Generate Caption Prefix

Request

Generates a caption prefix based on the provided images.

This is currently supported only for the stylized_scene, defined_character, and object_variants IP types.

Usage Scenarios:
  1. Before uploading visuals to a new dataset
  • This use case applies when creating a new dataset.
  • In the first step, you can create the dataset entity in parallel while calling this endpoint.
  • Randomly sample 1-6 images from the input images provided for training. If there are 6 or more images, provide exactly 6 for the best results.
  • Once you receive the prefix, update the dataset using the Update Dataset endpoint.
  • Then, proceed with uploading images to the dataset.
  2. To regenerate a new prefix (even if previously generated)
  • This allows users to select the prefix they prefer.
  • Randomly sample 1-6 images from the input images provided for training. If there are 6 or more images, provide exactly 6 for the best results.
  • Update the dataset with the new prefix.
  • Then, use the Regenerate All Captions endpoint to ensure all images in the dataset get updated captions.

If any image fails validation, the request will fail.

This endpoint supports content moderation via an optional parameter that can prevent processing when input images contain inappropriate content; the first blocked input image fails the entire request.

Headers
api_token (string, required)

Authentication token.

Body (application/json, required)
image_urls (array of strings)

An array of 1-6 image URLs. Either image_urls or images must be provided, but not both.

images (array of strings)

An array of 1-6 base64-encoded images. Either image_urls or images must be provided, but not both.

ip_type (string)

The IP type, provided when creating the project.

Enum: "stylized_scene", "defined_character", "object_variants"
ip_medium (string)

The IP medium, provided when creating the project.

Enum: "photography", "illustration"
ip_name (string)

Name of the IP, provided when creating the project. This field is relevant only when ip_type is defined_character.

content_moderation (boolean)

When enabled, applies content moderation to both input visuals and generated outputs.

  • Processing stops at the first image that fails moderation
  • Returns a 422 error with details about which parameter failed
Default: false
curl -i -X POST \
  https://engine.prod.bria-api.com/v1/tailored-gen/generate_prefix \
  -H 'Content-Type: application/json' \
  -H 'api_token: string' \
  -d '{
    "image_urls": [
      "https://fake-image-host.com/images/sample1.jpg",
      "https://fake-image-host.com/images/sample2.jpg",
      "https://fake-image-host.com/images/sample3.jpg"
    ],
    "ip_type": "stylized_scene",
    "ip_medium": "illustration",
    "content_moderation": true
  }'

Responses

Successfully generated caption prefix.

Body (application/json)
prefix (string)

The generated caption prefix.

Response
application/json
{ "prefix": "A photo in a style defined by vibrant purple hues, moody lighting effects, featuring " }

Download Advanced Dataset

Request

Enables users to download an advanced dataset. The response includes a pre-signed URL for downloading the dataset, details about the base model used, and the prompt prefix applied during training.

Path
dataset_id (integer, required)

The unique identifier of the dataset.

Headers
api_token (string, required)
curl -i -X GET \
  'https://engine.prod.bria-api.com/v1/datasets/{dataset_id}/download' \
  -H 'api_token: string'

Responses

Successful retrieval of the dataset.

Body (application/json)
download_url (string)

A pre-signed URL allowing users to download the dataset.

captions_url (string)

A pre-signed URL for downloading the captions file, only returned for datasets where all captions are automatically generated.

Response
application/json
{ "download_url": "https://download-url-for-dataset.com", "captions_url": "https://download-url-for-captions.com" }

Model

Manage and train models


Image Generation V2

Generation using FIBO models


Image Generation (Legacy)

Generation using Legacy models


Video Generation

Image-to-Video capabilities
