Retrieve all datasets for a specific project
Overview
Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.
The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console here.
Fully automated training mode: Bria supports users in training high-quality fine-tuned models without guesswork. Based on the selected IP type and dataset, Bria automatically selects the right training parameters, so the user only needs to spend time curating their dataset.
Advanced Customization and Access:
Bria offers two types of advanced training customization: Expert training mode and source code & weights.
- Expert training mode is for LoRA fine-tuning experts and provides the ability to tune the training parameters and upload larger training datasets.
- Source code & weights is for developers seeking deeper customization, who can access Bria's source-available GenAI models via Hugging Face.
All methods allow full control over fine-tuning, pipeline creation, and integration into proprietary workflows, empowering AI teams to develop and optimize their own generative AI solutions.
The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:
- Project Management: Create and manage projects that define IP characteristics:
  - Create and Retrieve Projects: Use the `/projects` endpoints to create a new project or retrieve existing projects that belong to your organization.
  - Define IP Type: Specify the IP type (e.g., `multi_object_set`, `defined_character`, `stylized_scene`) and medium.
  - Manage Project Details: Use the `/projects/{id}` endpoints to update or delete specific projects.
- Dataset Management: Organize and refine datasets within your projects:
  - Create and Retrieve Datasets: Use the `/datasets` endpoints to create new datasets or retrieve existing ones.
  - Generate an Advanced Caption Prefix (for the `stylized_scene`, `defined_character`, and `object_variants` IP types): For these IP types, it is recommended to generate an advanced prefix before uploading images. Use `/tailored-gen/generate_prefix` to generate a structured caption prefix from 1-6 sample images taken from the input images provided for training (preferably 6 if available). Update the dataset with the generated prefix using `/datasets/{dataset_id}` before proceeding with image uploads.
  - Upload and Manage Images: For basic upload, use the `/datasets/{dataset_id}/images` endpoints to upload up to 200 images and manage their captions. For advanced upload, use the `/datasets/{dataset_id}/images/bulk` endpoint to upload zip files with more than 200 high-quality images.
  - Clone Datasets: Create variations of existing datasets using the clone functionality.
- Model Management: Train and optimize tailored models based on your datasets:
  - Create and Retrieve Models: Use the `/models` endpoints to create new models or list existing ones.
  - Choose Training Mode: Select between fully automated mode (automatic training based on Bria's recipes) and expert mode (for tweaking training parameters).
  - Choose Training Version & Parameters: Select between "light"/Bria 2.3 (for fast generation) and "max"/Bria 3.2 (for superior prompt alignment and enhanced learning capabilities).
  - Monitor and Control: Manage the model lifecycle, including starting/stopping training, monitoring status, and version control over the training parameters.
To train a tailored model:
- Create a Project: Use the `/projects` endpoint to define your IP type and medium.
- Create a Dataset: Use the `/datasets` endpoint to create a dataset within your project.
- Generate an Advanced Caption Prefix (for the `stylized_scene`, `defined_character`, and `object_variants` IP types): Before uploading images, call `/tailored-gen/generate_prefix` with 1-6 sample images from the input images provided for training (preferably 6 if available), then update the dataset with the generated prefix using `/datasets/{dataset_id}`.
- Upload Images: Upload images using the `/datasets/{dataset_id}/images` or `/datasets/{dataset_id}/images/bulk` endpoints (minimum resolution: 1024x1024 px).
- Prepare the Dataset: Review the auto-generated captions and update the dataset status to 'completed'.
- Create a Model: Use the `/models` endpoint to create a model, which requires a training mode (fully automated or expert) and a training version (a base model).
- Start Training: Initiate training via the `/models/{id}/start_training` endpoint. Training typically takes 1-3 hours.
- Monitor Progress: Check the training status using the `/models/{id}` endpoint until training is 'Completed'.
- Generate Images: Once trained, your model can be used in multiple ways:
  - Use `/text-to-image/tailored/{model_id}` for text-to-image generation.
  - Use `/text-to-vector/tailored/{model_id}` for generating illustrative vector graphics.
  - Use `/reimagine/tailored/{model_id}` for structure-based generation.
  - Use `/tailored-gen/restyle_portrait` for human portrait-based generation.
  - Access the model through the Bria platform interface.
Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started here. Coming soon: configurable tailored generation iFrames.
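The steps above can be sketched as an ordered sequence of HTTP calls. The endpoint paths and the `{"status": "completed"}` body come from this page; the remaining request-body field names (`ip_type`, `medium`, `training_mode`, `training_version`, and so on) are illustrative assumptions, not the official schema:

```python
BASE_URL = "https://engine.prod.bria-api.com/v1/tailored-gen"

def training_call_sequence(project_id, dataset_id, model_id):
    """Return the ordered (HTTP method, URL, body) calls for one training run."""
    return [
        # 1. Create a project defining the IP type and medium (field names assumed).
        ("POST", f"{BASE_URL}/projects",
         {"name": "my project", "ip_type": "defined_character", "medium": "illustration"}),
        # 2. Create a dataset within the project.
        ("POST", f"{BASE_URL}/projects/{project_id}/datasets", {"name": "dataset v1"}),
        # 3. Upload images (minimum resolution: 1024x1024 px).
        ("POST", f"{BASE_URL}/datasets/{dataset_id}/images", {"file": "<base64-encoded image>"}),
        # 4. Mark the dataset as completed once captions are reviewed.
        ("PUT", f"{BASE_URL}/datasets/{dataset_id}", {"status": "completed"}),
        # 5. Create a model with a training mode and training version (field names assumed).
        ("POST", f"{BASE_URL}/models",
         {"dataset_id": dataset_id, "training_mode": "fully_automated", "training_version": "max"}),
        # 6. Start training (typically takes 1-3 hours).
        ("POST", f"{BASE_URL}/models/{model_id}/start_training", None),
        # 7. Poll the model until its status is 'Completed'.
        ("GET", f"{BASE_URL}/models/{model_id}", None),
    ]
```

Each tuple can then be dispatched with any HTTP client, passing your `api_token` header as in the curl examples below.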
The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.
It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model. Access this capability via the /reimagine endpoint.

The Reimagine - Portrait Reference feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.
Reimagine - Portrait Reference is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Reimagine - Portrait Reference:
Reference Image: Provide a clear portrait image that meets the recommended guidelines.
Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.
Use the /tailored-gen/restyle_portrait endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
Model Compatibility Note: This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.
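A minimal sketch of preparing a Reimagine - Portrait Reference request. The endpoint path and the `api_token` header come from this page; the JSON body field names (`tailored_model_id`, `image_file`) are illustrative assumptions, not the documented schema:

```python
import base64

BASE_URL = "https://engine.prod.bria-api.com/v1"

def build_restyle_portrait_request(portrait_path, tailored_model_id, api_token):
    """Build the URL, headers, and a hypothetical JSON body for restyle_portrait."""
    # Read and base64-encode the reference portrait (torso-up,
    # face resolution of at least 500x500 px recommended).
    with open(portrait_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    url = f"{BASE_URL}/tailored-gen/restyle_portrait"
    headers = {"api_token": api_token, "Content-Type": "application/json"}
    # Field names below are illustrative assumptions, not the official schema.
    body = {"tailored_model_id": tailored_model_id, "image_file": encoded}
    return url, headers, body
```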
Some of the APIs below support various guidance methods that provide greater control over generation. These methods let you guide the generation using not only a textual prompt but also visuals.
The following APIs support guidance methods:
- `/text-to-image/tailored`
- `/text-to-vector/tailored`
ControlNets:
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.
- controlnet_canny: Uses edge information from the input image to guide generation based on structural outlines.
- controlnet_depth: Derives depth information to influence spatial arrangement in the generated image.
- controlnet_recoloring: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
- controlnet_color_grid: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.
You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its impact on the generated result.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.
To use ControlNets, include the following parameters in your request:
- `guidance_method_X`: Specify the guidance method (where X is 1 or 2). If `guidance_method_2` is used, `guidance_method_1` must also be used. If you want to use only one method, use `guidance_method_1`.
- `guidance_method_X_scale`: Set the impact of the guidance (0.0 to 1.0).
- `guidance_method_X_image_file`: Provide the base64-encoded input image.
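The parameter rules above can be captured in a small helper that assembles the `guidance_method_X` fields and enforces the constraints (at most two methods, `guidance_method_1` filled first, scale in [0.0, 1.0]). The parameter names come from this page; the helper itself is a sketch:

```python
import base64

def controlnet_guidance_params(methods):
    """Build guidance_method_X parameters from (method, scale, image_path) tuples."""
    allowed = {"controlnet_canny", "controlnet_depth",
               "controlnet_recoloring", "controlnet_color_grid"}
    if not 1 <= len(methods) <= 2:
        raise ValueError("Specify one or two guidance methods")
    params = {}
    # Enumerating from 1 guarantees guidance_method_1 is always used first.
    for i, (method, scale, image_path) in enumerate(methods, start=1):
        if method not in allowed:
            raise ValueError(f"Unknown guidance method: {method}")
        if not 0.0 <= scale <= 1.0:
            raise ValueError("scale must be in [0.0, 1.0]")
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("utf-8")
        params[f"guidance_method_{i}"] = method
        params[f"guidance_method_{i}_scale"] = scale
        params[f"guidance_method_{i}_image_file"] = encoded
    return params
```

Note that when two methods are used, you must also ensure both input images share the same aspect ratio, which this sketch does not check.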
The examples below pair each guidance method with a sample prompt and scale (the accompanying input, guidance, and output example images are not reproduced here):

Guidance Method | Prompt | Scale |
---|---|---|
ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
ControlNet Depth | A dog, exploring an alien planet | 0.8 |
ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |
Image Prompt Adapter:
This method offers two modes:
- regular: Uses the image’s content, style elements, and color palette to guide generation.
- style_only: Uses the image’s high-level style elements and color palette to influence the generated output.
To use the Image Prompt Adapter as guidance, include the following parameters in your request:
- `image_prompt_mode`: Specify how the input image influences the generation.
- `image_prompt_scale`: Set the impact of the provided image on the generated result (0.0 to 1.0).
- `image_prompt_file`: Provide the base64-encoded image file to be used as guidance, or
- `image_prompt_urls`: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
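A sketch of assembling these parameters, enforcing the documented constraints (valid mode, scale in [0.0, 1.0], and exactly one of `image_prompt_file` / `image_prompt_urls`). The parameter names come from this page:

```python
def image_prompt_params(mode, scale, image_file_b64=None, image_urls=None):
    """Build Image Prompt Adapter parameters for a generation request."""
    if mode not in ("regular", "style_only"):
        raise ValueError("mode must be 'regular' or 'style_only'")
    if not 0.0 <= scale <= 1.0:
        raise ValueError("scale must be in [0.0, 1.0]")
    # Exactly one image source must be provided.
    if (image_file_b64 is None) == (image_urls is None):
        raise ValueError("Provide image_prompt_file or image_prompt_urls, not both")
    params = {"image_prompt_mode": mode, "image_prompt_scale": scale}
    if image_file_b64 is not None:
        params["image_prompt_file"] = image_file_b64
    else:
        params["image_prompt_urls"] = list(image_urls)
    return params
```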
The examples below pair each mode with a sample prompt and scale (the accompanying guidance and output example images are not reproduced here):

Guidance Method | Prompt | Mode | Scale |
---|---|---|---|
Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
Image Prompt Adapter | A drawing of a bird. | style_only | 1 |
https://engine.prod.bria-api.com/v1/
https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}/datasets
curl -i -X GET \
'https://engine.prod.bria-api.com/v1/tailored-gen/projects/{project_id}/datasets?include_models=false&include_models_ids=false' \
-H 'api_token: string'
Successfully retrieved datasets
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified, and this same prefix is later used as the default generation prefix during image generation.
The source of the captions. For 'basic' datasets, this is null. For 'advanced' datasets, this indicates whether captions were generated automatically ('automatic') or provided manually ('manual').
List of model objects using this dataset. Only included when include_models=true
List of model IDs using this dataset. Only included when include_models_ids=true
[
  {
    "id": 456,
    "project_id": 123,
    "name": "dataset v1",
    "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
    "upload_type": "advanced",
    "captions_source": "automatic",
    "images_count": 1500,
    "status": "completed",
    "captions_update_status": "empty",
    "created_at": "2024-05-26T12:00:00Z",
    "updated_at": "2024-05-26T14:30:00Z"
  },
  {
    "id": 457,
    "project_id": 123,
    "name": "dataset v2",
    "caption_prefix": "An illustration of a character named Max, a male character with spiky black hair,",
    "upload_type": "basic",
    "captions_source": null,
    "images_count": 8,
    "status": "draft",
    "captions_update_status": "empty",
    "created_at": "2024-05-27T09:00:00Z",
    "updated_at": "2024-05-27T09:00:00Z"
  }
]
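As a small usage sketch, the response above can be filtered to find the datasets that are ready for model training (only the `id` and `status` fields from the response are used):

```python
def trainable_datasets(datasets):
    """From a list-datasets response, return ids of datasets usable for training."""
    # A dataset must have status 'completed' (which requires at least
    # one image) before it can be used to train a model.
    return [d["id"] for d in datasets if d["status"] == "completed"]
```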
https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}
curl -i -X GET \
'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}?max_images=200' \
-H 'api_token: string'
Successfully retrieved dataset
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified, and this same prefix is later used as the default generation prefix during image generation.
The method used to upload images to the dataset. 'basic' is the default.
The source of the captions. For 'basic' datasets, this is null. For 'advanced' datasets, this indicates whether captions were generated automatically ('automatic') or provided manually ('manual').
Array of images in the dataset (up to 200, controlled by the max_images query parameter)
Once an image is uploaded, a caption is generated automatically. The caption is a natural continuation of the caption_prefix.
Source of the caption. 'unknown' value only appears for images that were uploaded using an old version of Tailored Generation.
{
  "id": 456,
  "project_id": 123,
  "name": "dataset v1",
  "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
  "status": "completed",
  "captions_update_status": "empty",
  "upload_type": "basic",
  "captions_source": null,
  "images_count": 2,
  "images": [
    {
      "id": 789,
      "dataset_id": 456,
      "caption": "standing in a confident pose wearing a blue dress",
      "caption_source": "automatic",
      "upload_source_url": null,
      "image_name": "lora_standing.png",
      "image_url": "https://api.example.com/files/lora_standing.png",
      "thumbnail_url": "https://api.example.com/files/lora_standing_thumb.png",
      "created_at": "2024-05-26T12:30:00Z",
      "updated_at": "2024-05-26T12:30:00Z"
    },
    {
      "id": 790,
      "dataset_id": 456,
      "caption": "sitting on a chair with a gentle smile",
      "caption_source": "automatic",
      "upload_source_url": null,
      "image_name": "lora_sitting.png",
      "image_url": "https://api.example.com/files/lora_sitting.png",
      "thumbnail_url": "https://api.example.com/files/lora_sitting_thumb.png",
      "created_at": "2024-05-26T12:45:00Z",
      "updated_at": "2024-05-26T12:45:00Z"
    }
  ],
  "created_at": "2024-05-26T12:00:00Z",
  "updated_at": "2024-05-26T14:30:00Z"
}
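Since each image caption is a natural continuation of the dataset's caption prefix, the full caption text for every image in the response above can be reconstructed with a small helper (a sketch using only the `caption_prefix`, `images`, `id`, and `caption` fields):

```python
def full_captions(dataset):
    """Map image id -> full caption (caption_prefix + per-image continuation)."""
    prefix = dataset["caption_prefix"].strip()
    return {img["id"]: f"{prefix} {img['caption']}" for img in dataset["images"]}
```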
Request
Update a dataset.
To use a dataset for model training, its status must be set to completed.
Once a dataset status is changed to completed:
- Images cannot be added or removed
- Image captions cannot be edited
- Caption prefix cannot be modified
Updating the Caption Prefix
If the caption prefix needs to be changed, update it here first, then use the Regenerate All Captions endpoint to refresh all captions with the new prefix. If you want to generate a new advanced caption prefix, use the `/tailored-gen/generate_prefix` endpoint before updating the dataset.
To create a new version of a dataset, it is recommended to use the Clone Dataset As Draft route.
Constraints:
- Cannot update caption_prefix if dataset status is completed
- Dataset must have at least 1 image to be marked as completed
New caption prefix (optional). Cannot be updated if the dataset status is completed. If you update the caption prefix, it is crucial to regenerate all captions using the endpoint PUT /datasets/{dataset_id}/images/. Use /tailored-gen/generate_prefix to generate an advanced prefix.
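The two-step prefix update described above can be sketched as an ordered pair of calls; the endpoint paths and the `caption_prefix` field come from this page:

```python
BASE_URL = "https://engine.prod.bria-api.com/v1/tailored-gen"

def caption_prefix_update_calls(dataset_id, new_prefix):
    """Ordered (method, URL, body) calls for changing a dataset's caption prefix."""
    return [
        # 1. Update the prefix on the dataset (only allowed while status is not completed).
        ("PUT", f"{BASE_URL}/datasets/{dataset_id}", {"caption_prefix": new_prefix}),
        # 2. Regenerate all captions so they continue the new prefix.
        ("PUT", f"{BASE_URL}/datasets/{dataset_id}/images/", None),
    ]
```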
https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}
curl -i -X PUT \
'https://engine.prod.bria-api.com/v1/tailored-gen/datasets/{dataset_id}' \
-H 'Content-Type: application/json' \
-H 'api_token: string' \
-d '{
"status": "completed"
}'
Dataset successfully updated
Text automatically prepended to all image captions in the dataset. Each image caption should naturally continue this prefix. A default prefix is automatically created but can be modified, and this same prefix is later used as the default generation prefix during image generation.
The method used to upload images to the dataset. 'basic' is the default.
The source of the captions. For 'basic' datasets, this is null. For 'advanced' datasets, this indicates whether captions were generated automatically ('automatic') or provided manually ('manual').
{
  "id": 456,
  "project_id": 123,
  "name": "dataset v1",
  "caption_prefix": "An illustration of a character named Lora, a female character with purple hair,",
  "status": "completed",
  "captions_update_status": "empty",
  "upload_type": "basic",
  "captions_source": null,
  "created_at": "2024-05-26T12:00:00Z",
  "updated_at": "2024-05-26T15:30:00Z"
}