# Overview
Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors)
that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency
across all generated outputs.
The Tailored Generation APIs allow you to manage and train tailored models that maintain the
integrity of your visual IP. You can train models through our Console or implement training
directly via API. Explore the Console [here](https://platform.bria.ai/console/tailored-generation).
**Fully automated training mode**
Bria supports users in training high-quality fine-tuned models without the guesswork. Based on the selected IP type and dataset, Bria automatically selects the right training parameters.
This means you only need to spend time curating your dataset.
**Advanced Customization and Access:**
Bria offers two types of advanced training customization: Expert training mode and Source Code & Weights.
- **Expert training mode** is for LoRA fine-tuning experts and provides the ability to tune the training parameters and upload larger training datasets.
- **Source Code & Weights** is for developers seeking deeper customization, who can access Bria's source-available GenAI models via [Hugging Face](https://huggingface.co/briaai).
Both options allow full control over fine-tuning, pipeline creation, and integration into proprietary workflows, empowering AI teams to develop and optimize their own generative AI solutions.
The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle
of a tailored generation project:
1. **Project Management**: Create and manage projects that define IP characteristics:
- **Create and Retrieve Projects**: Use the `/projects` endpoints to create a new project or
retrieve existing projects that belong to your organization.
- **Define IP Type**: Specify the IP type (e.g., multi_object_set, defined_character,
stylized_scene) and medium.
- **Manage Project Details**: Use the `/projects/{id}` endpoints to update or delete
specific projects.
2. **Dataset Management**: Organize and refine datasets within your projects:
- **Create and Retrieve Datasets**: Use the `/datasets` endpoints to create new datasets or
retrieve existing ones.
   - **Generate an Advanced Caption Prefix** (for the `stylized_scene`, `defined_character`, and `object_variants` IP types)
     - If the IP type is `stylized_scene`, `defined_character`, or `object_variants`, it is **recommended** to generate an advanced prefix before uploading images.
     - Use `/tailored-gen/generate_prefix` to generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available).
     - Update the dataset with the generated prefix using `/datasets/{dataset_id}` before proceeding with image uploads.
   - **Upload and Manage Images**: For basic upload, use the `/datasets/{dataset_id}/images` endpoints to upload
     up to 200 images and manage their captions. For advanced upload, use the `/datasets/{dataset_id}/images/bulk` endpoint to upload ZIP files containing more than 200 high-quality images.
- **Clone Datasets**: Create variations of existing datasets using the clone functionality.
3. **Model Management**: Train and optimize tailored models based on your datasets:
- **Create and Retrieve Models**: Use the `/models` endpoints to create new models or list
existing ones.
   - **Choose training mode**: Select between Fully Automated mode (automatic training based on Bria's recipes) and Expert mode (for tweaking training parameters).
   - **Choose training version & parameters**: Select between "light"/Bria 2.3 (for fast generation) and "max"/Bria 3.2 (for superior prompt alignment and enhanced learning capabilities).
- **Monitor and Control**: Manage the model lifecycle, including training start/stop and
status monitoring and version control over the training parameters.
### **Training Process**
To train a tailored model:
1. **Create a Project**: Use the `/projects` endpoint to define your IP type and medium.
2. **Create a Dataset**: Use the `/datasets` endpoint to create a dataset within your project.
3. **Generate an Advanced Caption Prefix** *(for the `stylized_scene`, `defined_character`, and `object_variants` IP types)*:
- Before uploading images, call `/tailored-gen/generate_prefix`, sampling 1-6 images from the input images provided for training (preferably 6 if available).
- Update the dataset with the generated prefix using `/datasets/{dataset_id}`.
4. **Upload Images**: Upload images using the `/datasets/{dataset_id}/images` or `/datasets/{dataset_id}/images/bulk` endpoints
   (minimum resolution: 1024x1024px).
5. **Prepare Dataset**: Review auto-generated captions and update the dataset status to 'completed'.
6. **Create Model**: Use the `/models` endpoint to create a model, which requires a training mode (fully automated or expert) and a training version (a base model).
7. **Start Training**: Initiate training via the `/models/{id}/start_training` endpoint.
   Training typically takes 1-3 hours.
8. **Monitor Progress**: Check the training status using the `/models/{id}` endpoint until
   training is 'Completed'.
9. **Generate Images**: Once trained, your model can be used in multiple ways:
- Use `/text-to-image/tailored/{model_id}` for text-to-image generation.
- Use `/text-to-vector/tailored/{model_id}` for generating illustrative vector graphics.
- Use `/reimagine/tailored/{model_id}` for structure-based generation.
- Use `/tailored-gen/restyle_portrait` for human portrait-based generation.
- Access through the Bria platform interface.
Alternatively, manage and train tailored models through Bria's user-friendly Console.
Get started [here](https://platform.bria.ai/console/tailored-generation).
Coming soon: configurable tailored generation iFrames.
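The training steps above can be sketched end-to-end in Python using only the standard library. The endpoint path and the terminal statuses match the documentation; the auth header name and the exact response shape are assumptions to adapt to your account:

```python
import json
import time
import urllib.request

BASE_URL = "https://engine.prod.bria-api.com/v1"
HEADERS = {"api_token": "<YOUR_API_TOKEN>"}  # auth header name is an assumption


def get_model(model_id: str) -> dict:
    # Step 8 helper: GET /tailored-gen/models/{model_id}
    req = urllib.request.Request(
        f"{BASE_URL}/tailored-gen/models/{model_id}", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def poll_model_status(model_id: str, get=None, interval: float = 60.0,
                      timeout: float = 4 * 3600) -> str:
    """Poll until training reaches a terminal status (step 8).

    `get` is injectable for testing; by default it performs the real HTTP
    call. The status values match the Get Model by ID description.
    """
    if get is None:
        get = lambda: get_model(model_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get().get("status")
        if status in ("Completed", "Failed", "Stopped"):
            return status
        time.sleep(interval)  # training typically takes 1-3 hours
    raise TimeoutError("training did not complete within the timeout")
```

Once `poll_model_status` returns "Completed", the model can be used with the generation endpoints listed in step 9.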
### Reimagine - Structure Reference
The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image.
It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model.
Access this capability via the [/reimagine](https://docs.bria.ai/image-generation/endpoints/reimagine-structure-reference) endpoint.
### Reimagine - Portrait Reference
The Reimagine - Portrait Reference feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.
Reimagine - Portrait Reference is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Reimagine - Portrait Reference:
* Reference Image: Provide a clear portrait image that meets the recommended guidelines.
* Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.
Use the [/tailored-gen/restyle_portrait](https://docs.bria.ai/tailored-generation/generation-endpoints/restyle-portrait) endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.
Model Compatibility Note:
This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.
### **Guidance Methods**
Some of the APIs below support various guidance methods to provide greater control over generation. These methods enable you to guide generation using not only a textual prompt but also visuals.
The following APIs support guidance methods:
- `/text-to-image/tailored`
- `/text-to-vector/tailored`
**ControlNets:**
A set of methods that allow conditioning the model on additional inputs, providing detailed control over image generation.
- **controlnet_canny**: Uses edge information from the input image to guide generation based on structural outlines.
- **controlnet_depth**: Derives depth information to influence spatial arrangement in the generated image.
- **controlnet_recoloring**: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
- **controlnet_color_grid**: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.
You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its influence on the generation.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.
To use **ControlNets**, include the following parameters in your request:
- `guidance_method_X`: Specify the guidance method (where X is 1 or 2). If `guidance_method_2` is used, `guidance_method_1` must also be used. To use only one method, use `guidance_method_1`.
- `guidance_method_X_scale`: Set the impact of the guidance (0.0 to 1.0).
- `guidance_method_X_image_file`: Provide the base64-encoded input image.
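Assembling these parameters can be sketched as follows. The parameter names come from the list above; the helper name and tuple layout are illustrative:

```python
import base64


def build_controlnet_payload(prompt: str, methods: list) -> dict:
    """Build the guidance portion of a request body.

    `methods` is a list of up to two (guidance_method, image_bytes, scale)
    tuples; parameter names follow the documentation above.
    """
    if not 1 <= len(methods) <= 2:
        raise ValueError("specify one or two guidance methods")
    payload = {"prompt": prompt}
    for i, (method, image_bytes, scale) in enumerate(methods, start=1):
        if not 0.0 <= scale <= 1.0:
            raise ValueError("scale must be between 0.0 and 1.0")
        payload[f"guidance_method_{i}"] = method
        payload[f"guidance_method_{i}_scale"] = scale
        # The input image is sent base64-encoded.
        payload[f"guidance_method_{i}_image_file"] = (
            base64.b64encode(image_bytes).decode("ascii"))
    return payload
```

Because the loop numbers methods from 1, a single entry populates only `guidance_method_1`, matching the rule that method 2 requires method 1.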
| Guidance Method | Prompt | Scale |
| --- | --- | --- |
| ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
| ControlNet Depth | A dog, exploring an alien planet | 0.8 |
| ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
| ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |
**Image Prompt Adapter:**
This method offers two modes:
- **regular**: Uses the image’s content, style elements, and color palette to guide generation.
- **style_only**: Uses the image’s high-level style elements and color palette to influence the generated output.
To use **Image Prompt Adapter** as guidance, include the following parameters in your request:
- `image_prompt_mode`: Specify how the input image influences the generation.
- `image_prompt_scale`: Set the impact of the provided image on the generated result (0.0 to 1.0).
- `image_prompt_file`: Provide the base64-encoded image file to be used as guidance.
or
- `image_prompt_urls`: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
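The mutually exclusive file/URL inputs can be sketched like this. Parameter names follow the list above; the helper name is illustrative:

```python
import base64


def build_image_prompt_payload(prompt: str, mode: str = "regular",
                               scale: float = 1.0, image_bytes: bytes = None,
                               image_urls: list = None) -> dict:
    """Build the Image Prompt Adapter portion of a request body.

    Exactly one of `image_bytes` (sent as image_prompt_file) or `image_urls`
    (sent as image_prompt_urls) must be provided.
    """
    if mode not in ("regular", "style_only"):
        raise ValueError("mode must be 'regular' or 'style_only'")
    if (image_bytes is None) == (image_urls is None):
        raise ValueError("provide exactly one of image_bytes or image_urls")
    payload = {"prompt": prompt, "image_prompt_mode": mode,
               "image_prompt_scale": scale}
    if image_bytes is not None:
        payload["image_prompt_file"] = base64.b64encode(image_bytes).decode("ascii")
    else:
        payload["image_prompt_urls"] = image_urls
    return payload
```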
| Guidance Method | Prompt | Mode | Scale |
| --- | --- | --- | --- |
| Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
| Image Prompt Adapter | A drawing of a bird. | style_only | 1.0 |
### **IP-related prompts**
Our models are trained exclusively on fully licensed, safe-for-commercial-use data.
Prompts that reference public figures, brands, or other protected content may result in generic or altered outputs.
These prompts are not blocked, but results may differ from what you expect.
If an IP-related signal is detected in the prompt, the following warning will appear in the API response:
```text
This prompt may contain intellectual property (IP)-protected content.
To ensure compliance and safety, certain elements may be omitted or altered.
As a result, the output may not fully meet your request.
```
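Clients may want to surface this warning to end users. Since the exact response field carrying the warning is not specified here, this sketch simply scans the serialized response text:

```python
def has_ip_warning(response_text: str) -> bool:
    """Detect the IP warning quoted above anywhere in an API response body.

    The response field that carries the warning is an assumption, so the
    raw (serialized) response text is scanned instead.
    """
    return "intellectual property (IP)-protected content" in response_text
```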
## Servers
```
https://engine.prod.bria-api.com/v2/
```
```
https://engine.prod.bria-api.com/v1/
```
## Download OpenAPI description
[Overview](https://docs.bria.ai/_bundle/tailored-generation.yaml)
## Image Generation Endpoints
### Generate Image - Tailored model
- [POST /text-to-image/tailored/{model_id}](https://docs.bria.ai/tailored-generation/image-generation-endpoints/text-to-image-tailored.md): This route allows you to generate images using a Tailored Model. Tailored models are trained on a visual IP (illustrations, photos, vectors) to faithfully reproduce specific IP elements or guidelines. You can train an engine through our Console or implement training on your platform via API.
This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content, and filters out unsafe generated images - the first blocked input image will fail the entire request.
### Generate Image From Checkpoint - Tailored model
- [POST /text-to-image/tailored/{model_id}/{checkpoint_step}](https://docs.bria.ai/tailored-generation/image-generation-endpoints/text-to-image-checkpoint-tailored.md): This route allows you to generate images using a different checkpoint (if exists) of the Tailored Model. Tailored models are trained on a visual IP (illustrations, photos, vectors) to faithfully reproduce specific IP elements or guidelines. You can train an engine through our Console or implement training on your platform via API.
This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content, and filters out unsafe generated images - the first blocked input image will fail the entire request.
### Generate Vector Graphics - Tailored (Beta)
- [POST /text-to-vector/tailored/{model_id}](https://docs.bria.ai/tailored-generation/image-generation-endpoints/text-to-vector-tailored.md): Description
This route allows you to generate vector graphics using a Tailored Model. Tailored Models are trained on your visual IP (illustrations, photos, vectors) to preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs. To see a detailed description of the tailored models' functionalities, please refer to the /text-to-image/tailored/{model_id} route documentation.
*Text-to-vector is compatible with tailored models in the illustrative domain.
This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content, and filters out unsafe generated images - the first blocked input image will fail the entire request.
### Generate Vector Graphics From Checkpoint - Tailored (Beta)
- [POST /text-to-vector/tailored/{model_id}/{checkpoint_step}](https://docs.bria.ai/tailored-generation/image-generation-endpoints/text-to-vector-checkpoint-tailored.md): Description
This route allows you to generate vector graphics using a different checkpoint (if exists) of the Tailored Model. Tailored Models are trained on your visual IP (illustrations, photos, vectors) to preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs. To see a detailed description of the tailored models' functionalities, please refer to the /text-to-image/tailored/{model_id} route documentation.
*Text-to-vector is compatible with tailored models in the illustrative domain.
This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content, and filters out unsafe generated images - the first blocked input image will fail the entire request.
### Reimagine - Portrait Reference
- [POST /tailored-gen/restyle_portrait](https://docs.bria.ai/tailored-generation/image-generation-endpoints/restyle-portrait.md): This endpoint lets you change the style of a portrait while preserving the subject's identity.
It works by using a reference image of the person along with a trained tailored model.
This capability is specifically designed for portraits that capture the subject from the torso up, with a recommended face size of at least 500×500 pixels. Images that do not meet these guidelines may produce inconsistent results.
Model Compatibility Note:
This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.
## Video Generation Endpoints
### Generate Video from Tailored Image (Beta)
- [POST /video/generate/tailored/image-to-video](https://docs.bria.ai/tailored-generation/video-generation-endpoints/generate-video-from-tailored-image.md): Note: This is a V2 API endpoint and uses the V2 base URL (https://engine.prod.bria-api.com/v2).
(Beta - V2 API Feature)
Initiates an asynchronous job to generate a 5-second MP4 video file animating a source image created by a tailored model.
Asynchronous Requests and the Status Service: Bria API v2 endpoints process requests asynchronously by default. When you make an asynchronous request, the API immediately returns a request_id and a status_url instead of the final result. Use the Status Service to track the request's progress until it reaches a completed state.
See the full guide at Status Service Documentation for complete details and usage examples.
On successful initiation, this endpoint returns a 202 Accepted response. Poll the status_url or use the Status Service with the request_id to check for job completion.
Once 'Completed', the status response will contain the URL to the generated MP4 video.
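The 202/poll pattern can be sketched with the standard library. The status_url comes from the 202 response; the auth header name and the status response field names are assumptions:

```python
import json
import time
import urllib.request

HEADERS = {"api_token": "<YOUR_API_TOKEN>"}  # auth header name is an assumption


def wait_for_video(status_url: str, get=None, interval: float = 10.0,
                   timeout: float = 900.0) -> dict:
    """Poll the status_url from the 202 response until the job completes.

    `get` is injectable for testing; the 'status' value names are assumptions
    about the Status Service response shape.
    """
    if get is None:
        def get():
            req = urllib.request.Request(status_url, headers=HEADERS)
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = get()
        if body.get("status") == "Completed":
            return body  # contains the URL of the generated MP4
        if body.get("status") == "Failed":
            raise RuntimeError("video generation failed")
        time.sleep(interval)
    raise TimeoutError("video job did not complete within the timeout")
```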
## Training Endpoints
### Create Project
- [POST /tailored-gen/projects](https://docs.bria.ai/tailored-generation/training-endpoints/create-project.md): Create a new project within the organization. A project encompasses all models trained and datasets created for the IP defined in the project.
The following IP types are supported:
Defined Character
A specific character that maintains consistent identity and unique traits while being reproduced in different poses, situations, and actions.
Medium: Photography
Medium: Illustration
Stylized Scene
Complete environments or scenes created with a consistent visual style, look, and feel.
Medium: Photography
Medium: Illustration
Multi-Object Set
A collection of different objects sharing a common style, design language, or color scheme. Objects are typically isolated on solid backgrounds.
Object Variants
Multiple variations of the same object type, maintaining consistent style and structure while showing different interpretations. Objects are typically isolated on solid backgrounds.
Icons
A collection of cohesive, small-scale illustrations or symbols designed to represent concepts, actions, or objects in interfaces and applications. Maintains consistent visual style across the set.
Character Variants
Multiple characters sharing the same fundamental structure, style, and color palette, allowing creation of new characters that fit within the established design system.
### Get Projects
- [GET /tailored-gen/projects](https://docs.bria.ai/tailored-generation/training-endpoints/get-projects.md): Retrieve all projects within the organization. If there are no projects, returns an empty array.
### Get Project by ID
- [GET /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-project-by-id.md): Retrieve full project information including project name and description, IP name and description, IP medium (photography/illustration), IP type, status, and timestamps.
### Update Project
- [PUT /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-project.md): Update a specific project
### Delete Project
- [DELETE /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-project.md): Permanently delete a project and all its associated resources, including all datasets, images, and models. This action cannot be undone. Training models must be stopped before deletion.
### Generate Caption Prefix
- [POST /tailored-gen/generate_prefix](https://docs.bria.ai/tailored-generation/training-endpoints/generate-prefix.md): Generates a caption prefix based on the provided images.
This is currently supported only for the `stylized_scene`, `defined_character`, and `object_variants` IP types.
##### Usage Scenarios:
1. Before uploading visuals to a new dataset
- This use case applies when creating a new dataset.
- In the first step, you can create the dataset entity in parallel while calling this endpoint.
- Randomly sample 1-6 images from the input images provided for training. If there are 6 or more images, provide exactly 6 for the best results.
- Once you receive the prefix, update the dataset using the Update Dataset endpoint.
- Then, proceed with uploading images to the dataset.
2. To regenerate a new prefix (even if previously generated)
- This allows users to select the prefix they prefer.
- Randomly sample 1-6 images from the input images provided for training. If there are 6 or more images, provide exactly 6 for the best results.
- Update the dataset with the new prefix.
- Then, use the Regenerate All Captions endpoint to ensure all images in the dataset get updated captions.
If any image fails validation, the request will fail.
This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content - the first blocked input image will fail the entire request.
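The sampling rule in both scenarios (1-6 images, exactly 6 when available) can be sketched as a small helper; the function name is illustrative:

```python
import random


def sample_prefix_images(image_paths: list, k: int = 6) -> list:
    """Randomly sample images for /tailored-gen/generate_prefix.

    The docs above recommend 1-6 samples, and exactly 6 when 6 or more
    training images are available.
    """
    if not image_paths:
        raise ValueError("at least one training image is required")
    return random.sample(image_paths, min(len(image_paths), k))
```

After receiving the prefix, update the dataset via the Update Dataset endpoint (and, for scenario 2, call Regenerate All Captions).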
### Create Dataset
- [POST /tailored-gen/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/create-dataset.md): Create a new dataset.
Upload types:
* Basic upload type: supports up to 200 images, uploaded as individual image files
* Advanced upload type: supports up to 5000 images, uploaded as a single ZIP file
Constraints:
* Dataset must have at least 1 image to be completed
* In basic datasets, a maximum of 200 images per dataset
When creating a dataset, a default caption prefix is created in all cases.
Generating an advanced caption prefix before uploading images: when creating a dataset with the `stylized_scene`, `defined_character`, or `object_variants` IP types, it is recommended to generate an advanced caption prefix before uploading images.
To do this, use the /tailored-gen/generate_prefix endpoint, send up to 6 images, and update the dataset with the received prefix using the Update Dataset endpoint. Once the prefix is updated, proceed with uploading images.
Image preprocessing: uploaded images are automatically resized so that the shortest side is 1024 pixels while maintaining the aspect ratio.
Then, a centered 1024x1024 crop is applied. The final cropped image is saved for training.
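The preprocessing described above can be mirrored numerically; this sketch only computes the resize dimensions and the centered crop box, without touching pixels:

```python
def preprocess_dims(width: int, height: int):
    """Mirror the documented preprocessing: scale so the shortest side is
    1024 px (keeping aspect ratio), then take a centered 1024x1024 crop.

    Returns ((resized_w, resized_h), (left, top, right, bottom)).
    """
    scale = 1024 / min(width, height)
    resized_w = round(width * scale)
    resized_h = round(height * scale)
    left = (resized_w - 1024) // 2
    top = (resized_h - 1024) // 2
    return (resized_w, resized_h), (left, top, left + 1024, top + 1024)
```

This makes it easy to predict, for a given source image, what the model will actually see during training.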
### Get Datasets
- [GET /tailored-gen/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/get-datasets.md): Retrieve a list of all datasets. If there are no datasets, returns an empty array.
### Get Datasets by Project
- [GET /tailored-gen/projects/{project_id}/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/get-datasets-by-project.md): Retrieve all datasets for a specific project
### Get Dataset by ID
- [GET /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-dataset-by-id.md): Retrieve a specific dataset
### Update Dataset
- [PUT /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-dataset.md): Update a dataset.
To use a dataset for model training, its status must be set to completed.
Once a dataset status is changed to completed:
* Images cannot be added or removed
* Image captions cannot be edited
* Caption prefix cannot be modified
Updating the Caption Prefix
If the caption prefix needs to be changed, update it here first,
then use the Regenerate All Captions endpoint to refresh all captions with the new prefix.
If you want to generate an advanced new caption prefix,
use the /tailored-gen/generate_prefix endpoint before updating the dataset.
It is recommended to use the route Clone Dataset As Draft in order to create a new version of a dataset.
Constraints:
* Cannot update caption_prefix if dataset status is completed
* Dataset must have at least 1 image to be marked as completed
### Delete Dataset
- [DELETE /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-dataset.md): Delete a specific dataset. Deletes all associated images.
### Clone Dataset As Draft
- [POST /tailored-gen/datasets/{dataset_id}/clone](https://docs.bria.ai/tailored-generation/training-endpoints/clone-dataset.md): Create a new draft dataset based on an existing one. This is useful when you want to reuse a dataset for another training run with some modifications (to create a variation).
### Upload Image files
- [POST /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/upload-image.md): Upload a new image to a dataset.
Image Requirements:
- Recommended minimum resolution: 1024x1024 pixels for best quality
- By default, smaller images (down to 256x256) will be automatically upscaled to meet this threshold (increase_resolution=true)
- To strictly enforce the 1024x1024 minimum, set increase_resolution=false
- Supported formats: jpg, jpeg, png, webp
- Preferably use original high-quality assets
- For best results, use 1:1 aspect ratio or ensure main content is centered
Dataset Guidelines:
- Recommended: 5-50 images for optimal results when using Max training version, 15-100 for optimal results when using Light training version
- Maximum supported: 200 images
- Ensure consistency in style, structure, and visual elements
- Balance diversity in content (poses, scenes, objects) while maintaining consistency in key elements (style, colors, theme)
- Note: Larger datasets may introduce more variety, which can reduce overall consistency
For optimal training (especially for characters/objects):
- Subject should occupy most of the image area
- Minimize unnecessary margins around the subject
- Transparent backgrounds will be converted to black
- For character datasets: include diverse poses, environments, attires, and interactions
Captions and Generation:
- Each image receives an automatic caption that continues from the dataset's caption prefix
- Default caption prefix is recommended for initial training
- Captions can be modified to include domain-specific terms
- Both captions and prefix influence training and future generations
- Focus on essential elements rather than extensive details
Constraints:
- Can only be used with the "basic" upload type; use the images/bulk endpoint for advanced dataset uploads
- Dataset must have at least 1 image
- Dataset cannot exceed 200 images
- Cannot upload to a completed dataset
This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content - the first blocked input image will fail the entire request.
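The constraints above can be pre-checked on the client before calling the endpoint; this is a sketch only, and the server remains authoritative:

```python
import os


def validate_upload(image_path: str, dataset_size: int,
                    dataset_completed: bool) -> None:
    """Check the documented upload constraints before making the request.

    Raises ValueError when a constraint from the list above is violated.
    """
    if dataset_completed:
        raise ValueError("cannot upload to a completed dataset")
    if dataset_size >= 200:
        raise ValueError("dataset cannot exceed 200 images")
    ext = os.path.splitext(image_path)[1].lower()
    if ext not in (".jpg", ".jpeg", ".png", ".webp"):
        raise ValueError(f"unsupported format: {ext or image_path}")
```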
### Get Images
- [GET /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/get-images.md): Retrieve all images in a specific dataset. If there are no images, returns an empty array.
### Regenerate All Captions
- [PUT /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/regenerate-all-captions.md): Regenerate captions for all images in a dataset. This is crucial after updating the caption_prefix: regenerating all image captions ensures full compatibility with the new prefix.
This is an asynchronous operation. Once this endpoint is called, poll Get Dataset by ID until the captions_update_status changes to 'completed'.
### Advanced image upload
- [POST /tailored-gen/datasets/{dataset_id}/images/bulk-upload](https://docs.bria.ai/tailored-generation/training-endpoints/bulk-upload-images.md): Efficiently upload a large volume of images (up to 5000) from a ZIP file to an advanced dataset.
This operation is asynchronous; its status can be retrieved using the {dataset_id}/images/bulk-upload/status endpoint.
IMPORTANT: This endpoint is only supported for datasets with an advanced upload type.
The dataset must be empty.
Image Requirements:
* Supported formats: jpg, jpeg, png, webp.
* Minimum dimensions: 1024 x 1024 pixels.
* Total size limit: 5 GB zip file.
Captioning:
* automatic_captioning: true (default): The system automatically generates captions for all images.
* automatic_captioning: false: Each image must have a corresponding .txt file with the same name (e.g., image.jpg and image.txt) containing the plaintext caption.
Important Notes:
* This endpoint is for bulk upload and does not support the increase_resolution parameter.
* Images that fail validation (e.g., unsupported format, wrong dimensions, missing captions) will be skipped and included in the failure report.
* The request will fail if the dataset is not empty, if another bulk upload is in progress, or if any previous bulk upload attempt took place.
### Get Image by ID
- [GET /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-image.md): Retrieve full image information including caption (which naturally continues the dataset's caption_prefix), caption source (automatic/manual/unknown), image name, URL and thumbnail URL, dataset ID, and timestamps.
### Update Image Caption
- [PUT /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-image-caption.md): Update the caption of a specific image. There are two mutually exclusive ways to update a caption:
1. Provide a new caption text:
* Use the caption parameter
* This will set caption_source to "manual"
* Reflects a human-written caption
2. Request automatic caption regeneration:
* Set regenerate_caption to true
* This will set caption_source to "automatic"
* A new caption will be generated automatically based on the image and caption_prefix
* For the same caption_prefix, regenerate_caption will always return the same caption
* Useful for resetting captions or regenerating them after changing the caption_prefix
Note: You cannot provide both parameters simultaneously as they represent different update approaches.
Constraints:
* Cannot update captions in a completed dataset
* Cannot provide both caption and regenerate_caption in the same request
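The mutual exclusivity above maps naturally onto a small request-body builder; the helper name is illustrative, while the parameter names come from the endpoint description:

```python
def build_caption_update(caption: str = None,
                         regenerate_caption: bool = False) -> dict:
    """Build the body for updating an image caption.

    The two update modes are mutually exclusive, as described above.
    """
    if caption is not None and regenerate_caption:
        raise ValueError("caption and regenerate_caption are mutually exclusive")
    if caption is not None:
        return {"caption": caption}  # caption_source becomes "manual"
    if regenerate_caption:
        return {"regenerate_caption": True}  # caption_source becomes "automatic"
    raise ValueError("provide caption or set regenerate_caption=True")
```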
### Delete Image
- [DELETE /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-image.md): Permanently remove an image from a dataset. This will also delete the image files and associated thumbnails.
Constraints:
* Cannot delete images from completed datasets
### Get Bulk Upload Status
- [GET /tailored-gen/datasets/{dataset_id}/images/bulk-upload/status](https://docs.bria.ai/tailored-generation/training-endpoints/get-bulk-upload-status.md): Retrieve the status and progress of a bulk image upload job.
### Download Advanced Dataset
- [GET /datasets/{dataset_id}/download](https://docs.bria.ai/tailored-generation/training-endpoints/download-dataset.md): Enables users to download an advanced dataset.
The response includes a pre-signed URL for downloading the dataset, details about the base model used,
and the prompt prefix applied during training.
### Create Model
- [POST /tailored-gen/models](https://docs.bria.ai/tailored-generation/training-endpoints/create-model.md): Create new model. A dataset can be used to train multiple models with different training versions (e.g., one light and one max). The model will belong to the same project as its dataset.
### Get Models
- [GET /tailored-gen/models](https://docs.bria.ai/tailored-generation/training-endpoints/get-models.md): Retrieve a list of models. If there are no models, an empty array is returned.
### Upload Model
- [POST /tailored-gen/models/upload_model](https://docs.bria.ai/tailored-generation/training-endpoints/upload-model.md): This API allows users to upload a pre-trained tailored model to Bria's infrastructure and run it within Bria's ecosystem.
The model must be in .safetensors format. The maximum supported model size is 3GB.
You can check the model's status using the /tailored-gen/models/{model_id} route. Potential statuses are "syncing" and "completed".
### Get Models by Project
- [GET /tailored-gen/projects/{project_id}/models](https://docs.bria.ai/tailored-generation/training-endpoints/get-models-by-project.md): Retrieve all models for a project. If there are no models, an empty array is returned.
### Get Model by ID
- [GET /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-model.md): Retrieve full model information including name, description, status (Created/InProgress/Completed/Failed/Stopping/Stopped), training version (Light/Max), generation prefix, project ID, dataset ID, and timestamps.
### Update Model
- [PUT /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-model.md): Update a model's name and description. Other model attributes such as training version and dataset cannot be modified after creation.
### Delete Model
- [DELETE /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-model.md): Delete a specific model. Changes status to Deleted.
### Start Training
- [POST /tailored-gen/models/{model_id}/start_training](https://docs.bria.ai/tailored-generation/training-endpoints/start-training.md): Start model training. Training duration is typically 1-3 hours. The associated dataset must have a status of 'completed' before training can begin.
Constraints:
* The dataset must be in a 'completed' status.
* Advanced training parameters (rank, checkpoint_interval, lr_scheduler, learning_rate_scheduler, total_training_steps) are only supported when the model's training_mode is set to 'expert'.
### Stop Training
- [POST /tailored-gen/models/{model_id}/stop_training](https://docs.bria.ai/tailored-generation/training-endpoints/stop-training.md): Stop an ongoing model training process. Once stopped, training cannot be resumed - a new model would need to be created and trained.
### List Checkpoints
- [GET /tailored-gen/models/{model_id}/checkpoints](https://docs.bria.ai/tailored-generation/training-endpoints/list-checkpoints.md): Retrieve a list of all available checkpoints for a model. This is only available for models trained in expert mode.
### Get Specific Checkpoint
- [GET /tailored-gen/models/{model_id}/checkpoints/{checkpoint_step}](https://docs.bria.ai/tailored-generation/training-endpoints/get-checkpoint.md): Retrieve details for a specific model checkpoint by its step number.
### Delete Checkpoint
- [DELETE /tailored-gen/models/{model_id}/checkpoints/{checkpoint_step}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-checkpoint.md): Permanently delete a specific model checkpoint. Deletion is not allowed if the checkpoint is currently selected for inference.
### Download Tailored Model
- [GET /tailored-gen/models/{model_id}/download](https://docs.bria.ai/tailored-generation/training-endpoints/download-tailored-model.md): Enables users to download a trained tailored generation model after completing the training process.
The response includes a pre-signed URL for downloading the model, details about the base model used,
and the prompt prefix applied during training.
To use the tailored model source code, access to the base model source code is required.
The base model source code is exclusively available through Bria's Source Code and Weights product.
For more information or to gain access, contact us at info@bria.ai.