# Overview

Tailored Generation provides capabilities to generate visuals (photos, illustrations, vectors) that preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs. The Tailored Generation APIs allow you to manage and train tailored models that maintain the integrity of your visual IP. You can train models through our Console or implement training directly via API. Explore the Console [here](https://platform.bria.ai/console/tailored-generation).

**Advanced Customization and Access:** As part of Bria’s **Source Code & Weights** product, developers seeking deeper customization can access Bria’s source-available GenAI models via [Hugging Face](https://huggingface.co/briaai). This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows, empowering AI teams to develop and optimize their own generative AI solutions.

The Tailored Generation Training API provides a set of endpoints to manage the entire lifecycle of a tailored generation project:

1. **Project Management**: Create and manage projects that define IP characteristics:
   - **Create and Retrieve Projects**: Use the `/projects` endpoints to create a new project or retrieve existing projects that belong to your organization.
   - **Define IP Type**: Specify the IP type (e.g., `multi_object_set`, `defined_character`, `stylized_scene`) and medium.
   - **Manage Project Details**: Use the `/projects/{id}` endpoints to update or delete specific projects.
2. **Dataset Management**: Organize and refine datasets within your projects:
   - **Create and Retrieve Datasets**: Use the `/datasets` endpoints to create new datasets or retrieve existing ones.
   - **Generate an Advanced Caption Prefix** (for the `stylized_scene`, `defined_character`, and `object_variants` IP types): it is **recommended** to generate an advanced prefix before uploading images.
   - Use `/tailored-gen/generate_prefix` to generate a structured caption prefix using 1-6 sample images from the input images provided for training (preferably 6 if available).
   - Update the dataset with the generated prefix using `/datasets/{dataset_id}` before proceeding with image uploads.
   - **Upload and Manage Images**: Use the `/datasets/{dataset_id}/images` endpoints to upload images and manage their captions.
   - **Clone Datasets**: Create variations of existing datasets using the clone functionality.
3. **Model Management**: Train and optimize tailored models based on your datasets:
   - **Create and Retrieve Models**: Use the `/models` endpoints to create new models or list existing ones.
   - **Choose Training Version**: Select between "light" (for fast generation and structure reference compatibility) and "max" (for superior prompt alignment and enhanced learning capabilities).
   - **Monitor and Control**: Manage the model lifecycle, including training start/stop and status monitoring.

### **Training Process**

To train a tailored model:

1. **Create a Project**: Use the `/projects` endpoint to define your IP type and medium.
2. **Create a Dataset**: Use the `/datasets` endpoint to create a dataset within your project.
3. **Generate an Advanced Caption Prefix** *(for the `stylized_scene`, `defined_character`, and `object_variants` IP types)*:
   - Before uploading images, call `/tailored-gen/generate_prefix`, sampling 1-6 images from the input images provided for training (preferably 6 if available).
   - Update the dataset with the generated prefix using `/datasets/{dataset_id}`.
4. **Upload Images**: Upload images using the `/datasets/{dataset_id}/images` endpoint (minimum resolution: 1024x1024px).
5. **Prepare Dataset**: Review auto-generated captions and update the dataset status to 'completed'.
6. **Create Model**: Use the `/models` endpoint to create a model, selecting either the "light" or "max" training version.
7. **Start Training**: Initiate training via the `/models/{id}/start_training` endpoint. Training typically takes 1-3 hours.
8. **Monitor Progress**: Check the training status using the `/models/{id}` endpoint until training is 'Completed'.
9. **Generate Images**: Once trained, your model can be used in multiple ways:
   - Use `/text-to-image/tailored/{model_id}` for text-to-image generation.
   - Use `/text-to-vector/tailored/{model_id}` for generating illustrative vector graphics.
   - Use `/reimagine/tailored/{model_id}` for structure-based generation.
   - Use `/tailored-gen/restyle_portrait` for human portrait-based generation.
   - Access through the Bria platform interface.

Alternatively, manage and train tailored models through Bria's user-friendly Console. Get started [here](https://platform.bria.ai/console/tailored-generation).

### Reimagine - Structure Reference

The Reimagine - Structure Reference feature lets you guide outputs using both a tailored model and a structure reference image. It produces visuals that preserve the structure of the reference image while applying specific characteristics defined by your tailored model. Access this capability via the [/reimagine](https://docs.bria.ai/image-generation/endpoints/reimagine-structure-reference) endpoint.

### Reimagine - Portrait Reference

The Reimagine - Portrait Reference feature enables you to change the style of portrait images while preserving the identity of the subject. It utilizes a reference image of the person alongside a trained tailored model, ensuring consistent representation of facial features and identity across different visual styles.

Reimagine - Portrait Reference is specifically optimized for portraits captured from the torso upward, with an ideal face resolution of at least 500×500 pixels. Using images below these recommended dimensions may lead to inconsistent or unsatisfactory results.
To use Reimagine - Portrait Reference:

* Reference Image: Provide a clear portrait image that meets the recommended guidelines.
* Tailored Model: Select a tailored model trained specifically to reflect your desired style or visual IP.

Use the [/tailored-gen/restyle_portrait](https://docs.bria.ai/tailored-generation/generation-endpoints/restyle-portrait) endpoint to access this capability directly, allowing seamless integration of personalized style transformations into your workflow.

Model Compatibility Note: This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.

### **Guidance Methods**

Some of the APIs below support various guidance methods that provide greater control over generation. These methods let you guide the generation using not only a textual prompt but also visuals.

The following APIs support guidance methods:

- `/text-to-image/tailored`
- `/text-to-vector/tailored`

**ControlNets:** A set of methods that condition the model on additional inputs, providing detailed control over image generation.

- **controlnet_canny**: Uses edge information from the input image to guide generation based on structural outlines.
- **controlnet_depth**: Derives depth information to influence spatial arrangement in the generated image.
- **controlnet_recoloring**: Uses a grayscale version of the input image to guide recoloring while preserving geometry.
- **controlnet_color_grid**: Extracts a 16x16 color grid from the input image to guide the color scheme of the generated image.

You can specify up to two ControlNet guidance methods in a single request. Each method requires an accompanying image and a scale parameter that determines its impact on the generation.
When using multiple ControlNets, all input images must have the same aspect ratio, which will determine the aspect ratio of the generated results.

To use **ControlNets**, include the following parameters in your request:

- `guidance_method_X`: Specify the guidance method (where X is 1 or 2). If `guidance_method_2` is used, `guidance_method_1` must also be used. To use only one method, use `guidance_method_1`.
- `guidance_method_X_scale`: Set the impact of the guidance (0.0 to 1.0).
- `guidance_method_X_image_file`: Provide the base64-encoded input image.
| Guidance Method | Prompt | Scale |
| --- | --- | --- |
| ControlNet Canny | An exotic colorful shell on the beach | 1.0 |
| ControlNet Depth | A dog, exploring an alien planet | 0.8 |
| ControlNet Recoloring | A vibrant photo of a woman | 1.0 |
| ControlNet Color Grid | A dynamic fantasy illustration of an erupting volcano | 0.7 |
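The ControlNet parameters above can be assembled into a request body as in this minimal Python sketch. The prompt, scale values, and image bytes are illustrative placeholders; only the parameter names come from the documentation.

```python
import base64

# Assemble a /text-to-image/tailored/{model_id} request body that combines two
# ControlNet guidance methods. The prompt, scales, and image bytes are
# placeholders; the parameter names follow the documentation above.
def build_controlnet_payload(prompt, canny_image, depth_image):
    # Both guidance images must share the same aspect ratio (see above).
    return {
        "prompt": prompt,
        "guidance_method_1": "controlnet_canny",
        "guidance_method_1_scale": 1.0,  # impact of the guidance, 0.0-1.0
        "guidance_method_1_image_file": base64.b64encode(canny_image).decode(),
        "guidance_method_2": "controlnet_depth",
        "guidance_method_2_scale": 0.8,
        "guidance_method_2_image_file": base64.b64encode(depth_image).decode(),
    }

payload = build_controlnet_payload("An exotic colorful shell on the beach",
                                   b"<canny png bytes>", b"<depth png bytes>")
```

Because `guidance_method_2` is present, `guidance_method_1` must be present as well; to use a single ControlNet, omit the three `guidance_method_2*` entries.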
**Image Prompt Adapter:** This method offers two modes:

- **regular**: Uses the image’s content, style elements, and color palette to guide generation.
- **style_only**: Uses the image’s high-level style elements and color palette to influence the generated output.

To use **Image Prompt Adapter** as guidance, include the following parameters in your request:

- `image_prompt_mode`: Specify how the input image influences the generation.
- `image_prompt_scale`: Set the impact of the provided image on the generated result (0.0 to 1.0).
- `image_prompt_file`: Provide the base64-encoded input image, or
- `image_prompt_urls`: Provide a list of URLs pointing to publicly accessible images to be used as guidance.
| Guidance Method | Prompt | Mode | Scale |
| --- | --- | --- | --- |
| Image Prompt Adapter | A drawing of a lion laid on a table. | regular | 0.85 |
| Image Prompt Adapter | A drawing of a bird. | style_only | 1.0 |
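For illustration, the Image Prompt Adapter parameters can be assembled the same way. The helper function, prompt, and image bytes below are illustrative placeholders; only the parameter names and mode values come from the documentation.

```python
import base64

# Build an Image Prompt Adapter request body for /text-to-image/tailored/{model_id}.
# Parameter names and mode values are from the documentation above; the helper
# itself and its defaults are illustrative assumptions.
def build_image_prompt_payload(prompt, ref_image_bytes, mode="regular", scale=0.85):
    if mode not in ("regular", "style_only"):
        raise ValueError("mode must be 'regular' or 'style_only'")
    if not 0.0 <= scale <= 1.0:
        raise ValueError("scale must be between 0.0 and 1.0")
    return {
        "prompt": prompt,
        "image_prompt_mode": mode,
        "image_prompt_scale": scale,
        # Alternatively, pass publicly accessible URLs via "image_prompt_urls".
        "image_prompt_file": base64.b64encode(ref_image_bytes).decode(),
    }

payload = build_image_prompt_payload("A drawing of a bird.", b"<image bytes>",
                                     mode="style_only", scale=1.0)
```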
### **IP-related prompts**

Our models are trained exclusively on fully licensed, safe-for-commercial-use data. Prompts that reference public figures, brands, or other protected content may result in generic or altered outputs. These prompts are not blocked, but results may differ from what you expect.

If an IP-related signal is detected in the prompt, the following warning will appear in the API response:

```text
This prompt may contain intellectual property (IP)-protected content. To ensure compliance and safety, certain elements may be omitted or altered. As a result, the output may not fully meet your request.
```

## Servers

```
https://engine.prod.bria-api.com/v1
```

## Download OpenAPI description

[Overview](https://docs.bria.ai/_spec/tailored-generation.yaml)

## Generation Endpoints

### Generate Image - Tailored model

- [POST /text-to-image/tailored/{model_id}](https://docs.bria.ai/tailored-generation/generation-endpoints/text-to-image-tailored.md): This route allows you to generate images using a Tailored Model. Tailored models are trained on a visual IP (illustrations, photos, vectors) to faithfully reproduce specific IP elements or guidelines. You can train a model through our Console or implement training on your platform via API. This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content and filters out unsafe generated images; the first blocked input image will fail the entire request.

### Generate Vector Graphics - Tailored (Beta)

- [POST /text-to-vector/tailored/{model_id}](https://docs.bria.ai/tailored-generation/generation-endpoints/text-to-vector-tailored.md): This route allows you to generate vector graphics using a Tailored Model. Tailored Models are trained on your visual IP (illustrations, photos, vectors) to preserve and faithfully reproduce specific IP elements or guidelines, ensuring consistency across all generated outputs.
To see a detailed description of the tailored models' functionalities, please refer to the /text-to-image/tailored/{model_id} route documentation. *Text-to-vector is compatible with tailored models in the illustrative domain.* This API endpoint supports content moderation via an optional parameter that can prevent generation if input images contain inappropriate content and filters out unsafe generated images; the first blocked input image will fail the entire request.

### Reimagine - Portrait Reference

- [POST /tailored-gen/restyle_portrait](https://docs.bria.ai/tailored-generation/generation-endpoints/restyle-portrait.md): This endpoint lets you change the style of a portrait while preserving the person’s identity. It works by using a reference image of the person along with a trained tailored model. This capability is specifically designed for portraits that capture the subject from the torso up, with a recommended face size of at least 500×500 pixels. Images that do not meet these guidelines may produce inconsistent results. Model Compatibility Note: This feature supports only tailored models trained using the Light training version, tailored models trained in Expert training mode based on Bria 2.3, or uploaded tailored models that were trained based on Bria 2.3.

## Training Endpoints

### Create Project

- [POST /tailored-gen/projects](https://docs.bria.ai/tailored-generation/training-endpoints/create-project.md): Create a new project within the organization. A project encompasses all models trained and datasets created for the IP defined in the project. The following IP types are supported:
  - A specific character that maintains consistent identity and unique traits while being reproduced in different poses, situations, and actions (`defined_character`). Medium: Photography or Illustration.
  - Complete environments or scenes created with a consistent visual style, look, and feel (`stylized_scene`). Medium: Photography or Illustration.
  - A collection of different objects sharing a common style, design language, or color scheme (`multi_object_set`). Objects are typically isolated on solid backgrounds.
  - Multiple variations of the same object type, maintaining consistent style and structure while showing different interpretations (`object_variants`). Objects are typically isolated on solid backgrounds.
  - A collection of cohesive, small-scale illustrations or symbols designed to represent concepts, actions, or objects in interfaces and applications. Maintains consistent visual style across the set.
  - Multiple characters sharing the same fundamental structure, style, and color palette, allowing creation of new characters that fit within the established design system.

### Get Projects

- [GET /tailored-gen/projects](https://docs.bria.ai/tailored-generation/training-endpoints/get-projects.md): Retrieve all projects within the organization. If there are no projects, returns an empty array.

### Get Project by ID

- [GET /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-project-by-id.md): Retrieve full project information including project name and description, IP name and description, IP medium (photography/illustration), IP type, status, and timestamps.

### Update Project

- [PUT /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-project.md): Update a specific project.

### Delete Project

- [DELETE /tailored-gen/projects/{project_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-project.md): Permanently delete a project and all its associated resources, including all datasets, images, and models. This action cannot be undone. Training models must be stopped before deletion.

### Generate Caption Prefix

- [POST /tailored-gen/generate_prefix](https://docs.bria.ai/tailored-generation/training-endpoints/generate-prefix.md): Generates a caption prefix based on the provided images.
This is currently supported only for the `stylized_scene`, `defined_character`, and `object_variants` IP types.

##### Usage Scenarios:

1. Generating a prefix for a new dataset:
   - This use case applies when creating a new dataset.
   - In the first step, provide sample images while calling this endpoint.
   - Randomly select 1-6 of the training images. If there are 6 or more images, provide exactly 6 for the best results.
   - Once you receive the prefix, update the dataset using the Update Dataset endpoint.
   - Then, proceed with uploading images to the dataset.
2. Regenerating a prefix for an existing dataset:
   - This allows users to select the prefix they prefer.
   - Randomly select 1-6 of the dataset's images. If there are 6 or more images, provide exactly 6 for the best results.
   - Update the dataset with the new prefix.
   - Then, use the Regenerate All Captions endpoint to ensure all images in the dataset get updated captions.

If any image fails validation, the request will fail. This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content; the first blocked input image will fail the entire request.

### Create Dataset

- [POST /tailored-gen/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/create-dataset.md): Create a new dataset. Constraints:
  * Dataset must have at least 1 image to be completed
  * Maximum of 200 images per dataset

  When creating a dataset, a default caption prefix is created in all cases. When creating a dataset with the `stylized_scene`, `defined_character`, or `object_variants` IP type, it is recommended to generate an advanced caption prefix before uploading images. To do this, use the Generate Caption Prefix endpoint, send up to 6 images, and update the dataset with the received prefix using the Update Dataset endpoint. Once the prefix is updated, proceed with uploading images. Uploaded images will be automatically resized so that the shortest side is 1024 pixels while maintaining the aspect ratio. Then, a centered 1024x1024 crop will be applied. The final cropped image will be saved for training.

### Get Datasets

- [GET /tailored-gen/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/get-datasets.md): Retrieve a list of all datasets.
If there are no datasets, returns an empty array.

### Get Datasets by Project

- [GET /tailored-gen/projects/{project_id}/datasets](https://docs.bria.ai/tailored-generation/training-endpoints/get-datasets-by-project.md): Retrieve all datasets for a specific project.

### Get Dataset by ID

- [GET /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-dataset-by-id.md): Retrieve a specific dataset.

### Update Dataset

- [PUT /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-dataset.md): Update a dataset. In order to use a dataset in a model training, its status must be set to completed. Once a dataset status is changed to completed:
  * Images cannot be added or removed
  * Image captions cannot be edited
  * Caption prefix cannot be modified

  If the caption prefix needs to be changed, update it here first, then use the Regenerate All Captions endpoint to refresh all captions with the new prefix. If you want to generate an advanced caption prefix, use the Generate Caption Prefix endpoint before updating the dataset. It is recommended to use the Clone Dataset As Draft route in order to create a new version of a dataset. Constraints:
  * Cannot update caption_prefix if dataset status is completed
  * Dataset must have at least 1 image to be marked as completed

### Delete Dataset

- [DELETE /tailored-gen/datasets/{dataset_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-dataset.md): Delete a specific dataset. Deletes all associated images.

### Clone Dataset As Draft

- [POST /tailored-gen/datasets/{dataset_id}/clone](https://docs.bria.ai/tailored-generation/training-endpoints/clone-dataset.md): Create a new draft dataset based on an existing one. This is useful when you would like to use the same dataset again for another training, but with some modification (create a variation).
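The dataset-preparation order described above (generate a prefix, store it, upload images, mark the dataset completed) can be sketched as an ordered call plan. `plan_prefix_and_upload` is a hypothetical planning helper, not part of the Bria API, and it assumes the dataset was already created via `POST /tailored-gen/datasets`; only the endpoint paths come from the documentation.

```python
# Recommended call order for preparing a dataset that uses an advanced caption
# prefix (stylized_scene, defined_character, or object_variants IP types).
# `plan_prefix_and_upload` is a hypothetical helper; only the endpoint paths
# are taken from the documentation.
def plan_prefix_and_upload(dataset_id, image_count):
    # Sample 1-6 of the training images for prefix generation; 6 is preferred.
    prefix_samples = min(image_count, 6)
    steps = [
        ("POST", "/tailored-gen/generate_prefix"),        # generate the prefix
        ("PUT", f"/tailored-gen/datasets/{dataset_id}"),  # store it on the dataset
    ]
    # Upload each image (a dataset holds at most 200 images) ...
    steps += [("POST", f"/tailored-gen/datasets/{dataset_id}/images")] * min(image_count, 200)
    # ... then mark the dataset completed so it can be used for training.
    steps.append(("PUT", f"/tailored-gen/datasets/{dataset_id}"))
    return prefix_samples, steps

samples, steps = plan_prefix_and_upload("d-123", 20)
```

Note that the final `PUT` (setting the status to completed) locks the dataset: images and captions can no longer be modified afterwards.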
### Upload Image

- [POST /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/upload-image.md): Upload a new image to a dataset.

  Image Requirements:
  - Recommended minimum resolution: 1024x1024 pixels for best quality
  - By default, smaller images (down to 256x256) will be automatically upscaled to meet this threshold
  - To strictly enforce the 1024x1024 minimum, set the relevant optional request parameter (see the endpoint reference)
  - Supported formats: jpg, jpeg, png, webp
  - Preferably use original high-quality assets
  - For best results, use a 1:1 aspect ratio or ensure the main content is centered

  Dataset Guidelines:
  - Recommended: 5-50 images for optimal results when using the Max training version, 15-100 when using the Light training version
  - Maximum supported: 200 images
  - Ensure consistency in style, structure, and visual elements
  - Balance diversity in content (poses, scenes, objects) while maintaining consistency in key elements (style, colors, theme)
  - Note: Larger datasets may introduce more variety, which can reduce overall consistency

  For optimal training (especially for characters/objects):
  - Subject should occupy most of the image area
  - Minimize unnecessary margins around the subject
  - Transparent backgrounds will be converted to black
  - For character datasets: include diverse poses, environments, attires, and interactions

  Captions and Generation:
  - Each image receives an automatic caption that continues from the dataset's caption prefix
  - The default caption prefix is recommended for initial training
  - Captions can be modified to include domain-specific terms
  - Both captions and prefix influence training and future generations
  - Focus on essential elements rather than extensive details

  Constraints:
  - Dataset must have at least 1 image
  - Dataset cannot exceed 200 images
  - Cannot upload to a completed dataset

  This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content; the first blocked input image will fail the entire request.

### Get Images

- [GET /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/get-images.md): Retrieve all images in a specific dataset. If there are no images, returns an empty array.

### Regenerate All Captions

- [PUT /tailored-gen/datasets/{dataset_id}/images](https://docs.bria.ai/tailored-generation/training-endpoints/regenerate-all-captions.md): Regenerate captions for all images in a dataset. After updating the caption_prefix, it is recommended to regenerate the captions of all images so that they are fully compatible with the new caption_prefix. This is an asynchronous operation. Once this endpoint is called, Get Dataset by ID should be polled until the captions_update_status changes to 'completed'.

### Get Image by ID

- [GET /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-image.md): Retrieve full image information including caption (which naturally continues the dataset's caption_prefix), caption source (automatic/manual/unknown), image name, URL and thumbnail URL, dataset ID, and timestamps.

### Update Image Caption

- [PUT /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-image-caption.md): Update the caption of a specific image. There are two mutually exclusive ways to update a caption:
  1. Provide a new caption text:
     * Use the `caption` parameter
     * This will set the caption source to "manual"
     * Reflects a human-written caption
  2. Request automatic caption regeneration:
     * Set `regenerate_caption` to true
     * This will set the caption source to "automatic"
     * A new caption will be generated automatically based on the image and caption_prefix
     * For the same caption_prefix, regenerate_caption will always return the same caption
     * Useful for resetting captions or regenerating them after changing the caption_prefix

  Note: You cannot provide both parameters simultaneously as they represent different update approaches. Constraints:
  * Cannot update captions in a completed dataset
  * Cannot provide both caption and regenerate_caption in the same request

### Delete Image

- [DELETE /tailored-gen/datasets/{dataset_id}/images/{image_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-image.md): Permanently remove an image from a dataset. This will also delete the image files and associated thumbnails. Constraints:
  * Cannot delete images from completed datasets

### Create Model

- [POST /tailored-gen/models](https://docs.bria.ai/tailored-generation/training-endpoints/create-model.md): Create a new model. A dataset can be used to train multiple models with different training versions (e.g., one light and one max). The model will belong to the same project as its dataset.

### Get Models

- [GET /tailored-gen/models](https://docs.bria.ai/tailored-generation/training-endpoints/get-models.md): Retrieve a list of models. If there are no models, an empty array is returned.

### Upload Model

- [POST /tailored-gen/models/upload_model](https://docs.bria.ai/tailored-generation/training-endpoints/upload-model.md): This API allows users to upload a pre-trained tailored model to Bria’s infrastructure and run it within Bria’s ecosystem. The model has to be in .safetensors format. The maximum supported model size is 3GB. You can find out the model's status by using the Get Model by ID route. Potential statuses are: "syncing" or "completed".
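Both Regenerate All Captions and Upload Model are asynchronous, so callers are expected to poll a status field until it settles. A minimal, library-agnostic polling helper might look like the sketch below; `fetch_status` stands in for an authenticated GET to the relevant endpoint and is an assumption, not part of the API.

```python
import time

# Poll an asynchronous operation (e.g. captions_update_status after Regenerate
# All Captions, or an uploaded model's "syncing" status) until it reaches a
# terminal value. `fetch_status` is any callable returning the current status
# string; it stands in for an authenticated GET request.
def wait_for_status(fetch_status, done=("completed",), interval=5.0, timeout=3600.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in done:
            return status
        time.sleep(interval)
    raise TimeoutError(f"status did not reach {done} within {timeout}s")
```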
### Get Models by Project

- [GET /tailored-gen/projects/{project_id}/models](https://docs.bria.ai/tailored-generation/training-endpoints/get-models-by-project.md): Retrieve all models for a project. If there are no models, an empty array is returned.

### Get Model by ID

- [GET /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/get-model.md): Retrieve full model information including name, description, status (Created/InProgress/Completed/Failed/Stopping/Stopped), training version (Light/Max), generation prefix, project ID, dataset ID, and timestamps.

### Update Model

- [PUT /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/update-model.md): Update a model's name and description. Other model attributes such as training version and dataset cannot be modified after creation.

### Delete Model

- [DELETE /tailored-gen/models/{model_id}](https://docs.bria.ai/tailored-generation/training-endpoints/delete-model.md): Delete a specific model. Changes status to Deleted.

### Start Training

- [POST /tailored-gen/models/{model_id}/start_training](https://docs.bria.ai/tailored-generation/training-endpoints/start-training.md): Start model training (1-3 hours duration). The associated dataset must have status 'completed' before training can begin. Constraints:
  * Dataset must be in 'completed' status

### Stop Training

- [POST /tailored-gen/models/{model_id}/stop_training](https://docs.bria.ai/tailored-generation/training-endpoints/stop-training.md): Stop an ongoing model training process. Once stopped, training cannot be resumed; a new model would need to be created and trained.

### Download Tailored Model

- [GET /tailored-gen/models/{model_id}/download](https://docs.bria.ai/tailored-generation/training-endpoints/download-tailored-model.md): Enables users to download a trained tailored generation model after completing the training process.
The response includes a pre-signed URL for downloading the model, details about the base model used, and the prompt prefix applied during training. To use the tailored model source code, access to the base model source code is required. The base model source code is exclusively available through Bria's **Source Code & Weights** product. For more information or to gain access, contact us.
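Putting the model endpoints together, a training run follows a create, start, poll pattern. The sketch below assumes an injected `call(method, path, body)` HTTP helper and a Create Model body with `dataset_id` and `training_version` fields; both are illustrative assumptions, with only the endpoint paths and status values taken from the documentation.

```python
import time

# Drive a training run end to end: create a model from a completed dataset,
# start training, then poll Get Model by ID until the status is terminal.
# `call` is an injected stand-in for an authenticated HTTP client; the Create
# Model request-body fields are illustrative assumptions.
def train_model(call, dataset_id, version="max", poll_interval=60.0):
    model = call("POST", "/tailored-gen/models",
                 {"dataset_id": dataset_id, "training_version": version})
    call("POST", f"/tailored-gen/models/{model['id']}/start_training", None)
    while True:
        model = call("GET", f"/tailored-gen/models/{model['id']}", None)
        if model["status"] in ("Completed", "Failed", "Stopped"):
            return model
        time.sleep(poll_interval)
```

Once the status reaches Completed, the model ID can be used with `/text-to-image/tailored/{model_id}` or fetched via the Download Tailored Model endpoint.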