# Overview

Bria's Image Editing API equips builders with a comprehensive suite of tools for manipulating and enhancing images. The API provides advanced capabilities including background operations (removal, replacement, blur), content manipulation (eraser, generative fill), image transformation (expansion, resolution increase), automatic cropping, person modification, and automatic mask generation.

**Asynchronous Requests and the Status Service**

Bria API v2 endpoints process requests asynchronously by default. When you make an asynchronous request, the API immediately returns a `request_id` and a `status_url` instead of the final result. Use the Status Service to track the request's progress until it reaches a completed state. See the full guide at [Status Service Documentation](https://docs.bria.ai/status) for complete details and usage examples.

**Advanced Customization and Access:**

As part of Bria's **Source Code & Weights** product, developers seeking deeper customization can access Bria's source-available GenAI models via [Hugging Face](https://huggingface.co/briaai). This allows full control over fine-tuning, pipeline creation, and integration into proprietary workflows, empowering AI teams to develop and optimize their own generative AI solutions.

## Servers

```
https://engine.prod.bria-api.com/v2/image/edit
```

```
https://engine.prod.bria-api.com/v1
```

## Download OpenAPI description

[Overview](https://docs.bria.ai/_spec/image-editing-v2.yaml)

## v2 endpoints

Endpoints that are part of BRIA API version 2.

### Eraser

- [POST /erase](https://docs.bria.ai/image-editing-v2/v2-endpoints/erase.md): This route enables the removal of elements or specific areas from a given image. You define the area to be removed by providing a mask that outlines the region to be erased. There are two recommended ways to generate these masks:
  1. Masks can be created by allowing users to draw directly on the image with a brush, for example.
     To access the SDK that demonstrates how to implement a brush feature in your interface, please refer to the following link.
  2. By using the Get Masks route (`/objects/mask_generator`), which generates all possible masks for an image.

  - The modified image is returned at the original resolution, preserving full visual quality without any automatic resizing or downscaling.
  - All areas outside the provided mask remain completely unchanged, ensuring pixel-perfect preservation of unedited regions.

### Generative Fill

- [POST /gen_fill](https://docs.bria.ai/image-editing-v2/v2-endpoints/gen-fill.md): This route enables the generation of objects by prompt in a specific region of an image. You define the area for object generation using a mask that outlines the region where the object will be created. Our model is optimized to work seamlessly with blob-shaped masks. Masks can be created by allowing users to draw directly on the image with a brush, for example. To access the SDK that demonstrates how to implement a brush feature in your interface, please refer to the following link.
  - The modified image is returned at the original resolution, preserving full visual quality without any automatic resizing or downscaling.
  - All areas outside the provided mask remain completely unchanged, ensuring pixel-perfect preservation of unedited regions.
  - If the input image includes an alpha channel and transparency preservation is enabled, the original transparency values (both full and partial) are maintained in the output.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Prompt moderation – Validates the provided prompt and rejects requests containing unsafe or prohibited terms before processing starts.
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.
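Like the other v2 endpoints, a Generative Fill request returns a `request_id` and `status_url` immediately, and the result is fetched later through the Status Service. The sketch below shows a minimal polling loop; the `fetch_status` callable stands in for an HTTP GET of the `status_url`, and the status values (`"IN_PROGRESS"`, `"COMPLETED"`) and response shape are illustrative assumptions rather than the documented schema.

```python
import time

def poll_until_complete(fetch_status, interval_s=0.0, max_attempts=10):
    # fetch_status stands in for an HTTP GET of the status_url returned by
    # the asynchronous request; the "COMPLETED" value is an assumption here.
    for _ in range(max_attempts):
        status = fetch_status()
        if status.get("status") == "COMPLETED":
            return status.get("result")
        time.sleep(interval_s)
    raise TimeoutError("request did not complete in time")

# Simulated status service: the request completes on the third poll.
responses = iter([
    {"status": "IN_PROGRESS"},
    {"status": "IN_PROGRESS"},
    {"status": "COMPLETED", "result": {"image_url": "https://example.com/out.png"}},
])
result = poll_until_complete(lambda: next(responses))
print(result["image_url"])
```

In a real integration, `fetch_status` would wrap something like `requests.get(status_url).json()`, with a non-zero `interval_s` between polls.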
### Remove Background

- [POST /remove_background](https://docs.bria.ai/image-editing-v2/v2-endpoints/background-remove.md): This route can be used to remove the background of an image. It leverages Bria's newest model, RMBG 2.0. For more details and to explore the model, check out the Hugging Face demo.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

  The Bria API currently supports only JPEG and PNG files in RGB, RGBA, or CMYK color modes. If the file is of a different type or color mode, status code 415 is returned.

  This endpoint returns an image where the background is removed and the foreground remains with varying levels of transparency, allowing for smoother edges and more natural blending when placed over different backgrounds. This output also gives developers the flexibility to binarize the result, transforming it into a binary mask, by setting a custom transparency threshold according to their specific use case. A binary mask is an image where pixels are either fully visible (foreground) or fully transparent (background), commonly used in visual generative AI and image processing pipelines. This capability enables seamless integration into workflows that require clear separation between subject and background, while still offering control over how strict that separation should be.

  Below is a simple Python script demonstrating how to binarize the output image from the API, letting you set your own threshold to determine which areas are considered "foreground" and which are "background."
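A minimal sketch of the binarization step described above. In practice you would load the returned RGBA PNG with Pillow (`Image.open(...).convert("RGBA")`) and operate on its alpha band; here the alpha channel is represented as a plain list of rows so the example stays dependency-free, and the threshold value is your own choice.

```python
def binarize_alpha(alpha_rows, threshold=128):
    # Pixels whose alpha is at or above the threshold become fully opaque
    # foreground (255); everything below becomes fully transparent
    # background (0), yielding a binary mask.
    return [[255 if a >= threshold else 0 for a in row] for row in alpha_rows]

# Example alpha values: partially transparent edge pixels fall on either
# side of the threshold depending on how strict you make it.
alpha = [
    [0, 40, 200],
    [255, 127, 128],
]
print(binarize_alpha(alpha))
# -> [[0, 0, 255], [255, 0, 255]]
```

Raising the threshold makes the foreground stricter (soft edges drop out); lowering it keeps more of the semi-transparent fringe.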
### Generate Background

- [POST /replace_background](https://docs.bria.ai/image-editing-v2/v2-endpoints/background-replace.md): The Generate Background route is used to replace the background of any image. We offer a fast version of this feature, powered by Bria 2.3 Fast LoRA (model card on Hugging Face), which provides an optimal balance between speed and quality. This endpoint also allows replacing the background with a solid color of your choice: specify a hex color code (e.g., #FF5733) in the prompt to control the background color. Example prompts include "in a parking lot".

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Prompt moderation – Validates the provided prompt and rejects requests containing unsafe or prohibited terms before processing starts.
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

### Erase Foreground

- [POST /erase_foreground](https://docs.bria.ai/image-editing-v2/v2-endpoints/erase-foreground.md): This endpoint removes the primary subject (foreground) from the input image and intelligently generates background to fill the erased area.
  - Returns the edited image at its original resolution, ensuring full visual fidelity without any automatic resizing or downscaling.
  - Only the foreground is removed; all other areas remain unaltered, preserving pixel-perfect accuracy in untouched regions.
  - When transparency preservation is enabled and the input image includes an alpha channel, the output maintains the original transparency values (both full and partial).

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.
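To illustrate the two kinds of Generate Background prompts described above, the helper below builds either a solid-color prompt from a hex code or a scene-description prompt. The exact prompt wording for solid colors ("solid #RRGGBB background") and the helper itself are illustrative assumptions; only the hex-code-in-prompt convention comes from the documentation.

```python
import re

# Six-digit hex color code, as in the "#FF5733" example above.
HEX_COLOR = re.compile(r"#[0-9A-Fa-f]{6}")

def make_background_prompt(color_hex=None, scene=None):
    # Either a solid color or a scene description can drive the
    # generated background; the solid-color phrasing is an assumption.
    if color_hex is not None:
        if not HEX_COLOR.fullmatch(color_hex):
            raise ValueError(f"not a hex color code: {color_hex!r}")
        return f"solid {color_hex} background"
    return scene

print(make_background_prompt(color_hex="#FF5733"))   # -> solid #FF5733 background
print(make_background_prompt(scene="in a parking lot"))  # -> in a parking lot
```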
### Blur Background

- [POST /blur_background](https://docs.bria.ai/image-editing-v2/v2-endpoints/blur-bg.md): This route is used to create a blur effect on the background of an image.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

### Expand Image

- [POST /expand](https://docs.bria.ai/image-editing-v2/v2-endpoints/image-expansion.md): This route can be used to expand an image using generative AI. You can decide on the image size of the final result as well as the position and size of the original image relative to the final result. Alternatively, you can define a desired aspect_ratio, and the service will automatically place the input image in the center and expand the canvas to match that ratio. In this way, you can create unique variations of your original image instead of cropping it into different aspect ratios and losing important details. If aspect_ratio is not provided, both original_image_size and original_image_location must be specified.
  * Ensure that the ratio of the input image's foreground or main subject to the canvas area is greater than 15% to achieve optimal results.
  * The canvas size should be at most 5000x5000 pixels in area.
  * The prompt parameter is optional. If not provided or left empty, the service automatically generates a prompt based on the input image.

  - If the input image includes an alpha channel and transparency preservation is enabled, newly generated pixels inherit the alpha values of the nearest original pixels along the same row or column, ensuring smooth transparency transitions.
  - If the input image contains fully transparent pixels along any edge (top, bottom, left, or right), the request is rejected with a 422 error. Expansion is not supported in such cases to avoid undefined transparency behavior.
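When aspect_ratio is not used, the caller computes original_image_size and original_image_location manually. A minimal sketch of centering the original image on a larger canvas, using the parameter names from the description above; representing the values as `[width, height]` and `[x, y]` pairs is an illustrative assumption, and the 5000x5000 area check mirrors the documented canvas limit.

```python
def center_on_canvas(image_size, canvas_size, max_area=5000 * 5000):
    cw, ch = canvas_size
    iw, ih = image_size
    # The canvas area must stay within the documented 5000x5000-pixel limit.
    if cw * ch > max_area:
        raise ValueError("canvas exceeds the 5000x5000 pixel area limit")
    # Offsets that place the original image at the center of the canvas.
    x = (cw - iw) // 2
    y = (ch - ih) // 2
    return {
        "canvas_size": [cw, ch],
        "original_image_size": [iw, ih],
        "original_image_location": [x, y],
    }

params = center_on_canvas((800, 600), (1600, 1200))
print(params["original_image_location"])  # -> [400, 300]
```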
  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Prompt moderation – Validates the provided prompt and rejects requests containing unsafe or prohibited terms before processing starts.
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

### Enhance Image

- [POST /enhance](https://docs.bria.ai/image-editing-v2/v2-endpoints/enhance.md): This endpoint improves the visual quality of an input image by generating richer details, sharper textures, and enhanced clarity. It also supports upscaling to higher resolutions and returns a refined version of the original visual.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

  Unlike the Increase Resolution route, this endpoint regenerates the image to enhance visual richness while preserving essential details.

### Increase Resolution

- [POST /increase_resolution](https://docs.bria.ai/image-editing-v2/v2-endpoints/increase-resolution.md): This route is used to upscale the resolution of any image.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

  The Bria API currently supports only JPEG and PNG files in RGB, RGBA, or CMYK color modes. If the file is of a different type or color mode, status code 415 is returned.
  It is possible to increase the resolution of an image up to a total area of 8192x8192 pixels.

  Unlike the Enhance Image route, this endpoint does not add new details; it increases resolution using a dedicated upscaling method that preserves the original image content without regeneration.

### Crop out foreground

- [POST /crop_foreground](https://docs.bria.ai/image-editing-v2/v2-endpoints/crop.md): The Crop route is used to remove the background from an image and crop tightly around the foreground or remaining region of interest. It supports images both with and without a background.

  This endpoint includes granular content moderation controls to ensure safe usage across all stages of processing:
  - Input image moderation – Scans the uploaded image and stops processing if inappropriate or restricted content is detected.
  - Output image moderation – Evaluates the generated image and blocks the response if it violates safety guidelines.

## v1 endpoints

Endpoints that are part of BRIA API version 1.

### Delayer Image

- [POST /{visual_id}/image_to_psd](https://docs.bria.ai/image-editing-v2/v1-endpoints/image-to-psd.md): The Image to PSD route is used to create a layered PSD file from any image. The image is divided into different layers (depending on the image): a background layer with all identified objects removed, a foreground layer without the background, and a layer for each object. You can also use this route on a modified image by providing the sid from the response of the previously used route.

### Get Masks

- [POST /objects/mask_generator](https://docs.bria.ai/image-editing-v2/v1-endpoints/objects-mask-generator.md): This route is used to generate all possible masks for an image, creating a full segmentation of the image. The response contains a zip file named after the visual_id of the provided image. There are k mask files in the zip, each named with the visual_id and mask_id. The zip file also contains a file whose name ends with "panoptic". It is not an image but a panoptic map, which can be transformed into a regular matrix.
  Each point (x, y) in the image is mapped to the mask that applies to that point: in the panoptic map, each pixel's grayscale value encodes the mask_id. You can display these masks to the user, let them pick one or more, and use the objects/remove route to remove the masked area. To use the objects/remove route with the mask the user selected, provide the mask_id and set the parameter mask_source=generated.

  An example of the contents of the zip:

  ```
  92bf8ce17584de82_panoptic.png
  92bf8ce17584de82_1.png
  92bf8ce17584de82_2.png
  92bf8ce17584de82_3.png
  ...
  92bf8ce17584de82_86.png
  ```

  You can access the SDK that demonstrates how to use this endpoint in a UI in the following link.

  This API endpoint supports content moderation via an optional parameter that can prevent processing if input images contain inappropriate content; the first blocked input image will fail the entire request.

### Get Presenter info

- [GET /{visual_id}/person/info](https://docs.bria.ai/image-editing-v2/v1-endpoints/person-info.md): This route is used to retrieve useful information about the people in a specific visual that was previously uploaded to the database. Additionally, it provides a description of each person within the scene along with the available changes supported by the Bria API. Use this route instead of the main /info route when you are only interested in information and available actions for the people in the image; it saves time by returning only the information relevant to your needs.

### Modify Presenter

- [POST /{visual_id}/create](https://docs.bria.ai/image-editing-v2/v1-endpoints/create.md): This route is used to create a new visual based on the changes requested by the user for a previously uploaded visual. You can also use this route on a modified image by providing the sid from the response of the previously used route. This route returns both the URL and the sid associated with the updated image.
  Before making any modifications, call the /info or person/info route to obtain information on the available presenters in the image, the available modifications, and their oracle values. If you want to apply multiple changes to a single person, you must include all of them in one request. It is not supported to make one request on a person with one change, take the sid from the result, and then use it in another request with a different change. When you want to make changes to multiple people, you can either make one request with all the desired changes for all the relevant people, or make one request with all the desired changes for one person and then use the sid from the response in the request for the other person.
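Returning to the Get Masks panoptic map described earlier: since each pixel's grayscale value encodes the mask_id covering that point, the map can be inspected with plain matrix operations. The sketch below uses a small hand-written matrix in place of the `*_panoptic.png` file (which you would load with an image library such as Pillow); the helper names are illustrative, not part of the API.

```python
def masks_in_panoptic(panoptic_rows):
    # Collect every distinct mask_id present in the panoptic map.
    return sorted({mask_id for row in panoptic_rows for mask_id in row})

def binary_mask_for(panoptic_rows, mask_id):
    # Build a binary mask selecting only the pixels belonging to mask_id,
    # e.g. to preview the user's selection before calling objects/remove
    # with mask_source=generated.
    return [[1 if v == mask_id else 0 for v in row] for row in panoptic_rows]

# Toy 2x3 panoptic map covering three masks.
panoptic = [
    [1, 1, 2],
    [1, 3, 2],
]
print(masks_in_panoptic(panoptic))   # -> [1, 2, 3]
print(binary_mask_for(panoptic, 2))  # -> [[0, 0, 1], [0, 0, 1]]
```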