# Product Shot Editing Best Practices

Product Shot Editing is built to automate “commerce-grade” workflows: cutouts, packshots, consistent shadows, and lifestyle scenes (via text or reference image).

## The Big Picture: Pipeline Mindset

Before using the specific endpoints, it helps to understand the recommended workflow. Think in building blocks:

1. **Cutout** (creates a clean alpha)
2. **Shadow** (optional: adds consistent grounding)
3. **Output** (creates a Packshot or Lifestyle scene)


**Q: Why is this pipeline important?**

The `/shadow` endpoint is explicitly designed to work with `/cutout`, `/packshot`, or the Lifestyle endpoints. If your input isn’t already a clean cutout with a transparent background, you must use `/cutout` first to ensure high-quality edges in the final render.

> **ℹ️ ASYNCHRONOUS ENDPOINTS:**
>
> **Q: When should I use async vs sync?**
>
> The general recommendation is `sync: false` (async) for optimal performance. Async is **required** when using `placement_type: automatic` or when `num_results > 1`. Use `sync: true` only for simple single-result requests where you need an immediate response (like quick prototyping).
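The three building blocks above can be sketched as a chain of requests. A minimal sketch in Python: the endpoint paths and the `sync` flag come from this guide, while the base URL, auth header, and the idea of threading each result URL into the next step are assumptions to illustrate the flow (check your Bria credentials and the API reference for the real values).

```python
import json

# Hypothetical values -- the real base URL and auth header come from your Bria account.
BASE_URL = "https://engine.prod.bria-api.com/v2"
HEADERS = {"api_token": "<YOUR_API_TOKEN>", "Content-Type": "application/json"}

def pipeline_requests(raw_image_url: str) -> list[tuple[str, dict]]:
    """Build the three requests of the cutout -> shadow -> packshot pipeline.

    Each step would POST its payload and feed the resulting image URL
    into the next step; async (sync=False) is the recommended default.
    """
    return [
        (f"{BASE_URL}/product/cutout",   {"image_url": raw_image_url, "sync": False}),
        (f"{BASE_URL}/product/shadow",   {"image_url": "<cutout result URL>", "sync": False}),
        (f"{BASE_URL}/product/packshot", {"image_url": "<shadow result URL>", "sync": False}),
    ]

for url, payload in pipeline_requests("https://example.com/raw-product.jpg"):
    print(url, json.dumps(payload))
```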


## Catalog Workflows

### POST `/v2/product/cutout`

Use this whenever you need a clean product foreground (alpha channel) for reliable downstream composition. The output image is automatically cropped around the subject.

**Q: What makes cutouts look better downstream?**

Best practices for input photos:

- High contrast between product and background (even if the background is messy).
- Avoid motion blur on edges (handles, hair-like fibers, transparent glass rims).
- Ensure the product is fully inside the frame (no cropped edges).

**Q: When should I force background removal even if there’s already an alpha channel?**

Use `force_rmbg=true` if the input alpha channel is low-quality (haloing / jagged edges) or if the alpha includes unwanted transparent regions.
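A cutout request with forced background removal might look like this sketch; `force_rmbg` and `sync` are named in this guide, while the other payload field names are assumptions.

```python
def cutout_payload(image_url: str, alpha_is_unreliable: bool = False) -> dict:
    """Build a /v2/product/cutout payload (field names partly assumed).

    force_rmbg=True re-runs background removal even when the input
    already carries an alpha channel (useful for haloed/jagged alphas).
    """
    payload = {"image_url": image_url, "sync": False}
    if alpha_is_unreliable:
        payload["force_rmbg"] = True
    return payload

print(cutout_payload("https://example.com/product-with-bad-alpha.png", True))
```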
### POST `/v2/product/shadow`

Add a shadow when you want the product to feel grounded and consistent across a catalog. **Requires a transparent-background cutout (alpha) as input.**

**Q: “Regular” vs “float” shadow: which should users choose?**

Use `type="regular"` for products that naturally sit on a surface (bottles, boxes, shoes). Use `type="float"` for “hover” aesthetics, beauty ads, or when you want an elliptical studio shadow.

**Q: What are the 3 most important knobs for realism?**

- **shadow_offset:** Keep the direction consistent across your whole catalog (e.g., the `[0, 15]` default).
- **shadow_blur:** Softer edges look more studio-like; too sharp looks cut out.
- **shadow_intensity:** Keep it modest; over-dark shadows look pasted on.
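Putting the three knobs together, a shadow request could be sketched as follows. The knob names (`type`, `shadow_offset`, `shadow_blur`, `shadow_intensity`) come from this guide; the concrete blur and intensity values are illustrative assumptions, not recommended defaults.

```python
def shadow_payload(cutout_url: str, floating: bool = False) -> dict:
    """Build a /v2/product/shadow payload from a transparent-background cutout."""
    return {
        "image_url": cutout_url,
        "type": "float" if floating else "regular",
        "shadow_offset": [0, 15],   # keep one direction across the whole catalog
        "shadow_blur": 20,          # softer = more studio-like (illustrative value)
        "shadow_intensity": 60,     # keep modest; over-dark looks pasted (illustrative)
        "sync": False,
    }

print(shadow_payload("https://example.com/cutout.png"))
```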
> **💡 DESIGNER TIP:**
>
> Avoid adding a standalone shadow if your next step is `/image/edit/product/integrate` or `/product/lifestyle_shot_by_text`. Those endpoints handle blending and shadowing natively.


### POST `/v2/product/packshot`

Use this when you need a “store-ready” catalog asset. The endpoint outputs a professional-standard 2000×2000 image with best-practice product sizing and placement.

**Q: What is the best workflow to create a professional packshot?**

Use this three-step pipeline:

1. `/product/cutout`: isolate the product with a clean alpha.
2. `/product/shadow`: add a realistic, consistent shadow.
3. `/product/packshot`: output the final asset.

Skipping the cutout risks poor edge quality. Skipping the shadow produces a floating, ungrounded look.

**Q: How do users get the cleanest packshots?**

Start from a clean cutout, avoid extreme perspective distortion (like wide-angle phone shots), and use `background_color` intentionally (e.g., `#FFFFFF` for marketplaces, a brand hex code for hero tiles, or transparent for UI compositing).
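The background choices above can be expressed as a small helper. `background_color` is the parameter named in this guide; the brand hex constant is hypothetical, and omitting the field to get a transparent result is an assumption of this sketch.

```python
from typing import Optional

MARKETPLACE_WHITE = "#FFFFFF"
BRAND_HERO = "#0B3D91"  # hypothetical brand hex for hero tiles

def packshot_payload(shadowed_cutout_url: str,
                     background_color: Optional[str]) -> dict:
    """Build a /v2/product/packshot payload.

    Passing None omits background_color entirely (assumed to yield a
    transparent result for UI compositing).
    """
    payload = {"image_url": shadowed_cutout_url, "sync": False}
    if background_color is not None:
        payload["background_color"] = background_color
    return payload

print(packshot_payload("https://example.com/shadowed.png", MARKETPLACE_WHITE))
```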
## Lifestyle Workflows

### POST `/v2/product/lifestyle_shot_by_text`

Use when you want Bria to generate a fitting lifestyle background around your product based on a text description. **Crucially, it preserves full product integrity**—the product itself isn’t regenerated, keeping labels, textures, and logos exactly as-is.

**Q: What's the best workflow for exploring and finalizing a lifestyle shot?**

Use a three-phase approach:

- **Phase A (Explore):** Use `placement_type: automatic`, `high_control` mode, and `optimize_description: true` to generate multiple layout and scene variants.
- **Phase B (Lock composition):** Pick the best placement and switch to `manual_placement` or `manual_padding` for precision.
- **Phase C (Finalize):** Lock the placement and description, then vary `num_results` to generate final alternatives.
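The three phases can be sketched as payload variations. `placement_type`, `optimize_description`, and `num_results` are named in this guide; the mode parameter name, the manual-placement field name, and its value format are assumptions for illustration.

```python
def lifestyle_text_payload(cutout_url: str, scene: str, phase: str) -> dict:
    """Sketch the explore -> lock -> finalize flow for lifestyle_shot_by_text."""
    base = {"image_url": cutout_url, "scene_description": scene, "sync": False}
    if phase == "explore":      # Phase A: many layout and scene variants
        base.update({"placement_type": "automatic",
                     "mode": "high_control",  # assumed parameter name for the mode
                     "optimize_description": True,
                     "num_results": 4})
    elif phase == "lock":       # Phase B: precise, repeatable composition
        base.update({"placement_type": "manual_placement",
                     "manual_placement_selection": "right_center"})  # assumed field name
    elif phase == "finalize":   # Phase C: locked composition, more alternatives
        base.update({"placement_type": "manual_placement",
                     "manual_placement_selection": "right_center",
                     "num_results": 8})
    return base

print(lifestyle_text_payload("https://example.com/cutout.png",
                             "A bottle on a marble shelf", "explore"))
```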
**Q: What's the single most important rule for writing a `scene_description`?**

Write it like a photography brief: **Product + Scene + Connection + Lighting + Palette + Framing**.

The *connection* is especially important. Describe how the product physically relates to the surface (e.g., "resting on", "surrounded by"). *Example: "A dark glass perfume bottle resting on a marble bathroom shelf, surrounded by soft eucalyptus sprigs. Soft diffused morning light from the left..."*
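The brief formula can be turned into a small assembly helper. The six slots mirror the Product + Scene + Connection + Lighting + Palette + Framing rule above; the sentence template itself is an illustrative assumption.

```python
def scene_description(product: str, scene: str, connection: str,
                      lighting: str, palette: str, framing: str) -> str:
    """Assemble a photography-brief style scene_description string."""
    return (f"{product} {connection} {scene}. "
            f"{lighting}. {palette}. {framing}.")

brief = scene_description(
    product="A dark glass perfume bottle",
    connection="resting on",
    scene="a marble bathroom shelf, surrounded by soft eucalyptus sprigs",
    lighting="Soft diffused morning light from the left",
    palette="Muted greens and warm whites",
    framing="Eye-level close-up with shallow depth of field",
)
print(brief)
```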
**Q: Which background generation mode should users pick?**

Use **high_control** (~90–110 words) for polished, creative marketing results. Use **fast** or **base** (~50–60 words) when high-throughput speed and strict product preservation are the top priorities.
#### Understanding Placement Types

The `placement_type` controls both the product position and the canvas size/whitespace.

**Q: How should users choose the right placement mode?**

- **original:** Keep the product framing exactly as-is.
- **automatic:** Explore multiple layout options at once.
- **manual_placement:** Use predictable marketing compositions (Rule of Thirds).
- **manual_padding:** Control exact whitespace around the product.
- **custom_coordinates:** Match exact design templates.
- **automatic_aspect_ratio:** Generate assets for a specific ratio (e.g., 16:9, 9:16).

> **💡 RULE OF THIRDS PLACEMENT:**
>
> When using `manual_placement`, selecting `left_center`, `right_center`, `upper_center`, or `bottom_center` automatically aligns the product with the Rule of Thirds grid. This is highly recommended for ads, social media, and banners that require copy space.


**Q: Which size parameter should I use?**

It depends on your placement type:

- **original:** No size parameter needed.
- **automatic, manual_placement, custom_coordinates:** Use `shot_size` (e.g., `[1000, 1000]`).
- **automatic_aspect_ratio:** Use `aspect_ratio`.
- **manual_padding:** Size is implicitly defined by your padding values `[left, right, top, bottom]`.
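The placement-to-size mapping above can be captured in a helper. `shot_size` and `aspect_ratio` are from this guide; the padding field name and the decision to raise on unknown modes are assumptions of this sketch, not API behaviour.

```python
def size_fields(placement_type: str, **kwargs) -> dict:
    """Map a placement_type to the size field(s) it expects."""
    if placement_type == "original":
        return {}                                        # no size parameter needed
    if placement_type in ("automatic", "manual_placement", "custom_coordinates"):
        return {"shot_size": kwargs.get("shot_size", [1000, 1000])}
    if placement_type == "automatic_aspect_ratio":
        return {"aspect_ratio": kwargs["aspect_ratio"]}  # e.g. "16:9"
    if placement_type == "manual_padding":
        # assumed field name; order is [left, right, top, bottom]
        return {"padding_values": kwargs["padding_values"]}
    raise ValueError(f"unknown placement_type: {placement_type}")

print(size_fields("automatic"))
print(size_fields("automatic_aspect_ratio", aspect_ratio="9:16"))
```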
### POST `/v2/product/lifestyle_shot_by_image`

Use when you have a reference image that defines the scene (room, surface, visual style) and want the result to inherit that look rather than describing it in words.

**Q: How does this differ from Lifestyle by Text?**

There is no `scene_description` and there are no generation modes (fast/high_control). Instead, it uses `ref_image_influence` to control similarity (0.0 to 1.0).

**Q: What are the best practices for reference strength?**

If you want the exact positioning and place, start high (0.75–0.9). If you want an abstract style match without copying literal objects, start mid (0.5–0.75).

**Q: What does `enhance_ref_image` do?**

When true, it refines lighting, shadows, and textures for authenticity. However, if it introduces unwanted artifacts or hallucinates details, set it to false.
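Combining the two knobs, a by-image request might be sketched as follows. `ref_image_influence` and `enhance_ref_image` come from this guide; the image and reference field names are assumptions.

```python
def lifestyle_image_payload(cutout_url: str, ref_url: str,
                            literal_copy: bool = True) -> dict:
    """Build a /v2/product/lifestyle_shot_by_image payload (names partly assumed)."""
    return {
        "image_url": cutout_url,
        "ref_image_url": ref_url,  # assumed field name
        # high (0.75-0.9) to copy positioning, mid (0.5-0.75) for style-only matching
        "ref_image_influence": 0.85 if literal_copy else 0.6,
        "enhance_ref_image": True,  # set False if it hallucinates details
        "sync": False,
    }

print(lifestyle_image_payload("https://example.com/cutout.png",
                              "https://example.com/reference-room.jpg"))
```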
### POST `/v2/image/edit/product/integrate`

Embed one or more products into a predefined scene at exact user-defined coordinates, while matching lighting, perspective, and aesthetics.

**Q: How do users get realistic integration?**

Make sure the coordinates match the scene perspective (e.g., product size must be consistent with table depth). Iterate with a fixed seed until the placement is correct, then vary the seed for alternative harmonizations.
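The fix-the-seed-then-vary workflow can be sketched as a batch of payloads. The coordinate box format and field names here are assumptions; only the endpoint and the seed-iteration idea come from this guide.

```python
def integrate_payloads(scene_url: str, product_url: str,
                       box: list, seeds: list) -> list:
    """Batch of /image/edit/product/integrate payloads over several seeds."""
    return [
        {
            "scene_image_url": scene_url,      # assumed field name
            "product_image_url": product_url,  # assumed field name
            "coordinates": box,                # e.g. [x, y, width, height] (assumed)
            "seed": seed,
            "sync": False,
        }
        for seed in seeds
    ]

# Lock placement with one seed first, then vary seeds for alternative harmonizations.
batch = integrate_payloads("https://example.com/scene.jpg",
                           "https://example.com/cutout.png",
                           [420, 610, 300, 360], seeds=[7, 8, 9])
print(len(batch))
```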
## Advanced Workflows (LLMs & Vision Models)

When the workflow needs extra automation upstream (prompting, layout planning, QA) rather than just better pixels, consider integrating other models alongside Bria:

**Q: How can LLMs improve Bria workflows?**

- **Prompt Generation:** Use an LLM to generate 10–30 `scene_description` variants from a brand brief, then run Bria in `high_control` mode. *(Note: Bria already includes `optimize_description`, built with Meta Llama 3.)*
- **Automatic Placement:** Use a vision model to detect surfaces/planes in an image, then feed those exact coordinates to `/image/edit/product/integrate`.
- **Batch QA:** Use a vision model to score generated outputs for artifacts or brand rules before publishing.
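The prompt-generation step can be sketched without committing to any particular LLM API: a deterministic stand-in that enumerates `scene_description` variants from brief fragments. In practice an LLM would propose the scene and lighting fragments; the template and fragment lists here are illustrative assumptions.

```python
import itertools

def brief_variants(product: str, scenes: list, lightings: list,
                   limit: int = 30) -> list:
    """Enumerate scene_description variants (capped at `limit`) from a brand brief."""
    combos = itertools.product(scenes, lightings)
    return [f"{product} resting on {scene}. {lighting}."
            for scene, lighting in itertools.islice(combos, limit)]

variants = brief_variants(
    "A matte ceramic coffee mug",
    scenes=["a rustic oak table", "a marble kitchen counter", "a linen picnic blanket"],
    lightings=["Warm golden-hour light", "Soft diffused window light"],
)
print(len(variants))
```

Each variant would then be submitted as a `scene_description` in `high_control` mode.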