# Generate Structured Prompt - Tailored Model (V2)

## Description

Creates a new detailed, machine-readable structured prompt in JSON format, or refines an existing one, using text inputs and a tailored model's visual schema (backbone). This endpoint uses the state-of-the-art Gemini 2.5 Flash VLM bridge: the tailored model's visual schema is provided to Gemini along with the user prompt to generate the structured prompt. It returns ONLY the JSON string and does not generate an image.

Use Cases:

* Control & Auditability: Inspect or programmatically edit the JSON *before* generating an image.
* Consistency: Generate one structured_prompt and pass it to /image/generate/tailored multiple times.
* Hybrid Deployment: Use Bria's VLM bridge via API while hosting the FIBO image model on a private cloud.

### Input Combination Rules

The request body must use exactly one of the following combinations:

* Text Only: `prompt`
* Structured Prompt and Text: `structured_prompt` and `prompt` (Refinement)

### Model Compatibility

- Supports ONLY models with `training_version = 'fibo'`.
- Legacy models are NOT supported.

Endpoint: `POST /structured_prompt/generate/tailored`

## Header parameters:

- `api_token` (string, required)

## Request fields (application/json):

- `tailored_model_id` (string, required) The ID of the tailored model (must have `training_version = 'fibo'`).
- `prompt` (string) Text-based instruction.
- `structured_prompt` (string) JSON string from a previous response, for refinement.
- `seed` (integer) Seed for deterministic generation.
- `sync` (boolean) If false, returns 202. If true, returns 200.
- `prompt_content_moderation` (boolean)

## Response 200 fields (application/json):

- `structured_prompt` (string) The generated structured prompt JSON string.

## Response 202 fields (application/json):

- `request_id` (string)
- `status_url` (string)

## Response 403 fields

## Response 422 fields

## Response 429 fields

## Response 500 fields
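The input-combination rules above can be sketched as a small request-body builder. This is an illustrative helper, not part of any official Bria SDK; the function name and example model IDs are assumptions, while the field names match the request schema documented here.

```python
import json


def build_request_body(tailored_model_id, prompt=None, structured_prompt=None,
                       seed=None, sync=True, prompt_content_moderation=None):
    """Build a body for POST /structured_prompt/generate/tailored.

    Enforces the documented input combinations:
      - Text Only: prompt
      - Refinement: structured_prompt and prompt
    """
    if prompt is None:
        raise ValueError("prompt is required (alone, or together with structured_prompt)")
    if structured_prompt is not None:
        # Refinement mode: structured_prompt must be a valid JSON string,
        # typically taken from a previous 200 response.
        json.loads(structured_prompt)
    body = {"tailored_model_id": tailored_model_id, "prompt": prompt, "sync": sync}
    if structured_prompt is not None:
        body["structured_prompt"] = structured_prompt
    if seed is not None:
        body["seed"] = seed
    if prompt_content_moderation is not None:
        body["prompt_content_moderation"] = prompt_content_moderation
    return body


# Text-only generation request (hypothetical model ID):
gen = build_request_body("my-fibo-model", prompt="a red vintage car at dusk", seed=42)

# Refinement request: pass the structured_prompt from a previous response.
refine = build_request_body("my-fibo-model",
                            prompt="make the car blue",
                            structured_prompt='{"scene": {"subject": "red vintage car"}}')
```

The resulting dict would then be sent as the JSON body of the POST, with the `api_token` header attached; the base URL is account-specific and not shown here.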
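A caller also needs to branch on the two documented response modes: `sync=true` yields a 200 with the structured prompt, while `sync=false` yields a 202 with a polling handle. A minimal sketch, assuming the response payloads contain exactly the fields listed above (the helper name is illustrative):

```python
def handle_response(status_code, payload):
    """Interpret a response per the documented status codes.

    200 -> returns the structured_prompt JSON string (sync mode).
    202 -> returns (request_id, status_url) for polling (async mode).
    Other documented codes (403, 422, 429, 500) are raised as errors.
    """
    if status_code == 200:
        return payload["structured_prompt"]
    if status_code == 202:
        return payload["request_id"], payload["status_url"]
    raise RuntimeError(f"request failed with status {status_code}")


# Sync call (sync=true): the JSON string comes back directly.
sp = handle_response(200, {"structured_prompt": '{"scene": {}}'})

# Async call (sync=false): poll status_url until the result is ready.
rid, url = handle_response(202, {"request_id": "abc123", "status_url": "https://..."})
```

The exact shape of the polled status payload is not specified in this section, so polling logic is left out of the sketch.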