The VGL Paradigm: Beyond Prompt Engineering

Generative AI has historically operated as a "black box": you provide a text prompt, and the model returns an image. While this enables rapid creative iteration, it often lacks the precision, consistency, and reproducibility required for enterprise workflows.

Bria introduces a paradigm shift with Visual GenAI Language (VGL). This section explains the fundamental concepts behind VGL and how it transforms visual generation from a game of chance into a structured engineering discipline.


The Shift: From Prompts to Parameters

To achieve full control over visual generation—and to enable true automation—developers and enterprises need a language that goes beyond natural language prompts. They need a structured, explicit representation where every parameter is defined, independent, and readable by both humans and machines.

VGL is that language. It represents a move away from ambiguous prompt-based generation toward structured specification.

  • Prompt-Based Generation: Relies on natural language. It is fast for creative brainstorming, but the model's interpretation varies between runs, making it difficult to reproduce specific results.
  • VGL-Based Generation: Uses a structured specification. It is built for precision, offering reproducible, auditable outputs and the ability to systematically modify specific parameters.

The "White Box" Architecture

Most generative models are opaque; you have no visibility into how the model interpreted your prompt or made its decisions. VGL changes this by "opening the black box".

The Fibo model family is trained natively on VGL. It is not a wrapper or a post-processing layer—VGL is the internal representation the models use. When you generate an image, you receive the complete VGL specification as part of the output.

This transparency means every visual attribute is exposed, including:

  • Objects: Positions, shapes, and relationships.
  • Lighting: Conditions, directions, and shadows.
  • Camera: Angles, depth of field, and lens choices.
  • Aesthetics: Mood, color palettes, and artistic styles.
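
To make this concrete, the sketch below shows one way such a returned specification could be organized. It is illustrative only: every field name is a placeholder rather than the official VGL schema, and the real request and response formats are documented in the API Reference.

```python
# Hypothetical VGL-style specification expressed as a Python dict
# (mirroring the JSON the API returns). All field names are placeholders.
vgl_spec = {
    "objects": [
        {
            "id": "person_1",
            "category": "person",
            "shape": "standing figure",
            "position": {"x": 512, "y": 640},            # pixel coordinates
            "relationships": ["in front of background_1"],
        }
    ],
    "lighting": {
        "condition": "golden hour",
        "direction": "from the left",
        "shadows": "long, soft",
    },
    "camera": {
        "angle": "eye level",
        "focal_length_mm": 85,
        "depth_of_field": "shallow",
    },
    "aesthetics": {
        "mood": "warm",
        "color_palette": [[217, 142, 4], [46, 64, 87]],  # RGB values
        "style": "editorial photography",
    },
}
```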

Core Principles of VGL

VGL is designed to give you ownership over the AI's output through specific structural properties:

  • Explicit: Every visual attribute is fully declared. Nothing is implied or hidden. If a specific lighting condition is used, it is stated in the VGL.
  • Disentangled: You can modify one parameter without affecting others. For example, you can change the lighting direction without accidentally changing the object's texture or the background setting.
  • Physically-Grounded: Parameters map to real-world dimensions. VGL deals in pixel-level space, camera optics, and physical lighting conditions rather than abstract concepts.
  • Natively-Typed: Attributes use their natural data representations—semantics are words, quantities are numbers, positions are coordinates, and colors are RGB values.
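
Continuing with the hypothetical vgl_spec dict from the sketch above, a few lines of Python illustrate what these principles mean in practice (again, all field names are placeholders, not the official schema):

```python
import copy

# Disentangled: derive a variant and change exactly one parameter;
# every other attribute of the scene is left untouched.
variant = copy.deepcopy(vgl_spec)
variant["lighting"]["direction"] = "from the right"

# Physically grounded and natively typed: positions are pixel coordinates,
# quantities are numbers, and colors are RGB values rather than adjectives.
variant["objects"][0]["position"] = {"x": 300, "y": 640}
variant["camera"]["focal_length_mm"] = 50
variant["aesthetics"]["color_palette"][0] = [27, 79, 114]
```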

Why This Matters for Developers and Enterprises

VGL transforms complex image manipulation into structured data operations. By treating images as data structures, VGL enables workflows that were previously impossible with standard prompting.

1. Precision Editing

VGL transforms what would be difficult pixel-level edits into simple structured text edits. Because the system understands the scene structure (e.g., knowing that a "person" is an addressable object with specific clothing properties), you can programmatically execute logic like "change shirt colors" without manual masking or reprompting.
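
As a rough sketch of what such a data operation could look like, the helper below walks the hypothetical vgl_spec dict from earlier and recolors every person's shirt. The objects, category, and clothing fields are placeholders, not the official schema:

```python
def recolor_shirts(spec: dict, rgb: list[int]) -> dict:
    """Set the shirt color of every person object in a VGL-style spec.

    Illustrative only: the field names are placeholders, not the official schema.
    """
    for obj in spec.get("objects", []):
        if obj.get("category") == "person":
            obj.setdefault("clothing", {})["shirt_color"] = rgb
    return spec

# "Change shirt colors" becomes a structured edit, with no masking or reprompting.
edited_spec = recolor_shirts(vgl_spec, [200, 30, 30])
```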

2. Reproducibility and Auditability

In enterprise environments, knowing why an image looks the way it does is critical. With VGL, compliance teams can inspect every parameter used to generate an asset. Furthermore, the same VGL specification will produce the same output, allowing you to share specifications across teams or store them for future use.
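
One possible audit pattern, sketched below, is to serialize the specification deterministically and fingerprint it, so the exact parameters behind any asset can be stored, shared, and verified later. This is a generic workflow suggestion rather than a built-in Bria feature, and it reuses the hypothetical vgl_spec dict from earlier:

```python
import hashlib
import json

def fingerprint_spec(spec: dict) -> str:
    """Return a stable SHA-256 fingerprint of a VGL-style spec for audit trails."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The stored spec can later be re-run to reproduce the same asset, and the
# fingerprint ties each generated image back to its exact parameters.
digest = fingerprint_spec(vgl_spec)
with open(f"{digest}.vgl.json", "w") as f:
    json.dump(vgl_spec, f, sort_keys=True, indent=2)
```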

3. True Automation

Because VGL is machine-readable JSON, AI agents and programmatic systems can read, validate, transform, and chain VGL specifications without human interpretation. This allows for the creation of "Agentic Workflows" where systems autonomously analyze and modify visuals based on logic.
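
A minimal sketch of one such programmatic step is shown below. It assumes a hypothetical top-level layout (objects, lighting, camera, aesthetics) and a hypothetical file name; the real schema is defined in the API Reference:

```python
import json

EXPECTED_SECTIONS = {"objects", "lighting", "camera", "aesthetics"}  # placeholder layout

def load_and_validate(path: str) -> dict:
    """Load a VGL-style JSON file and check that the sections this pipeline relies on exist."""
    with open(path) as f:
        spec = json.load(f)
    missing = EXPECTED_SECTIONS - spec.keys()
    if missing:
        raise ValueError(f"spec is missing sections: {sorted(missing)}")
    return spec

def to_night_scene(spec: dict) -> dict:
    """One transformation step in a chain: switch the scene to night lighting."""
    spec["lighting"]["condition"] = "night, moonlit"
    return spec

# Read, validate, transform, and hand off to the next step; no human in the loop.
night_spec = to_night_scene(load_and_validate("hero_shot.vgl.json"))
```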

4. Brand Consistency

Instead of asking a model to "make it look like our brand" (which is abstract and unreliable), VGL allows you to encode brand guidelines as concrete parameters. You can lock specific color palettes, lighting conditions, and moods to ensure every generated asset is compliant by definition.
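
A hedged sketch of this idea: encode the guidelines as concrete parameters once, then merge them into every specification before generation. Field names and values are placeholders, and the merge strategy is only one possible approach:

```python
# Hypothetical brand profile expressed as concrete VGL-style parameters.
BRAND_GUIDELINES = {
    "aesthetics": {
        "color_palette": [[15, 76, 129], [255, 255, 255]],  # locked brand colors (RGB)
        "mood": "clean, confident",
    },
    "lighting": {"condition": "soft studio light"},
}

def apply_brand(spec: dict, brand: dict) -> dict:
    """Overwrite brand-controlled parameters so every asset is compliant by construction."""
    for section, values in brand.items():
        spec.setdefault(section, {}).update(values)
    return spec

# Reusing the hypothetical vgl_spec dict sketched earlier.
branded_spec = apply_brand(vgl_spec, BRAND_GUIDELINES)
```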


The Fibo Model Family

VGL is the universal language spoken by the entire Fibo model family. Whether you are using Fibo Generation for high-fidelity hero images, Fibo Lite for high-volume production, or Fibo Edit for refinement, the VGL specification remains portable and consistent.

A VGL file created in one part of your pipeline can be executed by a different model in another part without translation, ensuring a seamless flow from concept to production.

Ready to implement? Proceed to the API Reference to see how to structure VGL requests.