Business & Technology
Luma opens Uni-1.1 API to image developers in Europe
SOFIAH NICHOLE SALIVIO
News Editor
Luma has opened access to the Uni-1.1 API, extending its unified intelligence model to developers.
The API uses a REST interface built on what Luma describes as a single model for text and image generation, rather than separate systems combined at inference. The model is decoder-only and autoregressive, with text and image tokens handled in one sequence.
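The single-sequence idea can be shown with a toy sketch. Everything below is invented for illustration (the boundary markers, token IDs, and function names are not from Luma, which has not published its tokenizer); it only conveys how a decoder-only model can consume text and image tokens in one context:

```python
# Toy illustration of a single decoder-only sequence that mixes text and
# image tokens. All markers and IDs below are hypothetical; Luma has not
# published details of the Uni-1.1 tokenizer.

BOI, EOI = "<img>", "</img>"  # assumed boundary markers around image tokens

def build_sequence(text_tokens, image_tokens):
    """Interleave instruction text and image tokens into one sequence,
    as a unified autoregressive model would consume them."""
    return list(text_tokens) + [BOI] + list(image_tokens) + [EOI]

seq = build_sequence(["a", "red", "cube"], ["v17", "v203", "v5"])
# The model attends over the whole sequence, so the instruction and the
# image share one context rather than being bridged between two systems
# at inference time.
```

Because both modalities live in the same attention window, there is no hand-off point at which an instruction can be misread by a separate image model.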
The release is another product step for Luma as it expands its presence in Europe. The company says it has a 200-person office in London, building on an expansion announced late last year.
Backers include Humain, AMD and Andreessen Horowitz. Luma positions Uni-1.1 as a system that can reason through visual and textual instructions in the same pass before producing an image.
Developer focus
Luma is targeting developers building creative and design workflows. One intended use is handling briefs with multiple constraints, including spatial logic and composition, within a single generation process.
Another feature is continuity across a series of outputs. Developers can submit up to nine reference images to maintain identity, style and composition across campaigns, reducing shifts in visual style between generations.
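A request carrying reference images might be assembled as in the sketch below. The endpoint schema, field names (`model`, `prompt`, `reference_images`) and URL values are assumptions for illustration; only the nine-image limit comes from the announcement:

```python
import json

# The limit of nine reference images is from Luma's announcement;
# the field names and payload shape below are hypothetical.
MAX_REFERENCE_IMAGES = 9

def build_generation_request(prompt, reference_image_urls):
    """Assemble an illustrative JSON request body for a generation call
    that carries reference images for identity and style continuity."""
    if len(reference_image_urls) > MAX_REFERENCE_IMAGES:
        raise ValueError(
            f"at most {MAX_REFERENCE_IMAGES} reference images allowed"
        )
    return json.dumps({
        "model": "uni-1.1",          # assumed model identifier
        "prompt": prompt,
        "reference_images": reference_image_urls,
    })

body = build_generation_request(
    "hero banner in the campaign's established style",
    ["https://example.com/ref1.png", "https://example.com/ref2.png"],
)
```

In a workflow like this, the same set of references would be resubmitted with each new prompt in a campaign, keeping identity and composition stable across generations.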
Luma has also introduced a modify-image endpoint for localised edits. It lets users request changes such as background swaps or lighting adjustments in natural language rather than through more detailed prompt construction.
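A modify-image call could look like the following sketch. The endpoint name is the only detail Luma has stated; the request shape, field names, and example values are assumptions:

```python
import json

def build_modify_request(image_url, instruction):
    """Build an illustrative request body for a localised, natural-language
    edit. The field names here are hypothetical, not Luma's published schema."""
    return json.dumps({
        "model": "uni-1.1",       # assumed model identifier
        "image": image_url,        # image to edit
        "instruction": instruction,  # plain-language edit, no prompt engineering
    })

body = build_modify_request(
    "https://example.com/product.png",
    "warm up the lighting and remove the shadow on the left",
)
```

The point of the design is that the caller describes the change in a sentence, rather than reconstructing a full prompt that re-specifies everything the image should keep.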
According to Luma, the API generates an image in about 31 seconds. It says the service is designed to offer lower latency and lower cost than comparable models, while supporting multiple languages for broader distribution.
Competitive market
The launch comes as developers and creative software groups seek more reliable ways to generate and edit images through APIs. Many existing tools rely on separate language and image models linked together, an approach that can create inconsistency between the instruction stage and the final output.
Luma argues that its architecture addresses that issue by treating text and image tokens as part of one sequence. In practice, the system is designed to resolve structure and creative intent before image generation begins.
Luma also cited benchmark performance in support of the release. It says Uni-1 leads RISEBench in spatial logic and ranks first on Human Preference Elo, though the announcement did not detail the testing conditions.
Industry users
According to Luma, several creative and developer platforms are already using the API in production or have committed to adopting it. It named Adobe, Envato and Freepik among creative industry users, alongside Fal and Krea on the developer side.
Those names suggest Luma is seeking to place its model within both established design software ecosystems and newer AI-native tools, giving it routes into professional creative teams as well as independent developers building image-generation and editing products.
The market for generative AI interfaces is increasingly crowded as model developers try to differentiate on quality, speed, controllability and price. In this release, Luma is emphasising consistency across campaigns, natural-language editing and a unified model design that it says avoids the weaknesses of stitched systems.
Its London operation may also matter as AI companies compete for engineers and customers in Europe while facing closer scrutiny over deployment, copyright concerns and the commercial use of generated media. A larger local presence can support hiring, partnerships and product support in the region.
For now, the significance is that Uni-1.1 is moving from limited availability to broader developer access.