According to Digital Trends, OpenAI has rolled out a major upgrade to the image generation capabilities inside ChatGPT, powered by a new flagship model. This follows last week’s upgrade to the GPT-5.2 model for the chatbot itself. The company claims the new ChatGPT Images model can generate pictures up to four times faster than before and deliver far more accurate results that closely follow user instructions. It also features a new dedicated workspace with preset filters and prompts. However, OpenAI admits the model has regressed at generating some art styles and struggles to maintain the exact identity of people in images with many subjects.
Speed is nice, but reliability is key
Look, four-times-faster image generation is a fantastic headline. No one likes waiting for an AI to dream up a picture. But here’s the thing: speed was never the biggest problem with these tools. It was reliability. The promise that the model now “follows instructions more reliably” is the real story, if it’s true. Getting an AI to actually give you what you asked for, and not a weird, mutated version of it, has been the eternal struggle.
So the improved text rendering and the ability to handle denser, smaller text? That’s a huge deal. That’s been a comical weak spot for almost every image AI. If this model can reliably put legible text on a sign or a t-shirt, that’s a meaningful step from a novelty toward a usable tool. The dedicated workspace is also a smart move. It signals that OpenAI sees this as a core product, not just a fun add-on. Giving people a sandbox with presets lowers the barrier to entry, which is crucial for adoption.
The weird regressions and limitations
Now, let’s talk about the caveats, because they’re fascinating. OpenAI openly states the model’s ability to generate “a few art styles” has regressed. Regressed! That’s a rare and honest admission in the world of AI, where everything is usually framed as relentless progress. What styles did it get worse at? Is it struggling with watercolor? Pixel art? They don’t say. But it tells us this new model isn’t a simple, across-the-board upgrade. It’s a trade-off.
And the limitation about having “difficulty maintaining the exact identity of every person” in a crowd scene? That’s a massive red flag for a lot of potential professional uses. Think about generating a team photo, a family portrait, or any scene with multiple specific characters. If the AI can’t keep faces consistent, its utility plummets. It’s an admission that the model is still better at generating *a* person than *the* person you need.
Basically, this update makes the easy stuff faster and better, but the hard problems—true consistency, specific stylistic control—are still very much hard problems. It’s a leap, sure, but the landing is a bit wobbly. You can check out the official announcement over on OpenAI’s blog for their full spin.
