Meta is reportedly developing two new foundation models, one aimed at AI image and video generation and another focused on text, with an emphasis on coding. The Wall Street Journal reported the roadmap on Dec. 19, 2025, citing remarks shared internally by Meta’s chief AI officer, Alexandr Wang.
For solo professionals and small teams, the practical question is simple: will Meta ship creative and coding models good enough to replace a patchwork of tools? If you want ongoing coverage as these plans firm up, track our AI News archive.
Quick facts
- Company: Meta
- Model codenames: Mango (image and video), Avocado (text)
- Reported timing: First half of 2026
- Where it surfaced: Internal Q&A, reported by The Wall Street Journal
This matters because image and video generation has become a key differentiator for consumer AI products, and Meta has distribution that few companies can match. If Mango lands inside Meta’s apps, it could change which tools teams use for quick visuals, short video clips, and social-first creative.
What’s new in Mango and Avocado
This is not a product launch; it is a preview of Meta’s internal roadmap. The reported plan is to ship two separate models: a multimodal model for image and video, and a text model intended to close gaps in coding and developer workflows.
The reporting also frames these as early outputs of Meta’s reorganized AI effort. Bloomberg has separately described Meta’s push to build more commercially focused systems, including a possible shift away from a purely open release strategy.
- Mango: A model focused on generating and editing images and video
- Avocado: A text model, reportedly aimed at stronger coding performance
- Release window: Reported target is the first half of 2026
- Direction: Work is described as including research toward “world models,” systems that reason over visual input
- Business context: Meta is attempting to narrow the gap with rivals that already ship image and video tooling
Codenames and timelines change. Treat Mango and Avocado as a planning signal, not a guarantee of features, licensing, or launch dates.
Why this update matters
If you run marketing, content, or product for a small team, video generation is moving from novelty to routine. Short clips for ads, explainers, and social posts can be produced faster when a model can follow a style guide, keep characters consistent, and support iterative edits.
If you build internal tools, the more important piece may be Avocado. A stronger coding model can reduce time spent on boilerplate, tests, refactors, and small scripts, especially when paired with clear requirements and code review.
Meta’s advantage is distribution, not just model quality. When a model is placed inside apps used by billions, even modest improvements can shift where teams do basic creative work, such as generating a first draft of a product image or a rough storyboard for a short video.
If you create content
Start listing the repeatable tasks you would automate first: resizing assets, swapping product shots, making short video variants, and generating on-brand thumbnails.
If you ship software
Prepare a small set of real repos and specs you can use to evaluate coding help on bug fixes, test coverage, and safe refactoring, not just toy prompts. A rough spec sketch follows.
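To make that concrete, here is a minimal sketch of what such an evaluation spec might look like, assuming your team is comfortable with Python. The repository URLs, task descriptions, and success criteria are placeholders for illustration, not references to any real project or to Meta’s tooling.

```python
# Hypothetical evaluation spec for trying out a coding model.
# Repos, tasks, and criteria below are placeholders; swap in your own
# projects and review rules before running any trial.
from dataclasses import dataclass, field


@dataclass
class CodingTask:
    repo: str                                  # path or URL to a repo you control
    description: str                           # what you will ask the model to do
    success_criteria: list[str] = field(default_factory=list)


TASKS = [
    CodingTask(
        repo="git@example.com:team/billing-service.git",
        description="Fix the rounding bug in invoice totals and add a regression test",
        success_criteria=[
            "existing tests still pass",
            "new test reproduces the original bug",
            "diff stays under 50 lines",
        ],
    ),
    CodingTask(
        repo="git@example.com:team/internal-dashboard.git",
        description="Refactor the report module without changing its public API",
        success_criteria=[
            "public function signatures unchanged",
            "type checks and linters pass",
        ],
    ),
]

if __name__ == "__main__":
    # Print a review checklist a human can fill in after each trial.
    for task in TASKS:
        print(f"- {task.repo}: {task.description}")
        for criterion in task.success_criteria:
            print(f"    [ ] {criterion}")
```

Keeping the spec in a file like this means the same tasks and criteria can be reused unchanged when a new model becomes available.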
How to get access
There is no public release or signup for Mango or Avocado yet. The reporting points to a first-half 2026 target, but Meta has not published product pages, pricing, or API details for these codenames.
If you want to be ready without waiting for official packaging, use this period to set up a clean evaluation process (a small comparison sketch follows the list):
- Define two or three workflows you want AI to improve, such as weekly creative variants or codebase cleanup.
- Gather example inputs and success criteria, including brand constraints, review rules, and security requirements.
- Plan how you will compare outputs across vendors, since competitors already ship image and video systems today.
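As a starting point for that last step, the sketch below shows one way a small team might log reviewer scores per workflow and vendor so comparisons stay consistent over time. It assumes Python; the workflow names, vendor labels, and scores are placeholders and are not tied to any specific product or API.

```python
# Minimal sketch of a cross-vendor comparison log.
# Workflows, vendors, and scores are placeholders; record your own
# reviewers' ratings against the success criteria you defined above.
from collections import defaultdict
from statistics import mean

# Each entry: (workflow, vendor, score from 1 to 5 given by a human reviewer)
RESULTS = [
    ("weekly creative variants", "vendor_a", 4),
    ("weekly creative variants", "vendor_b", 3),
    ("codebase cleanup",         "vendor_a", 2),
    ("codebase cleanup",         "vendor_b", 4),
]


def summarize(results):
    """Average reviewer scores per (workflow, vendor) pair."""
    buckets = defaultdict(list)
    for workflow, vendor, score in results:
        buckets[(workflow, vendor)].append(score)
    return {key: mean(scores) for key, scores in buckets.items()}


if __name__ == "__main__":
    for (workflow, vendor), avg in sorted(summarize(RESULTS).items()):
        print(f"{workflow:28} {vendor:10} {avg:.1f}")
```

Even a simple log like this makes it easier to see whether a new entrant, Meta’s included, actually beats the tools you already pay for on your own workflows.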
Limits, trade-offs, and gotchas
Because this is a roadmap story, the biggest limitation is uncertainty. Details like licensing, availability by region, and whether the models are open or closed are not confirmed in the reporting.
If Meta positions these models inside consumer apps first, teams may face workflow friction. Consumer surfaces can be fast to try, but harder to integrate into production pipelines that need auditing, versioning, and repeatable outputs.
Finally, image and video generation brings rights and safety questions. Even with strong filters, teams should expect policy boundaries around people, brands, and sensitive topics, and they should plan for human review before publishing.
How it compares to current image and video tools
Meta is entering a market where several competitors already have products in active use. Google, OpenAI, and Adobe have all pushed hard on image and video generation, and some are pairing creation with editing features that fit creative workflows.
The difference is timing and distribution. Mango is described as a 2026 target, while competing tools are available now. Meta’s bet appears to be that a strong model, shipped at scale inside its apps, can win attention quickly once it launches.
FAQ
Is Mango available to try today?
Not based on current reporting. Mango is described as an internal project with a reported first-half 2026 target, with no public beta announced.
Will Avocado replace Meta’s existing Llama models?
It is too early to say. The reporting frames Avocado as a next-generation text model, but Meta has not published a migration plan or naming details.
Should small teams change tools now because of this roadmap?
Usually no. Treat this as a signal to prepare evaluation criteria, not as a reason to pause current workflows or contracts.
What is the most practical step to take right now?
Write down two production tasks you would want Mango or Avocado to handle, and collect real examples you can use later to test quality and consistency.

