Timeline Of Meta AI (Video Editing Released)

Explore Meta's AI journey with this comprehensive timeline. Discover its groundbreaking advancements in audio generation, image and video editing, and more.

Meta AI, the powerhouse behind Facebook’s innovative AI projects, has been on a roll.

This timeline will look at some of the major milestones on Meta’s AI journey, from deciphering brain signals to breakthroughs in areas like computer vision, natural language processing, and multimodal AI.

Let’s get started.

| Date | Event | Details |
| --- | --- | --- |
| Jun. 2025 | Meta AI App | You can now edit videos with Meta AI. |
| Apr. 2025 | Llama API | Meta released the Llama API, which provides one-click API key creation and interactive playgrounds for exploring different Llama models. Waitlist |
| Apr. 2025 | Meta AI App | Meta AI is now an app! 📱 Experience voice conversations powered by Llama 4, image generation, and more. It also serves as the companion app for Ray-Ban Meta glasses. Seamless AI across devices! Source |
| Apr. 2025 | Llama 4 | Meta released the first models in the Llama 4 herd: Llama 4 Scout (a 17-billion-active-parameter model with 16 experts) and Llama 4 Maverick (a 17-billion-active-parameter model with 128 experts). Source |
| Dec. 2024 | Llama 3.3 | Meta unexpectedly dropped Llama 3.3, a 70B model that is roughly 25x cheaper than GPT-4o. Source |
| Oct. 2024 | NotebookLlama | An open-source version of NotebookLM. Source |
| Oct. 2024 | Llama | Meta released its first lightweight quantized Llama models, small and performant enough to run on many popular mobile devices. Source |
| Oct. 2024 | Spirit LM | Meta announced Spirit LM, an open-source language model for seamless speech and text integration. Source |
| Oct. 2024 | Movie Gen | Meta's new video generation model. Simple text inputs can produce custom videos and sounds, edit existing videos, or transform a personal image into a unique video. Source |
| Sep. 2024 | Llama 3.2 | Meta released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B) that fit on edge and mobile devices, in both pre-trained and instruction-tuned versions (see the first sketch below the table). Source |
| Aug. 2024 | Sapiens | Meta presented Sapiens, a family of models for human-centric vision tasks. Source |
| Jul. 2024 | SAM 2 | Meta introduced the Segment Anything Model 2 (SAM 2), the first unified model that can identify which pixels belong to a target object in an image or video. Source |
| Jul. 2024 | AI Studio | A place for people to create, share, and discover AIs to chat with, no tech skills required. Source |
| Jul. 2024 | Llama 3.1 | Meta released Llama 3.1, its latest instruction-tuned model, available in 8B, 70B, and 405B versions. Source |
| Jul. 2024 | Multi-Token Prediction | New research from Meta FAIR replaces next-token prediction with multi-token prediction, which can deliver substantially better code-generation performance with the exact same training budget and data. Source |
| Jul. 2024 | JASCO | Meta JASCO (Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation) accepts various conditioning inputs, such as specific chords or beats, for finer control over generated music. Source |
| Jul. 2024 | VLMs | Meta FAIR shared research on vision-language models (VLMs), a challenging area with a lot of potential. Source |
| Jul. 2024 | Chameleon | A family of models that can take any combination of text and images as input and output any combination of text and images, with a single unified architecture capable of both encoding and decoding. Source |
| Apr. 2024 | Chatbot | A new stand-alone Meta AI chatbot available on the web. Source |
| Apr. 2024 | Llama 3 | Meta introduced Llama 3, the most capable openly available LLM to date. Source |
| Apr. 2024 | MTIA | Meta announced its next-generation Meta Training and Inference Accelerator. Source |
| Feb. 2024 | V-JEPA | Meta's AI researchers created the Video Joint Embedding Predictive Architecture (V-JEPA). Unlike large language models (LLMs), it learns from video rather than text. Source |
| Feb. 2024 | Labeling AI-Generated Images | Meta is working with industry partners on common technical standards for identifying AI content, including video and audio. Source |
| Feb. 2024 | AudioSeal | A method for localized speech watermarking with state-of-the-art detector speed and no compromise in watermarking robustness. Source |
| Dec. 2023 | Seamless Communication | A family of AI research models that enable more natural and authentic communication across languages. Source |
| Nov. 2023 | Audiobox | Meta's new foundation research model for audio generation. It can generate voices and sound effects from a combination of voice inputs and natural-language text prompts. Source |
| Nov. 2023 | Emu Edit | Precise image editing via recognition and generation tasks. Source |
| Nov. 2023 | Emu Video | A simple factorized method for high-quality video generation. Source |
| Oct. 2023 | Brain Decoding | Using MEG, this AI system can decode the unfolding of visual representations in the brain with unprecedented temporal resolution. Source |
| Jul. 2023 | Llama 2 | The next generation of Meta's open-source large language model, free for research and commercial use. Source |
| Jun. 2023 | I-JEPA | The first AI model based on Yann LeCun's vision for more human-like AI. Source |
| May 2023 | ImageBind | The first AI model capable of binding data from six modalities at once, without the need for explicit supervision. Source |
| Apr. 2023 | DINOv2 | A new method for training high-performance computer vision models. Source |
| Apr. 2023 | SAM | Segment Anything: a step toward the first foundation model for image segmentation (see the second sketch below the table). Source |
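
Several of the Llama entries above are open-weights releases you can run yourself. Here is a minimal sketch of loading the lightweight Llama 3.2 1B instruct model with the Hugging Face transformers library; the model ID, prompt, and generation settings are illustrative assumptions rather than Meta's official quickstart, and the gated checkpoint requires accepting Meta's license on Hugging Face first.

```python
# Minimal sketch: run a lightweight Llama 3.2 model locally via transformers.
# Assumption: you have accepted Meta's license for the gated checkpoint and
# are logged in to Hugging Face (e.g. via `huggingface-cli login`).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # illustrative model ID
)

# Recent transformers versions accept chat-style message lists directly.
messages = [{"role": "user", "content": "In one sentence, what is Llama 3.2?"}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"])
```

The 1B checkpoint is the most practical choice on a laptop; the larger text and vision variants need correspondingly more memory.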

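The SAM and SAM 2 rows describe promptable segmentation: you supply an image plus a prompt (a point, box, or rough mask) and the model returns masks for the object you indicated. Below is a minimal sketch using the original segment-anything package from the Apr. 2023 release; the checkpoint path, the blank stand-in image, and the click coordinates are placeholders, and SAM 2 extends the same idea to video through its own API.

```python
# Minimal sketch: point-prompted segmentation with the original SAM.
# Assumption: the `segment-anything` package is installed and the ViT-H
# checkpoint has been downloaded to the path below.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Stand-in for a real RGB image (H x W x 3, uint8).
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# One foreground click (label 1); SAM returns candidate masks around it.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # e.g. (3, 512, 512) plus per-mask confidence
```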