Timeline Of Meta AI

Explore Meta's AI journey with this comprehensive timeline. Discover its groundbreaking advancements in audio generation, image and video editing, and more.

Meta AI, the powerhouse behind Facebook’s innovative AI projects, has been on a roll.

This timeline will look at some of the major milestones on Meta’s AI journey, from deciphering brain signals to breakthroughs in areas like computer vision, natural language processing, and multimodal AI.

Let’s get started.

Last Updated: Feb 16, 2024

| Date | Event | Details |
|------|-------|---------|
| Feb 2024 | V-JEPA | Meta's AI researchers introduced the Video Joint Embedding Predictive Architecture (V-JEPA), a model that, unlike large language models (LLMs), learns from video rather than text. Source |
| Feb 2024 | Labeling AI-Generated Images | Meta is working with industry partners on common technical standards for identifying AI-generated content, including video and audio. Source |
| Feb 2024 | AudioSeal | A method for localized speech watermarking that delivers state-of-the-art detector speed without compromising watermarking robustness. Source |
| Dec 2023 | Seamless Communication | A family of AI research models that enable more natural and authentic communication across languages. Source |
| Nov 2023 | Audiobox | Meta's new foundation research model for audio generation. It can generate voices and sound effects from a combination of voice inputs and natural-language text prompts. Source |
| Nov 2023 | Emu Edit | Precise image editing via recognition and generation tasks. Source |
| Nov 2023 | Emu Video | A simple factorized method for high-quality video generation. Source |
| Oct 2023 | Brain Decoding | Using MEG, this AI system can decode the unfolding of visual representations in the brain with unprecedented temporal resolution. Source |
| Jul 2023 | Llama 2 | The next generation of Meta's open-source large language model, freely available for research and commercial use (a usage sketch follows this table). Source |
| Jun 2023 | I-JEPA | The first AI model based on Yann LeCun's vision for more human-like AI. Source |
| May 2023 | ImageBind | The first AI model capable of binding data from six modalities at once, without the need for explicit supervision. Source |
| Apr 2023 | DINOv2 | A new method for training high-performance computer vision models. Source |
| Apr 2023 | SAM | Segment Anything: a step toward the first foundation model for image segmentation (a second sketch follows the table). Source |
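For readers who want to try Llama 2 hands-on, here is a minimal sketch of loading it through the Hugging Face transformers library. This is not Meta's reference code: it assumes you have accepted the Llama 2 license and been granted access to the gated checkpoints on Hugging Face, that transformers and torch are installed, and it uses the meta-llama/Llama-2-7b-chat-hf model id purely as an example.

```python
# Minimal sketch: text generation with Llama 2 via Hugging Face transformers.
# Assumes access to the license-gated meta-llama checkpoints and that
# `pip install transformers torch` has been run; the model id is an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example, license-gated checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize Meta AI's 2023 releases in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 64 new tokens and decode them back to text.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```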
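Similarly, the Segment Anything Model (SAM) can be prompted from Python with Meta's segment-anything package. The snippet below is a hedged sketch, assuming that package and its checkpoint have been downloaded separately; the image file name, checkpoint file name, and prompt point coordinates are illustrative, not taken from the timeline above.

```python
# Minimal sketch: prompting SAM with a single foreground point.
# Assumes `pip install segment-anything opencv-python` and a locally
# downloaded ViT-H checkpoint; file names and coordinates are illustrative.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the model from a local checkpoint file (downloaded separately).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor (it expects RGB).
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive point prompt at (x=500, y=375); label 1 marks it as foreground.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
print(masks.shape, scores)  # boolean masks of shape (N, H, W) with quality scores
```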
