Meta AI, the research powerhouse behind Meta's (formerly Facebook's) AI projects, has been on a roll.
This timeline looks at some of the major milestones on Meta's AI journey, from deciphering brain signals to breakthroughs in computer vision, natural language processing, and multimodal AI.
Let’s get started.
Last Updated: Feb 16, 2024
- Meta's AI researchers have created a new model, the Video Joint Embedding Predictive Architecture (V-JEPA). Unlike large language models (LLMs), which learn from text, it learns by watching video. Source
- Labeling AI-Generated Images: Meta is working with industry partners on common technical standards for identifying AI-generated content, including video and audio. Source
- A method for localized speech watermarking that achieves state-of-the-art detector speed without compromising watermarking robustness. Source
- A family of AI research models that enable more natural and authentic communication across languages. Source
- Meta's new foundation research model for audio generation. It can generate voices and sound effects using a combination of voice inputs and natural language text prompts. Source
- Precise image editing via recognition and generation tasks. Source
- A simple factorized method for high-quality video generation. Source
- Using magnetoencephalography (MEG), this AI system can decode the unfolding of visual representations in the brain with unprecedented temporal resolution. Source
- The next generation of Meta's open-source large language model, available for free for research and commercial use. Source
- The first AI model based on Yann LeCun's vision for more human-like AI. Source
- The first AI model capable of binding data from six modalities at once, without the need for explicit supervision. Source
- A new method for training high-performance computer vision models. Source
- Segment Anything: a step toward the first foundation model for image segmentation. Source