Introducing Meta Llama 3: the next generation of our state-of-the-art open source large language model, and the most capable openly available LLM to date. These next-generation models demonstrate SOTA performance on a wide range of industry benchmarks and offer new capabilities such as improved reasoning.

Details in the full announcement ➡️ https://go.fb.me/a24u0h
Download the models ➡️ https://go.fb.me/q8yhmh
Experience Llama 3 with Meta AI ➡️ https://meta.ai

Llama 3 8B & 70B deliver a major leap over Llama 2 and establish a new SOTA for models of their sizes. While we’re releasing these first two models today, we’re working to release even more for Llama 3, including multiple models with capabilities such as multimodality, multilinguality, longer context windows and more. Our largest models are over 400B parameters, and while they’re still in active development, we’re very excited about how they’re trending.

Across the stack, we want to kickstart the next wave of innovation in AI. We believe these are the best open source models of their class, period. We can’t wait to see what you build and look forward to your feedback.
AI at Meta
Research Services
Menlo Park, California 773,672 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.
- Website
- https://ai.meta.com/
External link for AI at Meta
- Industry
- Research Services
- Company size
- 10,001+ employees
- Headquarters
- Menlo Park, California
- Specialties
- research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
Today's release of ExecuTorch Alpha brings with it full support for Meta Llama 2 in addition to early support for Llama 3!
Introducing ExecuTorch Alpha ⚡

ExecuTorch Alpha is focused on deploying large language models and large ML models to the edge, stabilizing the API surface, and improving installation processes. Learn more in our latest blog: https://hubs.la/Q02vzv0k0
Researchers at the EPFL School of Computer and Communication Sciences and the Yale School of Medicine teamed up to build Meditron, a suite of open-source large multimodal foundation models tailored to the medical field and designed to assist with clinical decision-making and diagnosis, leveraging Meta Llama 2. The model has been downloaded over 30k times in the first months since its release, filling an important gap in innovation for low-resource medical settings. Just 24 hours after the release of Llama 3, the team fine-tuned the 8B model to deliver Llama-3[8B]-MeditronV1.0, which achieves strong results on standard benchmarks like MedQA and MedMCQA.

More on this work, built with Llama ➡️ https://go.fb.me/j50jsz
More details and preprint ➡️ https://go.fb.me/qmgfzw
It's been one week since we released Meta Llama 3. In that time, the models have been downloaded over 1.2M times, we've seen 600+ derivative models, and the repo has been starred over 17K times on GitHub. More importantly, we've seen the community do exactly what it does best: innovate.

More on the exciting progress we're seeing with Llama 3 ➡️ https://go.fb.me/5p7eu4
In addition to the release of Meta Llama 3, we also published a paper on the GenAI research that’s enabling our newest image generation features in Meta AI. Read ‘Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation’ ➡️ https://go.fb.me/bc9yh9
We're rolling out Meta AI with Vision on Ray-Ban Meta! Details on the new styles, features and more we announced this morning ➡️ https://go.fb.me/x1ruf1
One of the coolest things about Meta AI is how the experience extends beyond just our apps and onto our hardware devices. With the latest update, rolling out today on Ray-Ban Meta smart glasses, we’re taking things even further with multimodal AI that helps you understand and interact with the world around you in new ways. It’s a pretty awesome experience that you just have to try for yourself, and it introduces new ways Meta AI can be helpful in your day-to-day life. Our team is hard at work on some exciting things for multimodal AI, and I’m excited to share more in the coming months about how we’ll bring these capabilities to life on Meta AI.
AI at Meta reposted this
Introducing Llama Guard 2 and Code Shield! Last year, we announced our first set of developer tools for deploying models with trust & safety in mind. Today, with the Llama 3 release, we're excited to open source additional tools:

⚔ Llama Guard 2 is our best input and output safeguard LLM. It achieves state-of-the-art precision on content risk filtering against the recently announced MLCommons harm taxonomy and can be easily adapted to developers' specific content standards. We've also tuned it to reduce the likelihood that it refuses to answer benign prompts.

🛡 With Llama 3 providing strong performance on coding, Code Shield adds support for inference-time filtering of insecure code produced by LLMs. It offers mitigation of insecure-code-suggestion risk, code interpreter abuse prevention, and secure command execution.

👩💻 We're also releasing CyberSecEval 2, which expands on its predecessor by measuring an LLM’s susceptibility to prompt injection, automated offensive cybersecurity capabilities, and propensity to abuse a code interpreter, in addition to the existing evaluations for insecure coding practices and cyberattack helpfulness.

These tools are available in open source, and Llama Guard 2 can be easily downloaded when you download Llama 3. More info:
- Meta Llama Trust & Safety: https://lnkd.in/gN378Crw
- Llama Guard 2 model card: https://lnkd.in/gx-uiESc
- CyberSecEval 2 paper: https://lnkd.in/gc2qH9cs
- Code Shield implementation walkthrough: https://lnkd.in/gp-dXdK8
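To make "inference-time filtering of insecure code" concrete, here is a toy sketch of the general pattern: scan a model's generated code against a set of insecure-pattern rules and block completions that match. The rules, function names, and return shape below are hypothetical illustrations, not Code Shield's actual API or rule set; see the implementation walkthrough linked above for the real tool.

```python
import re

# Hypothetical rule set for illustration only; NOT Code Shield's real rules.
INSECURE_PATTERNS = {
    "weak-hash": re.compile(r"hashlib\.(md5|sha1)\("),
    "shell-injection": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "unsafe-deserialization": re.compile(r"pickle\.loads?\("),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the ids of every rule that matches the generated code."""
    return [rule for rule, pat in INSECURE_PATTERNS.items() if pat.search(code)]

def filter_completion(code: str):
    """Block completions that trip any rule; pass the rest through."""
    findings = scan_generated_code(code)
    if findings:
        return None, findings  # caller can regenerate or warn the user
    return code, []

safe, findings = filter_completion("print('hello')")
blocked, bad = filter_completion("import pickle\npickle.loads(payload)")
```

The key design point this sketch shows is that the filter sits between the LLM and the user: nothing the model emits reaches a code interpreter or the developer's editor until it has been scanned.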
Ready to start working with Meta Llama 3? Check out the updated repo. It includes code, new training recipes, an updated model card, details on our latest trust & safety tools and more. Official Meta Llama 3 repo on GitHub ➡️ https://go.fb.me/y96s0a
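If you're exploring the repo, one detail worth knowing is that Llama 3 instruct models use a new prompt format with special header tokens, as described in the model card. The official code builds prompts through its tokenizer; the hand-rolled sketch below only illustrates the token layout and should not be treated as a substitute for the repo's implementation.

```python
# Sketch of the Llama 3 instruct prompt layout: each turn is wrapped in
# header tokens and terminated by <|eot_id|>. Illustrative only; the
# official repo assembles this via its tokenizer.

def format_llama3_prompt(messages: list[dict]) -> str:
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"].strip() + "<|eot_id|>")
    # End with an assistant header to cue the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
```

Generation then continues from the trailing assistant header and stops when the model emits its end-of-turn token.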
Today we’re releasing OpenEQA, the Open-Vocabulary Embodied Question Answering Benchmark. It measures an AI agent’s understanding of physical environments by probing it with open-vocabulary questions like “Where did I leave my badge?”

Details ➡️ https://go.fb.me/ni32ze
Benchmark ➡️ https://go.fb.me/zy6l30
Paper ➡️ https://go.fb.me/7g8nqb

We benchmarked state-of-the-art vision+language models (VLMs) on OpenEQA and found a significant gap between human-level performance and even today’s best models. In fact, for questions that require spatial understanding, today’s VLMs are nearly “blind”: access to visual content provides only minor improvements over language-only models. We hope that by releasing OpenEQA, we can help motivate additional research in this space. At FAIR, we’re working to build world models capable of performing well on OpenEQA, and we welcome others to join us in that effort.
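Scoring open-vocabulary answers is itself nontrivial, since "on the kitchen counter" and "the badge is on the kitchen counter" should both count as correct; the OpenEQA paper uses LLM-assisted scoring for this. As a lightweight, purely illustrative stand-in (not OpenEQA's actual metric), a SQuAD-style token-overlap F1 shows why exact string match is too strict:

```python
from collections import Counter

# Illustrative stand-in for open-vocabulary answer scoring. OpenEQA itself
# uses LLM-assisted scoring; token-overlap F1 is shown here only to make
# the evaluation problem concrete.
def token_f1(prediction: str, reference: str) -> float:
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Exact match would score this 0, despite the answer being correct.
score = token_f1("on the kitchen counter", "the badge is on the kitchen counter")
```

Here the prediction's four tokens all appear in the reference (precision 1.0) but cover only four of its seven tokens (recall 4/7), giving F1 = 8/11.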
Introducing the next generation of the Meta Training and Inference Accelerator (MTIA), the next in our family of custom-made silicon designed for Meta’s AI workloads.

Full details ➡️ https://go.fb.me/pnyh2l

MTIA is part of our growing investment in AI infrastructure, providing the most efficient architecture for Meta’s unique AI workloads and improving our ability to deliver the best experiences for our users around the world.