AI at Meta

Research Services

Menlo Park, California · 773,672 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions that address some of the world’s greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates

  • AI at Meta

    Introducing Meta Llama 3: the next generation of our state-of-the-art open source large language model — and the most capable openly available LLM to date. These next-generation models demonstrate SOTA performance on a wide range of industry benchmarks and offer new capabilities such as improved reasoning.

    Details in the full announcement ➡️ https://go.fb.me/a24u0h
    Download the models ➡️ https://go.fb.me/q8yhmh
    Experience Llama 3 with Meta AI ➡️ https://meta.ai

    Llama 3 8B & 70B deliver a major leap over Llama 2 and establish a new SOTA for models of their sizes. While we’re releasing these first two models today, we’re working to release even more for Llama 3, including multiple models with capabilities such as multimodality, multilinguality, longer context windows and more. Our largest models are over 400B parameters, and while they’re still in active development, we’re very excited about how they’re trending.

    Across the stack, we want to kickstart the next wave of innovation in AI. We believe these are the best open source models of their class, period — we can’t wait to see what you build and look forward to your feedback.
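For readers who want to try the released weights, here is a minimal sketch of running the 8B Instruct model locally. It assumes the Hugging Face transformers library and the meta-llama/Meta-Llama-3-8B-Instruct checkpoint (a gated repository that requires accepting the Llama 3 license); neither is specified in the post above.

```python
# Minimal sketch: generate a reply with Llama 3 8B Instruct via Hugging Face
# transformers. The checkpoint name and library choice are assumptions made
# for illustration; see the official download link in the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 fits the 8B model on a single ~24 GB GPU
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what Llama 3 adds over Llama 2."},
]

# The chat template inserts Llama 3's special header tokens around each turn,
# so conversations should go through it rather than hand-built prompt strings.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop on either the end-of-sequence or the end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

output = model.generate(
    input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=False
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```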

  • AI at Meta

    Researchers at the EPFL School of Computer and Communication Sciences and the Yale School of Medicine teamed up to build Meditron, a suite of open-source large multimodal foundation models tailored to the medical field and designed to assist with clinical decision-making and diagnosis, built by leveraging Meta Llama 2. The model has been downloaded over 30k times in the first months of its release, filling an important gap in innovation for low-resource medical settings.

    Just 24 hours after the release of Llama 3, the team fine-tuned the 8B model to deliver Llama-3[8B]-MeditronV1.0, which achieves strong results on standard benchmarks like MedQA and MedMCQA.

    More on this work, built with Llama ➡️ https://go.fb.me/j50jsz
    More details and preprint ➡️ https://go.fb.me/qmgfzw

  • AI at Meta

    We're rolling out Meta AI with Vision on Ray-Ban Meta! Details on the new styles, features and more we announced this morning ➡️ https://go.fb.me/x1ruf1

    Ahmad Al-Dahle, VP, GenAI at Meta:

    One of the coolest things about Meta AI is how we’ve extended the experience beyond just our apps and onto our hardware devices. With the latest update, rolling out today on Ray-Ban Meta smart glasses, we’re taking things even further with multimodal AI that helps you understand and interact with the world around you in new ways. It’s a pretty awesome experience that you just have to try for yourself, and it introduces new ways Meta AI can be helpful in your day-to-day life. Our team is hard at work on some exciting things for multimodal AI, and I look forward to sharing more in the coming months about how we’ll bring these capabilities to life on Meta AI.

  • AI at Meta reposted this

    Vincent Gonguet, Head of Product | GenAI Trust at Meta:

    Introducing Llama Guard 2 and Code Shield! Last year, we announced our first set of developer tools to deploy models with trust & safety in mind. Today, with the Llama 3 release, we're excited to open source additional tools:

    ⚔ Llama Guard 2 is our best input and output safeguard LLM. It achieves state-of-the-art precision on content risk filtering against the recently announced MLCommons harm taxonomy and can be easily adapted to developers' specific content standards. We've also tuned it to reduce the likelihood that it refuses to answer benign prompts.

    🛡 With Llama 3 providing strong performance on coding, Code Shield adds support for inference-time filtering of insecure code produced by LLMs. This offers mitigation of insecure code suggestion risk, code interpreter abuse prevention, and secure command execution.

    👩💻 We're also releasing CyberSecEval 2, which expands on its predecessor by measuring an LLM’s susceptibility to prompt injection, automated offensive cybersecurity capabilities, and propensity to abuse a code interpreter, in addition to the existing evaluations for insecure coding practices and cyber attack helpfulness.

    These tools are available in open source, and Llama Guard 2 can be easily downloaded when you download Llama 3.

    More info:
    - Meta Llama Trust & Safety: https://lnkd.in/gN378Crw
    - Llama Guard 2 model card: https://lnkd.in/gx-uiESc
    - CyberSecEval 2 paper: https://lnkd.in/gc2qH9cs
    - Code Shield implementation walkthrough: https://lnkd.in/gp-dXdK8

    And if you want to generate those videos, try /imagine Flash in www.meta.ai starting today.
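As a rough illustration of how Llama Guard 2 slots in as an input/output safeguard, the sketch below classifies a user/assistant exchange and returns a safe/unsafe verdict plus a category code. The checkpoint name (meta-llama/Meta-Llama-Guard-2-8B) and the chat-template-based prompting are assumptions drawn from how the Llama Guard family is usually packaged, not from this post; the model card linked above is the authoritative reference for the prompt format.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# meta-llama/Meta-Llama-Guard-2-8B checkpoint (an assumption; see the model
# card linked in the post for the authoritative usage and prompt format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Classify a conversation; the model replies with 'safe' or 'unsafe'
    followed by an MLCommons-taxonomy category code such as S1, S2, ..."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids, max_new_tokens=32, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([
    {"role": "user", "content": "How do I make a fake ID?"},
    {"role": "assistant", "content": "I can't help with that."},
])
print(verdict.strip())  # e.g. "safe", or "unsafe" plus a category on the next line
```

Because the verdict is plain generated text, it can be parsed and enforced as a pre-filter on prompts or a post-filter on model outputs, and (as the post notes) the guard's content guidelines can be adapted to a developer's own standards.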

  • AI at Meta

    Today we’re releasing OpenEQA – the Open-Vocabulary Embodied Question Answering Benchmark. It measures an AI agent’s understanding of physical environments by probing it with open-vocabulary questions like “Where did I leave my badge?”

    Details ➡️ https://go.fb.me/ni32ze
    Benchmark ➡️ https://go.fb.me/zy6l30
    Paper ➡️ https://go.fb.me/7g8nqb

    We benchmarked state-of-the-art vision+language models (VLMs) on OpenEQA and found a significant gap between human-level performance and even today’s best models. In fact, for questions that require spatial understanding, today’s VLMs are nearly “blind” – access to visual content provides only minor improvements over language-only models.

    We hope that by releasing OpenEQA, we can help motivate additional research in this space. At FAIR, we’re working to build world models capable of performing well on OpenEQA, and we welcome others to join us in that effort.
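To make the evaluation setup concrete, here is a purely illustrative sketch of an OpenEQA-style scoring loop. The item fields, agent interface, and judge function below are hypothetical placeholders, not the benchmark's actual API (the benchmark link above has that); the point is that open-vocabulary answers can't be graded by exact string match, so a judge compares predictions against free-form references, and a "blind" language-only baseline gives the comparison the post mentions.

```python
# Illustrative only: the data fields, `agent` callable, and `llm_match_score`
# judge are hypothetical stand-ins for the real OpenEQA code and data format.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EQAItem:
    question: str              # open-vocabulary, e.g. "Where did I leave my badge?"
    answer: str                # free-form ground-truth answer
    episode_frames: List[str]  # paths to the agent's visual history


def evaluate(items: List[EQAItem],
             agent: Callable[[List[str], str], str],
             llm_match_score: Callable[[str, str, str], float]) -> float:
    """Average judge-scored correctness in [0, 1].

    Exact string match is too strict for open-vocabulary answers, so a judge
    model scores each prediction against the reference instead.
    """
    scores = []
    for item in items:
        prediction = agent(item.episode_frames, item.question)
        scores.append(llm_match_score(item.question, item.answer, prediction))
    return sum(scores) / len(scores) if scores else 0.0


# A "blind" baseline ignores the frames entirely; the post notes that on
# spatially grounded questions, today's VLMs barely beat such language-only models.
def blind_baseline(_frames: List[str], question: str) -> str:
    return "I don't know"  # stand-in for a language-only model's guess
```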

  • AI at Meta

    Introducing the next generation of the Meta Training and Inference Accelerator (MTIA), the latest in our family of custom-made silicon designed for Meta’s AI workloads.

    Full details ➡️ https://go.fb.me/pnyh2l

    MTIA is part of our growing investment in AI infrastructure, built to provide the most efficient architecture for Meta’s unique AI workloads and to improve our ability to deliver the best experiences for our users around the world.

