Whether it’s helping doctors detect diseases earlier or enabling people to access information in their own language, AI helps people, businesses and communities unlock their potential.

Introducing Bard

Two years ago, Google unveiled next-generation language and conversation capabilities powered by its Language Model for Dialogue Applications (LaMDA for short).

Since then, the company has been working on an experimental conversational AI service, powered by LaMDA, that it calls Bard. Today, Google is taking another step forward by opening Bard up to trusted testers ahead of making it more widely available to the public in the coming weeks.

Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of Google’s large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity and a launchpad for curiosity, helping you explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now and then get drills to build your skills.

Google is releasing Bard initially with a lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling Google to scale to more users (a direct poke at ChatGPT!) and gather more feedback.
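To see why a lighter model can reach so many more users, a back-of-envelope estimate helps. A common rule of thumb is that a decoder-only transformer spends roughly 2N FLOPs per generated token, where N is the parameter count. The sketch below compares LaMDA's published full size (137B parameters) against a purely hypothetical 2B-parameter "lightweight" variant; Google has not disclosed the actual size of the model behind Bard, so the 2B figure is an illustrative assumption only.

```python
# Back-of-envelope: why a smaller model can serve far more users
# under a fixed serving-compute budget.

def flops_per_token(num_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token,
    using the standard ~2 * N rule of thumb."""
    return 2 * num_params

full = flops_per_token(137e9)   # LaMDA's published full size
light = flops_per_token(2e9)    # hypothetical lightweight variant

# Users served scales roughly inversely with per-token cost,
# so the ratio of the two costs is the capacity multiplier.
print(f"compute ratio: {full / light:.1f}x")
```

Under these assumptions the lightweight model could serve on the order of tens of times more users for the same hardware, which is consistent with the stated motivation of scaling to more testers and collecting more feedback.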

Bringing the benefits of AI into our everyday products

BERT, one of Google’s first Transformer models, was revolutionary in understanding the intricacies of human language. Two years ago, the company introduced MUM, which is 1,000 times more powerful than BERT, offers next-level, multilingual understanding of information, and can pick out key moments in videos and provide critical information, including crisis support, in more languages.

Now, Google’s newest AI technologies — like LaMDA, PaLM, Imagen and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. Google is working to bring these latest AI advancements into its products, starting with Search.

Increasingly, people are turning to Google for deeper insights and understanding — like, “is the piano or guitar easier to learn, and how much practice does each need?” With a topic like this, it can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives.

AI can be helpful in these moments, synthesizing insights for questions where there’s no one right answer. You’ll soon see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner. These new features will begin rolling out on Google Search soon.

[Image: a phone showing the Search query “is piano or guitar easier to learn and how much practice does each need?”, with an AI-powered answer on the results page]

Bold and responsible

In 2018, Google was one of the first companies to publish a set of AI Principles. It continues to provide education and resources for its researchers, partner with governments and external organizations to develop standards and best practices, and work with communities and experts to make AI safe and useful.
