Google expands access to generative AI in Search

Making targeted improvements

SGE is rooted in the Search quality and safety systems we’ve been honing for years, which are designed to surface trustworthy, high-quality information. But just as we’re always working to improve Search, we’re taking the same approach with SGE. We’re using rigorous testing, red-teaming and evaluation to deliver a higher-quality, more helpful experience.

One area where we’re making targeted improvements is queries that include a false or offensive premise, which can result in an AI-powered response that unfortunately appears to validate that premise – even when the underlying web pages point to high-quality, reliable information. We’re rolling out an update to help train the AI model to better detect these types of false or offensive premise queries and respond with higher-quality, more accurate responses. We’re also working on ways to use large language models to critique their own first-draft responses on sensitive topics, and then rewrite them based on quality and safety principles. While we’ve built a range of protections into SGE and these updates represent meaningful improvements, this technology has known limitations, and we’ll continue to make progress.
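The critique-and-rewrite idea described above can be sketched roughly as follows. This is a minimal illustration of the general pattern, not Google’s implementation: `generate`, `critique`, and `rewrite` are hypothetical stand-ins for calls to a large language model.

```python
# Sketch of a self-critique loop: draft an answer, check it against
# quality/safety principles, and rewrite only if issues are flagged.
# All three helper functions are illustrative stand-ins for LLM calls.

def generate(query: str) -> str:
    # First-draft response (stand-in for a model call).
    return f"Draft answer to: {query}"

def critique(draft: str, principles: list[str]) -> list[str]:
    # Return the principles the draft appears to violate.
    # Stand-in logic: flag a principle if its keyword is absent.
    return [p for p in principles if p.split(":")[0] not in draft]

def rewrite(draft: str, issues: list[str]) -> str:
    # Revise the draft to address each flagged issue (stand-in logic).
    for issue in issues:
        draft += f" [revised to address: {issue}]"
    return draft

def answer(query: str, principles: list[str]) -> str:
    draft = generate(query)
    issues = critique(draft, principles)
    return rewrite(draft, issues) if issues else draft
```

In a real system the critique step would itself be a model call scoring the draft against written principles, and the loop might run more than once before a response is shown.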

Overall, the quality of this experience continues to improve, and we’re working on broad, ongoing improvements – including better showcasing a range of perspectives and information in responses. All of these improvements are designed to make the experience more helpful, informative and high-quality for the range of queries we get every day, including ones we’ve never seen before. As more people sign up for this experiment, we look forward to seeing how their feedback helps us continue to improve.
