Google announced a number of new AI updates at its developer conference yesterday, and we've rounded up everything you need to know.
Here's what's new.
🔍 Generative AI Search
First, Google has added generative AI to Search.
The company is launching a test version of its search engine, called ‘Search Generative Experience’, which includes text generation capabilities similar to ChatGPT.
The updated search engine still requires users to enter a query, and it will continue to display links to sites, content snippets, and ads. In some cases, however, the top of the page will feature AI-generated text gathered from across the web, along with links to the pages it draws from.
🗣️ Conversational Mode
Users will also see suggested next steps when conducting a search; tapping one of these initiates a new conversational mode where they can ask Google more about the topic they're exploring. Context carries over from one question to the next.
The experience is integrated with Google Shopping and will connect consumers with product information online. The platform’s Shopping Graph, which contains more than 35 billion product listings, powers the new generative AI shopping experience.
Search ads will continue to appear in dedicated ad slots throughout the new generative experience, and the company says that ads will remain distinguishable from organic search results.
This new search experiment will be available to select users in the U.S. through a new feature called ‘Search Labs’ in the coming weeks.
⚒️ AI Tools in Workspace
The tech giant is also bringing more AI tools to Workspace, including automatic table generation in Sheets and image creation in Slides and Meet.
With Sheets, users can now generate tables by typing a prompt describing what they want to accomplish, and Sheets will provide a personalized template filled with content.
This, however, does not include automatic formula generation.
As for Slides and Meet, the image generation feature lets users type in what kind of visual they want, and it will create that image for them. In Google Meet, the use case is custom backgrounds.
Google Docs is also getting an update to its AI assistant, which can now surface smart chips for locations and status.
Looking ahead, Google plans to add a Bard/AI-chat style interface to Docs.
🎨 Google Partners with Adobe to Bring Art Generation to Bard
Meanwhile, Bard, Google's ChatGPT rival, is getting generative AI upgrades from Adobe.
Firefly, Adobe's recently introduced AI model for generating media content, is coming to Bard alongside Adobe Express, Adobe's free graphic design tool. This means users will be able to generate images through Firefly and then modify them using Adobe Express.
Within Bard, users will be able to select from various templates, fonts, and stock images, as well as other assets from Adobe's library. These updates will be available soon.
📸 “Magic” AI Image Editor
Google also introduced a new image editing feature called ‘Magic Editor’ that uses generative AI to make complex edits to photos without professional tools.
With the tool, brands and marketers can:
- Remove unwanted elements from a photo
- Relocate and change the scale of the subject or product
- Create new content to fill in the gaps after repositioning the subject or product
You can't play with it yet though – the tool won’t be available until later this year.
🕵️ New Tool Exposes AI-Generated Images
And with all of these new tools for AI-generated images, Google has added a new feature to Search, called ‘About this image’, that can help identify whether an image was created by our AI overlords.
It provides information on when the image was first indexed by Google, where it may have appeared, and if it was featured on news or fact-checking sites.
Users can access it by clicking on the three dots above an image in search results, using Google Lens, or swiping up in the Google app.
Images: Google / Adobe