Google today announced the rollout of a new “multisearch” feature that lets users search with text and images at the same time, using Google Lens image recognition technology. Google first previewed the feature at its Search On event last September, saying it would launch in the coming months after testing and evaluation. Starting today, multisearch is available in the US as a beta feature in English.
To get started, open the Google app on Android or iOS, tap the camera (Lens) icon, and search using one of your screenshots or a photo you take. You can then swipe up and tap the “+ Add to Search” button to add text to your query. Google notes that users must have the latest version of the app to access the new feature.
With the new multisearch feature, you can ask questions about the item in front of you or refine your search results by color, brand, or visual attributes. Google told gaming-updates that the feature currently produces the best results for shopping searches, with more use cases coming in the future. The initial beta supports more than just shopping queries, but results won’t be equally strong for every type of search.
In practice, the new feature works like this. Let’s say you find a dress you like, but not in the color you want. You can take a photo of the dress and then add the text “green” to the search to find it in the right color. In another example, you’re shopping for new furniture but want to make sure it matches what you already own. You can take a picture of your dining area and search for “coffee table” to find a table that fits. Or let’s say you have a new plant and don’t know how to care for it properly. You can take a photo of the plant and search for “care instructions” to learn more about it.
The new functionality can be especially useful for the kinds of queries Google currently struggles with: searches where what you’re looking for has a visual component that’s hard to describe in words. By combining an image and words into a single query, Google stands a better chance of returning relevant results.
In a blog post announcing the feature, Google says: “At Google, we’re always thinking of new ways to help you find the information you’re looking for, no matter how hard it might be to express what you need. That’s why today we’re introducing an entirely new way to search: using text and images at the same time. With multisearch in Lens, you can go beyond the search box and ask questions about what you see.”
Google says the new functionality is made possible by recent advances in artificial intelligence. The company is also exploring ways to enhance multisearch with MUM, its latest AI model for Search. The Multitask Unified Model, or MUM, can simultaneously understand information across formats, including text, images, and video, and draw insights and connections between topics, concepts, and ideas.
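To illustrate the general idea behind combining an image and text into one query, here is a minimal, purely conceptual sketch. This is not Google’s actual system or API; the embedding values, the `fuse` blending function, and the tiny catalog are all invented for illustration. It shows one common pattern for multimodal retrieval: blend an image embedding and a text embedding into a single query vector, then rank catalog items by similarity.

```python
# Conceptual sketch only (NOT Google's implementation): multimodal
# retrieval by fusing an image embedding with a text embedding and
# ranking items by cosine similarity. All vectors are toy values.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def fuse(image_vec, text_vec, alpha=0.5):
    """Blend image and text embeddings into one query vector (hypothetical)."""
    return [alpha * i + (1 - alpha) * t for i, t in zip(image_vec, text_vec)]

# Toy embeddings: dimensions loosely stand for (dress-likeness, green-ness, table-likeness).
image_of_dress = [0.9, 0.1, 0.0]   # photo of a dress in the wrong color
text_green     = [0.0, 0.9, 0.0]   # the added refinement word "green"

catalog = {
    "orange dress": [0.9, 0.1, 0.0],
    "green dress":  [0.9, 0.8, 0.0],
    "green table":  [0.1, 0.8, 0.9],
}

query = fuse(image_of_dress, text_green)
ranked = sorted(catalog, key=lambda k: cosine(query, catalog[k]), reverse=True)
print(ranked[0])  # -> "green dress": the item matching both the photo and the text
```

The design point the sketch makes is that neither signal alone finds the right item: the photo alone ranks the orange dress highest, and the word “green” alone matches the table just as well; only the fused query surfaces the green dress.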
Today’s announcement comes a week after Google announced improvements to its AI models to make Google Search safer and better at handling sensitive topics, including suicide, sexual assault, substance abuse, and domestic violence. The company is also using other AI techniques to improve its ability to remove unwanted explicit or suggestive content from search results when people aren’t specifically looking for it.