Google began rolling out a new Search tool on Thursday that will allow users to search for information using both text and photos at the same time. The new multi-search capability is part of Google’s continuous attempts to employ AI to build genuinely conversational, multimodal, and individualized information experiences.
The multi-search capability is built into Google Lens, the image recognition tool available through the Google app. For the time being, the tool is only accessible in beta for users in the United States searching with text in English. It’s also optimized for shopping searches.
Users can use multi-search to ask a question about an object or to narrow a search by color, brand, or visual attribute. Try it by using Lens to:
- Take a picture of a stunning orange dress and enter the keyword “green” to locate it in another hue.
- Take a photo of a dining set and search for “coffee table” to discover a matching table.
It’s also useful for non-commercial searches; for example, a user may take a photo of a rosemary plant and search for “care instructions” to learn how to care for their new plant.
To use the feature, launch the Google app, tap the Lens camera icon, and then take a new photo or select a screenshot. Then swipe up and tap the “+ Add to your search” button to add text.
Google said it is also exploring ways to enhance the feature with MUM (Multitask Unified Model), its latest AI model. The tech behemoth recently revealed how it uses MUM and other AI models to more efficiently convey information about crisis aid to those in need.