Google Lens “Multi Search” helps you find something you can’t explain


There are many times when we’re looking for something but can’t come up with the right words to type into the Google search bar. To solve this common problem, Google has introduced a new multi-search feature in Lens. First announced last year, it lets you search with an image and text at the same time. Here’s how it works.

Google Lens multi-search feature introduced

With Google Lens’ multi-search feature, you can upload a photo of something you see and pair it with a text question, so you can get an answer even when you can’t put the question into words.

This is useful when you’re searching for a dress you just saw or a decorative item you need for your home. Google says you can take a picture of the object in front of you and then refine your search with any attribute of that object.

To do this, open the Google app on your Android or iOS device, tap the Google Lens icon next to the search bar, upload the image, and swipe up. Then tap the “+ Add to your search” button and type in your text. You can see the process in action below.

Beta version of the Google Lens multi-search feature

The company also mentions several use cases, including fashion and home decor, suggesting that it works “best” for shopping searches. In another use case, you can attach an image of an object to get answers to related queries; Google’s example pairs an image of a rosemary plant with a question about how to take care of it.

This feature is not based on Google’s Multitask Unified Model (MUM), but is instead the result of other recent advances in AI. For the uninitiated, it enables enhanced searches by letting you provide an image of an object alongside your query. Google has also hinted that MUM could be brought to the feature soon.
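Google has not published how multi-search works under the hood, but the general idea of combining an image with a text refinement can be illustrated with a toy retrieval sketch. The snippet below is purely hypothetical: encode_image, encode_text, and the random embeddings are stand-ins for real vision and text encoders, and blending the two vectors before ranking is just one of many possible ways to mix the signals, not Google’s actual method.

```python
# Hypothetical sketch of image+text ("multi") search.
# NOT Google's implementation: the encoders below are random placeholders
# for real vision/text embedding models.
import numpy as np

EMBED_DIM = 64
rng = np.random.default_rng(0)

def encode_image(image_id: str) -> np.ndarray:
    """Placeholder for a vision encoder; returns a unit-length embedding."""
    vec = rng.normal(size=EMBED_DIM)
    return vec / np.linalg.norm(vec)

def encode_text(query: str) -> np.ndarray:
    """Placeholder for a text encoder; returns a unit-length embedding."""
    vec = rng.normal(size=EMBED_DIM)
    return vec / np.linalg.norm(vec)

def multisearch(image_id: str, refinement: str, catalog: dict, top_k: int = 3):
    """Blend the image and text embeddings, then rank catalog items by cosine similarity."""
    query_vec = encode_image(image_id) + encode_text(refinement)
    query_vec /= np.linalg.norm(query_vec)
    scores = {item: float(np.dot(query_vec, emb)) for item, emb in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Tiny fake catalog of product embeddings (in practice these would come
# from indexed product or web images).
catalog = {name: encode_image(name) for name in ["green dress", "yellow dress", "blue sofa"]}
print(multisearch("photo_of_dress.jpg", "in green", catalog))
```

With random embeddings the ranking here is meaningless; the point is only the shape of the pipeline, where one query vector built from both modalities is compared against an index of candidate items.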

Google Lens’s new multi-search feature has been introduced as a beta on both Android and iOS and is currently available in English in the United States. It is expected to reach more regions and languages in the near future.