Google Lens is coming to desktop Chrome and could quickly take over text and image search


  • Google Lens on desktop Chrome! Just right-click on a page and select the Lens context menu item.


  • You can clip out part of the page.


  • The results are displayed in a sidebar.


Google Lens, Google’s computer vision search engine, is coming to desktop Chrome. Google didn’t exactly share a timeline, but a teaser tweet showed what the feature will look like.

On desktop Chrome, you’ll soon be able to right-click an image and select “Search with Google Lens,” which dims the page and brings up a clipping tool that lets you send a specific portion of the page to Google’s image-recognition AI. After a round trip to the internet, you’ll see a sidebar with multiple results.

While Google’s regular image search just tries to find similar images, Lens can actually identify things in an image, such as people, text, math equations, animals, landmarks, products, and more. It can translate text through the camera and even copy real-world text (using OCR) so you can paste it into an app. The feature has been around for some time on Android and iOS, first as a camera-driven search that brings up a live viewfinder, then in Google Photos, and more recently as a long-press option for web images in Chrome for Android.


  • That’s Google’s VP of Design, Matias Duarte, and now Google Lens is helping you steal his look.


  • You can ask Lens follow-up questions, such as “socks with this pattern,” and it will supposedly understand.


  • This is a great use case for image search: “How do I fix this spiky thing on the back of my bike?” If you don’t know what something is called, it can be difficult to Google.

  • Google can tell you the problem is likely the derailleur, and from there you can ask how to fix it.


Google Lens is also getting a little smarter. The service is gaining the ability to answer follow-up questions about an image search, and Google has two demos that are very impressive. One has a user scan a picture of a shirt and ask for “socks with this pattern,” after which Google shows a match. It would otherwise be almost impossible to search for a particular clothing pattern. You could enter descriptors like “flower pattern,” but that would surface similar patterns to scroll through, not the same pattern.

Another example is a really great use case for vision search: finding things whose names you don’t know. Here, the user has a broken bike and needs to fix something in the rear gearing. They don’t know what the rear gear-changing thing is called, so they just snap a picture of it and ask Google. It turns out to be a “derailleur,” and from there the user types “how to fix” and Google finds instructions.

Basically, Lens can search across images and text at the same time. Both are impressive examples, but they’re canned demos, so it’s hard to know how well any of this actually works. According to Google, the feature will be available “in the coming months.”