{"id":3416,"date":"2023-06-10T18:57:52","date_gmt":"2023-06-10T18:57:52","guid":{"rendered":"https:\/\/eufad.com\/?p=3416"},"modified":"2024-01-04T16:07:19","modified_gmt":"2024-01-04T16:07:19","slug":"the-ai-takeover-of-google-search-starts-now","status":"publish","type":"post","link":"https:\/\/eufad.com\/?p=3416","title":{"rendered":"The AI takeover of Google Search starts now"},"content":{"rendered":"<p>The future of Google Search is AI. But not in the way you think. The company synonymous with web search isn\u2019t all in on chatbots (even though it\u2019s building one, called Bard), and it\u2019s not redesigning its homepage to look more like a ChatGPT-style messaging system. Instead, Google is putting AI front and center in the most valuable real estate on the internet: its existing search results.\u00a0<\/p>\n<p>To demonstrate, Liz Reid, Google\u2019s VP of Search, flips open her laptop and starts typing into the Google search box. \u201cWhy is sourdough bread still so popular?\u201d she writes and hits enter. Google\u2019s normal search results load almost immediately. Above them, a rectangular orange section pulses and glows and shows the phrase \u201cGenerative AI is experimental.\u201d A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more. To the right, there are three links to sites with information that Reid says \u201ccorroborates\u201d what\u2019s in the summary. <\/p>\n<p>Google calls this the \u201cAI snapshot.\u201d All of it is generated by Google\u2019s large language models, all of it sourced from the open web. Reid then mouses up to the top right of the box and clicks an icon Google\u2019s designers call \u201cthe bear claw,\u201d which looks like a hamburger menu with a vertical line to the left. 
The bear claw opens a new view: the AI snapshot is now split sentence by sentence, with links underneath to the sources of the information for that specific sentence. This, Reid points out again, is corroboration. And she says it\u2019s key to the way Google\u2019s AI implementation is different. \u201cWe want [the LLM], when it says something, to tell us as part of its goal: what are some sources to read more about that?\u201d<\/p>\n<p>A few seconds later, Reid clicks back and starts another search. This time, she searches for the best Bluetooth speakers for the beach. Again, standard search results appear almost immediately, and again, AI results are generated a few seconds later. This time, there\u2019s a short summary at the top detailing what you should care about in such a speaker: battery life, water resistance, sound quality. Links to three buying guides sit off to the right, and below are shopping links for a half-dozen good options, each with an AI-generated summary next to it. I ask Reid to follow up with the phrase \u201cunder $100,\u201d and she does so. The snapshot regenerates with new summaries and new picks.\u00a0<\/p><figcaption><em>These AI snapshots will appear at the top of Search and pull information from all over the web.<\/em><\/figcaption><cite>Image: Google<\/cite><\/p>\n<p>This is the new look of Google\u2019s search results page. It\u2019s AI-first, it\u2019s colorful, and it\u2019s nothing like you\u2019re used to. It\u2019s powered by some of Google\u2019s most advanced LLM work to date, including a new general-purpose model called PaLM 2 and the Multitask Unified Model (MUM) that Google uses to understand multiple types of media. In the demos I saw, it\u2019s often extremely impressive. 
And it changes the way you\u2019ll experience search, especially on mobile, where that AI snapshot often eats up the entire first page of your results.<\/p>\n<p>There are some caveats: to get access to these AI snapshots, you\u2019ll have to opt in to a new feature called Search Generative Experience (SGE for short), part of an also-new feature called Search Labs. Not all searches will spark an AI answer \u2014 the AI only appears when Google\u2019s algorithms think it\u2019s more useful than standard results, and some sensitive subjects in categories like health and finances are currently set to avoid AI interference altogether. (For others, you might see a \u201cthis is not legal advice\u201d disclaimer or something similar.) But in my brief demos and testing, it showed up whether I searched for chocolate chip cookies, Adele, nearby coffee shops, or the best movies of 2022. AI may not be killing the 10 blue links, but it\u2019s definitely pushing them down the page.<\/p>\n<p>SGE, Google executives tell me over and over, is an experiment. But they\u2019re also clear that they see it as a foundational long-term change to the way people search. AI adds another layer of input, helping you ask better and richer questions. And it adds another layer of output, designed to both answer your questions and guide you to new ones.<\/p>\n<p>An opt-in box at the top of search results might sound like a small move from Google compared to Microsoft\u2019s AI-first Bing redesign or the total newness of ChatGPT. But SGE amounts to the first step in a complete rethinking of how billions of people find information online \u2014\u00a0and how Google makes money. 
As pixels on the internet go, these are as consequential as it gets.<\/p><figcaption><em>The AI snapshots borrow colors from the content they discover and change depending on what you search.<\/em><\/figcaption><cite>Image: Google<\/cite><\/p>\n<h3><strong>Asked and answered<\/strong><\/h3>\n<p>Google feels pretty good about the state of its search results. We\u2019re long past the \u201c10 blue links\u201d era of 25 years ago when you Googled by typing in a box and getting links in return. Now, you can search by asking questions aloud or snapping a picture of the world, and you might get back everything from images and podcasts to TikToks.<\/p>\n<p>Many searches are already well served by these results. If you\u2019re going to Google and searching \u201cFacebook\u201d to land on facebook.com or you\u2019re looking for the height of the Empire State Building, you\u2019re already good to go.\u00a0<\/p>\n<p>But there\u2019s a set of queries for which Google has never quite worked, which is where the company is hoping AI can come in. Queries like \u201cWhere should I go in Paris next week?\u201d or \u201cWhat\u2019s the best restaurant in Tokyo?\u201d These are hard questions to answer because they\u2019re not actually one question. What\u2019s your budget? What days are all the museums open in Paris? How long are you willing to wait? Do you have kids with you? On and on and on.<\/p>\n<p>\u201cThe bottleneck turns out to be what I call \u2018the orchestration of structure,\u2019\u201d says Prabhakar Raghavan, the SVP at Google who oversees Search. Much of that data exists <em>somewhere<\/em> on the internet or even within Google \u2014 museums post hours on Google Maps, people leave reviews about wait times at restaurants \u2014\u00a0but putting it all together into something like a coherent answer is really hard. 
\u201cPeople want to say, \u2018plan me a seven-day vacation,\u2019\u201d Raghavan says, \u201cand they believe if the language model outputs it, it should be right.\u201d<\/p>\n<p>One way to think about these is simply as questions with no right answer. A huge percentage of people who come to Google aren\u2019t looking for a piece of information that exists somewhere. They\u2019re looking for ideas, looking to explore. And since there\u2019s also likely no page on the internet titled \u201cBest vacation in Paris for a family with two kids, one of whom has peanut allergies and the other of whom loves soccer, and you definitely want to go to the Louvre on the quietest possible day of the week,\u201d the links and podcasts and TikToks won\u2019t be much help.<\/p>\n<p>Because they\u2019re trained on a huge corpus of data from all over the internet, large language models can help answer those questions by essentially running lots of disparate searches at once and then combining that information into a few sentences and a few links. \u201cLots of times you have to take a single question and break it into 15 questions\u201d to get useful information from search, Reid says. \u201cCan you just ask one? How do we change how the information is organized?\u201d<\/p>\n<p>That\u2019s the idea, but Raghavan and Reid are both quick to point out that SGE still can\u2019t do these completely creative acts very well. Right now, it\u2019s going to be much more handy for synthesizing all the search data behind questions like \u201cwhat speaker should I buy to take into the pool.\u201d It\u2019ll do well with \u201cwhat were the best movies of 2022,\u201d too, because it has some objective Rotten Tomatoes-style data to pull from along with the internet\u2019s many rankings and blog posts on the subject. AI appears to make Google a better information-retrieval machine, even if it\u2019s not quite ready to be your travel agent.\u00a0<\/p>\n<p>One thing that didn\u2019t show up in most SGE demos? 
Ads. Google is still experimenting with how to put ads into the AI snapshots, though rest assured, they\u2019re coming.\u00a0Google\u2019s going to need to monetize the heck out of AI for any of this to stick.<\/p><figcaption><em>Right now, AI hasn\u2019t really changed how Google ads work. But it will.<\/em><\/figcaption><cite>Image: Google<\/cite><\/p>\n<h3><strong>The Google Bot<\/strong><\/h3>\n<p>At one point in our demo, I asked Reid to search only the word \u201cAdele.\u201d The AI snapshot contained more or less what you\u2019d expect \u2014 some information about her past, her accolades as a singer, a note about her recent weight loss \u2014 and then threw in that \u201cher live performances are even better than her recorded albums.\u201d Google\u2019s AI has opinions! Reid quickly clicked the bear claw and sourced that sentence to a music blog but also acknowledged that this was something of a system failure.\u00a0<\/p>\n<p>Google\u2019s search AI is not supposed to have opinions. It\u2019s not supposed to use the word \u201cI\u201d when it answers questions. Unlike Bing\u2019s multiple-personality chaos or ChatGPT\u2019s chipper helper or even Bard\u2019s whole \u201cdroll middle school teacher\u201d vibe, Google\u2019s search AI is not trying to seem human or affable. It\u2019s actually trying very hard to not be those things. \u201cYou want the librarian to really understand you,\u201d Reid says. \u201cBut most of the time, when you go to the library, your goal is for them to help you with something, not to be your friend.\u201d That\u2019s the vibe Google is going for.<\/p>\n<p>The reason for this goes beyond just that strange itchy feeling you get talking to a chatbot for too long. And it doesn\u2019t seem like Google is just trying to avoid super horny AI responses, either. 
It\u2019s more a recognition of the moment we\u2019re in: large language models are suddenly everywhere, they\u2019re far more useful than most people would have guessed, and yet they have a worrying tendency to be confidently wrong about just about everything. When that confidence comes in perfectly formed paragraphs that sound good and make sense, people are going to believe the wrong stuff.\u00a0<\/p>\n<p>A few executives I spoke to mentioned a tension in AI between \u201cfactual\u201d and \u201cfluid.\u201d You can build a system that is factual, which is to say it offers you lots of good and grounded information. Or you can build a system that is fluid, feeling totally seamless and human. Maybe someday you\u2019ll be able to have both. But right now, the two are at odds, and Google is trying hard to lean in the direction of factual. The way the company sees it, it\u2019s better to be right than interesting.<\/p>\n<p>Google projects a lot of confidence in its ability to be factually strong, but recent history seems to suggest otherwise. Not only is Bard less wacky and fun than ChatGPT or Bing, but it\u2019s also often less correct \u2014 it makes basic mistakes in math, information retrieval, and more. The PaLM 2 model should improve some of that, but Google certainly hasn\u2019t solved the \u201cAI lies\u201d problem by a long shot.<\/p>\n<p>There\u2019s also the question of when AI should appear at all. Sometimes it\u2019s obvious: the snapshots shouldn\u2019t appear if you ask sensitive medical questions, Reid says, or if you\u2019re looking to do something illegal or harmful. But there\u2019s a wide swath of searches where AI may or may not be useful. 
If I search \u201cAdele,\u201d some basic summary information at the top helps; if I search \u201cAdele music videos,\u201d I\u2019m much more likely to just want the YouTube videos in the results. <\/p>\n<p>Google can afford to be cautious here, Reid says, because the fail state is just Google search. So whenever the snapshot shouldn\u2019t appear, or whenever the model\u2019s confidence score is low enough that it might not be more useful than the top few results, it\u2019s easy to just not do anything.<\/p>\n<h3><strong>Bold and responsible<\/strong><\/h3>\n<p>Compared to the splashy launch of the new Bing or the breakneck developmental pace of ChatGPT, SGE feels awfully conservative. It\u2019s an opt-in, personality-free tool that collates and summarizes your search results. For Google, suddenly in an existential crisis over the fact that AI is changing the way people interact with technology, is that enough?<\/p>\n<p>A couple of executives used the same phrase to describe the company\u2019s approach: \u201cbold and responsible.\u201d Google knows it has to move fast \u2014 not only are chatbots booming in popularity, but TikTok and other platforms are stealing some of the more exploratory search out from under Google. But it also has to avoid making mistakes, giving people bad information, or creating new problems for users. To do that would be a PR disaster for Google, it would be yet more reason for people to try new products, and it would potentially destroy the business that made Google a trillion-dollar company.\u00a0<\/p>\n<p>So, for now, SGE remains opt-in and personality-free. Raghavan says he\u2019s comfortable playing a longer game: \u201cknee-jerk reacting to some trend is not necessarily going to be the way to go.\u201d He\u2019s also convinced that AI is not some panacea that changes everything \u2014 that 10 years from now, we\u2019ll all do everything through chatbots and LLMs. \u201cI think it\u2019s going to be one more step,\u201d he says. 
\u201cIt\u2019s not like, \u2018Okay, the old world went away. And we\u2019re in a whole new world.\u2019\u201d\u00a0<\/p>\n<p>In other words, Google Bard is not the future of Google Search. But AI is. Over time, SGE will start to come out of the labs and into search results for billions of users, mingling generated information with links out to the web. It will change Google\u2019s business and probably upend parts of how the web works. If Google gets it right, it will trade 10 blue links for all the knowledge on the internet, all in one place. And hopefully telling the truth. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>The future of Google Search is AI. But not in the way you think. The company synonymous with web search isn\u2019t all in on chatbots (even though it\u2019s building one, called Bard), and it\u2019s not redesigning its homepage to look more like a ChatGPT-style messaging system. Instead, Google is putting AI front and center in [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3418,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[93,95,96,81,84,87,90],"class_list":{"0":"post-3416","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-google-fiber","8":"tag-android-phone-guide","9":"tag-android-phone-news","10":"tag-android-phone-reviews","11":"tag-google","12":"tag-google-guide","13":"tag-google-news","14":"tag-google-reviewsandroid-phone"},"_links":{"self":[{"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/posts\/3416","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddab
le":true,"href":"https:\/\/eufad.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3416"}],"version-history":[{"count":1,"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/posts\/3416\/revisions"}],"predecessor-version":[{"id":5115,"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/posts\/3416\/revisions\/5115"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/eufad.com\/index.php?rest_route=\/wp\/v2\/media\/3418"}],"wp:attachment":[{"href":"https:\/\/eufad.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3416"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/eufad.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3416"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eufad.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}