Battle between Search and ChatGPT has just begun

Ramakrishna Prasad Nori (RK)

Founder - Head AI Research & Solutions | November 1, 2024

There was a gap in Google’s market and OpenAI created a market in the gap

First, the hard facts. In 2022, advertising, Google's key revenue driver since 2004, declined, according to media reports. The slide continued in Q1 of 2023, but the Q2 results reversed it, showing an increase in ad revenues.

While the pressure on revenues has been attributed to challenges in the digital advertising market for both Google Search and YouTube, the two key Google products, conversations in 2023 have centred on the role of generative AI models. The worry was that people would no longer 'search' but simply ask ChatGPT. That shift in behaviour could be directly linked to the loss in ad revenues for Google. The mood is amply reflected on social media, where it is clear that the knives are out and the battle has just begun.

Search, Ad Revenues & AI Research

Over the last two decades, we have all become accustomed to Googling for information. Simply put, 'Google' has become a verb, much as 'Xerox' once did. For Search, Google uses its proprietary PageRank algorithm, developed by co-founder Larry Page. When users search on Google, the resulting web pages are ranked using PageRank. That is how it started.
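To make the idea concrete, here is a toy sketch of the intuition behind PageRank: a page is important if important pages link to it. The three-page graph and the damping factor below are purely illustrative; this is not Google's production system.

# Toy PageRank sketch; the link graph and damping factor are hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share of rank.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling pages simply lose their share in this toy version
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # pass rank along each outgoing link
        rank = new_rank
    return rank

# Example: three pages linking to one another.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))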

As more and more people searched the internet using Google, it achieved an unprecedented monopoly. That heralded a new era of digital marketing, AdWords, and created the world's largest advertising platform. It opened up an innovative way for businesses to extend their customer reach, better than what traditional media offered. Google started to match ads with search results: any advertiser who wants to be more visible to their customer base can bid on keywords, and their ads are displayed to relevant users alongside search results. That is how Google generates its ad revenues.
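The sketch below illustrates only the basic keyword-matching idea with made-up advertisers and bids; the real auction is far more sophisticated (second-price mechanics, quality scores, and so on), and nothing here reflects Google's actual system.

# Toy illustration of keyword-based ad matching (hypothetical data).
bids = {
    "running shoes": [("AdA", 2.50), ("AdB", 1.75)],
    "trail shoes":   [("AdC", 3.10)],
}

def pick_ad(query):
    """Return the highest-bidding ad whose keyword appears in the query, if any."""
    candidates = [
        (bid, ad)
        for keyword, entries in bids.items()
        if keyword in query
        for ad, bid in entries
    ]
    return max(candidates, default=None)

print(pick_ad("best running shoes for winter"))  # AdA wins at $2.50 in this toy example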

Google’s Role in AI

For most of us, Google may primarily be a search engine, but it is no backbencher in the cutting-edge AI space. It has been at the forefront of AI research over the last decade, especially in language modelling. The focus on language is understandable because Google's primary business is search, which began in English. To grow its user base, Google introduced Search in multiple languages, a natural extension of its business model. The focus was on neural machine translation, and Google published many models. Until then, research in language modelling had been delivering only incremental results.

The language modelling space was suddenly disrupted in June 2017, when Google published a paper called Attention Is All You Need. The paper introduced the Transformer architecture, with an encoder-decoder model for language translation. Until then, researchers, even at Google, had been training large AI models, but scaling them was a daunting challenge beyond a certain point. The paper created a storm in the language modelling community. Six years on, Transformers are going stronger by the day.
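At the heart of that paper is scaled dot-product attention: each token weighs every other token by similarity and takes a weighted sum of their values. The NumPy sketch below is a minimal illustration of that single formula, with random toy data; it is not the full Transformer.

# Minimal sketch of scaled dot-product attention from "Attention Is All You Need".
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # normalise into attention weights
    return weights @ V                  # weighted sum of the values

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
print(attention(Q, K, V).shape)  # (3, 4)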

In 2018, Google introduced BERT (Bidirectional Encoder Representations from Transformers), a model powered by the Transformer architecture and capable of processing large volumes of text. It followed up with the T5 (Text-To-Text Transfer Transformer) model. While both models advanced research in language modelling, they differ fundamentally in how they are trained: BERT learns by predicting masked words in a sentence, while T5 casts every task as text-to-text. A detailed comparison is outside the scope of this article.
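To see BERT's masked-word training objective in action, one can query a public BERT checkpoint. The snippet below assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, which are this author's illustrative choices, not something prescribed by the original papers; it also needs an internet connection to download the model the first time.

# Hedged sketch: filling a masked token with a public BERT checkpoint
# via the Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))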

While the ad revenue business kept growing, Google also continued its research on Large Language Models (LLMs) but did not turn it into a breakthrough product.

Genesis of ChatGPT

Founded in 2015, OpenAI was looking to stamp its presence in a language modelling space dominated by Google. The Transformer architecture gave OpenAI the power to do exactly that, and it did so with authority. In addition to the Transformer architecture, OpenAI also uses Reinforcement Learning From Human Feedback (RLHF), a technique in which a reward model trained on human preference rankings guides the reinforcement learning process, so that the model's behaviour better matches what people actually want.
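The toy loop below only illustrates the RLHF idea of generate, score, reinforce, using stand-in objects invented for this article; OpenAI's actual pipeline fine-tunes a large language model with PPO against a learned reward model and is vastly more involved.

# Toy illustration of the RLHF loop; every class and function here is a
# hypothetical stand-in, not OpenAI's implementation.
import random

class ToyPolicy:
    """Picks one of a few canned responses; 'weights' stand in for model parameters."""
    def __init__(self, responses):
        self.weights = {r: 1.0 for r in responses}

    def generate(self, prompt):
        responses, weights = zip(*self.weights.items())
        return random.choices(responses, weights=weights, k=1)[0]

    def update(self, response, reward):
        # Reinforce responses the reward model liked.
        self.weights[response] += reward

def toy_reward_model(prompt, response):
    """Stand-in for a reward model trained on human preference rankings."""
    return 1.0 if "polite" in response else 0.1

policy = ToyPolicy(["a polite, helpful answer", "a curt answer"])
for _ in range(200):  # the loop: generate, score with the reward model, update
    answer = policy.generate("Say hello")
    policy.update(answer, toy_reward_model("Say hello", answer))
print(policy.weights)  # the polite answer accumulates far more weight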

A cursory look at OpenAI’s path to releasing its Generative Pre-trained Transformer (GPT) models:

  • GPT-1 — released in 2018
  • GPT-2 — released in 2019
  • GPT-3 — released in 2020
  • GPT-4 — released in 2023

Market in the Gap

The biggest impact came in late 2022 with the release of ChatGPT, built on GPT-3.5. Within about two months of its launch, ChatGPT's user base swelled to roughly 100 million, a landmark in consumer product history; few new technologies have ever seen mass adoption at that pace.

OpenAI captured users' imagination. Users type questions or prompts, and ChatGPT, with its ability to produce human-like content, responds. It can write poetry, letters, news articles and songs, tell jokes, and solve maths problems (to some extent). The possibilities seem limitless.
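The same prompt-and-response interaction is also available programmatically. The minimal sketch below assumes the OpenAI Python SDK (v1 and later) and an API key configured in the environment; the model name is illustrative.

# Minimal sketch using the OpenAI Python SDK (pip install openai);
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": "Write a four-line poem about monsoon rain."}],
)
print(response.choices[0].message.content)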

Looking at the responses, one can only be amazed. Yes, there are challenges that cannot be ignored, the most important being hallucinations, where the model produces a response that is not factually accurate. But the truth is, everyone is loving it. There is a tectonic shift in the way people extract relevant information from the internet. The move away from Search is slow but steady, and it will continue as newer versions of ChatGPT arrive.

As of today, ChatGPT has a clean user interface. There are no ads, relevant or otherwise, of the kind that can be so intrusive elsewhere. Users just ask a question or enter a prompt and, bingo, ChatGPT responds with an answer. There is no ranking either.

OpenAI offers three tiers:

  • Free — ChatGPT (GPT-3.5), no subscription fee
  • Plus — GPT-4, at $20 per user/month
  • Enterprise — GPT-4 plus additional enterprise features

It will be interesting to see how revenue models evolve with time. Going by media reports, OpenAI spends a staggering sum of close to $700,000 per day. A news report published in July also suggested that OpenAI has been losing users to other models offered free of charge. Could that be a reason why Google's ad revenues went up in the last quarter?

At some point, OpenAI will face the same problem Google faced: product innovation versus sustainable, profitable revenue. That is a conundrum OpenAI will have to confront sooner or later.

Did OpenAI really anticipate ChatGPT’s impact?

That is a million-dollar question but, with due respect to OpenAI, even they may not have fathomed ChatGPT's massive impact when they launched their models. Given the way ChatGPT took the world by storm, OpenAI may now be grappling with the consequences of its own innovation. There is serious talk about the potentially disastrous impact of generative models and the need for regulation. The problems OpenAI will face are no different from those that other technology behemoths such as Google and Facebook have faced.

Race from Here

In the past, I posed a lateral-thinking question to around 2,000 engineering students across cities, and only seven came up with the right answer. If we think about the question differently, applying lateral thinking, the most probable answer a smart human gives is: it depends on the size of the bucket.

But neither ChatGPT nor Llama2.ai (from Meta) came up with the right answer. Generative models are the way forward as researchers continue their quest to move from Artificial Intelligence (AI) towards Artificial General Intelligence (AGI). The responses they produced were similar to those of an average human. Can we then say that ChatGPT matches the thinking of a human with an average IQ?

Generative models produce clear answers to questions that are direct or non-inferential in nature. It remains to be seen whether they will scale fast enough to close the gap between how a human thinks and how a machine thinks.

Why Google went slow on Generative AI

For a company like Google, which has been at the helm of AI research, extending its work into the generative AI space would not have been difficult at all. The question to ponder, then, is what stopped it from doing so: research or revenues, considering that ad revenue was fuelling the research?

This may have been a purely business decision. Google may have calculated the negative impact of generative models on its search and ad revenues and chosen not to launch its own models, a Catch-22 of sorts. With the Transformer architecture in hand and its AI research continuing, it could have launched its own generative models well before others. The fact is, owing to the OpenAI challenge, Google launched its own generative model, Bard, only in 2023. Now we hear that Google is all set to launch Gemini, a multimodal generative model.

In the past, there have been instances where market leaders killed their own successful products to launch newer, innovation-backed ones. Toyota discontinued the Qualis and launched the Innova. The iPod met a similar fate when Apple introduced the iPad. What could have stopped Google from doing the same? Probably, it did not want to move away from its comfortable, money-minting vertical.

As a matter of fact, even Google's YouTube has a premium, ad-free subscription model. It would be interesting to know the thinking behind that strategy: if it can be done for YouTube, it could have been done for Search. That might have dented revenues, but either way Google has lost considerable ground to OpenAI.

As published in Telangana Today
