LLMs [that is, large language model generative AI systems] are essentially ‘conventional wisdom’ machines. Bruno Massarelli
Why is generative AI – like ChatGPT – such a threat to Google’s internet search revenue? To understand why, let’s engage in a thought experiment in which, some time in the future, we are trying to explain to our grandchildren how we did internet searches way back in the Dark Ages, i.e., 2022.
Granddaughter (laughing hysterically): Wait, Grandpa, let me see if I’ve got this right. Back in the Olden Days, when you did an internet search you typed your question into a little box and then you didn’t get back the answer?
Grandpa: That’s right, sweetie, all we got back was 2,000 pages of hyperlinks.
Granddaughter: And then you had to click on a bunch of those links and go read those webpages, and then maybe you got your answer?
Grandpa: Well, most of the time we had to go back and refine the search query because the links took us to the wrong kinds of pages. But in the meantime the companies we got linked to had to pay Google, because Google charged them based on the number of people who clicked on their links, even if we didn’t buy anything.
Granddaughter: That’s almost as bad as the even Badder Old Days, when Great-Grandpa had to actually go to the library to find anything out.
In the Brave New World of generative AI, you type in your query and you get your answer. And you get it in a grammatically correct couple of paragraphs that look remarkably like something a human being would write. Does Bing/ChatGPT, we could be forgiven for thinking, hire millions of people who do nothing but respond to other people’s queries?
No, it’s just a computer – a bot – that responds, but that bot sounds a lot like a person. For Google, this development is a disaster because no one cares much any more about hyperlinks. We don’t need to click on any links because we’ve already got our answer. If the answer is slightly off, we can go back and refine our query and get another answer, but again, no hyperlinks. And no hyperlinks, no ad revenue for Google.
Is this likely to happen? It depends on how good generative AI is. To assess how good (or bad) it already is, let’s look at a couple of examples.
Poetry. ChatGPT can write poetry, but is it any good? Recently a blogger asked ChatGPT to “write a six-line rhyming poem about horses in the forest on a sunny day in March.” This is the bot’s poem:
The horses prance in the bright sun rays,
As March brings us happy days.
Their coats shine with a golden hue,
On this beautiful day so new.
The wind blows through their manes of gold,
A perfect sight for us to behold.
If you like really bad poetry – say, Edgar Guest – you probably like this one. But keep in mind the short poem Dorothy Parker wrote about Guest:
I’d rather flunk my Wassermann test
Than read a poem by Edgar Guest.
As an aside, when I added myself to the waiting list for Bard, Google’s answer to ChatGPT, Bard sent me this poem to enjoy while I was waiting:
May your day be bright,
Your mood be light,
And your heart be filled with delight.
A lame Irish drinking toast?
Interestingly, ChatGPT is better at writing highly structured poems, like haiku. If you ask it to write a poem about daffodils, the result will be execrable. But ask it to write a haiku about daffodils and the result, while not good, isn’t terrible at all. A high school English teacher would probably give this one an A-/B+:
Bend in the early spring wind
Sappy, but not emetic. So generative AI isn’t likely to replace poets any time soon, other than those in high school. But let’s look at a more promising example.
College essays. A not-very-diligent sophomore is enrolled in a literature course that involves reading books by Mark Twain and Charles Dickens. The professor has given the class this assignment: Write a 1,000-word essay on the proposition that Twain is a greater writer than Dickens.
Naturally, our sophomore has procrastinated until the last minute and, if this were 2022, she would now face an unhappy choice: staying up all night to write the essay or heading over to the sorority house and partying down.
But since it’s 2023 she simply opens an account with ChatGPT and gives it the exact assignment the professor has given the class. In a few seconds the bot has produced a 1,000-word essay that is grammatically correct, touches on the usual characteristics of Twain and Dickens, and concludes that Twain was arguably the greater writer because his writing challenged the social norms and conventions of his time, especially regarding racial inequality and the evils of slavery. Dickens, while critical of Victorian society, did not necessarily push the boundaries of accepted norms to the same degree as Twain.
Our sophomore doesn’t even bother to read the bot’s essay – she’s running late for the party – but simply turns it in and heads off.
The professor reads the essay, noting that it is reasonably well-written and grammatically correct, but without any flair or pizzazz. He observes that the essay touches on all the conventional wisdom about the two writers but adds no new insights. Finally, he notices with annoyance that the student doesn’t seem to have attended many of his classes, since the essay doesn’t include any of his own (profound) thoughts about Twain and Dickens.
The professor, who has never heard of generative AI, sighs. Students today, he thinks, are so conventional, so unable to think outside the box. It’s not a terrible essay but it’s nothing special. He’s in a charitable frame of mind and gives the essay a B-. Our sophomore is delighted.
So here, we find, is a good use for generative AI – cheating on your college essays if you don’t mind missing the Dean’s List.
But there are better uses for generative AI than writing bad poetry or B- college essays. Next week we’ll take a look at an example that is actually (somewhat) inspiring.
Next up: Am I a Bot? (Part 5)
[To subscribe or unsubscribe, drop me a note at GregoryCurtisBlog@gmail.com.]
Please note that this post is intended to provide interested persons with an insight on the capital markets and other matters and is not intended to promote any manager or firm, nor does it intend to advertise their performance. All opinions expressed are those of Gregory Curtis and do not necessarily represent the views of Greycourt & Co., Inc., the wealth management firm with which he is associated. The information in this report is not intended to address the needs of any particular investor.