The unrelenting hype that has followed ChatGPT over the last 3 months has finally stirred Google into action, with news breaking that it will release Bard (its own conversational AI service).
This announcement comes as technology enthusiasts across the globe were commenting on the perceived lack of consumer output from Google’s DeepMind, with many commentators positing that “Google would never be first to market with a tool that potentially disrupts Google Search”.
How much truth is in the above statement? Well, we can’t be sure… but it definitely seems to have lit some fires.
In this article, we look to understand whether ChatGPT can really become a Google Search killer, and what the release of Bard signals for the AI-infused information economy going forward.
Well, to answer these questions… we need access! C’mon Google.
On the 6th of February, Google promoted a press release that announced:
“We have been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.
Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”
The key phrase in the above statement is ‘trusted testers’.
Invite-only public beta isn’t really news. Especially when we’ve all been filling our boots with as many queries as we can nerd out on thanks to OpenAI.
To truly understand the competitive landscape, consumers need general access… but initial commentary suggests Bard may have one advantage: its underlying dataset is up to date and it learns as it is fed queries, which ChatGPT (in its current iteration) does not yet do.
Bard is the first conversational AI service from Google to leverage the next-generation language and conversation capabilities of LaMDA (Language Model for Dialogue Applications), delivering a digital tool that looks and feels a lot like OpenAI’s Microsoft-backed ChatGPT.
Wait a second, I hear you shout! The same LaMDA that a whistleblower claimed had ‘become sentient’ in 2022?
Yes, the very same one.
And it may be just the tip of the artificial intelligence iceberg, as Google has announced plans to introduce 20 new AI products in 2023.
Whether all of these products will be underpinned by LaMDA remains to be seen, but the way OpenAI’s success has dominated the tech culture zeitgeist over the last few months has clearly moved Google to market quicker than expected.
So, will ChatGPT kill Google Search? In short, no. The reason being that these tools will be integrated over time rather than dragging general internet users from one interface to another.
Microsoft will incorporate ChatGPT into Bing and you would imagine Google will do the same with Bard in time.
Just like your email experience in Outlook & Gmail will have generative AI efficiencies and your Google Hangout or Teams call will start to deliver minutes & action points all by itself. The big initial benefit of these models is in the heavy-lifting of common problem solving and completion of mundane & repetitive tasks.
AI-infused chat interfaces are essentially what open banking APIs were to the traditional financial services industry: a useful plug-in… not an immediate competitive threat.
Well, one thing is for sure: the AI wars are just getting started.
It seems the big technology trend driving the S&P 500 over the next 10 years will be artificial intelligence (ChatGPT is predicted to generate $1B in revenue by the end of 2024), as we look to our technology leaders to create simpler, quicker, more efficient digital tools that drive productivity and reduce cognitive load.
Is that a good thing?
It’s difficult to say. Did we imagine in 2008 that social media would become such a force for negative mental health outcomes? Did we imagine that cryptocurrency would become the foundation for monkey jpegs being worth $250K?
No, we didn’t… so with or without generative AI products, we (as a society) can be quite poor at predicting negative externalities.
One thing we do predict is that there will be significant growth in AI ethics jobs in the years ahead.
…But even AI ethics jobs might use AI to scale their efforts going forward.