28 Feb. 2023 – Every few years we see a hot new technology that generates a lot of excitement. It becomes the most mentioned topic at conferences, makes it to the front pages of newspapers, and leads investors to pour money into the companies behind the innovation. Well, it happened again with OpenAI’s chatbot ChatGPT when it was opened up to public use last November. But wait! In nine out of 10 cases, the “hot new technology” turns out to be more hype than reality, a phenomenon best summed up by the phrase “reality distortion field”, coined by Bud Tribble of Apple Computer in 1981. So is ChatGPT more hype than reality? Or are there some serious business use cases for it?
By Brian Pereira, Digital Creed
Generative AI, large language models and natural language processing are technologies that have been around for years. Companies like Salesforce and Microsoft have long infused AI into their enterprise products. The autocorrect feature in Microsoft Word, the auto-complete in your browser’s address bar, and even grammar checkers like Grammarly all use machine learning and AI.
But why was there so much excitement with ChatGPT?
Here’s what happens when you put a simple (bot) interface on top of a powerful technology (AI) and give it to everyone to use, for free:
- You get a million users in the first week and 100 million in two months (as in the case of ChatGPT).
- It becomes front-page news in mainstream newspapers.
- It is discussed in almost every session at the World Economic Forum in Davos.
- Many take it for a test drive and blog about it, becoming self-proclaimed experts.
- Investors rush to make multi-million-dollar investments in companies whose domains end with .AI
After numerous discussions with global AI experts and hours of reading, I have to conclude that there is potential and promise in this technology. My belief was affirmed when I read about Microsoft’s multi-billion-dollar investment in OpenAI, and about how Google, the big daddy of search engines, felt threatened for the first time in years.
But in the same breath, I also have to say that Generative Pre-trained Transformer (GPT) models and natural language processing tools are not perfect and can err. And those errors can be a lot more than embarrassing: they can wipe billions of dollars off a company’s market value, as we saw with Google recently. The search giant saw $100 billion wiped off its market capitalization when its chatbot Bard goofed up. And Microsoft had to rein in Bing’s version, limiting it to five chat turns per session, after the chatbot started generating weird responses as users pushed it further with successive prompts.
See also: ChatGPT: What Are Its Business Use Cases?
It’s still in Beta
In the software world, developers do alpha and beta testing in closed groups to get feedback on the early versions of their products. Based on the feedback, they refine the product, correct the bugs, and improve the user interface and features.
In opening ChatGPT to the public last November, OpenAI was essentially conducting a mega beta test. OpenAI may disallow free public use in the future, and you may have to pay to use it. You may remember that Gmail carried the beta label for years, and it is now near perfect.
What OpenAI and others are doing is continuously refining the models and the data sets on which these generative AI systems and LLMs depend. The industry calls this reinforcement learning from human feedback (RLHF), or supervised learning with a “human in the loop”.
Since ChatGPT is still in beta, it has its limitations. Users need to understand what it was designed to do, and what it cannot do – before setting high expectations.
For instance, it is not good at math and logic, but it is creative. I see it as more of a right-brained personality.
It can go through large swathes of text and cull information to produce articulate documents in elegant prose or verse. So while it can draft legal documents and Shakespearean scripts, it may flunk a math exam.
Rerun the same question and you could get a different result each time, which may be accurate or inaccurate. In other words, it is inconsistent.
What’s more, the responses are shaped by the human prompt, much like asking someone a question while feeding them clues for the answer.
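To illustrate the point (this is my own sketch, not something from OpenAI’s documentation): with the OpenAI Python library available at the time of writing, sending the same prompt several times at a non-zero sampling temperature can return a different completion on each run. The model name, prompt and settings below are purely illustrative.

```python
# Minimal sketch, assuming the openai Python package (v0.x, early 2023)
# and a valid API key. At a non-zero "temperature" the model samples its
# next words probabilistically, so identical prompts can yield different
# answers on every run.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Explain in one sentence why the sky is blue."

for run in range(3):
    response = openai.Completion.create(
        model="text-davinci-003",   # illustrative model name
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,            # higher temperature = more variety, less consistency
    )
    print(f"Run {run + 1}: {response.choices[0].text.strip()}")
```

Dial the temperature down to 0 and the answers become far more repeatable, but the broader point stands: what you get back depends as much on the prompt and the sampling settings as on the model itself.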
Do see the feature I wrote for CIO Inc about the possibilities and limitations of ChatGPT.
And I recently wrote another feature about business use cases.
These AI models are not ready for productization yet, but they will come closer to perfection by the end of this decade.
It also worries me to think about what would happen if AI developed a consciousness like human beings, became headstrong and started acting on its own.
Developers need to put a “kill switch” in there so that they can pull the plug when things get out of hand.
Remember Skynet from the Terminator films?
See also: Cutting Through the Reality Distortion Field of ChatGPT
See also: Gartner Identifies Top Trends Impacting Technology Providers Through 2025
Updated: On 28 Feb. 2023, we checked whether ChatGPT is still available for free use, and we were able to log in and use it.