[Founder’s Real Talk #17] What does ChatGPT mean for the future of AI?

Charlie Liu
5 min read · Dec 12, 2022

This article was translated from my original Chinese post by ChatGPT, and was intentionally not proofread, so that it can demonstrate both the capabilities and the shortcomings of AI translation at its best. For starters, many of the cultural references are lost in translation, such as the story of Jia Baoyu, some Confucius quotes, and a famous song by JJ Lin.

ChatGPT has been the hot topic since last week. On Twitter and in WeChat Moments, eight or nine out of ten posts are about it: screenshots of people's experiments with strange questions and answers, or their own judgments about what the future holds. Whatever their backgrounds, everyone is curious and wants to try it out.

People’s experiments can generally be divided into three categories.

The first category is acquiring knowledge, used essentially like a search engine (which has produced the widely recognized candidate for future replacement: Google). The second category is creating new content, most commonly standing in for humans to write poems and articles, and even to write or debug code (which has produced the other widely recognized candidate: Stack Overflow). The third category is probing cognitive limits, such as judging and predicting hypothetical scenarios, along with many challenges to ethical boundaries: asking it to destroy the world, testing it for racial discrimination, or the famous question of whom Jia Baoyu should marry.

For the first category, acquiring knowledge: after seeing more of people's attempts, most have concluded that ChatGPT cannot replace Google. The two are, in essence, built on very different principles.

Google ranks the existing content on the internet about your search topic by relevance and surfaces the best result for you. This means that at any given moment, the same input under the same conditions produces a stable, consistent output, one grounded in "facts": the conclusions that people have already written down on the internet. Those conclusions are probably reasonably accurate, having been argued over by many people; a more accurate conclusion tends to be recognized and adopted by more people, gets mentioned and cited more often, and therefore ranks higher.
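To make the contrast concrete, here is a minimal sketch of deterministic relevance ranking in the spirit of a search engine. The TF-IDF weighting, the cosine similarity, and the three-document corpus are illustrative assumptions of mine, not Google's actual algorithm.

```python
import math
from collections import Counter

# A toy corpus standing in for "conclusions already written on the internet".
corpus = [
    "chatgpt is a large language model trained by openai",
    "google ranks web pages by relevance to the search query",
    "search engines index existing documents and return the best match",
]

def tf_idf_vectors(docs):
    """Weight each term by its frequency in a doc, discounted by how many docs contain it."""
    doc_terms = [Counter(d.split()) for d in docs]
    n = len(docs)
    df = Counter(t for terms in doc_terms for t in terms)
    return [
        {t: c * math.log((1 + n) / (1 + df[t])) for t, c in terms.items()}
        for terms in doc_terms
    ]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Rank docs by similarity to the query: same input, same ranking, every time."""
    vecs = tf_idf_vectors(docs + [query])
    doc_vecs, query_vec = vecs[:-1], vecs[-1]
    return sorted(zip(docs, doc_vecs), key=lambda dv: cosine(query_vec, dv[1]), reverse=True)

for doc, _ in search("google search relevance", corpus):
    print(doc)
```

The point of the sketch is determinism: the ranking is a pure function of the query and the corpus, which is why the same search keeps returning the same "facts".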

ChatGPT, by contrast, has tuned its hundreds of billions of deep-learning parameters on the data and content it has read across the internet, and then "pattern matches" your question to produce an answer it considers probabilistically correct. Its answer is never identical to any existing text (it has to avoid plagiarism), and it is often wrong to the point of absurdity. That is why commentators at outlets such as MIT Technology Review and Fortune have called it a "confident bullshitter".
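As a caricature of the other side, here is a minimal sketch of probabilistic next-word generation. The hand-written bigram table and sampling loop are my own illustrative assumptions, nothing like a real model with hundreds of billions of parameters, but they show where the nondeterminism, and the occasional confident nonsense, comes from.

```python
import random

# Toy bigram "model": for each word, the possible next words and their weights.
# A real LLM learns these probabilities over tokens from internet-scale text;
# this hand-written table is purely illustrative.
bigrams = {
    "the":     {"capital": 0.5, "answer": 0.3, "model": 0.2},
    "capital": {"of": 1.0},
    "of":      {"france": 0.6, "mars": 0.4},  # plausible and absurd continuations alike
    "france":  {"is": 1.0},
    "mars":    {"is": 1.0},
    "is":      {"paris": 0.7, "olympus": 0.3},
}

def generate(start, max_words=6, seed=None):
    """Sample a continuation word by word from the bigram distribution."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = bigrams.get(words[-1])
        if not dist:
            break  # dead end: no known continuation
        words.append(rng.choices(list(dist), weights=dist.values())[0])
    return " ".join(words)

# Unlike the deterministic ranking above, two runs can diverge.
print(generate("the", seed=1))
print(generate("the", seed=2))
```

Depending on the seed, you might get "the capital of france is paris" or "the capital of mars is olympus": fluent, confident, and not always true.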

On the bright side, AI engines are strong learners: given enough data to correct these wrong answers, the probability of a correct answer will rise over the long run. That is a normal process of learning and trial and error. Human history also took hundreds of years to correct many cognitive errors, and many people paid a heavy price for those lessons.

Once you already know a judgment is wrong or biased, correcting an AI engine is far easier than correcting a stubborn or habit-bound person.

And ChatGPT attracted millions of users, and with them a stream of learning data, in just a few days; you can imagine how astonishing its pace of improvement will be, perhaps even faster than AlphaGo learning to beat Ke Jie.
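Reusing the `bigrams` table from the earlier sketch, here is one naive way such user feedback could nudge a model: shift probability mass away from a continuation a user flags as wrong and toward the correction. Real systems are trained with far more sophisticated techniques (OpenAI describes reinforcement learning from human feedback), so treat this purely as a caricature of the feedback loop.

```python
def apply_feedback(table, prev_word, produced, correct, rate=0.5):
    """Shift probability mass away from a flagged continuation toward the correction."""
    dist = table[prev_word]
    dist[produced] = dist.get(produced, 0.0) * (1 - rate)  # penalize the wrong answer
    dist[correct] = dist.get(correct, 0.0) + rate          # reward the corrected one
    total = sum(dist.values())                             # renormalize to a distribution
    for w in dist:
        dist[w] /= total

# A user flags "the capital of mars": a single correction already
# shifts most of the mass from 'mars' toward 'france'.
apply_feedback(bigrams, "of", produced="mars", correct="france")
print(bigrams["of"])
```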

It is like when we were in college and professors strictly prohibited citing Wikipedia directly in papers: partly to cultivate the critical-thinking skill of judging an argument's accuracy, and partly because Wikipedia had not yet accumulated a critical mass of volunteer verifiers, so much of its information was uncorrected error. Wikipedia is still not perfect, but it has improved qualitatively over the past decade and more, and I believe the answers AI engines give will likewise grow more accurate.

As for the second category, the demand for creating new content is, you could say, a shortcut the industry has forced into existence. Marketing requires endless copy and articles to attract attention, so when an AI engine can mass-produce content that looks passable, why not use it? Much of that content is empty at a glance, and its logic does not always hold together, but that hardly matters: attention and click-through rates are achieved, the tasks get completed, and everyone is happy.

In the foreseeable future, most promotional copy and articles on the market will be machine-generated. Under the temptation of cheaply substituting for human labor, bad money drives out good, and most marketing content will become unreadable. Of course, some buyers will pay for slightly costlier content, so some of it will still be verified and edited by humans, and there will always be high-end marketing work from senior creatives (perhaps partly AI-assisted), but the bottom of the pyramid will take an ever larger share.

Less creative work, such as debugging, is a poor fit for human brains: the relationships among instructions and parameters in much code are too complex. Why not let the machine handle it? It will be more efficient and accurate than a human. As for learning to write code, I still believe that knowledge which comes too easily is not cherished, so don't expect ChatGPT to replace learning. GitHub Copilot may have become an indispensable partner for young people learning to code, but I still believe even the most advanced AI cannot replace human creativity and wisdom.

For the third category: although ChatGPT already handles some factual and ethical boundaries better than GPT-3, such as recognizing that Columbus is not a 21st-century figure, many loopholes remain. As with my judgment on the first category, even though ChatGPT still makes silly mistakes, like pairing Jia Baoyu with his grandmother Jia Mu, the flood of users and data will let it discover and correct errors quickly, making the AI engine ever more powerful.

Think back: a few hundred years ago, humans still thought consanguineous marriage was proper, and that "the ruler should be a ruler, the minister a minister, the father a father, and the son a son." When I was a child, I was scolded for arguing with my grandfather. Problems have to be viewed from a developmental perspective: while you are amusing yourself with its stupid answers, it is updating and improving itself day and night.

Of course, one boundary may be insurmountable: the difference between carbon-based and silicon-based life, and the differences in how each understands life and survival that flow from that essence. But I believe that as long as people keep simulating the ethical framework of human nature and human society with ever more accurate parameters and models, the AI engine can make better value judgments on top of them. And if someone manages to model "love" well (assuming we humans truly understand love ourselves), AI may even evolve the ability to love, like the robot numbered 89757 that JJ Lin once sang about.

--

Charlie Liu

Co-Founder & COO @ Sora Union | ex-Strike, Adyen & Templeton Global Macro | Storyteller @wearemeho | Sommelier/Winemaker