ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days hit a million users. It’s being used so much that its servers have reached capacity several times.
OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)
What it can (and can’t) do
ChatGPT is very capable. Want a haiku on chatbots? Sure.
How about a joke about chatbots? No problem.
ChatGPT can do many other tricks. It can write computer code to a user’s specifications, draft business letters or rental contracts, compose homework essays and even pass university exams.
Just as important is what ChatGPT can’t do. For instance, it struggles to distinguish between truth and falsehood. It is also often a persuasive liar.
ChatGPT is a bit like the autocomplete on your phone. Your phone’s autocomplete is trained on a dictionary of words, so it can finish the word you’re typing. ChatGPT is trained on pretty much all of the text on the web, and can therefore complete whole sentences, or even whole paragraphs.
However, it doesn’t understand what it’s saying; it only predicts which words are most likely to come next.
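To make the autocomplete analogy concrete, here is a minimal sketch in Python of next-word prediction using a toy bigram model (simple word-pair counts). This is purely illustrative: the corpus and the predict_next function are invented for this example, and ChatGPT’s real model is a vastly larger neural network, but the underlying idea of picking a statistically likely continuation is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "pretty much all of the text on the web".
corpus = (
    "chatbots are fun . chatbots are software . chatbots can write code ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`, like autocomplete."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# The model "completes" text by picking the likeliest next word.
# It has no idea what the words mean, only which ones tend to follow which.
print(predict_next("chatbots"))  # -> "are" (seen twice, vs "can" once)
print(predict_next("write"))     # -> "code"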
Open only by name
In the past, advances in artificial intelligence (AI) have been accompanied by peer-reviewed literature.
In 2018, for example, when the Google Brain team developed BERT, the neural network on which most natural language processing systems are now based (and on which we suspect ChatGPT is built too), the methods were published in peer-reviewed scientific papers and the code was open-sourced.
And in 2021, DeepMind’s protein-folding software AlphaFold 2 was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.