How Artificial Intelligence Wrote an Article in ‘The Guardian’

Zihad Hossain
8 min read · Dec 3, 2020

Artificial intelligence, or AI, is the buzzword of this century, much as television was in the last century and the computer was by its end. From a small undergraduate university project to advanced medical technology, you will find AI, machine learning or deep learning mentioned everywhere. At the same time, there is a fear in many minds that artificial intelligence may one day surpass humans. Wealthy businessmen like Elon Musk have often warned that artificial intelligence could one day establish a dictatorship over the world. The Guardian recently published an article generated almost entirely by a language model called GPT-3. In this article I will give a simple explanation of how a computer manages to do such seemingly intelligent work, and then discuss whether artificial intelligence will really be able to take over the world. I may not be able to explain everything exhaustively, but I hope the writing will feed your thoughts.

Photo by Andy Kelly on Unsplash

There are many indicators of intelligence, and the use of language is a distinctive feature of human intelligence. By using language meaningfully, we can write our words down and preserve them, and also express abstract thoughts to others. The field of computer science that works with language was originally called Computational Linguistics, and is now widely marketed as Natural Language Processing, or NLP.

NLP has many applications in real life; the most familiar are Google Assistant, Siri, Cortana and Alexa. These technologies let us hold human-like conversations with machines. So how can such technologies learn language the way humans do?

Acquiring language skills involves several steps. After learning pronunciation, the next step is building meaningful sentences out of the different words of a language. Consider a sentence like "Shakib is the best all-rounder in the world": I had to put every word in the right place to say it, and the sentence also carries information. For a computer, this is a daunting task. First you have to teach the computer which word can meaningfully follow which word; carrying on a meaningful conversation through questions and answers is an even bigger challenge.

Researchers have come up with a number of methods for teaching a computer which word should follow which. For us this comes easily. For example, the word ‘king’ is related to the word ‘queen’, and ‘mother’ is related to ‘father’. Again, if I talk about cricket, the word ‘cricket’ is related to batsmen, balls, wickets, Shakib, Mushfiqur, the World Cup and so on. We understand these relationships effortlessly, but how do we explain them to a computer with algorithms? A common approach is to convert words into vectors. If you studied physics or mathematics in college, you will know what a vector is: a directed quantity, that is, something that expresses not only a magnitude but also a direction.

Suppose the word ‘cat’ is represented by the vector [1, 5, 2]. You can easily plot this vector in a three-dimensional (3D) space, with its three components lying along the X, Y and Z axes. Three dimensions are easy to visualise, but the more dimensions you use, the better the results tend to be. When I was working with Bangla words, I converted every word in the dataset into a 32-dimensional vector.
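As a toy illustration of the idea (the numbers below are invented, not taken from any trained model), a word vector is simply an array of numbers, one per dimension:

```python
import numpy as np

# A made-up 3-dimensional vector for the word "cat", as in the example above.
cat = np.array([1.0, 5.0, 2.0])

# Vectors for other words live in the same 3D space (again, invented values).
dog = np.array([1.2, 4.6, 2.1])
car = np.array([6.0, 0.4, 8.9])

# Each component is a coordinate along one axis (X, Y, Z), so the vector can be
# plotted as an arrow in 3D space. Real models use many more dimensions,
# e.g. the 32 mentioned above for Bangla words.
print(cat.shape)  # (3,)
```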

The advantage of converting words to vectors is that we can easily measure the distance between two word vectors, or the angle between them, and thus understand how closely the words are related. If you plot the vectors on a graph, you will see similar words taking up nearby positions. This also makes it easier to teach a computer which word can follow which. The technique for learning these vectors is quite interesting as well. Each word is first assigned an arbitrary, random vector. Then, for each word in a sentence, a neural network is asked to predict it from the words around it. Training the network in this way gradually adjusts those randomly initialised vectors until they capture which words tend to follow which, and the vectors end up doing their job properly.
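Here is a minimal sketch of that training idea using the open-source gensim library’s Word2Vec implementation (the choice of tool and the tiny corpus are my own assumptions for illustration; the article does not name a specific library):

```python
from gensim.models import Word2Vec

# A tiny toy corpus: each sentence is a list of word tokens.
sentences = [
    ["shakib", "is", "the", "best", "all-rounder", "in", "the", "world"],
    ["mushfiqur", "scored", "runs", "in", "the", "world", "cup"],
    ["the", "batsman", "hit", "the", "ball", "past", "the", "wicket"],
]

# vector_size=32 mirrors the 32 dimensions mentioned above; the vectors start
# out random and are adjusted as the network learns to predict nearby words.
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, sg=1, epochs=100)

print(model.wv["shakib"].shape)        # (32,) -> the learned vector for "shakib"
print(model.wv.most_similar("world"))  # words whose vectors lie closest (toy corpus, so noisy)
```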

Once words are converted into vectors, the relationship between them is easy to measure: the smaller the angle between two word vectors, the more closely the two words are related.
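To make that concrete, here is a small sketch of the usual way the angle is measured, via cosine similarity (the vectors below are invented for illustration, not taken from a trained model):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(angle) = (u . v) / (|u| * |v|); a value near 1 means a small angle,
    # i.e. the two word vectors point in nearly the same direction.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Invented 3-dimensional vectors, purely for illustration.
king  = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.2])
cat   = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # close to 1 -> small angle -> strongly related
print(cosine_similarity(king, cat))    # smaller value -> larger angle -> weakly related
```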

Probability is also central to this work. The computer is essentially predicting one word after another based on the data it was trained on, and the researchers’ goal is to make that predicted output as meaningful as possible. But there is a deeper problem here, which can be seen both as a limitation and as a philosophical question: the computer produces this output unconsciously, that is, the computer itself is not aware of what it is producing.
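To make the “predicting one word after another” idea concrete, here is a small sketch using the open-source GPT-2 model from the Hugging Face transformers library (my choice for illustration; the Guardian piece used GPT-3, which is only accessible through OpenAI’s API):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Shakib is the best"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Turn the scores at the final position into a probability distribution
# over the next token, then inspect the five most probable continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {prob.item():.3f}")
```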

The philosopher John Searle presented this problem in the last century with a famous thought experiment. He imagined a closed room with a man sitting inside. You pass him an English document through a window, and he hands back a Chinese translation. The twist is that the man in the room does not know Chinese. He has a book of rules, and if those rules are followed properly, an English document can be converted into Chinese. But the man following the rules has no idea what is actually going on, or whether the result really is a Chinese translation. In other words, the person inside the room is completely unaware of the process, yet we still get correct output from the room. The room works as a black box here, just like today’s computers: a computer has no sense of its own, no awareness of its work, yet it still answers our questions correctly (Google Assistant, Siri, Cortana and Alexa). Many artificial intelligence researchers have tried to answer Searle’s question, but it remains an unresolved issue. This is also why many researchers object to the very name Artificial Intelligence: the technologies being sold today under the label of AI are not really AI. We still have to wait for real AI.

Now the question is: can such an unconscious device or system really dominate us?

At this stage we have to look at how we already live. Aren’t we already relying on such “intelligent” systems? From finding low-traffic routes on Google Maps, to searching for products on e-commerce sites, to the posts surfaced on Facebook and the suggestions on the YouTube homepage, this so-called AI sits behind all of it.

How a technology gets used is, at the end of the day, up to us, and the same is true of AI. And AI is not just a mad scientist’s garage project; there is business involved. Stuart Russell gives a very good example in this regard. Suppose that at some point we start keeping robots at home, alongside our cows, goats and cats. Unlike a lazy cat, these robots will not just eat and sleep; they will do the housework, clean the house and cook.

But what if this robot does not understand your relationship with your cat, and cooks the cat and serves it for dinner to surprise you? That’s it: the company instantly loses every customer who keeps a pet cat. There will be a storm of criticism and the stock price will fall. Put such a product on the market without solving this small but sensitive problem, and it will not take long for the business to go into the red. The industry has to weigh such seemingly small risks, which are harmful not only to people but also to the business itself. So the question of how much control should be handed to AI is very much on researchers’ minds.

“There’s also no better way to kill the field of AI than to have a major control failure, just as the nuclear industry killed itself through Chernobyl and Fukushima. AI will kill itself if we fail to address the control issue.”
- STUART J. RUSSELL

GPT-3’s output for The Guardian is truly remarkable; in generating such convincing predictive text, scientists have reached a real milestone. Yet GPT-3 is nothing fundamentally new. Many experts say it differs little from the previous version, even if there has been some improvement. But the real questions are: was the output creative? Did it contain any information that was new to us? Can we really say, based on this output, that the machine has become truly intelligent?

Only the first sentence of the short text in this figure was given as input; the rest was generated with GPT-2, much as the “robot” wrote for The Guardian. Courtesy: Hugging Face
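For readers who want to reproduce something like the figure above, here is a hedged sketch using the Hugging Face transformers library and the freely available GPT-2 model (the prompt and generation settings are my own illustrative choices, not the exact setup behind the figure or the Guardian article):

```python
from transformers import pipeline, set_seed

# Load a small, freely available text-generation model.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

# Give the model only an opening line and let it write the rest.
prompt = "I am not a human. I am a robot."
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```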

One thing is certain, though: models of this kind keep pushing the technology forward. Gmail has long used a model that suggests consistent wording while you compose an email, and similar models predict the next piece of syntax while you code; some can even write an entire function automatically. Such models make our work easier, and we feel comfortable with them, because if something benefits us we tend to accept it. After all, how many of us would still want to hack through the forest with blunt Stone Age tools?

Many of the problems we see in AI right now, such as racism and misogyny, are largely the fault of conscious human beings. Amazon’s artificial intelligence system for screening job applicants was found to discriminate against women. Hardly surprising: the data it was trained on was biased, reflecting the reality of a misogynistic workplace. If you let a machine make predictions from such data, it will produce biased output. Researchers are now trying to de-bias such datasets.

After all, the technologies now sold in the name of artificial intelligence are still far from genuinely intelligent; they appear intelligent mainly by fooling you, at least by philosophical standards. Yet these technologies already have a huge impact on your life, and you have long been willing to hand these sophisticated systems a little bit of control over it. For what? Quality service, comfort and convenience.

If you got some benefit from this story, don’t forget to clap and follow me for more stories like this!

