Elon Musk's lesser-known venture - The GPT-2 Model and the future of Machine Learning

The world has changed a lot since the days of newspapers. From hand-held smartphones to electric cars, we have integrated technology into nearly every aspect of our daily lives. In doing so, we are not just extending our own senses; we are also paving the way for a generation of computers fundamentally different from anything we are used to seeing, or for that matter expecting, from machines. Artificial Intelligence is not just a buzzword in tech anymore, it is THE word in tech. Now, I am not a big fan of the AI-takes-over-the-world theories, and only a few months ago I could have confidently told you, “Trust me, none of that’s gonna happen anytime soon.” But now, I am not so sure anymore. Over the past few months, I have been researching many of the most recent Machine Learning models for my project, and I came across some serious implications, indirect ones, that these advanced systems can have.


The OpenAI GPT-2 Model


For those of you who don’t know, OpenAI is an AI research company co-founded by the same guy who is chalking out a plan to put humans on Mars, Elon Musk (he left its board in 2018, but the association stuck). The company does much of its research in the open and has turned a lot of heads in recent years with its innovations in the field: from OpenAI Five, the Dota 2 AI that beat OG, the reigning world champions, in 2019, all the way to the amusing MuseNet, which generates beautiful music in the style of many greats, from Mozart to the Beatles. Then, towards the end of last year, they released the full version of GPT-2, the second iteration of their famous text-generation model. The idea was simple. They built a large language model (and here I am oversimplifying, because it was not as easy as it sounds) and trained it on roughly 40 GB of text scraped from all over the Internet. For videos or images, 40 GB is not a lot of data; for plain text, you can imagine how massive that same amount is. The resulting GPT-2 model, at 1.5 billion parameters, is far larger and more capable than its predecessor, the original GPT.
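To make “text generation” concrete: the model repeatedly predicts the next token given everything written so far, and you sample from those predictions. Below is a minimal sketch of doing this with the publicly released GPT-2 weights via the Hugging Face transformers library; the model name “gpt2” (the smallest public checkpoint) and the sampling parameters are my illustrative choices, not anything prescribed by OpenAI.

# A minimal sketch of sampling from GPT-2, assuming the Hugging Face
# `transformers` library is installed (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling: at each step, draw the next token from the 40 most
# likely candidates instead of always taking the single best one.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))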

The company was founded on open principles, and as such, most of its work is released publicly once development is done. That was not the case here. In February 2019, a post on the OpenAI Blog titled “Better Language Models and Their Implications” revealed the GPT-2 model and laid out the ethical reasons why the full model would not be released to the open-source community at that time.


The Ethical Question


The question in the spotlight was this: the GPT-2 model demonstrated state-of-the-art text generation, and its outputs were exceptionally convincing. There is a now-famous sample generated by the model, an article about the shocking discovery of a unicorn species in a remote part of the Andes Mountains, that I believe even you would find convincing at first glance; the words are put together so well that you might never suspect a human didn’t write them. Now, what happens if the same model is used to generate fake news? One small, convincing article on an obscure news website could be all it takes to start a riot or fuel a conflict. And even if you don’t believe a single article can have that impact, there are plenty of other ways a malicious user could abuse the model: scamming people on online forums, for instance, or flooding YouTube, or essentially any social network, with fake comments. People do these things even now; imagine how much more damage they could do at scale with this technology by their side. This is what led the company to go against its own norms and hold back the full model.

Although they released the source code and some smaller trained models over the following months, it took a full nine months of heated debate in the open-source community before OpenAI finally released the complete model. And the release was not unplanned. In the meantime, researchers from the MIT-IBM Watson AI Lab, in collaboration with researchers from Harvard NLP, created a tool named GLTR (Giant Language model Test Room) aimed at, as they put it, “Catching a Unicorn”. The tool can fairly reliably flag generated text, especially GPT-2’s output, that humans would otherwise struggle to recognize: it highlights how predictable each word in a passage is to a language model, and machine-generated text tends to consist almost entirely of highly predictable words, a pattern human writing rarely shows.
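As a rough sketch of that core idea (my own illustration, not GLTR’s actual code): run GPT-2 over a passage and record, for each token, how many tokens the model considered more likely than the one that actually appears. If nearly every token lands in the model’s top handful of guesses, the passage was probably machine-written.

# Sketch of the detection idea behind GLTR (not the official implementation):
# rank each token of a passage under GPT-2's predicted distribution.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        next_id = ids[0, pos + 1]
        # How many tokens did the model consider more likely than the real one?
        rank = (logits[0, pos] > logits[0, pos, next_id]).sum().item()
        ranks.append(rank)
    return ranks

ranks = token_ranks("The unicorns spoke perfect English, the researchers said.")
# A passage where most ranks are tiny looks machine-written.
print(sum(r < 10 for r in ranks) / len(ranks))

GLTR itself visualizes these ranks as colored buckets (top 10, top 100, top 1,000) so a human reviewer can eyeball the pattern at a glance.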


The Human Factor


While discussing the ways AI can harm us, we often forget that it is just another step in humanity’s progress, and that with any tool, from the early flints that sparked our first fires all the way to modern computers, it was never the technology that was inherently bad, but the intentions of its users. If we can change our perspective towards the use of technology, and I realize how difficult that is, we can surely shape a future where disruptive innovations overcome the worst of our problems. If we can’t, then the same innovations will always remain a threat to our safety. So, in conclusion: we can design the machine part of the equation to be as safe as possible, but the real question is what we are going to do about the human part of the equation.


References


Better Language Models and Their Implications - openai.com/blog

Catching a Unicorn with GLTR - gltr.io


Tags: #ai #openai #future
