“When the dopamine subsides and the excitement of winning fades, one thing becomes palpably clear. It is the bitter taste of defeat, the mind-numbing monotony of the grind, and the maddening despair of falling short — over and over again — that makes winning worth wanting.”

The quote you just read wasn’t spoken by some great philosopher, a world leader, or even a pro athlete. No, it was generated by DeepLeffen, a natural-language bot based on the 1558M (1.5-billion-parameter) version of OpenAI’s GPT-2 model.

If you don’t know what half those words mean, don’t worry. GPT (short for “generative pre-trained transformer”) is a program that autocompletes text. Given a prompt or the start of a sentence, it finishes it with what it predicts is the most likely continuation. “Apple” might yield “apple pie” or “apple tree” or “Apple iPhone.” It probably won’t yield “apple sympathetic” or “apple hypothesis.”
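For the curious, here is roughly what that autocomplete looks like in practice. This is a minimal sketch using the publicly released GPT-2 model through the Hugging Face “transformers” library (my choice of tool for illustration; the column isn’t tied to any particular one):

```python
# A minimal sketch of GPT-style autocomplete, using the publicly
# released GPT-2 model via the Hugging Face "transformers" library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue the prompt "Apple".
result = generator("Apple", max_new_tokens=10)
print(result[0]["generated_text"])
```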

The big thing about GPT-3 is how BIG its library of training text is: the English version of Wikipedia makes up only 0.6% of the total. The program is fed everything from web articles to cooking recipes and fanfiction, allowing it to respond to a staggering variety of prompts and situations. Furthermore, users can fine-tune the model on specific texts they want the bot to imitate, as in the sketch below. DeepLeffen is trained on the 22.9k tweets of famous Super Smash Bros. Melee player William “Leffen” Hjelte and on comments from Reddit’s r/smashbros community.
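Imitating a particular voice is done through “fine-tuning”: continuing the model’s training on a small, targeted dataset. DeepLeffen’s creator hasn’t published their exact method, but here is one plausible way it could look, assuming the Hugging Face “transformers” and “datasets” libraries and a hypothetical tweets.txt file of collected tweets:

```python
# A rough sketch of fine-tuning GPT-2 on a custom text file, assuming
# the Hugging Face "transformers" and "datasets" libraries.
# "tweets.txt" is a hypothetical file with one scraped tweet per line.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Turn the raw tweets into token IDs the model can train on.
dataset = load_dataset("text", data_files={"train": "tweets.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Continue training the pretrained model on the new text.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```

After a few passes over the data, the model starts producing text in the style of whatever it was fed -- in DeepLeffen’s case, tweets and Reddit comments.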

Fine-tuned to generate pseudo-philosophical sentences and surreal anecdotes, DeepLeffen has amassed 64,500 followers since its creation in April 2020. And it isn’t the only bot out there -- GPT can power numerous programs: it can solve language puzzles, generate code, make text-based games, and even write music.

While GPT-2 and GPT-3 have taken the tech world by storm, the development and use of these language models have stayed largely out of public view. That might be for good reason, too. Because of its all-encompassing and ever-expanding library, the model has also been fed the worst of what the internet has to offer: fake news and propaganda. As a result, the program can generate authentic-sounding articles at a very high rate. In fact, OpenAI itself initially released only a smaller version of GPT-2 for public use, citing “safety and security concerns.”

GPT-3, however, is different. OpenAI has released an API on its website that allows organizations it deems reliable to request access, saying, “We will terminate API access for obviously harmful use-cases, such as harassment, spam, radicalization, or astroturfing.” Approved users can input their own text, code, or what have you and see what the model can do for them.
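At the time of GPT-3’s beta, a request to that API looked roughly like the sketch below, which assumes the “openai” Python package and an approved API key; the engine name and prompt are just illustrations:

```python
# A sketch of a GPT-3 API request as it looked during the beta,
# assuming the "openai" Python package and an approved API key.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI once access is granted

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine offered at the time
    prompt="Write a one-sentence bedtime story about a robot:",
    max_tokens=50,
    temperature=0.7,   # higher values make the output more varied
)
print(response.choices[0].text)
```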

Is this the start of a robot takeover? The concept of an AI that can answer any question and write any article sounds scary, but a few examples might make you think otherwise. The program, after all, is built on the relations between words, and it sometimes struggles with cause and effect or basic physics. A prompt like “The cheese will melt in the ____” might lead the bot to respond with “fridge,” because of a fridge’s common association with food and temperature. The program relies on consistent, reliable information -- which is why its library is so huge.
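You can probe that weakness yourself. The sketch below, again assuming the Hugging Face “transformers” library, samples several continuations of the cheese prompt; nothing guarantees a physically sensible answer:

```python
# Sampling several GPT-2 continuations of the cheese example, assuming
# the Hugging Face "transformers" library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Draw five continuations; the model picks words by association,
# not by reasoning about what actually melts cheese.
for completion in generator("The cheese will melt in the",
                            max_new_tokens=5, num_return_sequences=5,
                            do_sample=True):
    print(completion["generated_text"])
```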

Regardless, GPT-3’s utility cannot be overstated. The program could find use in a wide variety of sectors, from programming to authoring children’s books. A human hand might be needed to guide it at this point, but there’s no telling where its 175 billion parameters might take it. After all, the library can always expand.

Daniel Ma will be a senior at Century High School. Send comments on teen columns to Jeff Pieters, jpieters@postbulletin.com.