In an introductory computer science course, my class was assigned homework to write a Python program that outputs the Fibonacci sequence using recursion. A relatively difficult concept for novice programmers, recursion is a technique in which a function "calls" itself to solve a problem, rather than iterating through an explicit loop.
Later that night, I sat in my dorm drinking Red Bull and error-checking the code, pulling my hair out for hours as I tried to conceptualize the problem until the program finally worked.
When OpenAI released ChatGPT, an artificial intelligence chatbot, last fall, I asked it the same question.
ChatGPT generated the code instantly.
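For reference, the classic recursive solution the assignment calls for looks something like this (a minimal sketch; the exact assignment specification may have differed):

```python
def fib(n):
    """Return the nth Fibonacci number using recursion."""
    if n < 2:  # base cases: fib(0) = 0, fib(1) = 1
        return n
    # The function "calls" itself on two smaller subproblems.
    return fib(n - 1) + fib(n - 2)

# Print the first ten terms of the sequence.
print([fib(i) for i in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The elegance is also the trap: each call spawns two more, so the naive version gets exponentially slow for large inputs, which is part of why it is such a notorious first homework problem.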
The world has sat in anticipation of the development and implementation of AI for decades. Concerns over mass automation, worker displacement and a "Terminator"-style hostile machine takeover make the disruptive technology's rollout the most polarizing since the advent of nuclear weapons.
ChatGPT's sudden entrance into public life drew universal interest, both in its current functionality and its future applications. In just two months, it smashed the record for the fastest-adopted technology ever, reaching over 100 million users.
The chatbot uses an advanced set of "neural networks," or layers of input, processing and output systems. These "neurons" are loosely modeled on the functions of the human brain. Once the system is built, it is trained on vast amounts of data. The final product looks like a Slack channel, but on the other side is a supercomputer capable of answering a near-infinite number of questions with shockingly thorough results.
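The input-processing-output idea can be made concrete with a toy sketch (this is an illustration of the concept only, not ChatGPT's actual architecture, and the weights below are arbitrary rather than learned):

```python
import math

def neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of its inputs plus a bias,
    squashed by a sigmoid activation to a value between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def tiny_network(inputs):
    """A tiny two-layer 'network': two hidden neurons feed one output neuron.
    Real networks learn their weights from training data; these are made up."""
    hidden = [neuron(inputs, [0.5, -0.6], 0.1),
              neuron(inputs, [0.8, 0.2], -0.3)]
    return neuron(hidden, [1.0, -1.0], 0.0)

print(tiny_network([1.0, 0.0]))  # a single output between 0 and 1
```

Systems like ChatGPT stack the same basic idea into billions of weights across many layers; training is the process of nudging those weights until the outputs become useful.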
The seemingly instantaneous breakthrough of ChatGPT is the culmination of a long process of adopting new types of computer processors.
Previously, central processing units (CPUs) were the building blocks of all computers. The "brain" of electronic systems, they process, store and output data and perform arithmetic and logical operations. ChatGPT and newer forms of AI instead rely on graphics processing units (GPUs), which can handle large amounts of data and perform many tasks at once.
“The methodology of using GPUs, graphical processing units, kind of like graphics cards,” said Abe Kazemzadeh, a professor at the University of St. Thomas. “That kind of really sped up and allowed neural networks to scale to much bigger networks.”
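The reason GPUs help is that a neural network's arithmetic is massively parallel: each neuron's weighted sum is independent of every other neuron's, so they can all be computed at the same time. A toy illustration in Python (threads stand in for GPU cores here; this is not how ChatGPT itself is implemented):

```python
from concurrent.futures import ThreadPoolExecutor

# A layer's work is one matrix-vector product: each row below is one
# neuron's weights, and each neuron's weighted sum is independent.
weights = [[0.5, -0.6, 0.1], [0.8, 0.2, -0.3], [-0.4, 0.9, 0.7]]
inputs = [1.0, 2.0, 3.0]

def weighted_sum(row):
    return sum(w * x for w, x in zip(row, inputs))

# On a GPU, thousands of these sums run simultaneously on dedicated cores;
# a thread pool merely illustrates that no sum has to wait for another.
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(weighted_sum, weights))
print(outputs)
```

A CPU with a handful of cores must mostly take these sums in turn; a GPU with thousands of simple cores takes them all at once, which is the scaling Kazemzadeh describes.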
The new technology, while currently confined to a souped-up call-and-response mechanism, has sent shock waves across every industry as people come to the realization that the nature of work is on an unknown trajectory.
Look no further than the Hollywood strikes by the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), where writers and actors are demanding both higher pay and safeguards over the adoption of AI.
In many creative professions, content creators are worried that the free-response capabilities of technology such as ChatGPT threaten their livelihoods. Among the many party tricks OpenAI's chatbot can perform is writing movie and television scripts.
In an already cutthroat industry, many financially struggling writers and actors are worried the value of their work will diminish — if not totally disappear — when AI is fully integrated into Hollywood’s business model.
Technology like ChatGPT produces scripts by utilizing its neural networks to model existing stories and write novel content based on user suggestions.
“Stories were one of the training data sources. So that is potentially one of the reasons why it’s so good at generating stories,” Kazemzadeh said.
Even more shocking, actors are worried deepfake-style AI can effectively clone their image and likeness and use it in an infinite number of productions without the actor ever needing to be present on set.
The SAG-AFTRA strike is a microcosm of the broader sphere of concern over AI, both in the implications its workers face and in its uncertain outcome.
The manufacturing industry has already displaced large swaths of its workforce with automated production processes; however, the current wave of AI threatens several professions, including but certainly not limited to paralegals, financial advisors, bookkeepers, customer service representatives and programmers themselves.
On top of that, the nature of privacy, academic integrity, intellectual property and copyright laws all come into question.
Attempting to draw out a timeline for AI is a futile effort. Developments in the field of technology are sporadic, and any day a breakthrough or setback could dramatically alter the trajectory of AI.
Yet, existing AI does have limitations. The open-ended nature of ChatGPT itself can hamper some of its own utility.
“It may generate something that is not an exact match of what you’re trying to generate, so it’s not exactly a category you asked for and it can also be somewhat biased by the training data,” Kazemzadeh said.
As a casual tool, today’s AI advancements can be incredibly powerful, but for more advanced tasks, further specialization is needed.
“So, the fact that it’s very open-ended also makes the limitation,” Kazemzadeh said.
While the downsides of AI are easy to identify and incredibly well-litigated, everyday people will see incredible benefits from innovation in areas such as medicine, transportation and fraud detection.
Like previous disruptive technologies such as the steam engine, the lightbulb, the computer and the cell phone, AI will dramatically alter how individuals work, interact with each other and live their everyday lives.
However, unlike previous times in history, today’s major inventions do not seek to change how people live but replicate people themselves.
The public has largely accepted the adoption of AI as a quasi-fourth law of motion. If that is the case, what will the equal and opposite reaction be?