For decades, those who work closely with technology have been watching the evolution of artificial intelligence with equal parts excitement and trepidation. While AI has opened the door for new breakthroughs in innovation and productivity, its rapid growth has also raised questions about the morality of machines handling tasks once performed by humans. And now, the emergence of generative AI has renewed those questions in a major way.
Keep reading to learn what generative AI is, why it’s been garnering headlines and raising controversies, and what it might mean for the future of work.
Generative AI refers to a form of artificial intelligence that generates new material, such as written content, music, visual artwork, program code, and more. Trained on enormous datasets, these tools prompt a user to input a series of descriptors and then use that information to create unique material.
Generative AI still requires human input, of course; the user has to enter a specific prompt in order for the model to create content – and then review and make edits to the work the model produces.
While tech companies have been experimenting with and fine-tuning this technology for years, generative AI stepped into the mainstream in 2022, largely thanks to two specific tools: ChatGPT and DALL-E.
ChatGPT, a tool launched by technology company OpenAI in November of 2022, produces an essay-style answer to a question or request entered by the user. And on the visual side, platforms like DALL-E have taken the internet by storm over the past year, churning out graphic designs based on details the user requests.
From detail-rich abstract artwork to complete essays, the work that these tools can produce in a matter of minutes can be nearly indistinguishable from something a very talented human artist could make.
Tools like ChatGPT and DALL-E may have produced a lot of laughs on social media, but their impact on the relationship between humans and technology is very serious. Venture capital investment in generative AI has increased to $2.1 billion, a 425% increase since 2020 – a sign that these platforms represent a trend that extends far beyond fun and games.
Concerns about the overreach of artificial intelligence are not exactly new. Automation anxiety – the common fear among workers that they’re going to lose their jobs to AI – has spiked in recent years, as the rapid acceleration of digital transformation has coincided with periods of economic downturn. “Now would be the time,” many employees have thought, “for them to replace me with a tool they pay a few thousand bucks a year for.”
The high-profile arrival of tools like ChatGPT and DALL-E has only exacerbated these concerns. Moreover, the ease with which users are generating AI-produced art has led to a broader conversation about the devaluation of creative work.
We’ve already seen the ripple effects of generative AI take many different forms. Take the school system, for example. ChatGPT’s ability to generate unique wording and provide many different answers to the same prompt has made it virtually impossible for an educator to differentiate an authentic essay from one written by, or with the help of, an AI tool. OpenAI, the company responsible for ChatGPT, has promised to develop new mitigations to address these concerns.
But the impacts of generative AI extend beyond some teachers worrying about a student or two cheating on their history essays. We’ve even seen fine-arts competitions get hit with major controversy and debate after a winner used generative AI to complete his first-place work.
It’s understandable for artists to see all the AI-generated pictures and poetry on social media and respond not with humor or delight but genuine dread. But here’s the reality of generative AI: it’s a lot better at helping humans work than it is at replacing human work.
Experiment with these tools enough and you’ll see what I mean. Ask ChatGPT to write a blog post, and what you get back might be serviceable – it’ll be grammatically sound and adhere to a logical structure, with enough sense of authorial voice to pass as something a human wrote. But it won’t be very engaging, and it will likely contain a handful of flaws and awkward passages.
However, use generative AI to produce a list of concepts for a blog post, and you might find an idea that would’ve taken you hours of staring at the wall to come up with. Or, use the tool’s stilted copy as a means to avoid the “fear of the blank page” (or blinking cursor, more commonly), and you might get your own first draft done faster.
We’re seeing IT professionals and programmers reap similar productivity benefits from generative AI. They can’t rely solely on AI to produce software code – but they are becoming more productive by leaning on generative AI to learn and update specific code, or to generate an initial draft of code that an IT pro can then analyze and correct for errors.
Does this form of machine-assisted work still raise ethical concerns? Absolutely. As the lines between human creativity and AI generation get blurrier, there will be more bad actors on both sides of the fence: people trying to pass off a machine’s work as their own, companies considering adopting technology instead of hiring workers, and so on. Individuals, corporations, and government authorities will be having plenty of conversations over the next several years to determine what checks and balances need to be implemented with regard to this kind of complex AI technology.
But the big fear of AI replacing human workers is still the stuff of science fiction movies, not our immediate future in the real world. No matter how much generative AI evolves, it’ll always be more effective as an assistant than as a replacement.
Even if that reality still sounds a bit sci-fi – the smart machines and the smart people working together! – it’s one that creatives and IT professionals alike can accept with more optimism than dread.
*No AI was used in the creation of this article – any boredom or confusion you’ve experienced is the fault of the writer alone.