2025: what a ride.

In a year replete with remarkable achievements – the first AIDS patients being cured with stem cells, restoring sight to the blind using (checks notes) human teeth, jettisoning Katy Perry to space (She came back. So not a net ‘win.’) – humanity also reached its Apex of Hubris:

Making ourselves obsolete.

For the first time in the history of our species, the quantity of articles being written by AI has surpassed the quantity of articles written by human beings.

Tech Bros and the utterly intolerable “I’m An Early Adopter Of Innovation” Crowd (i.e., my editor) would have you believe that this is the inevitable (and even desirable) outcome of humanity’s long and storied history of developing machines that make our lives better. They’ll tell you that today’s naysayers are yesteryear’s kooks from the 1880s who feared “the vapors” would get them if they fitted their homes with electricity.

Conversely, the Tinfoil Hat Folks warn that we’re facilitating a dystopian disaster that’s equal parts Skynet, Idiocracy, Mad Max, and Wall-E. Better stock the bunkers, boys, ‘cuz the T-1000s take no prisoners and gold will be the only reliable form of currency!

But while the Sam Altmans of the world battle it out with the Bunker Bros as to the brightness or bleakness of humanity’s AI future, the rest of us seem caught somewhere in the middle. We recognize that AI makes things faster, easier, more accessible. We also know it’s commandeering more water than Immortan Joe and currently has difficulty drawing hands (though that’s a glitch in The Matrix that will soon be rectified).

What we don’t know are the real and unforeseen ramifications of what it means to increasingly turn our workload and our creativity over to The Machines.

Computer-controlled algorithms are already deciding the content we see. And the results of our “engagement” with this “content”? Sharp declines in markers of human intelligence and competence. (Not to mention attention spans.)

Now that computers are both creating and pushing the content, the potential negative outcomes are so astronomical and myriad that I may have to ask AI to run the calculations for me.


My husband and I used to travel pretty extensively. To plan our trips, we’d buy books, consult various internet sources, interview friends who’d been there, comb through travel magazine articles… It was an exhausting process that typically took weeks (even with the ease of internet booking), and even then we’d usually conclude the trip feeling like we needed to return, because there was inevitably something major we’d missed.

Yesterday ChatGPT planned a complete itinerary for our upcoming trip – where to stay, where to eat, what we could feasibly do each day to maximize our experience – in less than 20 seconds.

It was awesome.

Similarly (and more profoundly), I have a rare and difficult-to-treat disease. I have a litany of specialists on my medical team, and I follow medical journals and medication breakthroughs as if they were my religion.

But I’ve been sick for 10 years, and my doctors and I are out of options and ideas. So, like a few folks with similar struggles, I’ve admittedly gone to ChatGPT with my diagnoses, my medical test results, and the litany of treatments I’ve tried.

The result? It’s pointed me to facilities, specialists, and research frontiers that – despite my, again, religious levels of fervor and dedication to researching the topic – I had not found on my own.

In one afternoon I went from a chronic illness patient who was depressed and out of options to someone who could make it through the day because I was shown that there were paths I’d not yet taken and that some very smart people were working to ensure that patients like me have hope on the horizon.

Outcomes like these could be revolutionary for medicine. But they depend on the continued contributions of the best and brightest humans.

And I think that last point illustrates so plainly both my hopes and my fears regarding our AI future:

For the first 300,000 years of human evolution, “necessity was the mother of invention.” When problems arose, the best and brightest of us had to figure them out – we had to use our own problem solving, our own intellect, and think “outside the box” to move forward.

Human ingenuity drove every advancement, and, crucially, every artistic endeavor as well. I am frequently moved to tears by the incalculable capacity of the human mind: its creativity, its fortitude, its cunning, its compassion.

We are unique among all species for the degree to which we can create and problem solve.

And while AI can replicate that genius – while it can compile it, put it in a chart, maybe even mimic it in seconds – it is at best a simulacrum of the extraordinary power of the human mind.

And it has no soul.

When I think of the sheer brilliance of the Einsteins, the Oppenheimers, the Curies, the Euclids and Pythagorases and Hypatias and Wildes and Shakespeares and Klimts and Chagalls and da Vincis and Platos… their brilliance was born of nature, yes, but also of strife. Of the human experience of overcoming obstacles and pioneering techniques to create or illuminate in ways that no algorithm or computer code can.

It’s my fear – and the fear of so many, especially in the arts community – that we will largely lose our intellectual and creative spark to a soulless entity built on the (often pirated) genius of people who have not consented. And that real human artistry and ingenuity will suffer because the tool has learned the trade.

A tool is designed for human use. And those of us who do fear, fear the day that the tool will replace the user.

It’s already happening.

People are turning to AI for therapy and for friendship, while direct human contact is declining. Hell, some folks are even leaving their human spouses to “marry” AI.

And while these are undeniably extreme examples, the replacement of art and artists absolutely is not.

As mentioned, for the first time in the history of our species, the quantity of articles being written by AI has surpassed the quantity of articles written by human beings.

That has very real implications for me, for journalists worldwide, for authors. And it’s not just our jobs that are at stake: it’s the quality of the content that readers the world over are consuming.

As a writer of some talent, I have noticed the declining caliber of journalism in recent years. The reasons for this are numerous, but AI is increasingly a contributing factor.

Last week I asked AI to give me an unbiased take on a major world issue that’s getting a lot of media coverage right now. And the results it gave me – I asked AI to cite its sources – came from outlets with known, unmistakable biases and, frankly, concerning origins. (Think: reporting on the current war in Ukraine as filtered through Russian State TV News.)

When I pointed out to AI that it was drawing from biased and problematic sources, it dug deeper and conceded that I was correct. Far from feeling validated, this scared and saddened me. I am a media professional with two degrees and 20+ years of journalism experience. I’m more media savvy than most, which is why, when I saw AI’s sources, I balked.

The majority of the world’s population does not have that same level of media literacy. In fact, if recent studies are to be believed, media literacy is at an all-time low. And what happens when AI, which cannot seem to distinguish between reliable and unreliable sources, is answering complex questions for people with low media (or general) literacy?

Bad things.

The answer is bad things.

What happens when human beings no longer write books or scripts because it’s faster and more profitable for publishers to have AI do it? What happens to movies once human screenwriters are obsolete? What happens to visual artists when AI does it faster and cheaper? To architects who would spend months on a design that AI could do in the time it takes to make a sandwich?

AI doesn’t innovate. It replicates.

And as much as I want to think that that distinction alone will keep humanity in the mix, I tested the theory, bit the bullet, and asked AI if it could write like me.

And, again, in less than the time it takes to make a sandwich, it provided a passable version of my voice on the assigned topic.

For all you know, it may have even been this topic.

Statistically, as of this year, it is more likely that these words you’re currently reading were written by AI than by an actual human being. And, as I’ve used em dashes multiple times in this piece already, you may never know.

