
In early May, I broke my finger, a boo-boo from bad basketball that, at the time, was certainly painful but, it seemed, destined to be not much more than that. In a few weeks' time, I figured, I would be back on the court, back at the keyboard, back at everything I was doing (however erratic that happened to be) before I sustained my proximal interphalangeal interruption. My pinkie break.
At the time. It seemed. I figured.
Well … for a lot of reasons, most having little to do with my little finger, things didn’t quite work out that way. Things, it seems, often don’t. Weeks turned to months turned to damn near half a year. No basketball yet (maybe none ever again?), a few tentative golf swings, just last week a cautious 18 holes (not bad, considering), barely any writing worth noting — barely any at all — and way too much wallowing. All that has been accompanied, as is now my M.O., by a gnawing, growing, alarming sense that I need to get off my pinkie and do something while I still can.
So let’s do some blogging! I knocked out my yearly (if you don’t count that damn pandemic) vacation post. Now onto other stuff.
—***—
Of all the weighty social questions of the day — Is it too late to clean up the wreckage we’ve made of the environment? Are the yahoos we’ve elected to serve in Washington finally going to drag democracy into the crapper? Will the wars abroad (Ukraine, Israel) engulf us all? What do we do with all the immigrants? Can’t we figure out some gun laws to satisfy those who regard the Second Amendment as some sort of 11th Commandment? Really: Donald Trump and Joe Biden, again? Is the pitch clock a good thing? — the most intriguing, and possibly the most disconcerting, is this: Are we about to be owned by Bard?
You know: Artificial Intelligence. ChatGPT. The new Skynet, for you Schwarzenegger fans. It’s all over the news these days. You must have seen it.
A lot of very smart people consider AI — we’re not talking Philadelphia’s favorite No. 3 — a very real threat to humans. Existential, even. Some of them created a group, the Center for AI Safety, which places the unbridled advancement of AI on the same threat level as pandemics and nuclear weapons. (And that’s just one of several groups addressing AI’s evolution.)
Yet a lot of those very smart people continue to spend millions of dollars to develop the latest, smartest, scariest kind of artificial intelligence, the kind that critics fear will lead to the bleakest of ends. They want their AI cake, but they don’t want it to eat them and everybody else.
“This could be cool. (This could be dangerous.) This could make us a lot of money. (But this could be dangerous.) We can’t let our competitors get the edge on us. (But this could be dangerous.) We have to keep spending and taking chances if they are. (But this could be dangerous.) We’re all for a free market, of course, unfettered by government regulations. Regulations suck, you know? Capitalism rules! Let the market decide! (But this could be dangerous.) So … maybe someone should step in here?”
Even very smart people can be very, very stupid sometimes. Not to mention dangerous.
—**—
The threat of AI already is proving to be more than bad 1980s fiction. I used to do quite a bit of freelance work for a certain web site that, in the last few months, has laid off most of its editorial staff. It’s now stuffed with AI-generated content. Producing it these days is as simple as instructing Bard, or some other chat-based AI tool, to “generate a blog post on the compatibility of Scorpios with other Zodiac signs.”
Obviously, there’s not much nuance to the “content” these things blurt out. It’s all painfully straightforward. There’s little insight from, to take the above link as an example, an actual astrologer. The content doesn’t have any, for lack of a better term, humanity. No humor. No pizzazz. No style or punch.
But as far as filling up a blog, giving some site producer something to slap an ad or 12 onto, it does the job. These new auto-posts are cheap and easy and, for people who are interested in the subject, at least baseline informative.
If this is the future, we all have plenty of reason to be nervous.
The worry over AI, though, goes much deeper and wider than some dumb posts on just another site. Hollywood writers have gone on strike in part to try to get a handle on what AI’s use means to the industry. The problem is, it might already be too late. In one example of AI run amok, a 2021 documentary used an AI program to recreate the voice of chef-traveler Anthony Bourdain. Three years after his death. A tad unethical, for sure, and more than a little spooky.
Tom Hanks, for another example, one of the most decorated actors of our time, complained recently about some company using an AI-generated version of his mug to push a dental plan.
With computer-generated images, voices, and storylines, the “content” we constantly digest might actually produce itself someday, with no help from writers, actors, directors, key grips (whatever it is they do), makeup artists, caterers, lighting people, sound people, camera operators, agents, marketing folks, advertisers, or reviewers. They’re all replaceable.
The writing part is certainly easy enough. Really.
Hey Bard (I actually asked recently): Please knock out a short play about a rat and a woman who fall in love, create a sandwich with a drug in it that enables the rat and the woman to control the minds of all who eat the sandwich, and the rat and woman take over the world with the help of their friend, a spoon.
https://g.co/bard/share/911b60e64d97
Boom! Thanks, Bard-O. Someone get me Warner Brothers/Discovery/AT&T.
Granted, RatWoman is not going to win any awards. But it can’t be much worse than most of the stuff on Netflix these days. Ryan Reynolds could play the rat.

—**—
Back in real life, a lawyer in New York, in practice for more than 30 years, directed ChatGPT to find some legal citations he could use in a suit against an airline. He promptly incorporated them into a brief for a federal judge in Manhattan. But many of the cases cited, it turned out, were entirely made up. Completely. Pulled from thin air. They didn’t exist. Never did.
“I did not comprehend that ChatGPT could fabricate cases,” the chagrined chap told the judge.
You might think that a setback like that would stop lawyers from using AI. Or at least slow down those who are increasingly leaning on it. You should know lawyers and other smart guys better than that.
Autoworkers are on strike, worried that AI will blow up their jobs. Already, automated manufacturing lines, fueled by AI, have radically changed not only what automakers produce but how. Today’s autoworkers are nothing like the grease-stained, ratchet-slinging assembly line automatons of yesteryear. What will tomorrow’s look like? Will there be human autoworkers tomorrow? That’s what has everybody spooked.
Bard, Google’s generative artificial intelligence chatbot, and his (her? its?) ilk are a long way from being perfect, or even right much of the time (as that lawyer in New York found out). But there’s no denying: with the help of those smart guys who can’t help themselves, AI is getting there. And quick.
(I wasn’t quite sure, in my reading for this post, what “generative” AI was. So I asked Bard:)
Generative AI is a type of artificial intelligence (AI) that can create new and original content, such as text, images, audio, and video. It does this by learning patterns and relationships in existing data, and then using this knowledge to generate new data that is similar to the training data.
Generative AI models are typically trained on large datasets of existing content. For example, a generative AI model that can generate text might be trained on a dataset of books, articles, and other written materials. Once the model is trained, it can be used to generate new text, such as poems, stories, or code.
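For the more code-minded, here’s a minimal sketch of the idea Bard is describing — a model that has already soaked up patterns from a pile of existing text, being asked to continue a prompt. It uses the open-source Hugging Face transformers library and the small GPT-2 model purely as stand-ins (Bard and ChatGPT run on far larger, proprietary models), so treat it as an illustration, not a peek under Bard’s hood.

```python
# A tiny illustration of "generative AI": a pretrained text model
# continuing a prompt. GPT-2 is a small, public stand-in here;
# Bard and ChatGPT use much larger models behind closed doors.
from transformers import pipeline, set_seed

# Load a text-generation model that has already "learned patterns and
# relationships" from a large corpus of web text.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled output repeatable

# Hand it a prompt and let it extend the text, sampling words that are
# statistically likely to come next.
prompt = "A rat and a woman who fall in love decide to"
outputs = generator(
    prompt,
    max_new_tokens=60,
    do_sample=True,
    num_return_sequences=1,
)

print(outputs[0]["generated_text"])
```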
It took about three seconds for Bard to bang out that explanation. And that’s only about a third of what it came up with.
Software engineers. Teachers. Students. Cooks. Clerks. Actors. Mechanics. Painters. No profession, no vocation, no endeavor, no matter how mundane or extraordinary it may seem, will be untouched by AI in the too-near future.
—**—
What’s it all mean? Should we just give in to the inevitable machine-learning takeover? Will it be as bad as James Cameron made Terminator out to be? (But Arnold was kind of a good guy in that, right?) Some — probably those who see a buck or a billion in this — envision a kind of AI-generated utopia where we all sit back on our ever-expanding tushes and barely work. Jamie Dimon of JPMorgan has floated the possibility of a three-and-a-half-day workweek. You gotta admit: That sounds pretty sweet.
The Wachowskis saw human batteries.
If anything is clear in this chilling new world where machines not only do our drudgework but our thinking, it’s that this cyber-cat already is out of the bag. This is real. This is now. It’s time, probably past time, to stop messing around.
So: How much influence, how much power will we allow this very human creation to wield? Do we still have a say in it? If, by some stroke of luck, we do, who is going to say no to the people who can’t stop themselves from unleashing it further?
In this chilling new world, who will step up to stop RatWoman?