When you cross your fingers, you’re either blindly hoping for the best, or you’re lying. Or both.

As a director at a media literacy non-profit in fairly regular contact with a couple of big-tech/media leaders on the front lines of the AI field, I’ve been deep-diving into this for months now, and here’s where I’m at:

I believe the proper analogy is: AI is crypto, social media algorithms, and nuclear tech all rolled into one.


Like crypto, the positives are over-hyped (in the case of AI, due to the hallucination problem, which will require human oversight for anything important). But the similarity is striking: in its current iteration, there turn out to be very few good use-cases, really…except to facilitate crimes. I’ve been told the answer to this (y’know, for self-driving cars or medical diagnoses) is to have non-AI algorithms supervise and quality-control all AI decisions.

Uh, okay. But where do humans fit into all of this again? Tell me again how much of a hit in the job market this will cause and why it’s worth it?

Like social media algorithms, a really robust regulatory regime might possibly leverage it into some real positives; but instead, untested and unregulated prior to rollout, it will accelerate the deterioration of society (political lurching toward authoritarianism, genocides in places where institutions are already weakened, all art regressing toward the mean, and negative mental health outcomes continuing to spike…all while we continue to largely ignore other existential problems like global warming and pandemic readiness).

And like nuclear tech—due to its sheer power, emergent ‘theory of mind’ properties, and exponential growth—the downside is existential, and probably no upside would really be worth it, anyway.

But, the genie is out of the bottle, so that’s where we’re at.

The (I think obvious, and only truly ethical) answer is to apply the “Precautionary Principle” and stop it as much as possible until it’s tested and a robust international regulatory regime is in place.

Meanwhile, AI tech leaders are asking for regulation even while they race to roll it out: Fingers Crossed, indeed.

No, no, no, no…

I was just recently introduced to the word “theytriarchy”.

And I must thank whoever showed it to me, because I feel I have now been exposed to, in fact, the dumbest ****ing word ever created.


My writing practice (aka “The Fell Beast”), was, in fact, a beast that fell.

In a pre-Substack blog post last year, I set out the two big projects that The Fell Beast (the working moniker for my writing practice) would encapsulate: a suite of short stories and a sequel to my novel.

Then, over the holidays, my mom died—and my mind set about reshuffling the deck, until I was (once again) ready and able to deal the cards.

Suddenly, my writing brain was broken, except for poetry. Short stories sputtered and faded from consciousness. And the sequel was stalled, altogether. My writing practice became semi-random ministrations. In fact, as I look back, those efforts were more akin to menstruations—failing to give birth to something, I was left to periodically expel what I could.

But, times change. Progress pretends to happen. Or we happen upon some. Either way, recently, I’ve come to realize that the two projects I had planned were really just one project, a project that I was over-complicating. And so the sequel will be a mosaic novel.

Meanwhile, my poetry is still a thing (notice the elegant word choice—yeah, that’s right, I got game). As I mentioned in my Substack, a couple of pieces have already been accepted by a journal for 2024.

And I’ve even been able to generate a couple of nifty new short story ideas that are tentatively standing on their own like newborn foals.

The Fell Beast has awakened.