AI in the Trenches — Generative vs Creative

Bryan Zepp Jamieson

April 2nd, 2024

Peter Cawdron is one of the most prolific writers around. Since 2011, he’s written 27 novels with the common theme of First Contact, and with two exceptions, all are stand-alone works, each with its own world, cast of characters, and aliens. Quite often the premise is based on the outline of a science fiction classic (“Ghosts,” the exploration of a seemingly dormant interstellar object, borrows its premise from Arthur C. Clarke’s “Rendezvous with Rama” but, like all of Cawdron’s novels, is a wholly original take.) He also has at least 12 other novels, plus several compilations of short fiction, and has edited several anthologies. By any metric, it’s an extraordinarily prodigious output. In a review of his next-to-latest offering, “The Artifact,” I remarked that he made Stephen King look like George RR Martin.

You might think that with a production load like that, Cawdron is just another by-the-numbers potboiler hack. You couldn’t be more wrong.

His latest is a novel that gives a nod to “Anatomy of Courage: The Classic WWI Study of the Psychological Effects of War,” written by Winston S. Churchill’s personal doctor, Sir Charles Wilson, Lord Moran. Cawdron’s novel depicts the brutality, ugliness and futility of trench warfare. I’ll be reviewing it on zeppjamiesonfiction.com later this week for anyone interested. Like his previous half-dozen books, this one is superior.

Cawdron always includes an afterword to his novels, and it’s worth reading. He’ll discuss the scientific theory underlying that particular story, explain how it was influenced by a classic work of hard SF, and discuss the political and social elements. He’ll often add a personal note about his own thoughts and feelings as he wrote the story. They make for engaging sequelae.

In the afterword to “Anatomy of Courage,” he noted that, based on the quality of his past half dozen novels, all written in a single year, some people were gossiping online that he was using AI – artificial intelligence – to write the books, that he couldn’t possibly have done all that quality work by himself.

Well, it’s the internet. People talk shit. But any self-respecting writer would be at the very least irritated by that. Cawdron noted that he had written several really good books in an amazingly short time, and from most people I would take his umbrage as a humblebrag. (“Please don’t hate me because I’m beautiful.”) But he HAS done exactly that. He does go on to explain the recent boost in his output, but that’s his story to tell, and if you want to know it, then buy the book. It’s on Amazon and Goodreads.

The allegations are utter crap, and I’ll tell you why I’m convinced of that.

I’ve written a lot in my time. Two novels, a couple of dozen short stories, about 1500 eclectic columns, and about 300 reviews. Writing the novels in particular gives me a certain insight into the writing process of another writer. I’m pretty good, I think, at spotting moments where, usually in the first draft, a writer is struck by a stray thought, leans back, considers, and then with a grin, starts writing or revising. First drafts tend to have a lot of those. (There’s a dictum: write the first draft for yourself, the second for your readers, and hope what remains survives the copy editors.)

I’ll give you an example of how it works. Your character, and let’s risk a lawsuit from Neal Stephenson and call him “Hiro Protagonist,” is standing in a park. What kind of park? Well, a city park. Does it have grass? Trees? A lake? Is there a breeze? Does the sun shine, turning ripples into a disco ball? Are there kids playing? Two old farts playing chess in a pagoda? What else?

Well, pigeons. Don’t most parks have pigeons?

I have a picture my dad took of me when I was seven. I was standing in Trafalgar Square in London, attired in my prep school uniform, and I have my right arm out in front of me, bent at the elbow. On my forearm is a big, well fed pigeon who is eyeing a piece of bread in my left hand with proprietary interest. The expression on my face (“He’s rather … large … isn’t he?”) is a mixture of fascination and intimidation. Presumably I gave the bird the bread without losing any fingers and we both flew away peacefully.

That memory informs a vision of what a couple of pigeons are doing in my park. They’re squabbling over a bit of popcorn.

That process leads to a throwaway line in the story. “Near the end of the bench, a pair of pigeons had a lively debate over a kernel of popcorn. The larger one flicked his head lightning fast and flew off with his meal, leaving the other to squall in frustration and give Hiro an appealing, appraising glance.”

That little bit of color is something no AI can manage. Tell an AI to write a scene about a man standing in a park waiting for someone, and the AI might mention the park bench, the trees, the grass, maybe something about the other people. Depends how good at plagiarism it is.

But that bit about the pigeons is something no AI can do. It might mention pigeons if it’s exceptionally well trained, but that little drama about the popcorn, the slight hint of aggression and menace between the birds, that comes from a human mind sharing a human experience.

If you write a lot, you come to be very familiar with that process, and you learn to spot it in the writings of others, especially those whose writing you want to learn from. Cawdron’s books, backed by meticulous research, an affinity for solid detail, and a vivid imagination, are replete with such moments.

AI can do a lot, for better or for worse, but the deterministic chaos of the human mind, with its emotion, volition, confusion and empathy, cannot be duplicated in code. AI might be good enough to confuse a casual reader, but it will rarely fool a constant reader, let alone a writer who can guess what went into seemingly unimportant passages that provide color and tone and humanity to a story, making a decent story great.

They may make AIs generative. But they can’t make them mimic human creativity.

It won’t hurt to learn to look for the trade secrets behind the words. You’ll appreciate the works of someone like Cawdron more, and it will make you a bit sharper, intellectually, and better able to discern what is human…and what is not.

Quadrophobia — Working the Numbers


Bryan Zepp Jamieson

November 26th, 2023

www.zeppscommentaries.online

In the huge uproar surrounding the sudden sacking, and even more unexpected rehiring, of Sam Altman at OpenAI, the maker of ChatGPT, and the revitalization of discussion over the benefits and perils of Artificial Intelligence, there was a throwaway line in one article that seized my attention.

The line stated, with no elaboration, that their AI program had solved a math problem it hadn’t seen before. Being a mainstream media source, it didn’t elaborate, since numbers bigger than about six make readers’ branes hurt.

Michael Parekh on his blog “AI: Reset to Zero” elaborated in a much more meaningful way:

“In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems that it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.

“The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization, suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and highlights a potential divide among executives. (See more about the safety conflicts at OpenAI.)

“Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.” ( https://michaelparekh.substack.com/p/ai-doing-the-reasoning-math-reliably )

Solving previously unseen math problems is HUGE. It involves extrapolative logic, something computers have not been able to accomplish. The vast majority of humans can’t manage that. I’m going to give you an example:

3 + 14t − 5t² = 0

OK, most of you will recognize that, with widely varying degrees of fondness, from middle school or perhaps high school. It’s called a quadratic, and it was a sneaky introduction to the basics of calculus. Most teachers couldn’t be arsed to explain what quadratics were for back in my day, and the best ones would come up with seemingly incoherent examples, such as measuring the floor area left bare around a carpet, or working out what happens if you toss a ball in the air at a certain speed. There are, in fact, a lot of occupations where they can be massively useful, but for most students they were just an annoying form of sudoku. Just to add to the general merriment, quadratics have two solutions, often one of which is physically impossible. In this case, the solutions are t = −0.2 or t = 3. Three is the one that is possible. You could make a graph from quadratics, which is where my brane broke and had to be sent off to the knackers. I was left with a choice: be an innumerate annoying smart-ass, or be a Republican. You decide.
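For anyone who wants to check those answers, here’s a minimal sketch (in Python, purely for illustration) of the old quadratic formula, t = (−b ± √(b² − 4ac)) / 2a, applied to the equation above with a = −5, b = 14, c = 3:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*t^2 + b*t + c = 0, smallest first."""
    disc = b * b - 4 * a * c          # the discriminant
    if disc < 0:
        return []                     # no real roots; this is where i would come in
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

# 3 + 14t - 5t^2 = 0  ->  a = -5, b = 14, c = 3
print(solve_quadratic(-5, 14, 3))     # [-0.2, 3.0]
```

The discriminant b² − 4ac (here 256) tells you whether the roots are real at all; when it goes negative, that’s where the imaginary number i shows up.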

Now suppose you had never ever seen a quadratic before in your life. Would you be able to figure out what its purpose was? From that, could you solve it, knowing you would have to factor it and possibly use the imaginary number i, the square root of minus one?

Hell, most of you DID take quadratics in grade seven, taught by Ben Stein’s boring brother, and you couldn’t even begin to start on it. I did, but I admit I looked up the answer to make sure I hadn’t embarrassed myself. Just don’t ask me to draw a chart. The results would probably be painful for both of us.

OK, so this algorithm looked at some math function it hadn’t seen before and, understanding only the variables, the operators, and the numbers themselves, worked out the correct answer on its own. I don’t know whether the formula was a quadratic or something else, but it represents a huge step forward: the first time a computer has demonstrated autonomous intellectual logic.

There are a lot of very genuine concerns about AI. (I recently read a very good SF novel about an AI tasked with preparing the North American West Coast against a Cascadia fault movement of magnitude 9.0, forestalling the quake itself with planned explosives and moving fifty or so million people out of harm’s way. Someone made the horrible mistake of feeding the AI a concept of Occam’s Razor: “The simplest solution is usually the best.” Armed with that, the AI realizes the smart and efficient thing is to just let ’er rip: it would cost less, and there would be much less rebuilding to do afterward because there would be fewer people. So it let the earthquake proceed.)

Of course AI has been a popular notion in SF going back to the first robot story, Karel Čapek’s 1920 play “R.U.R.” Even in the ’60s, it was assumed that if you just gave a computer enough processing power and data, it would “wake up,” like Mycroft Holmes (Mike) in Heinlein’s “The Moon is a Harsh Mistress.” It’s clearly much more involved than that (even back then, I viewed the “wake up” notion as being similar to stacking hamburger meat seven feet high and expecting to get a basketball player).

But it also appears that the point of self-awareness is now very near, and autonomous decision making really does need something like Isaac Asimov’s Three Laws of Robotics:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
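The Laws are, in effect, a strictly ordered constraint system: a lower law can never override a higher one. A toy sketch, purely illustrative (the predicates here are hypothetical placeholders, not anything a real system can actually evaluate; deciding them is the hard problem):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical, pre-judged properties of a proposed action.
    harms_human: bool = False
    disobeys_order: bool = False
    prevents_harm: bool = False
    self_destructive: bool = False
    obeys_order: bool = False

def permitted(a: Action) -> bool:
    """Check the Three Laws in strict priority order; a higher law vetoes everything below it."""
    if a.harms_human:                                     # First Law: absolute veto
        return False
    if a.disobeys_order and not a.prevents_harm:          # Second Law yields only to the First
        return False
    if a.self_destructive and not (a.obeys_order or a.prevents_harm):
        return False                                      # Third Law yields to both above
    return True

print(permitted(Action(self_destructive=True, prevents_harm=True)))  # True: First Law outranks Third
print(permitted(Action(harms_human=True, obeys_order=True)))         # False: no order overrides the First Law
```

Even in a toy like this, everything hinges on how those placeholder predicates get decided, which is exactly where the loopholes live.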

If we want machines to have autonomous judgment, we need to up our game and have some autonomous judgment of our own. Asimov made a career of finding loopholes and workarounds in his own three laws. For us, the work will be far more difficult, and the consequences far further reaching.

