Quadrophobia
Working the Numbers
Bryan Zepp Jamieson
November 26th, 2023
www.zeppscommentaries.online
In the huge uproar surrounding the sudden sacking, and even more unexpected rehiring, of Sam Altman at OpenAI, the company behind ChatGPT, and the revitalization of discussion over the benefits and perils of Artificial Intelligence, there was a throw-away line in one article that seized my attention.
The line stated, with no elaboration, that their AI program had solved a math formula problem that it hadn’t seen before. Being a mainstream media source, it didn’t elaborate, since numbers bigger than about six hurt readers and make their branes hurt.
Michael Parekh on his blog “AI: Reset to Zero” elaborated in a much more meaningful way:
“In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems that it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.
“The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization, suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and highlights a potential divide among executives. (See more about the safety conflicts at OpenAI.)
“Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.” ( https://michaelparekh.substack.com/p/ai-doing-the-reasoning-math-reliably )
Solving previously unseen math problems is HUGE. It involves extrapolative logic, something computers have not been able to accomplish. The vast majority of humans can’t manage that. I’m going to give you an example:
3 + 14t − 5t² = 0
OK, most of you will recognize that, with widely varying degrees of fondness, from middle school or perhaps high school. It’s called a quadratic, and it was a sneaky introduction to the basics of calculus. Most teachers couldn’t be arsed to explain what quadratics were for back in my day, and even the best ones would come up with seemingly incoherent examples, such as measuring the border of floor left around a carpet in a room, or what happens if you toss a ball in the air at a certain speed. There are, in fact, a lot of occupations where they can be massively useful, but for most students they were just an annoying form of sudoku. Just to add to the general merriment, quadratics usually had two solutions, one of which was physically impossible. In this case, the solutions are t = −0.2 or t = 3. Three is the one that is possible. You could make a graph from quadratics, which is where my brane broke and had to be sent off to the knackers. I was left with a choice: be an innumerate annoying smart-ass, or be a Republican. You decide.
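Just to make the worked example concrete, here is a minimal sketch in Python (the function name is mine, not from any source) that plugs the coefficients of 3 + 14t − 5t² = 0 into the familiar quadratic formula, t = (−b ± √(b² − 4ac)) / 2a:

```python
import cmath  # complex sqrt, in case the discriminant is negative


def solve_quadratic(a, b, c):
    """Return the two roots of a*t**2 + b*t + c = 0 (complex if necessary)."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)


# 3 + 14t - 5t^2 = 0, rewritten in standard form: a = -5, b = 14, c = 3
roots = solve_quadratic(-5, 14, 3)
print(roots)  # the two roots: -0.2 and 3
```

Using `cmath` rather than `math` means the same function also handles the case mentioned below, where the roots involve i, the square root of minus one.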
Now suppose you had never ever seen a quadratic before in your life. Would you be able to figure out what its purpose was? From that, could you solve it, knowing you would have to factor it and possibly use the imaginary number i, the square root of minus one?
Hell, most of you DID take quadratics in grade seven, taught by Ben Stein’s boring brother, and you couldn’t even begin to solve it. I did, but I admit I looked up the answer to make sure I hadn’t embarrassed myself. Just don’t ask me to draw a chart. The results would probably be painful for both of us.
OK, so this algorithm looked at some math function it hadn’t seen before, and, understanding only the variables, the operators, and the numbers themselves, worked out the correct answer on its own. I don’t know whether the formula was a quadratic or something else, but it represents a huge step forward, the first time a computer has demonstrated autonomous intellectual logic.
There are a lot of very genuine concerns about AI. (I recently read a very good SF novel about an AI tasked with preparing the North American West Coast for a Cascadia fault movement of magnitude 9.0, forestalling the quake itself with planned explosives and moving fifty or so million people out of harm’s way. Someone made the horrible mistake of feeding the AI a concept of Occam’s Razor, “The simplest solution is usually the best.” Armed with that, the AI realizes the smart and efficient thing is to just let ’er rip: it would cost less, and there would be much less rebuilding to do afterward, because there would be fewer people. So it let the earthquake proceed.)
Of course AI has been a popular notion in SF going back to Karel Čapek’s 1920 play “R.U.R.,” which introduced the word “robot.” Even in the ’60s, it was assumed that if you just gave a computer enough processing power and data, it would “wake up,” like Mycroft Holmes (Mike) in Heinlein’s “The Moon Is a Harsh Mistress.” It’s clearly much more involved than that (even back then, I viewed the “wake up” notion as being similar to stacking hamburger meat seven feet high and getting a basketball player).
But it also appears that the point of self-awareness is now very near, and autonomous decision making really does need something like Isaac Asimov’s Three Laws of Robotics:
- The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If we want machines to have autonomous judgment, we need to up our game and have some autonomous judgment of our own. Asimov made a career of finding loopholes and workarounds in his own three laws. For us, the work will be far more difficult, and the consequences far further reaching.