Facebook Caprice — Meta is ever-more capricious, sneaky and unfair—or is it just incompetence?

Bryan Zepp Jamieson

May 18th 2024

www.zeppscommentaries.online

It started out simply enough. My Linux system was over ten years old and beginning to gasp and wheeze a bit. My Windows system was newer and more powerful, and I thought, well, I’ll make that my new Linux system and get a new Windows machine. Which is what I did.

I’ve done this sort of thing many times during the nearly forty years that I’ve had computers, and can avoid most of the pitfalls. My work and personal hard drives are all kept carefully offline while I juggle new installs and get software passwords and codes updated. Windows, of course, is far more complex, since you have to reinstall commercial software, and you’d better have the access codes for that or you risk paying again for a program that was barely worth it in the first place.

In an operation this complex, you’re bound to miss a spot or two. Not too long ago—three months, perhaps—I had changed my Facebook password. I had been worried that I might have made myself vulnerable to my account getting hacked—unfounded, I’m happy to say—but the PW was due to be changed anyway.

My password-minding app didn’t know about the change. I’m guessing I was in a hurry, and when it asked if I wanted to update, I told it “later” and then forgot. My browser also tracks my PWs, and it did notice. Since I could still log on without any problem, I forgot about it.

Both my old browser and the password for my browser sync file were on the old system. Sloppy, sloppy, sloppy. What was the password? A Chinese city, population over 10,000, but was it in Cantonese or Mandarin? Tch. What did I have for breakfast? Um, erm, food?

Well, no worries. I have two-step verification set up, so all I had to do was hit “change password,” and my cellphone would get a six-digit code, which I would enter, and then go ahead and create a new password (I hear “password” is a good one).

Got the code, went back to the page, and entered it. Facebook came back with “This function is not available at this time.”

Eh. Facebook. Whaddyagonna do?

Well, I still had other computer tweaking to do. And other chores.

I tried a few hours later, and got the same message. “Come on, Facebook,” I thought, “Get your act together.”

So the next morning I tried again. This time the results were jarring. It told me that I was on restricted status, followed by a list of policy violations that supposedly applied to me (most of them didn’t apply at all), and that therefore the password-change function was not available to me.

To say I was flabbergasted is an understatement. They had a little box giving me 500 characters to respond, so I sent a message asking what this was about, and noting that it had been over a year since I had any kind of run-in with Facebook proctors. (That was incorrect, as it turns out; the last such thing was well over TWO years ago, in March 2022.)

From experience, I wasn’t going to hold my breath waiting for a reply. So I fired up my old Linux machine and retrieved my hopefully-still-active password along with a few other odds and ends I had missed on the first go around.

I retrieved my password and fired up the ‘new’ machine. I entered the password, got a “we don’t recognize this device” message, and a promise to send a six-digit verification code to my cell. Heart sinking, I checked the phone and entered the code. Facebook mulled it over for about ten seconds, and then let me in.

No remonstrations from Facebook, and I could post at will.

Now, I’ve had my share of run-ins with Facebook proctors, and been in Facebook jail on three-day stints three times. The third one outraged me enough that I left Facebook for a while. At the time, I emailed friends the following, with present day annotations in square brackets:

As of March 2nd, [2022] I’m no longer maintaining an active account on Facebook. Their censorship has been irrational to begin with, and now it is flat-out capricious.

A few months ago, someone asked jokingly if it was legal to shoot Republicans in California. I replied that you had to get a license, which was expensive, and there was a ton of paperwork involved. Three days in Facebook jail for hate speech. [Apparently it’s hate speech to discourage shooting Republicans in California. Who da guessed?]

The last one was when a friend posted a link from Weatherwest (a blog I habituate) that gave a rather dire forecast for the rest of February, promising intensifying drought. Riffing off Shakespeare, I wrote, “First thing we do, let’s kill all the meteorologists.” Unlike the first suspension, which I was able to successfully appeal, there was no appeal. Facebook cited COVID [disinformation] as the reason. [No, I don’t get that at all. And yes, apparently, that’s hate speech, too. I don’t know if Shakespeare got banned. After all, he made mention of the weather, too.]

I discovered yesterday that I had been suspended again for hate speech. No reason was given. I thought at first it was perhaps because I remarked that for the Russian people, the best thing that could happen would be if Putin was deposed or assassinated, but that particular post was still up, so I have no idea why I was banned. [In Facebook jail on a three-day, not actually banned.]

In any event, I’m out of there. Yes, Facebook has the right to control who posts what, but when it becomes illogical and even capricious, it’s a bad business model and not a forum I want to waste time in.

A couple of weeks later, I reconsidered. There were a couple of pages where I do volunteer work, posting events and news and ferrying information between their Facebook pages and their websites. It didn’t seem fair to shortchange them because I wasn’t getting on with the powers-that-be at Facebook.

I resumed posting. Facebook had said that unspecified limitations and restrictions would be applied to my account, and I figured that meant they would keep me on a short leash to see if I behaved. (I maintain that I hadn’t actually misbehaved, not by any sane metric.)

In fact, the opposite seemed to be the case. Not only were there no restrictions or limitations that I could discern, but they seemed content to back off and leave me alone. I only had one minor incident, about six months ago. At the height of the “Barbenheimer” fad, a user in a private group I moderate posted a picture of a bare-chested Putin atop a bright pink horse with a servile Trump holding the reins. It was blurred “for content some might find objectionable.” I got a notice from Facebook saying that as moderator I had a responsibility to ensure that my users didn’t violate Facebook policy. I wondered in the group whether it was Trump, the pink horse, or Putin’s nips that triggered some proctor. The next time I logged in, the picture was un-blurred.

And that’s been it. So I can’t explain this week’s problem.

It’s possible that my initial suspicion, that the problems could be laid to Facebook’s incompetence, was all that was going on. Incompetence, by its nature, resembles capriciousness.

But there’s one thing that leaves me wondering if it wasn’t something more deliberate. As mentioned, I used the “forgot your password” function twice, and then had to confirm my new machine with them once I retrieved my old password. I happened to look at my email queue the day I returned, and noticed that in all three instances, the code given was identical.

It doesn’t work that way. Most outfits give you a certain time to respond, and if you don’t make it, you have to apply for a new code. And the code is different each time. Always. It’s a security thing: a reused code is a back door for anyone spying on an unsecured connection.
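For comparison, here is roughly how one-time codes are supposed to behave: freshly randomized on every request and expiring after a few minutes. This is a minimal sketch, not anything Facebook actually runs; the function names and the five-minute lifetime are my own assumptions.

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed lifetime: codes typically expire in minutes

def issue_code():
    """Generate a fresh six-digit code with a cryptographic RNG."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time() + CODE_TTL_SECONDS

def verify(submitted, code, expires_at):
    """Accept only an exact, unexpired match, compared in constant time."""
    return time.time() < expires_at and secrets.compare_digest(submitted, code)

code, expires = issue_code()
print(verify(code, code, expires))          # True: correct code, still valid
print(verify(code, code, time.time() - 1))  # False: the code has expired
```

The point of `secrets.randbelow` is that each code is independent of the last; getting the identical six digits three times in a row from a scheme like this would be a one-in-a-trillion fluke.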

Except in my case with FB. Am I being paranoid, or was that a code assigned to me, one that says something like, “This guy’s a troublemaker, don’t cooperate”?

It’s the passive-aggressive quality that I don’t understand. Our last contretemps was in March 2022, and I never did find out what they were annoyed about. Everything seemed normal, until this week. Then suddenly Facebook “remembered” that long-ago incident and levied a strange hidden punishment: I’m not allowed to change my password. Maybe?

Remember, it DID let me change my password about three months ago.

Does this seem paranoid, or have others had this experience?

Facebook’s no help: I haven’t gotten a human response from them on anything in three years. Maybe not talking to me is part of their punishment, I don’t know.

But if I do suddenly vanish, don’t assume the worst. I may have broken one of Facebook’s unguessable interpretations of their own rules and received an invisible and unexplained penalty. Or I forgot my password (in Mandarin, of course).

The people who run Meta, and various other major corporations, want to run America. That’s why most of them support the GOP.

Figure that if they do get total control, this incident is a pretty good example of the ‘justice’ you might expect.

Hell, bizarre as it is, it’s probably the best you can hope for.

AI in the Trenches — Generative vs Creative

Bryan Zepp Jamieson

April 2nd, 2024

Peter Cawdron is one of the most prolific writers around. Since 2011, he’s written 27 novels on the common theme of First Contact, and with two exceptions, all are stand-alone works, each with its own world, cast of characters, and aliens. Quite often the premise is based on the outline of a science fiction classic: “Ghosts,” the exploration of a seemingly dormant extrastellar object, borrows its premise from Arthur C. Clarke’s “Rendezvous with Rama” but, like all of Cawdron’s novels, is a wholly original take. He also has at least 12 other novels, plus several compilations of short fiction, and has edited several anthologies. By any metric, it’s an extraordinarily prodigious output. In a review of his next-to-latest offering, “The Artifact,” I remarked that he made Stephen King look like George R.R. Martin.

You might think that with a production load like that, Cawdron is just another by-the-numbers potboiler hack. You couldn’t be more wrong.

His latest is a novel that gives a nod to “Anatomy of Courage: The Classic WWI Study of the Psychological Effects of War,” written by Winston S. Churchill’s personal doctor, Sir Charles Wilson, Lord Moran. Cawdron’s novel depicts the brutality, ugliness and futility of trench warfare. I’ll be reviewing it on zeppjamiesonfiction.com later this week for anyone interested. Like his previous half-dozen books, this one is superior.

Cawdron always includes an afterword to his novels, and they’re worth reading. He’ll discuss the scientific theory underlying that particular story, explain how it was influenced by a classic work of hard SF, and discuss the political and social elements. He’ll often add a personal note about his own thoughts and feelings as he wrote the story. They make for engaging codas.

In the afterword to “Anatomy of Courage,” he noted that, based on the quality of his past half-dozen novels, all written in a single year, some people were gossiping online that he was using AI – artificial intelligence – to write the books, that he couldn’t possibly have done all that quality work by himself.

Well, it’s the internet. People talk shit. But any self-respecting writer would be at the very least irritated by that. Cawdron noted that he had written several really good books in an amazingly short time, and from most people I would take that umbrage as a humblebrag (“Please don’t hate me because I’m beautiful”). But he HAS done exactly that. He does go on to explain the recent boost in his output, but that’s his story to tell; if you want to know it, buy the book. It’s on Amazon and Goodreads.

The allegations are utter crap, and I’ll tell you why I’m convinced of that.

I’ve written a lot in my time. Two novels, a couple of dozen short stories, about 1500 eclectic columns, and about 300 reviews. Writing the novels in particular gives me a certain insight into the writing process of another writer. I’m pretty good, I think, at spotting moments where, usually in the first draft, a writer is struck by a stray thought, leans back, considers, and then with a grin, starts writing or revising. First drafts tend to have a lot of those. (There’s a dictum: write the first draft for yourself, the second for your readers, and hope what remains survives the copy editors.)

I’ll give you an example of how it works. Your character, and let’s risk a lawsuit from Neal Stephenson and call him “Hiro Protagonist,” is standing in a park. What kind of park? Well, a city park. Does it have grass? Trees? A lake? Is there a breeze? Does the sun shine, turning ripples into a disco ball? Are there kids playing? Two old farts playing chess in a pagoda? What else?

Well, pigeons. Don’t most parks have pigeons?

I have a picture my dad took of me when I was seven. I was standing in Trafalgar Square in London, attired in my prep school uniform, and I have my right arm out in front of me, bent at the elbow. On my forearm is a big, well-fed pigeon who is eyeing a piece of bread in my left hand with proprietary interest. The expression on my face (“He’s rather … large … isn’t he?”) is a mixture of fascination and intimidation. Presumably I gave the bird the bread without losing any fingers and we both flew away peacefully.

That informs my vision of what a couple of pigeons are doing in my park. They’re squabbling over a bit of popcorn.

That process leads to a throwaway line in the story. “Near the end of the bench, a pair of pigeons had a lively debate over a kernel of popcorn. The larger one flicked his head lightning fast and flew off with his meal, leaving the other to squall in frustration and give Hiro an appealing, appraising glance.”

That little bit of color is something no AI can manage. Tell an AI to write a scene about a man standing in a park waiting for someone, and the AI might mention the park bench, the trees, the grass, maybe something about the other people. Depends how good at plagiarism it is.

But that bit about the pigeons is beyond it. An AI might mention pigeons if it’s exceptionally well trained, but that little drama over the popcorn, with its slight hint of aggression and menace between the birds, comes from a human mind sharing a human experience.

If you write a lot, you come to be very familiar with that process, and you learn to spot it in the writing of others, especially those you want to learn from. Cawdron’s books, backed by meticulous research, an affinity for solid detail, and a vivid imagination, are replete with such moments.

AI can do a lot, for better or for worse, but the deterministic chaos of the human mind, with its emotion, volition, confusion and empathy, cannot be duplicated in code. AI might be good enough to confuse a casual reader, but it will rarely fool a constant reader, let alone a writer who can guess what went into seemingly unimportant passages that provide color and tone and humanity to a story, making a decent story great.

They may make AIs generative. But they can’t make them mimic human creativity.

It won’t hurt to learn to look for the trade secrets behind the words. You’ll appreciate the works of someone like Cawdron more, and it will sharpen you a bit, intellectually and in your ability to discern what is human…and what is not.

Quadrophobia — Working the Numbers

Bryan Zepp Jamieson

November 26th, 2023

www.zeppscommentaries.online

In the huge uproar surrounding the sudden sacking, and even more unexpected rehiring, of Sam Altman at OpenAI, and the revitalization of discussion over the benefits and perils of Artificial Intelligence, there was a throwaway line in one article that seized my attention.

The line stated, with no elaboration, that the company’s AI had solved math problems it hadn’t seen before. Being a mainstream media source, it didn’t elaborate, since numbers bigger than about six hurt readers’ branes.

Michael Parekh on his blog “AI: Reset to Zero” elaborated in a much more meaningful way:

“In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems that it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.

“The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization, suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and highlights a potential divide among executives. (See more about the safety conflicts at OpenAI.)

“Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.” ( https://michaelparekh.substack.com/p/ai-doing-the-reasoning-math-reliably )

Solving previously unseen math problems is HUGE. It involves extrapolative logic, something computers have not been able to accomplish. The vast majority of humans can’t manage that. I’m going to give you an example:

3 + 14t − 5t² = 0

OK, most of you will recognize that, with widely varying degrees of fondness, from middle school or perhaps high school. It’s called a quadratic, and it was a sneaky introduction to the basics of calculus. Most teachers couldn’t be arsed to explain what those were for back in my day, and the best ones would come up with seemingly incoherent examples, such as measuring the area of a room around a carpet, or what happens if you toss a ball in the air at a certain speed. There are, in fact, a lot of occupations where they can be massively useful, but for most students they were just an annoying form of sudoku. Just to add to the general merriment, quadratics have two solutions, and often only one of them is physically possible. In this case, the solutions are t = −0.2 or t = 3. Three is the one that is possible. You could make graphs from quadratics, which is where my brane broke and had to be sent off to the knackers. I was left with a choice: be an innumerate annoying smart-ass, or be a Republican. You decide.
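For the curious, both roots fall straight out of the quadratic formula. A quick sketch in Python, just to show the arithmetic behind the two solutions above:

```python
import math

# Solve 3 + 14t - 5t^2 = 0, i.e. a*t^2 + b*t + c = 0 with a = -5, b = 14, c = 3,
# via the quadratic formula: t = (-b +/- sqrt(b^2 - 4ac)) / (2a).
a, b, c = -5.0, 14.0, 3.0

discriminant = b * b - 4 * a * c   # 196 + 60 = 256
root = math.sqrt(discriminant)     # 16.0

t1 = (-b + root) / (2 * a)         # (-14 + 16) / -10 = -0.2
t2 = (-b - root) / (2 * a)         # (-14 - 16) / -10 = 3.0

print(t1, t2)  # -0.2 and 3.0; only t = 3 makes physical sense
```

If the discriminant came out negative, `math.sqrt` would raise an error; swapping in `cmath.sqrt` yields the complex roots involving i that the column mentions.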

Now suppose you had never ever seen a quadratic before in your life. Would you be able to figure out what its purpose was? From that, could you solve it, knowing you would have to factor it and possibly use the imaginary number i, the square root of minus one?

Hell, most of you DID take quadratics in grade seven, taught by Ben Stein’s boring brother, and you couldn’t even begin to start on it. I did, but I admit I looked up the answer to make sure I hadn’t embarrassed myself. Just don’t ask me to draw a chart. The results would probably be painful for both of us.

OK, so this algorithm looked at some math function it hadn’t seen before and, understanding only the variables, the operators, and the numbers themselves, worked out the correct answer on its own. I don’t know whether the formula was a quadratic or something else, but it represents a huge step forward: the first time a computer has demonstrated autonomous intellectual logic.

There are a lot of very genuine concerns about AI. (I recently read a very good SF novel about an AI tasked with preparing the North American West Coast against a Cascadia fault movement of 9.0, forestalling the quake itself with planned explosives and moving fifty or so million people out of harm’s way. Someone made the horrible mistake of feeding the AI a concept of Occam’s Razor, “The simplest solution is usually the best.” Armed with that, the AI decides the smart and efficient thing is to just let ’er rip: it would cost less, and there would be much less rebuilding to do afterward because there would be fewer people. So it let the earthquake proceed.)

Of course AI has been a popular notion in SF going back to the first robot play, “R.U.R.,” in 1920. Even in the ’60s, it was assumed that if you just gave a computer enough processing power and data, it would “wake up,” like Mycroft Holmes (Mike) in Heinlein’s “The Moon is a Harsh Mistress.” It’s clearly much more involved than that (even back then, I viewed the “wake up” notion as being similar to stacking hamburger meat seven feet high and expecting to get a basketball player).

But it also appears that the point of self-awareness is now very near, and autonomous decision making really does need something like Isaac Asimov’s Three Laws of Robotics:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If we want machines to have autonomous judgment, we need to up our game and have some autonomous judgment of our own. Asimov made a career of finding loopholes and workarounds in his own three laws. For us, the work will be far more difficult, and the consequences far further reaching.

