Kamala’s AI plan isn’t smart

Her speeches often sound as if they had been scripted by an early or pre-intelligent incarnation of ChatGPT

Try to think of some leading names in the field of Artificial Intelligence. Kamala Harris is probably not the first that springs to mind. The woman can barely talk. But she is the vice president of the United States of America and as such she’s in London, about to give a speech ahead of Prime Minister Rishi Sunak’s big AI summit. Brace yourselves.

Harris’s speeches often sound as if they had been scripted by an early or pre-intelligent incarnation of ChatGPT. But as sentient beings we can safely predict the gist. The advent of artificial intelligence brings many opportunities, Harris will say, but to ensure everyone can benefit from its wonderful promise, the technology must be carefully regulated to ensure it doesn’t become harmful. And that, in Kamala Harris’s world, means heavily regulating all AI systems to bring them in line with the Democratic Party’s ultra-progressive and hyper-politically correct values.

The Biden-Harris administration is trying to force all AI to pursue the goal of racial equity

We know this because, yesterday, in advance of Harris’s arrival in the United Kingdom, President Joe Biden issued his Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence — in the words of a White House communique, “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.” We can predict, with a 91 percent degree of certainty, that Harris will robotically echo its directives today. Rishi Sunak wants the United Kingdom to be a global hub of AI regulation, because those who can’t innovate, regulate. But it is the United States’ federal government, overseeing as it does Silicon Valley and the most advanced technology companies in the world, that truly calls the shots.

Biden’s executive order on AI safety is sweeping, all right. It focuses on many areas that any good government should: the need to ensure AI doesn’t instruct lunatics in how to make a biological weapon, for instance, or perpetuate fraudulent schemes by impersonating real people. It calls for “privacy-preserving techniques” and seeks to impose government-on-government restrictions to stop federal agencies, as much as private companies, from gathering too much information.

That’s the sane stuff. But there is another, all-too-sweeping section, on using AI to advance “equity and civil rights,” and that’s where sensible governance creeps towards social engineering on a truly terrifying scale. We already know that ChatGPT, for all its brilliance, is conditioned to be what Elon Musk has called “Woke AI” — it refuses to address scientific enquiries that might offend people, for instance, and will make jokes about Jesus but not Mohammed. That political correctness may be irrational, but it was at least imposed by OpenAI, a private, profit-seeking company. Other companies developing generative systems do not have to follow suit.

What the Biden-Harris administration is trying to do is force all AI to pursue the goal of racial equity — a nebulous concept that the White House is imposing, top-down, on American education, housing, healthcare, military contracting and much else. Team Biden wants to stop “AI algorithms from being used to exacerbate discrimination.” But given Biden’s “equity agenda” so far — take housing grants, for instance — we know this often means discriminating against white people in order somehow to equalize the historic oppression of black people.

It’s here that the White House’s order takes on a chillingly dystopian tone: the Biden administration wants to “address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.”

On criminal justice, the Biden administration wants to develop “best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.” The language is vague, almost suspiciously so, but what this boils down to is an attempt by the Biden administration to plug the Democratic Party’s identity-oriented values into all its programs.

ChatGPT is intelligent in large part because, in a rough analogy to how we think the brain works, the system learns through the “backward propagation of errors.” This is complicated stuff, but it essentially means the model can be conditioned to think along certain pathways — and therefore forced to learn things that are not rational or true. What the Biden administration is doing, by inserting its controversial equity agenda as “safety” regulation, is politicizing AI. It is on the brink of forcing companies to steer AI learning away, potentially, from the truth. In other words, AI may say things that are false in order to comply with a “woke” vision of the world that isn’t based on reality. One expert I spoke to compared it to the government hijacking Google in the year 2000 and forcing it to return only far-left websites. Unless you happen to be a far-left radical, the consequences of that cannot be good.
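For readers curious how that conditioning works mechanically, here is a deliberately toy sketch of the “backward propagation of errors” — a one-parameter model, nothing like ChatGPT’s actual training code, but it illustrates the point: the procedure drives the model towards whatever target the trainer supplies, and is entirely indifferent to whether that target is true.

```python
# Toy illustration (not real ChatGPT training code): backpropagation
# nudges a model's weight toward whatever answer the trainer chooses.
def train(x, target, w=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        pred = w * x               # forward pass: model's current answer
        error = pred - target      # gap between answer and desired answer
        grad = 2 * error * x       # backward propagation of that error
        w -= lr * grad             # adjust the weight to shrink the error
    return w

# The same procedure happily fits either target:
w_true = train(x=2.0, target=4.0)    # converges to w = 2
w_false = train(x=2.0, target=-4.0)  # just as readily converges to w = -2
```

The training loop never asks whether the target is correct; it only minimizes the distance to it. That is the technical sense in which a system trained this way can be steered towards any chosen output.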

Tomorrow, Rishi Sunak will talk to Elon Musk, who is at least aware of this danger. Let’s hope he makes that point loud and clear. Not that Kamala Harris, a not very intelligent system, would ever listen.

This article was originally published on The Spectator’s UK website.
