We’ve had about a week now to let the three Artificial Intelligence (AI) Executive Orders (EOs) released last week sink in. I’m going to review each one, as well as the National AI Action Plan, in the following 4 posts. Today’s post will focus on the most controversial of those 4 items: the EO entitled “Preventing Woke AI in the Federal Government.”
You may be asking yourself at this point exactly what Woke AI is - I asked myself that question as well. Unfortunately, Section 2 (Definitions) of the EO leaves out the definition of the very thing it is supposedly so intent upon preventing. This is the first and perhaps single biggest problem with the EO, and we’ll explain why in more detail in a moment. But let’s step back for a moment and talk about what’s in the Executive Order.
Executive Orders - What are They?
For anyone not familiar with them, they represent the highest-level guidance from the Executive Branch and typically come in a ‘summary format.’ The associated implementation details are left for later (and often vary across agencies), and sometimes there really aren’t any meaningful follow-on regulatory details - sometimes the EO is simply meant to express an intent, not the manner in which that intent is to be pursued. Do they matter? Yes - because many of them do end up being codified in one way or another, and there are also times - such as in this case - where the policy goal crosses both government and industry. So, while the title states that this is about restricting something in Government, it is also telling industry that if they don’t comply, they won’t get those huge government contracts. In fact, in the case of this particular EO, it might be argued that the true intended audience is not the various government agencies impacted, or even American citizens at large, but rather the companies that produce AI products.
This EO contains several brief sections:
Section 1: Purpose - this section is a political screed attacking the evils of DEI, not a good start for the rest of the document.
Section 2: Definitions - as noted, these are somewhat lacking (so we get Section 3 instead). Worth noting here, though, is the focus on LLMs; we’ll come back to that in a moment.
Section 3: Unbiased AI Principles - On the face of it, these descriptions sound good, but in the context of the assault on DEI, we know that they aren’t (that they are a facade for various political litmus tests yet to be defined or named).
Section 4: Implementation - e.g. what the agencies will be instructed to do later.
What’s Wrong with this Executive Order?
There are several significant problems and they are bigger than one might imagine given the relatively simple nature of the guidance; those problems include:
The implied political agenda. While the EO says in Section 3 that the models should be Ideologically Neutral and Truth-Seeking, Section 1 has already defined what the government considers the truth - and it’s not politically neutral.
Without the top-level definition, how can it be interpreted or enforced? Few people actually know what the GOP or MAGA means by “Woke;” the term could realistically become anything that this political coalition happens to disagree with. And this AI EO is purposefully - not - defining it here so that it can continue to be used in just such a manner. This means that the “truth-seeking” from Section 3 is amorphous and dynamic. Ultimately, this implies that no criteria other than what the assessor is thinking at the moment may apply in determining whether an AI model will be labeled ‘Woke’ or not. Thus the entire process of accepting any models will become a political litmus test more so than a technical evaluation.
The EO is limited to Large Language Models (LLMs) and yet doesn’t understand LLMs. Practically speaking, within a short span of time there will likely be quite a few AI products up for contract review that don’t include LLMs at all. It’s not clear whether that will be a problem for the vision being expressed in the EO, but it reflects an ‘immediate’ view of what AI is that won’t age well. Secondly, if the folks who wrote this really believed LLMs were the end-all, be-all of what the technology could achieve, then they should consider how they work. An LLM is only as good - or Woke, or Truthful - as the data / information it’s trained on. If you choose to train your LLM on Reddit (as OpenAI did), you’re going to get a lot of crap coming out of it - the old principle of “Garbage in, Garbage out.” In other words, rather than encouraging the use of better-quality training data, they’re likely implying that companies should use even worse sources filled with political propaganda and bot-generated nonsense (because such sources are probably more closely politically aligned with whoever is supposed to assess the models).
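The “Garbage in, Garbage out” point can be made concrete with a toy sketch. The model below is a trivial next-word frequency table - nothing remotely like a production LLM, and the two corpora are invented purely for illustration - but the dependence is the same: the model can only echo whatever its training data says most often.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Build a toy next-word frequency table from a list of sentences.
    A real LLM is vastly more complex, but it shares this property:
    its outputs are a function of the patterns in its training data."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Return the continuation seen most often in training, or None."""
    follows = model.get(word.lower())
    return follows.most_common(1)[0][0] if follows else None

# Two invented corpora standing in for curated vs. low-quality data.
curated = ["the earth is round", "the climate is warming"]
junk = ["the earth is flat", "the earth is flat", "the climate is fake"]

print(most_likely_next(train_bigram_model(curated), "is"))
print(most_likely_next(train_bigram_model(junk), "is"))  # -> flat
```

Swap the training corpus and the model’s “beliefs” swap with it; no amount of downstream pressure on the model changes what went in.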
The Anti-Woke EO Singlehandedly Destroys AI Trust. This is the most important problem that will eventually result from this guidance. As the government applies more pressure to obtain ideologically pure models through arbitrary political litmus tests, the models will become less trustworthy (and these will also be the same models that everyone else has to use, because all of the big AI companies will want to compete for Government funding). If you purposefully begin designing something to tell you exactly what you want to hear, it’s eventually going to do it - and then what? Essentially we’ll end up with intelligences dedicated to serving political interests rather than doing all of the sorts of jobs they should be doing. This is a very real manifestation of the Big Brother concept from 1984 and is also consistent with what the Chinese government is already doing in its efforts to control every aspect of the lives of its less-than-free citizens. Do we really want to go down that path too?
Where Did all of this Come From?
About a year ago, several Conservative bloggers used deliberate prompts to generate what they considered to be “Woke” images - supposedly these had come up as a result of model tweaking by the companies in question to make the output more ‘representative,’ but that was never proven. My own guess is that these bloggers were likely inspired by Hamilton and/or some of the recent spate of Netflix series showing folks of color in seemingly improbable historical contexts, and decided to replicate that on AI platforms as a way of influencing their development. The images passed around during this debate included African American Nazis and Revolutionary War officers (again, I think they were going for Hamilton). The montage below was posted on various conservative media outlets such as the NY Post in order to provoke outrage at the notion that AI models would try to take Hamilton to the next level.
The thing is, the only folks who were getting these outputs seemed to be conservative bloggers. Very few if any other people were complaining about these types of outputs occurring. The other consideration is this: if you were to prompt any of the major AI platforms to produce an “African American George Washington,” guess what - it will (or at least it would). In other words, there was very little credible evidence that Woke AI ever happened or was a problem. It’s much more likely that the Far Right decided it wanted to control AI output in much the same way as it’s managed to dominate and/or bully Social Media - by generating a false Woke AI scandal.
The “Woke AI” Executive Order signed last week represents a fairly blatant attempt by a very ideologically aggressive Administration to direct future AI development in a way that ensures it agrees with the worldview of the folks currently in power. This is a formula both for the abuse of power and for the acceleration of Dystopian AI outcomes - the same Dystopian outcomes people like Elon Musk have been warning about for years, yet now seem keen to encourage.
Copyright 2025, Digital Perspectives