This article originally appeared in The Tracinski Letter and has been reposted with the author's permission.
I recently posted the next chapter of my book in progress: The Prophet of Causation, a guide to Ayn Rand’s philosophy from the perspective of the central role of the concept of causation. Chapter 3 is “A Causality Walk for Consciousness.” If you are familiar with the idea of a “causality walk,” you’ll have some idea where I’m going with this. If you aren’t, check out the chapter.
This chapter is primarily dedicated to understanding what consciousness is in the first place, and it identifies a wrong view of consciousness known as representationalism, which is lurking behind a bunch of disastrous philosophical errors from Descartes through Hume through Kant, up to the present day.
I cover a few recent examples, but there’s another one you’ll see everywhere right now: artificial intelligence. A lot of the fear (or excitement) about AI becoming sentient and “superintelligent” stems from a representationalist view of consciousness. If consciousness is just a stream of images or data manipulated on a kind of internal video screen—which is the key error of representationalism—then it seems plausible that an AI chatbot scraping digital text from the internet could be conscious and acquire an independent intelligence. If consciousness instead requires direct and independent contact with the world, then it’s not plausible. See my previous argument on that.
To get an idea of the extent to which people are going off the rails on this, check out a long overview of the strange rise and fall of an “AI doomer” cult.
According to the Effective Altruism movement, the most pressing problem in the world is preventing an apocalypse where an Artificial General Intelligence (AGI) exterminates humanity.
A thorough investigation of the EA movement’s public-facing discourse versus its inward-facing discourse found that it used “Bait-and-Switch” tactics to attract new members, who were led from global poverty to AI doomerism. The guidance was to promote the public-facing causes (less controversial ones like “giving to the poor”) and keep quiet about the “core EA” causes (existential risk/AI safety). Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
This report focuses on one organization in particular:
Leverage Research ran “a psychology and human behavior research program.” An old version of its website (Wayback Machine, June 2013) indicates that the program involved “the simultaneous execution of a number of difficult projects pertaining to different facets of the human mind.”
It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one’s belief structure, and experimental group dynamics. According to various allegations, this led to dissociation and fragmentation, effects that members have found difficult to reverse.
In 2021, Zoe Curzi published a detailed post entitled “My Experience with Leverage Research.” “I was part of Leverage/Paradigm from 2017-2019,” she wrote, and experienced “narrative warfare, gaslighting, and reality distortion.” Each day included hours of “destabilizing mental/emotional work.” She went through many months of “near constant terror at being mentally invaded.” Her psychological distress and PTSD were due to Leverage’s “debugging sessions,” which aimed to “jailbreak” the members’ minds and (through confrontations) to “yield rational thoughts.”
We can pretty much stop right here, because no organization doing actual “psychological research” performs that research on its own members. It is an obviously invalid method: you will not get objective results from research performed on insiders who already know what outcome they’re looking for.
But you know what kind of organization likes to subject its members to intense psychological scrutiny and pressure, particularly from those higher up in the organization? That’s right, cults. And that’s basically what this one became.
To be sure, movements based on apocalyptic beliefs—the robot apocalypse is nigh—tend to attract fanatics. But this one is interesting for two reasons.
First, it came out of a movement called “Effective Altruism,” which as far as I can tell is just retreaded utilitarianism, in which we will scientifically determine what is the greatest good for the greatest number. Altruism inherently makes people vulnerable to abuse, because it teaches them to subordinate their minds and well-being to demands imposed on them by others. As Ayn Rand put it in her own critique of altruism, “The man who speaks to you of sacrifice, speaks of slaves and masters. And intends to be the master.” This is particularly true of utilitarianism, because it encourages the sacrifice of actual, individual human lives for the supposed “aggregate utility” of society as a whole—as judged by the self-appointed spokesmen for “society.”
Combine this with AI doomerism, and you get a trap to catch people who are predisposed to subordinate themselves to a social cause.
A post in the LessWrong Forum, “Common knowledge about Leverage Research 1.0,” claimed that the stated purpose of Leverage was to discover theories of human behavior and civilization by “theorizing” while building power and then literally taking over US and/or global governance (the vibe was “take over the world”). The narrative within the group was that they were the only organization with a plan that could possibly work and the only real shot at saving the world, and that there was no possibility of achieving the goal of saving the world outside the organization.
What is even more fascinating is that this spread among the so-called “Rationalist” community, which was formed with the stated goal of helping people arrive at more rational conclusions. But it is notoriously easy to substitute rationalization for rationality, and contemporary “Rationalists” often end up exemplifying the sense in which Objectivists use that term: to refer to the error of relying on a floating chain of deductions rather than grounding one’s views in reality.
This particular cult also involved, inevitably, a cryptocurrency scheme, which makes a good follow-up to my examination of Sam Bankman-Fried’s FTX fraud. It’s another example of how “Effective Altruism,” like more traditional forms of this moral philosophy, tends to end in fleecing the rubes.
Finally, here’s a follow-up I just realized I left out of my recent post on our do-nothing Congress. I wrote about Congress’s abdication of substantive legislation in favor of culture war posturing. Here’s what that looks like in practice as it filters down to the state level, according to a June 17 Associated Press report.
Oklahoma’s top education official ordered public schools Thursday to incorporate the Bible into lessons for grades 5 through 12, the latest effort by conservatives to incorporate religion into classrooms.
The order sent to districts across the state by Republican State Superintendent Ryan Walters says adherence to the mandate is compulsory and “immediate and strict compliance is expected.”
“The Bible is an indispensable historical and cultural touchstone,” Walters said in a statement. “Without basic knowledge of it, Oklahoma students are unable to properly contextualize the foundation of our nation which is why Oklahoma educational standards provide for its instruction.”
Actually, what he said was even worse.
“The Bible is a necessary, historical document to teach our kids about the history of this country,” Walters told the State Board of Education last month. Therefore, he went on, “every teacher, every classroom in the state, will have a Bible in the classroom, and will be teaching from the Bible in the classroom.”
That means every teacher, no matter what subject is being taught. Get ready for Biblical math, everyone.
Or maybe not.
But it’s not clear if Walters has the authority to mandate that schools teach it. State law says individual school districts have the exclusive authority to decide on instruction, curriculum, reading lists, instructional materials, and textbooks.
And as kids head back to school, what are local districts deciding to do? In large part, they’re ignoring it, for very sensible reasons.
“If there is no curricular standard that ties with that particular classroom, what would be the purpose of a Bible if not for pure indoctrination?” said Bixby [a suburb of Tulsa] Superintendent Rob Miller….
School districts also have been offered guidance from law firms that represent them and the state’s largest teachers union, the Oklahoma Education Association, that the superintendent doesn’t have the unilateral authority to issue such a requirement and that the edict is unenforceable.
But that won’t stop them from trying, as in a similar proposal in Texas. And all that “anti-woke” stuff you’ve been hearing about the danger of “indoctrination” in the classroom will not prevent conservatives from trying to impose their own form of indoctrination.
Rob Tracinski studied philosophy at the University of Chicago and has been a writer, lecturer, and commentator for more than 25 years. He is the editor of Symposium, a journal of political liberalism, a columnist for Discourse magazine, and writes The Tracinski Letter. He is the author of So Who Is John Galt, Anyway? A Reader’s Guide to Ayn Rand’s Atlas Shrugged.