I'm working on the last chapter of the vibes book, which is about critical phenomenology and ethics. Try as I might, I could not avoid talking about the TESCREAL bundle, or the hegemonic ideologies among techbros today. Happy reading!
The “TESCREAL bundle” is a collection of overlapping philosophies that are broadly hegemonic in the 2020s tech industry. As Timnit Gebru and Émile P. Torres explain, the “‘TESCREAL bundle’…denotes ‘transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism’” [7]. As they argue, these philosophies all share a collection of utilitarian and eugenicist views, basically adding cybernetics and space exploration to the same underlying biopolitical formula set out in the late nineteenth- and early twentieth-century eugenics programs of figures like Francis Galton. They all gesture toward the idea of transcending the limits of present-day humans, often through technological enhancement of some sort. Though the TESCREAL bundle philosophies were fairly nascent when Melinda Cooper published Life as Surplus in 2008, from the perspective of 2024 they clearly exemplify the “speculative impulses” from neoliberal market ideologies that reimagine biological life as “surplus value.”
One of the common threads uniting the various constituents of the TESCREAL bundle is the belief that it is morally and/or existentially imperative to build the most advanced artificial intelligence (a.k.a. “AGI” or “Artificial General Intelligence”) as soon as possible. The imperative to build AGI is typically framed in explicitly consequentialist terms: either as a way to create the most overall good or as a way to prevent apocalyptically bad situations from arising. The vibes could be fantastic or they could be full of doom, but in either case AGI will supposedly save us.
Whereas twentieth-century eugenicists used Gaussian human sciences research and discourses of normativity to justify their views, the TESCREAL bundle uses discursive analogs of the mathematical models fueling early 2020s AI like GPT (vibes and horizons) to justify theirs. As I explained in chapter 2, the math behind contemporary tech and finance plots out speculative future horizons that facilitate present-day decision making, such as what song or TikTok to stream next, or the answer to a prompt. TESCREAL bundle evangelists do the same thing with words and narrative that their code does with data: they sketch out a presently counterfactual narrative that orients decision-making. From their perspective, the coming existence of superhuman artificial intelligence could bring either utopian or apocalyptic vibes. As OpenAI’s company blog puts it, “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
Regardless of which extreme end of the vibes spectrum they appeal to, TESCREAL bundle evangelists use rhetorical and narrative devices to sketch out speculative futures whose orientations make it easier to justify the diversion of resources to the development of AGI and away from the current and future needs of the general public. Here, utopian and apocalyptic vibes about futures that don’t even exist yet guide present-day decision making about the allocation of life chances. For example, Gebru and Torres report that “TESCREALists Greaves and MacAskill (2019) write that ‘for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects.’” Pushing the horizon where consequences will be measured a millennium into the future orients decision-making away from impacts on living people and their children. Just as New York Governor Kathy Hochul sent National Guard troops to patrol the NYC subways on the basis of vibes rather than hard statistical crime data to the contrary, the TESCREAL bundle makes decisions on the basis of future vibes rather than hard data about the past and present.
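To see how pushing the measurement horizon out by a millennium works arithmetically, here is a toy sketch of the expected-value reasoning this style of longtermism relies on. Every number in it is a hypothetical placeholder I chose for illustration, not a figure from Greaves and MacAskill or anyone else in the bundle:

```python
# Toy sketch of longtermist expected-value arithmetic.
# Every number is a hypothetical placeholder chosen for illustration,
# not a figure from Greaves and MacAskill.

near_term_people_helped = 1_000_000    # people alive now whom an intervention could help
far_future_people = 1e18               # posited population of all future generations
extinction_risk_reduction = 1e-6       # assumed tiny drop in extinction probability

near_term_value = near_term_people_helped                         # 1e6 "units" of good
far_future_value = far_future_people * extinction_risk_reduction  # 1e12 "units" of good

print(f"near-term term:  {near_term_value:,.0f}")
print(f"far-future term: {far_future_value:,.0f}")
# The far-future term swamps the near-term one by six orders of magnitude,
# which is how the arithmetic licenses "ignoring" the first 100 or 1,000 years.
```

Because the speculative far-future term dominates the sum no matter how you shuffle the near-term one, the arithmetic itself steers attention away from living people: that is the orientation doing the ethical work.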
Narratives like longtermism or Effective Altruism orient decision-making to the very sorts of speculative futures that justify throwing populations traditionally deemed “dysgenic” under the bus. As Gebru and Torres put it, “the AGI race not only perpetuates these harms to marginalized groups, but it does so while depleting resources from these same groups to pursue the race.” Philosopher Charles Mills argued that ethical theories built on “idealized models” of society are “ideologies” that naturalize presently existing inequalities. The TESCREAL bundle is techbro ideal theory that uses a novice style of phenomenology to orient financial and policy decisions toward techbros and their patriarchal racial capitalist projects and away from groups who are not so fully aligned with their vision of the legitimate order of ruler and ruled.
This metaphor of “alignment” is the primary way these philosophies compare values, so much so that talk of alignment has migrated, like Agile, from tech-specific jargon to general corporate-speak. For example, OpenAI gave the name “superalignment” to its project to “steer and control AI.” In this context, for AI to be “aligned” means it “follow[s] human intent.” Grouping the vast plurality of humans’ intentions into a single, coherent concept of “human intent,” this definition of alignment would never pass scrutiny in an introduction to philosophy class; it makes the classic ideal theory move of misrepresenting the interests of privileged elites as the interests of people in general. Nevertheless, OpenAI does clarify that “alignment” means something more or less like a phenomenological orientation: to be “aligned” with something means to exhibit an analogous direction or teleology. With this focus on “alignment,” tech industry AI ethics is both a kind of corporate phenomenology and a reworking, in qualitative ethical terms, of the way the math fueling LLMs and the like compares clusters of data.
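To make the geometric source of that metaphor concrete, here is a minimal sketch, mine rather than OpenAI’s, of the kind of comparison the math actually performs: two embedding vectors count as “aligned” to the degree that they point in the same direction, which is what cosine similarity measures.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two embedding vectors by direction alone: 1.0 means
    perfectly aligned, 0.0 orthogonal, -1.0 pointing opposite ways."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy three-dimensional "embeddings"; real models use hundreds or thousands
# of dimensions, but the comparison works the same way.
model_output  = np.array([0.9, 0.1, 0.2])
stated_intent = np.array([0.8, 0.2, 0.1])
off_mission   = np.array([-0.7, 0.6, 0.3])

print(cosine_similarity(model_output, stated_intent))  # high: "aligned"
print(cosine_similarity(model_output, off_mission))    # low/negative: "misaligned"
```

The ethical vocabulary of alignment is, in other words, a qualitative translation of this quantitative question about whether two orientations coincide.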
This idea of alignment has led OpenAI to develop their own version of Bentham’s panopticon. “Superalignment” is their vision of a practice where AI could supervise itself: if “humans won’t be able to reliably supervise AI systems much smarter than us,” then there need to be ways to make AI “internalize” their human supervisors’ gaze, much in the same way Bentham’s panopticon forces prisoners to internalize the gaze of the guards by obscuring it from prisoners’ view. Just as the architecture of the panopticon makes the guards’ labor more efficient by offloading that work onto the prisoners, OpenAI aims “to train models to be so aligned that we can off-load almost all of the cognitive labor required for alignment research.” The idea is that AI, like the panopticon’s prisoners, can police itself. In Foucault’s analysis, prisoners learned to internalize disciplinary norms that rendered them and their behavior docile: don’t make too much noise, don’t shank your bunkmate, don’t dig an escape tunnel, just follow the rules. The point of superaligning superhumanly intelligent AI isn’t to render it docile (it still needs to do superhuman things) but to make the superhuman things it does further the aims of the elite. Instead of forcing AI to conform to prescribed norms, superalignment aims to orient AI so that the humanly ineffable things it does orient the world according to the priorities of patriarchal racial capitalism’s most privileged groups.
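OpenAI has not published superalignment as a finished technique, so the following is only a schematic sketch of the supervisory structure described above; the worker and overseer functions are stand-ins I invented, not real models or OpenAI code. What matters is the shape of the loop: one model’s judgments become the training signal for another, so no human gaze is needed inside it.

```python
import random

# Schematic stand-ins, not real models: only the supervisory structure matters.
def worker(prompt: str) -> str:
    """A hypothetical 'smarter' model producing candidate outputs."""
    return random.choice([f"{prompt}: helpful answer",
                          f"{prompt}: rule-breaking answer"])

def overseer(output: str) -> bool:
    """A hypothetical 'aligned' model standing in for the absent human gaze."""
    return "rule-breaking" not in output

approved_examples = []
for step in range(100):
    candidate = worker("task")
    if overseer(candidate):              # the internalized gaze: AI judging AI
        approved_examples.append(candidate)
    # In a real pipeline, approved_examples would become fine-tuning data,
    # closing the loop so the worker learns to anticipate the overseer.

print(f"{len(approved_examples)} of 100 outputs passed the overseer's inspection")
```

Whatever the engineering merits of such a loop, its political structure is the panopticon’s: supervision continues, but the supervisor has been designed out of sight.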