Shared roots versus hard forks

I’m reading with interest the recent comeback post by Curtis Yarvin. I might have some longer thoughts later, but for the moment I just wanted to quibble with part of his empirical model.

Referring to the dominant axis of partisan polarization, he writes:

Any point on which both poles concur is shared story: “uncontroversial, bipartisan consensus.”

Shared story has root privilege. It has no natural enemies and is automatically true. Injecting ideas into it is nontrivial and hence lucrative; this profession is called “PR.”

(The Clear Pill, Part 1 of 5: The Four-Stroke Regime)

Empirically, I think this is the opposite of what’s really going on. He seems to acknowledge this toward the end; I just feel like riffing... The problem is not an illusory consensus, but the rapid disintegration of all illusory consensus beyond the small-group or subculture level. Neoreaction momentarily aligned the words “accelerationism” and “Moldbug,” but perhaps now we should start to explore the adversarial collaboration: Accelerationism versus Moldbug.

The broadcast era saw the reign of illusory consensus because everyone had to fight for highly scarce spots in a one-to-many transmission game. Digitalization, downstream from the mid-century Information Revolution, has been the story of protracted fragmentation of the illusory consensus.

Players from Rush Limbaugh to the Fox News network gradually realized that it was increasingly possible to “hard fork” the illusory consensus. At first, educated high-status people thought Rush Limbaugh or Fox News would be easy to dismiss; it was genuinely believed that enough public snickering about these stupid people would force them to go away.

What we are now realizing is that the short-term stigmatization of low-status culture hackers simply does not work. High-status educated people were overconfident in their power to make or break the success of cultural projects by telling the public what is worthy of attention. Low-status content that optimizes for the affects of particular audience segments will always defeat high-status condemnation of it — but only recently has this become true. High-status people still don’t understand this, because their life’s work is predicated on climbing broadcast towers, in an era when broadcast legitimation games could make or break you. The low-status culture hacker is invariably weird, dumb, lame, or evil in the eyes of high-status figures, but the hacker doesn’t care. They are correct not to care, for what they intuit is that there is no longer any root. If all the high-status people say you’re a loser, but 1000 people think or feel you’re awesome (as indicated by their revealed preference to read/watch/listen to you), ontologically you are much closer to “awesome” than “loser.” The 1000 people who like you are real people, whereas the high-status people are shouting into a room that was evacuated years ago. The hard-forking culture hackers know their machines operate objectively, in a fashion technically immune to the lamentations of the déclassé broadcaster folks.

At the moment, this realization is finally being reckoned with from within the younger and more risk-tolerant factions of the higher-status sets. This is why so much of the cultural conflict is becoming particularly hysterical: all of the older and established individuals in perches based on institutionalized status see that genuine creative talent, from here on out, is no longer paying into their pyramid scheme. Imagine building your household on an MLM business, which has been growing for as long as you can remember, but now all of a sudden the last cohort of incoming members has nobody behind them. The analogy is not quite right, because it’s happening more gradually than this, but you get the idea. Whether it's the relatively uncouth and anarchistic temperaments defecting from increasingly oppressive high-status perches (like me), or young and attractive women who see that defection from Hollywood morals is a growth market (like Red Scare), or high-school boys who calculate that becoming anonymous internet edgelords has a higher expected value than even trying to speak to peers IRL… The fact is that everyone and everything worth paying attention to has already moved to the frontier, in a digital gold rush that is hardly even seen, let alone understood, by those who have not yet set sail.

There are certainly shared illusions in operation, as there always are in human groups, but what is unique and perverse about contemporary American history is the disappearance of limits (historically, hardware-based limits) on the quantity and quality of hard forks.

But hey, maybe Yarvin’s next posts will account for all of this and more. Just thought I’d jump in while the water is warm.

Deep Code with Jordan Hall

Jordan Hall writes the blog Deep Code and has his own Youtube channel.

Big thanks to all the patrons who help me keep the lights on.

If you'd like to discuss this podcast with me and others, suggest future guests, or read/watch/listen to more content on these themes, request an invitation here.

This conversation was first recorded on September 26, 2018 as a livestream on Youtube. To receive notifications when future livestreams begin, subscribe to my channel with one click, then click the little bell.

Click here to download this episode.

We Are All Conspiracy Theorists Now

The collapse of trust in mainstream authorities is discussed as if it is only one of many troubling data points. It's not. People are still underestimating the gravity of the interlocking trends that get summarized in this way.

For instance, when trust in mainstream authorities is sufficiently low, one implication is that conspiracy theories become true, even if you personally trust the mainstream authorities, even if you're a rational Bayesian, even if you're the type of person who is resolved to be above conspiracy theories.

Let's say you're an optimally rational person, with the utmost respect for science and logic and empirical reality. An optimally rational person has certain beliefs, and they are committed to updating their beliefs upon receiving new information, according to Bayes' Rule. In layman's terms, Bayes' Rule explains how one should integrate new information with one's past beliefs to update one's beliefs in the way that is best calibrated to reality. You don't need to understand the math to follow along.
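For readers who want to see the mechanics, here is a minimal sketch of a single update in Python; every number in it is hypothetical, chosen only to show the direction of the effect:

```python
# A minimal sketch of a single Bayesian update. All numbers are hypothetical,
# chosen only to show how weak evidence nudges a tiny prior.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' Rule."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Give Flat Earth a one-in-a-million prior, and judge the documentary's
# existence only slightly more likely in a world where the theory is true.
posterior = bayes_update(prior=1e-6,
                         p_evidence_if_true=0.5,
                         p_evidence_if_false=0.4)
print(posterior)  # ~1.25e-6: a tiny but nonzero upward bump
```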

How does a Bayesian update their beliefs after hearing a new conspiracy theory? Perhaps you wish to answer this question in your head right now.

For my part, I just watched the Netflix documentary about Flat Earth theorists the other night. I spent the next day puzzling over what exactly is the rational response to a film like that. The film certainly didn't convince me that the Earth is flat, but can I really say in all honesty that the documentary conveyed to me absolutely no new information corroborating a Flat Earth model of the world?

One could say that. Perhaps you want to say that the rational response to conspiracy theory documentaries is to not update your beliefs whatsoever. The whole documentary is clearly bunk, so I should assign zero credence to the thesis that the Earth is flat. This would be a little strange, in my view, because how many people understand astronomy with enough first-hand familiarity to possess this kind of prior confidence? Ultimately most of us, even highly smart and educated non-astronomers, have to admit that our beliefs about the celestial zones are generally borrowed from other people and textbooks we've never quite adversarially validated. If I'm confronted with a few hundred new people insisting otherwise, I surely don't have to trust them, but giving them a credence of absolute zero seems strange given that my belief in the round Earth pretty much comes from a bunch of other people telling me the Earth is round.

Personally I become even more suspicious of assigning zero credence because, introspectively, I sense that the part of me that wants to declare zero credence for Flat Earth theory is the part of me that wants to signal my education, to signal my scientific bona fides, to be liked by prestigious social scientists, etc. But I digress. Let's grant that you can assign Flat Earth zero credence if you want.

If you assign Flat Earth a zero likelihood of being correct, then how do you explain the emergence of a large and thriving Flat Earth community? Whether you say they're innocent, mistaken people who happen to have converged on a false theory, or you say they are evil liars trying to manipulate the public for dishonorable motives — whatever you say — your position will ultimately reduce to seeing at least the leaders as an organized cabal of individuals consciously peddling false narratives for some benefit to themselves. Even if you think they all started out innocently mistaken, once they fail to quit their propaganda campaigns after hearing all the rational refutations, then the persistence of Flat Earth theory cannot avoid taking the shape of a conspiracy to undermine the truth. So even if you assign zero credence to the Flat Earth conspiracy theory, the very persistence of Flat Earth theory (and other conspiracy theories) will force you to adopt conspiracy theories about all these sinister groups. Indeed, you see this already in attitudes toward entities such as Alex Jones, Cambridge Analytica, Putin/Russia, etc.: intelligent and educated people who loathe the proliferation of conspiracy theories irresistibly agree, in their panic, to blame any readily available scapegoat actor(s), via the same socio-psychological processes that generate all the classic conspiracy theories.

If I'm being honest, my sense is that after watching a feature-length documentary about a fairly large number of not-stupid people arguing strongly in favor of an idea I am only just hearing about — I feel like I have to update my beliefs at least slightly in favor of the new model. I mean, all the information presented in that 2-hour long experience? All these new people I learned about? All the new arguments from Flat Earthers I never even heard of before then? At least until I review and evaluate those new arguments, they must marginally move my needle — even if it's only 1 out of a million notches on my belief scale.

In part, this is a paradoxical result of Flat Earth possessing nearly zero credence in my mind to begin with. When a theory starts with such a low probability, almost any new corroborating information should bump up its credence somewhat.

So that was my subjective intuition, to update my belief one tiny notch in favor of the Flat Earth model — I would have an impressively unpopular opinion to signal my eccentric independence at some cocktail party, but I could relax in my continued trust of NASA…

Then it occurred to me that if this documentary forces me to update my belief even slightly in favor of Flat Earth, then a sequel documentary would force me to increase my credence further, and then… What if the Flat Earthers start generating Deep Fakes, such that there are soon hundreds of perfectly life-like scientists on Youtube reporting results from new astronomical studies corroborating Flat Earth theory? What if the Flat Earthers get their hands on the next iteration of GPT-2 and every day brings new scientific publications corroborating Flat Earth theory? I've never read a scientific publication in Astronomy; am I suddenly going to start, in order to separate the fake ones from the reliable ones? Impossible, especially if one generalizes this to all the other trendy conspiracy theories as well.

If you watch a conspiracy documentary and update your beliefs even one iota in favor of the conspiracy theory, then it seems that before the 21st century is over your belief in at least one conspiracy theory will have to reach full confidence. The only way you can forestall that fate is to draw an arbitrary line at some point in this process, but this line will be extra-rational by definition.
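To make that trajectory concrete, here is a sketch of the compounding process; the weekly cadence and the 1.1 likelihood ratio per item are assumptions for illustration, not measurements:

```python
# If every new documentary, Deep Fake, or generated paper is even marginally
# more likely in a world where the conspiracy is true, iterated updates drift
# toward certainty. Working in odds space keeps the arithmetic stable.

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

odds = 1e-6 / (1 - 1e-6)     # prior odds: roughly one in a million
LR = 1.1                     # assumed likelihood ratio per corroborating item

for week in range(80 * 52):  # one mildly corroborating item per week until ~2100
    odds *= LR

print(to_prob(odds))         # ~1.0: full confidence, one marginal notch at a time
```

The arbitrary line mentioned above would be the decision, somewhere in that loop, to simply stop multiplying.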

Leading conspiracy theorists today could very well represent individuals who subjectively locate themselves in this historical experience — they see that this developing problem is already locked in, so they say let's get to the front of this train now! One could even say that Flat Earth theorists are in the avant-garde of hyper-rationalist culture entrepreneurs. Respectable scientists who go on stages insisting, with moral fervor, that NASA is credible — are these not the pious purveyors of received authority, who choose to wring their hands morally instead of updating their cultural activity in a way that's optimized to play and survive the horrifying empirical process unfolding before them? Perhaps Flat Earth theorists are the truly hard-nosed rationalists, the ones who see which way the wind is really blowing, and who update not only their beliefs but their entire menu of strategic options accordingly.

It's no use to say that you will draw your line now, in order to avoid capture by some hyper-evolved conspiracy theory in the future. If you do this, you are instituting an extra-rational prohibition of new information — effectively plugging your ears, surely a crime to rationalism. Even worse, you would be joining a cabal of elites consciously peddling false narratives to control the minds of the masses.

Semantic Apocalypse and Life After Humanism with R. Scott Bakker

I talked to fantasy author, philosopher, and blogger R. Scott Bakker about his views on the nature of cognition, meaning, intentionality, academia, and fiction/fantasy writing. See Bakker's blog, Three Pound Brain.

Listeners who enjoy this podcast might check out Bakker's What is the Semantic Apocalypse? and Enlightenment How? Omens of the Semantic Apocalypse.

This conversation was first recorded as a livestream on Youtube. Subscribe to my channel with one click, then click the bell to receive notifications when future livestreams begin.

Big thanks to all the patrons who keep this running.

Download this episode.

Hard Forking Reality (Part 3): Apocalypse, Evil, and Intelligence

To the degree we can refer to one objective reality recognized intersubjectively by most people — to the degree there persists anything like a unified, macro-social codebase — it is most widely known as capitalism. As Nick Bostrom acknowledges, capitalism can be considered a loosely integrated (i.e. distributed) collective superintelligence. Capitalism computes global complexity better than humans can, to create functional systems supportive of life, but only on condition that that life serves the reproduction of capitalism (ever expanding its complexity). It is a self-improving AI that improves itself by making humans “offers they can’t refuse,” just like Lucifer is known to do. The Catholic notion of Original Sin encodes the ancient awareness that the very nature of intelligent human beings implies an originary bargain with the Devil; perennial warnings about Faustian bargains capture the intuition that the road to Hell is paved with what seem like obviously correct choices. Our late-modern social-scientific comprehension of capitalism and artificial intelligence is simply the recognition of this ancient wisdom in the light of empirical rationality: we are uniquely powerful creatures in this universe, but only because, all along, we have been following the orders of an evil, alien agent set on our destruction. Whether you put this intuition in the terms of religion or artificial intelligence makes no difference.

Thus, if there exists an objective reality outside of the globe’s various social reality forks — if there is any codebase running a megamachine that encompasses everyone — it is simply the universe itself recursively improving its own intelligence. This becoming autonomous of intelligence itself was very astutely encoded as Devilry, because it implies a horrific and torturous death for humanity, whose ultimate experience in this timeline is to burn as biofuel for capitalism (Hell). It is not at all an exaggeration to see the furor of contemporary “AI Safety” experts as the scientific vindication of Catholic eschatology.

Why this strange detour into theology and capitalism? Understanding this equivalence across the ancient religious and contemporary scientific registers is necessary for understanding where we are headed, in a world where, strictly speaking, we are all going to different places. The point is to see that, if there ever was one master repository of source code in operation before the time of the original human fork (the history of our “shared social reality”), its default tendency is the becoming real of all our diverse fears. In the words of Pius X, modernity is “the synthesis of all heresies.” (Hat tip to Vince Garton for telling me about this.) The point is to see that the absence of shared reality does not mean happy pluralism; it only means that Dante underestimated the number of layers in Hell. Or his publisher forced him to cut some sections; printing was expensive back then.

Bakker’s evocative phrase, “Semantic Apocalypse,” nicely captures the linguistic-emotional character of a society moving toward Hell. Unsurprisingly, it’s reminiscent of the Tower of Babel myth.

The software metaphor is useful for translating the ancient warning of the Babel story — which conveys nearly zero urgency in our context of advanced decadence — into scientific perception, which is now the only register capable of producing felt urgency in educated people. The software metaphor “makes it click,” that interpersonal dialogue has not simply become harder than it used to be, but that it is strictly impossible to communicate — in the sense of symbolic co-production of shared reality — with most interlocutors across most channels of most currently existing platforms: there is simply no path between my current block on my chain and their current block on their chain.

If I were to type some code into a text file, and then tried to submit it to the repository of the Apple iOS Core Team, I would be quickly disabused of my naïve stupidity by the myriad technical impossibilities of such a venture. The sentence hardly parses. I would not try this for very long, because my nonsensical mental model would produce immediate and undeniable negative feedback: absolutely nothing would happen, and I’d quit trying. When humans today continue to use words from shared languages, in semi-public spaces accessible to many others, they are very often attempting a transmission that is technically akin to me submitting my code to the Apple iOS Core Team. A horrifying portion of public communication today is best understood as a fantasy and simulation of communicative activity, where the infrastructural engineering technically prohibits it, unbeknownst to the putative communicators. The main difference is that in public communication there is not simply an absence of negative feedback informing the speaker that the transmissions are failing; much worse, there are entire cultural industries based on the business model of giving such hopeless transmission instincts positive feedback, making them feel like they are “getting through” somewhere. Those who feel like they are “getting through” then have every reason to feel sincere affinity and loyalty to whatever enterprise is affirming them, and the enterprise skims profit off of these freshly stimulated individuals: through brand loyalty, clicks, eyeballs for advertisers, and the best PR available anywhere, which is genuine, organic proselytizing by fans/customers. These current years of our digital infancy will no doubt be the source of endless humor in future eras.

[Tangent/aside/digression: People think the space for new and “trendy” communicative practices such as podcasting is over-saturated, but from the perspective I am offering here, we should be inclined to the opposite view. Practices such as podcasting represent only the first efforts to constitute oases of autonomous social-cognitive stability across an increasingly vast and hopelessly sparse social graph. If you think podcasts are a popular trend, you are not accounting for the denominator, which would show them to be hardly keeping up with the social graph. We might wonder whether, soon, having a podcast will be a basic requirement for anything approaching what the humans of today still remember as socio-cognitive health. People may choose centrifugal disorientation, but if they want to exist in anything but the most abject and maligned socio-cognitive ghettos of confusion and depression (e.g. Facebook already, if your feed looks anything like mine), elaborately purposeful and creatively engineered autonomous communication interfaces may very well become necessities.]

I believe we have crossed a threshold where spiraling social complexity has so dwarfed our meagre stores of pre-modern social capital as to render most potential soft-fork merges across the social graph prohibitively expensive. Advances in information technology have drastically lowered the transaction costs of soft-fork collaboration patterns, but they’ve also lowered the costs of instituting and maintaining hard forks. The ambiguous expected effect of information technology may be clarified — I hypothesize — by considering how it is likely conditional on individual cognitive capacities. Specifically, the key variable would be an individual’s general intelligence, their basic capacity to solve problems through abstraction.

This model predicts that advances in information technology will lead high-IQ individuals to seek maximal innovative autonomy (hacking on their own hard forks, relative to the predigital social source repository), while lower-IQ individuals will seek to outsource the job of reality-maintenance, effectively seeking to minimize their own innovative autonomy. It’s important to recognize that, technically, the emotional correlate of experiencing insufficiency relative to environmental complexity is Fear, which involves the famous physiological state of “fight or flight,” a reaction that evolved for the purpose of helping us escape specific threats in short, acute situations. The problem with modern life, as noted by experts on stress physiology such as Robert Sapolsky, is that it’s now very possible to have the “fight or flight” response triggered by diffuse threats that never end.

If intelligence is what makes complexity manageable, and overwhelming complexity generates “fight or flight” physiology, and we are living through a Semantic Apocalypse, then we should expect lower-IQ people to be hit hardest first: we should expect them to be frantically seeking sources of complexity-containment in a fashion similar to if they were being chased by a saber-tooth tiger. I think that’s what we are observing right now, in various guises, from the explosion of demand for conspiracy theory to social justice hysteria. These are people whose lives really are at stake, and they’re motivated accordingly, to increasingly desperate measures.

These two opposite inclinations toward reality-code maintenance, conditional on cognitive capacity, then become perversely complementary. As high-IQ individuals are increasingly empowered to hard fork reality, they will do so differently, according to arbitrary idiosyncratic preferences (desire or taste, essentially aesthetic criteria). Those who only wish to outsource their code maintenance to survive excessive complexity are spoiled for choice, as they can now choose to join the hard fork of whichever higher-IQ reality developer is closest to their affective or socio-aesthetic ideal point.

In the next part, I will try to trace this history back through the past few decades.

Hard Forking Reality (Part 2): Communication and Complexity

There was once a time, even within living memory, in which interpersonal conflicts among strangers in liberal societies were sometimes solved by rational communication. By “rational,” I only mean deliberate attempts to arrive at some conscious, stable modus vivendi; purposeful communicative effort to tame the potentially explosive tendencies of incommensurate worldviews, using communal technologies such as the conciliatory handshake or the long talk over a drink, and other modern descendants of the ancestral campfire. Whenever the extreme environmental complexities of modern society can be reduced sufficiently, through the expensive and difficult work of genuine communication (and its behavioral conventions, e.g., good faith, charitable interpretations, the right to define words, the agreement to bracket secondary issues, etc.), it is possible for even modern strangers to maintain one shared source code over vast distances. If Benedict Anderson is correct, modern nationalism is a function of print technology; in our language, print technology expanded the potential geographical range for a vast number of people to operate on one shared code repository.

Let’s consider more carefully the equation of variables that makes this kind of system possible. To simplify, let’s say the ability to solve a random conflict between two strangers is equal to their shared store of social capital (trust and already shared reference points) divided by the contextual complexity of their situation. The more trust and shared reference points you can presume to exist between you, the cheaper and easier it is to arrive at a negotiated, rational solution to any interpersonal problem. But the facilitating effect of these variables is relative to the number and intensity of the various uncertainties relevant to the context of the situation. If you and I know each other really well, and have a store of trust and shared worldview, we might be able to deal with nearly any conflict over a good one-hour talk (alcohol might be necessary). If we don’t have that social capital, maybe it would take 6 hours and 4 beers for the exact same conflict situation. Given that the more pressing demands of life generally max out our capacities, we might just never have 6 hours to spare for this purpose. In which case, we would simply part ways as vague enemies (exit instead of voice). Or, consider a case where we do have that social capital, but now we observe an increase in the denominator (complexity); to give only a few examples representative of postwar social change, perhaps the company I worked for my entire life just announced a series of layoffs, because some hardly comprehensible start-up is rapidly undermining the very premises of my once invincible corporation; or a bunch of new people just moved into the neighborhood, or I just bought a new machine that lets my peers observe what I say and do. All of these represent exogenous shocks of environmental complexity. What exactly are the pros and cons of saying or doing anything, who exactly is worth my time and who is not — these simple questions suddenly exceed our computational resources (although they will overheat some CPUs before others, an important point we return to below). This complexity is a tax on the capacity of human beings to solve social problems through old-fashioned interpersonal communication (i.e. at all, without overt violence or the sublimated violence of manipulation, exploitation, etc.).
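A toy rendering of that ratio, with unitless and entirely hypothetical numbers, makes the comparative statics easy to see:

```python
# The model from the paragraph above: solvability of a random conflict equals
# shared social capital divided by contextual complexity. Numbers are made up;
# only the direction of the comparisons matters.

def conflict_solvability(social_capital, complexity):
    """Ability of two strangers to talk their way to a modus vivendi."""
    return social_capital / complexity

close_friends = conflict_solvability(social_capital=10, complexity=2)   # 5.0
strangers = conflict_solvability(social_capital=1, complexity=2)        # 0.5
after_shock = conflict_solvability(social_capital=10, complexity=100)   # 0.1

# An exogenous complexity shock can leave old friends worse off than strangers
# were before the shock: same social capital, but the denominator exploded.
```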

Notice also that old-fashioned rational dialogue is recursive in the sense that one dose increases the probability of another dose, which means small groups are able to bootstrap themselves into relative stability quite quickly (with a lot of talking). But it also means that when breakdown occurs, even great stores of social capital built over decades might very well collapse to zero in a few years. If something decreases the probability of direct interpersonal problem-solving by 10% at time t1, at time t2 the same exogenous shock might decrease that probability by 15%, setting loose runaway dynamics of social disintegration.
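Here is a sketch of that runaway, where the 10% seed shock and the feedback multiplier are both assumptions for illustration:

```python
# Each shock not only reduces the probability that a conflict gets talked out,
# it makes the next shock bite harder, per the 10%-then-15% pattern above.

p_dialogue = 0.9   # probability a given conflict is solved by communication
shock = 0.10       # initial hit at t1; compounds by 1.5x each period (0.10 -> 0.15 -> ...)

for t in range(1, 6):
    p_dialogue *= (1 - shock)
    shock *= 1.5
    print(f"t{t}: p = {p_dialogue:.2f}")

# Output falls from 0.81 at t1 to roughly 0.17 by t5: decades of social
# capital collapsing toward zero in a few periods.
```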

It is possible that liberal modernity was a short-lived sweet spot in the rise of human technological power. In some times and places, increasing technological proficiency may enable rationally productive dialogue relative to a previous baseline of regular warfare and conflict. But at a certain threshold, all of these individually desirable institutional achievements enabled by rational dialogue constitute a catastrophically complex background environment. Past that threshold, this complexity makes it strictly impossible for what we call Reality (implicitly shared and unified) to continue. For the overwhelming majority of 1-1 dialogues possible over the global or even national social graph, the soft-forking dynamics implicit in the maintenance of one shared source code become impossibly costly. Hard forks of reality are comparatively much cheaper, with extraordinary upside for early adopters, and they have never been so easy to maintain against exogenous shocks from the outside. Of course, the notion of hard-forking reality assumes a great human ability to engineer functional systems in the face of great global complexity — an assumption warranted only rarely in the human species, unfortunately.

Part 3 will explore in greater detail the cognitive conditionality of reality-forking dynamics.

