Make Communism Elite Again (Corralling Desperate Hordes Is Not a Blueprint)

Any serious theory of communism needs to have not only an account of how and why communism has always failed, but also an account of how and why it could work differently now. Ideally, a parsimonious theory would solve both of these puzzles at once: the specification of some key macro-systemic variable(s) that explain why communism has always failed in all of its previous instantiations — a variable which, it just so happens, recently changed in a direction unlocking the communist possibility.

Ever since the rise of the world wide web, we've seen a thriving cottage industry of thinkers who claim that the world wide web might be one of these magic variables. In a way, I suspect that the works of this industry might end up being vindicated to some degree, but they're mostly premonitions rather than blueprints. There's been a sense that this revolutionary event in the history of information-communication technology appears to take place on a vector toward something beyond capitalism, but the construction of its engineering diagram is always postponed.

I believe there are two broad sets of reasons why previous communist patches have failed. It seems to me that the solution to each is now within reach, but only time, and experimentation, will tell. Whether there exist human beings still desirous, able, and willing to take it — that's another question. It might be the case that human being as such has been pacified to such a degree that even a workable and immediately available blueprint for the achievement of the communist dream will not be taken up.

This post will focus on only one of the broad sets of reasons why communism has failed. I have a draft on the second one, but this took all of my free time for this evening. I'll either post it later or include it in some future, longer volume of some kind. If you're an editor at a major press, I will consider forwarding you a private draft, but if too many of you write me at once I might not be able to respond to all of you.

This first basket of reasons we might simply call "the rational-choice critique." Rational-choice theory and game theory are, for our purposes, roughly synonymous. For a few different reasons, communism is typically just not a game-theoretic equilibrium. The big kahuna of this class of problems is the basic prisoner's dilemma (any situation where mutual cooperation would be better than mutual cheating, but cheating while others cooperate would be best of all). For the most productive producers, it will always be in their rational self-interest to quit the commune and choose to profit on the open market—even though a society of generalized competitive brutality is least desirable. In the rational-choice literature, liberal rule of law with a little bit of welfare redistribution is generally seen as the best you can get, because a well-fed and stable populace is in the interests of Capital. Try to take more from Capital, and they'll leave you. The common view tends to see the Capitalists as the evil defectors from the common good, but it's not really this simple. In a recent but not-popularly-discussed book by political economist Carles Boix, we learn about another major problem, which is that the masses cannot credibly commit not to take everything from Capital. That is, if the masses could somehow tie their hands, to ensure that they will not absolutely fleece the Capitalists and leave them dead in a ditch, they could potentially get way more redistribution than they do today. It's this rational fear that always makes Capital want to escape greater social accountability.
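To make the payoff logic concrete, here is a minimal sketch of the prisoner's dilemma in Python. The payoff numbers are illustrative assumptions of mine, not anything from the rational-choice literature; all that matters is their ordering.

```python
# Illustrative payoffs (my assumption): unilateral defection (5) beats
# mutual cooperation (3), which beats mutual defection (1), which beats
# being the lone cooperator (0).

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,   # the commune works
    ("cooperate", "defect"):    0,   # I stay loyal while they cash in
    ("defect",    "cooperate"): 5,   # I quit the commune for the open market
    ("defect",    "defect"):    1,   # generalized competitive brutality
}

def best_response(their_move):
    """Return the move that maximizes my payoff, given the other's move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFFS[(m, their_move)])

# Defection is a best response to either move, so mutual defection is the
# unique equilibrium, even though mutual cooperation pays both players more.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

This is why "it would be better if we all cooperated" is not, by itself, an engineering diagram: the equilibrium has to be changed, not merely pointed at.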

Another sub-problem roughly in the rational-choice perspective is self-selection: modern people who are willing to embark on some crazy life experiment to alter the basic parameters of their existence are going to be disproportionately desperate people. This camp certainly includes people who are quite skilled and capable — but, almost by definition, they are never in the top, top percentiles of money makers. There are some in the top percentiles who profess communistic sympathies no doubt, but they rarely if ever risk their wealth on joint schemes with the larger bulk of desperate folks (they fall in the camp, discussed above, of Capitalists afraid of grabbing hands that cannot credibly limit themselves). This has always been one of the more magical aspects of capitalism, that it tends to pay people just enough that it would be irrational for them to play any game other than capitalism; it pays them according to a kind of weighted function of precisely those traits that would allow them to contribute to alternative ways of life, for the simple reason that real human powers are real human powers, whether they're given to capitalism, communism, or an idiosyncratic new-age death cult.

Today "communism" is championed almost exclusively by the desperate, those who — quite transparently — are barely able to participate in frank speech, call their mothers on a semi-regular basis, or keep their rooms clean, let alone engineer an alternative way of life.

(By the way, I'm not money- or status-desperate, and I don't really care about making or keeping bullshit friends, which is why I can help solve communism — I'm doing pretty well under capitalism to be honest, so you can trust I'm not reaching for any words that can get me some crumbs of money or status. For me, it's just a fun challenge that seems like a beautiful and noble goal that I would like to be the first person in history to achieve, for glory. I'm not going for crumbs, I'm going for the huge status reward, the one that only comes from being super real, from going hard and fast after what's real for a really long time. This is a motive that certainly has its pitfalls, and you should watch me vigilantly and judge me accordingly for my vulnerability to those pitfalls, but for the purpose of thinking honestly and seriously about communism, my incentives and motives are well aligned for you to take me very seriously.)

Today, class consciousness refers to members of the desperate horde selling each other goods and services that make them feel like they can engineer something. Typically, the implicit rationale is that the quantity of their ranks will make up for their lackluster skills and productivity relative to the evil capitalist overlords. How many mentally ill people with humanities degrees does it take to equal the political-engineering power of Elon Musk? My guess is that the reigning left-wing thought leaders really do believe that there exists an answer to this question — and the entire extent of their implicit engineering diagram, the one on which they operate behaviorally, is to merely corral this number of followers. One piece of evidence for this inference is that, generally, the concrete engineering questions are usually deferred until some critical popular mass is reached. This often doubles as an anti-authoritarian reassurance as well: "We can't engineer it until we have the participation of enough people. If I could give you the engineering diagram right now, that would make me like Stalin!" OK, so can you give me the engineering diagram showing how a certain critical mass of followers should trigger some reduction of the search space—or whatever it is you think it will unlock? Crickets. Thus, I must believe that "build the ranks" is the engineering diagram.

I've digressed, but the basic point up there is that communist movements are generally composed of those who can't make communism work, and capitalism wins the strongest loyalty from those who are the very best at making things work. This also explains why the major modern efforts to create communism have been led by extreme authoritarian personalities. People like Stalin almost certainly would have had the intelligence to get on at least as well as the median person without totalitarian revolution, but their psychopathic tendencies would be at odds with their self-advancement through peaceful trade. Their psychopathic traits put them in the 99th percentile of "leading huge desperate hordes in murderous personality cults," but maybe only in the 65th percentile of being a normal civilized person. (If it's not obvious, I'm pulling these numbers out of thin air just to sketch a hypothesis.) I am guessing that the only strong, smart, and powerful people who opt into communism tend to be somewhat evil, "Dark Triad" types. After the disgrace of modern Communism, the desperate horde-base for communism became so small and desperate that it's not even worth it to potential psychopathic leaders. Psychopaths are now free enough to thrive under globalized cybercapitalism; for anyone at all above average in intelligence and conscientiousness, even in the 65th percentiles with some Dark Triad traits, the gains from trade are now way larger than the gains from leading a small and factious horde of deplorables. Sorry, I meant to say desperates.

Although this class of problems is correctly placed under the label of rational-choice or game-theoretic problems, it's crucial to understand that there are very strong emotional stakes involved. As we've seen, for highly productive people, and people with very profitable skills in fields such as engineering or computer science, capitalism will pay you very well and at least leave you alone to work your 60-hour weeks in "peace." Because the whole point of communism is to pay for the life of the desperate horde (and this is good, indeed it is the definition of Nobility), you're obviously not going to be paid as much, but in modern Communism — adding insult to injury — you're also going to be positively denied the social status proportionate to your contributions. Often, you will have to submit to the disingenuous, confused, often bitterly resentful, and sometimes downright psychotic attitudes and behaviors of many damaged people, the kind that self-select into communism. But as communists will tell you themselves: to be systematically denied the recognition you deserve is to suffer literal violence. Whatever you want to call it, the pain is real and deep, and nobody will submit to it for long, including capitalists.

I think this constitutes a decently charitable and effective summary of why and how communism has always failed in the past, at least for one class of reasons. It just so happens that I've already sketched a positive solution, meeting the criteria I set out at the top: that the solution must be based on something new, that was not available in all previous implementations of communism. I admit I do some hand-waving of my own, in that I do think the digital revolution is a relevant game-changer for tying much of this together, without fully specifying how, although I have specified somewhat. In short, I believe this set of problems could be solved by an engineering blueprint that mimics the medieval arrangement known as noblesse oblige, although the seeming absurdity and generally unfashionable nature of the idea helps to explain why uptake has not been widespread and immediate. Communists don't want communism badly enough to submit to the reality-disciplining it would require.

Ideology, Intelligence, and Capital with Nick Land

Nick Land is a British philosopher living in Shanghai. Nick is one of the main figures in the school of thought known as accelerationism. He is currently writing a book about the philosophical implications of Bitcoin. We talked about accelerationism, cybernetics, ideology, the evolution of Nick’s perspective, Deleuze and Guattari, emancipation and dehumanization, artificial intelligence, capitalism, Moldbug, mathematics and the significance of zero, religion, blockchain/Bitcoin, Kantianism, synthetic time, and more.

We recorded this online, over two sessions. We did have some unavoidable connection problems, so you'll notice some imperfections such as clicking sounds throughout. We did the best we could; big thanks to those who helped with the editing.

A full-text transcript with timestamps is now available at Vast Abrupt.

Don't forget to subscribe wherever you get your podcasts.

Hard Forking Reality (Part 3): Apocalypse, Evil, and Intelligence

To the degree we can refer to one objective reality recognized intersubjectively by most people — to the degree there persists anything like a unified, macro-social codebase — it is most widely known as capitalism. As Nick Bostrom acknowledges, capitalism can be considered a loosely integrated (i.e. distributed) collective superintelligence. Capitalism computes global complexity better than humans can, to create functional systems supportive of life, but only on condition that that life serves the reproduction of capitalism (ever expanding its complexity). It is a self-improving AI that improves itself by making humans “offers they can’t refuse,” just like Lucifer is known to do. The Catholic notion of Original Sin encodes the ancient awareness that the very nature of intelligent human beings implies an originary bargain with the Devil; perennial warnings about Faustian bargains capture the intuition that the road to Hell is paved with what seem like obviously correct choices. Our late-modern social-scientific comprehension of capitalism and artificial intelligence is simply the recognition of this ancient wisdom in the light of empirical rationality: we are uniquely powerful creatures in this universe, but only because, all along, we have been following the orders of an evil, alien agent set on our destruction. Whether you put this intuition in the terms of religion or artificial intelligence makes no difference.

Thus, if there exists an objective reality outside of the globe’s various social reality forks — if there is any codebase running a megamachine that encompasses everyone — it is simply the universe itself recursively improving its own intelligence. This becoming autonomous of intelligence itself was very astutely encoded as Devilry, because it implies a horrific and torturous death for humanity, whose ultimate experience in this timeline is to burn as biofuel for capitalism (Hell). It is not at all an exaggeration to see the furor of contemporary “AI Safety” experts as the scientific vindication of Catholic eschatology.

Why this strange detour into theology and capitalism? Understanding this equivalence across the ancient religious and contemporary scientific registers is necessary for understanding where we are headed, in a world where, strictly speaking, we are all going to different places. The point is to see that, if there ever was one master repository of source code in operation before the time of the original human fork (the history of our “shared social reality”), its default tendency is the becoming real of all our diverse fears. In the words of Pius X, modernity is “the synthesis of all heresies.” (Hat tip to Vince Garton for telling me about this.) The point is to see that the absence of shared reality does not mean happy pluralism; it only means that Dante underestimated the number of layers in Hell. Or his publisher forced him to cut some sections; printing was expensive back then.

Bakker’s evocative phrase, “Semantic Apocalypse,” nicely captures the linguistic-emotional character of a society moving toward Hell. Unsurprisingly, it’s reminiscent of the Tower of Babel myth.

The software metaphor is useful for translating the ancient warning of the Babel story — which conveys nearly zero urgency in our context of advanced decadence — into scientific perception, which is now the only register capable of producing felt urgency in educated people. The software metaphor “makes it click,” that interpersonal dialogue has not simply become harder than it used to be, but that it is strictly impossible to communicate — in the sense of symbolic co-production of shared reality — with most interlocutors across most channels of most currently existing platforms: there is simply no path between my current block on my chain and their current block on their chain.

If I were to type some code into a text file, and then I tried to submit it to the repository of the Apple iOS Core Team, I would be quickly disabused of my naïve stupidity by the myriad technical impossibilities of such a venture. The sentence hardly parses. I would not try this for very long, because my nonsensical mental model would produce immediate and undeniable negative feedback: absolutely nothing would happen, and I’d quit trying. When humans today continue to use words from shared languages, in semi-public spaces accessible to many others, they are very often attempting a transmission that is technically akin to me submitting my code to the Apple iOS Core Team. A horrifying portion of public communication today is best understood as a fantasy and simulation of communicative activity, where the infrastructural engineering technically prohibits it, unbeknownst to the putative communicators. The main difference is that in public communication there is not simply an absence of negative feedback informing the speaker that the transmissions are failing; much worse, there are entire cultural industries based on the business model of giving such hopeless transmission instincts positive feedback, making them feel like they are “getting through” somewhere; by doing this, those who feel like they are “getting through” have every reason to feel sincere affinity and loyalty to whatever enterprise is affirming them, and the enterprise then skims profit off of these freshly stimulated individuals: through brand loyalty, clicks, eyeballs for advertisers, and the best PR available anywhere, which is genuine, organic proselytizing by fans/customers. These current years of our digital infancy will no doubt be the source of endless humor in future eras.

[Tangent/aside/digression: People think the space for new and “trendy” communicative practices such as podcasting is over-saturated, but from the perspective I am offering here, we should be inclined to the opposite view. Practices such as podcasting represent only the first efforts to constitute oases of autonomous social-cognitive stability across an increasingly vast and hopelessly sparse social graph. If you think podcasts are a popular trend, you are not accounting for the denominator, which would show them to be hardly keeping up with the social graph. We might wonder whether, soon, having a podcast will be a basic requirement for anything approaching what the humans of today still remember as socio-cognitive health. People may choose centrifugal disorientation, but if they want to exist in anything but the most abject and maligned socio-cognitive ghettos of confusion and depression (e.g. Facebook already, if your feed looks anything like mine), elaborately purposeful and creatively engineered autonomous communication interfaces may very well become necessities.]

I believe we have crossed a threshold where spiraling social complexity has so dwarfed our meagre stores of pre-modern social capital as to render most potential soft-fork merges across the social graph prohibitively expensive. Advances in information technology have drastically lowered the transaction costs of soft-fork collaboration patterns, but they’ve also lowered the costs of instituting and maintaining hard forks. The ambiguous expected effect of information technology may be clarified — I hypothesize — by considering how it is likely conditional on individual cognitive capacities. Specifically, the key variable would be an individual’s general intelligence, their basic capacity to solve problems through abstraction.

This model predicts that advances in information technology will lead high-IQ individuals to seek maximal innovative autonomy (hacking on their own hard forks, relative to the predigital social source repository), while lower-IQ individuals will seek to outsource the job of reality-maintenance, effectively seeking to minimize their own innovative autonomy. It’s important to recognize that, technically, the emotional correlate of experiencing insufficiency relative to environmental complexity is Fear, which involves the famous physiological state of “fight or flight,” a reaction that evolved for the purpose of helping us escape specific threats in short, acute situations. The problem with modern life, as noted by experts on stress physiology such as Robert Sapolsky, is that it’s now very possible to have the “fight or flight” response triggered by diffuse threats that never end.

If intelligence is what makes complexity manageable, and overwhelming complexity generates “fight or flight” physiology, and we are living through a Semantic Apocalypse, then we should expect lower-IQ people to be hit hardest first: we should expect them to be frantically seeking sources of complexity-containment in a fashion similar to if they were being chased by a saber-tooth tiger. I think that’s what we are observing right now, in various guises, from the explosion of demand for conspiracy theory to social justice hysteria. These are people whose lives really are at stake, and they’re motivated accordingly, to increasingly desperate measures.

These two opposite inclinations toward reality-code maintenance, conditional on cognitive capacity, then become perversely complementary. As high-IQ individuals are increasingly empowered to hard fork reality, they will do so differently, according to arbitrary idiosyncratic preferences (desire or taste, essentially aesthetic criteria). Those who only wish to outsource their code maintenance to survive excessive complexity are spoiled for choice, as they can now choose to join the hard fork of whichever higher-IQ reality developer is closest to their affective or socio-aesthetic ideal point.
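The conditional logic of the hypothesis above can be sketched in a few lines of Python. The threshold rule and the numbers are my own illustrative assumptions, not a calibrated model: I only encode the claim that the same technological shock pushes agents in opposite directions depending on capacity relative to complexity.

```python
# Toy model (illustrative assumptions throughout): an agent's predicted
# strategy depends on cognitive capacity relative to environmental complexity.

def forking_strategy(capacity, complexity):
    """Predict whether an agent hard-forks reality or outsources its maintenance."""
    if capacity >= complexity:
        return "hard-fork"   # seek maximal innovative autonomy
    return "outsource"       # join a reality developer's fork to contain complexity

# The same rise in complexity splits the population into the two
# perversely complementary roles described above.
assert forking_strategy(capacity=120, complexity=100) == "hard-fork"
assert forking_strategy(capacity=90, complexity=100) == "outsource"
```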

In the next part, I will try to trace this history back through the past few decades.

Hard Forking Reality (Part 2): Communication and Complexity

There was once a time, even within living memory, in which interpersonal conflicts among strangers in liberal societies were sometimes solved by rational communication. By “rational,” I only mean deliberate attempts to arrive at some conscious, stable modus vivendi; purposeful communicative effort to tame the potentially explosive tendencies of incommensurate worldviews, using communal technologies such as the conciliatory handshake or the long talk over a drink, and other modern descendants of the ancestral campfire. Whenever the extreme environmental complexities of modern society can be reduced sufficiently, through the expensive and difficult work of genuine communication (and its behavioral conventions, e.g., good faith, charitable interpretations, the right to define words, the agreement to bracket secondary issues, etc.), it is possible for even modern strangers to maintain one shared source code over vast distances. If Benedict Anderson is correct, modern nationalism is a function of print technology; in our language, print technology expanded the potential geographical range for a vast number of people to operate on one shared code repository.

Let’s consider more carefully the equation of variables that make this kind of system possible. To simplify, let’s say the ability to solve a random conflict between two strangers is equal to their shared store of social capital (trust and already shared reference points) divided by the contextual complexity of their situation. The more trust and shared reference points you can presume to exist between you, the cheaper and easier it is to arrive at a negotiated, rational solution to any interpersonal problem. But the facilitating effect of these variables is relative to the number and intensity of the various uncertainties relevant to the context of the situation. If you and I know each other really well, and have a store of trust and shared worldview, we might be able to deal with nearly any conflict over a good one-hour talk (alcohol might be necessary). If we don’t have that social capital, maybe it would take 6 hours and 4 beers, for the exact same conflict situation. Given that the more pressing demands of life generally max-out our capacities, we might just never have 6 hours to spare for this purpose. In which case, we would simply part ways as vague enemies (exit instead of voice). Or, consider a case where we do have that social capital, but now we observe an increase in the denominator (complexity); to give only a few examples representative of postwar social change, perhaps the company I worked for my entire life just announced a series of layoffs, because some hardly comprehensible start-up is rapidly undermining the very premises of my once invincible corporation; or a bunch of new people just moved into the neighborhood, or I just bought a new machine that lets my peers observe what I say and do. All of these represent exogenous shocks of environmental complexity.
What exactly are the pros and cons of saying or doing anything, who exactly is worth my time and who is not — these simple questions suddenly exceed our computational resources (although they will overheat some CPUs before other CPUs, an important point we return to below.) This complexity is a tax on the capacity for human beings to solve social problems through old-fashioned interpersonal communication (i.e. at all, without overt violence or the sublimated violence of manipulation, exploitation, etc.).
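The toy equation above can be sketched in Python. All quantities and thresholds here are illustrative stand-ins of my own, chosen only to reproduce the one-hour-talk versus six-hours-nobody-has contrast:

```python
# Toy model of conflict resolution between two strangers:
# time needed scales with contextual complexity, and is discounted
# by shared social capital (trust + shared reference points).

def hours_needed(complexity, social_capital):
    """Hours of talk required to resolve a conflict, per the toy equation."""
    return complexity / social_capital

def outcome(complexity, social_capital, spare_hours):
    """'voice' if the talk fits our time budget, else 'exit' as vague enemies."""
    if hours_needed(complexity, social_capital) <= spare_hours:
        return "voice"
    return "exit"

# Old friends, ordinary conflict: one good talk resolves it.
assert outcome(complexity=4, social_capital=4, spare_hours=2) == "voice"
# Strangers facing the same conflict need hours nobody has to spare.
assert outcome(complexity=6, social_capital=1, spare_hours=2) == "exit"
# An exogenous complexity shock flips even the high-capital case to exit.
assert outcome(complexity=12, social_capital=4, spare_hours=2) == "exit"
```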

Notice also that old-fashioned rational dialogue is recursive in the sense that one dose increases the probability of another dose, which means small groups are able to bootstrap themselves into relative stability quite quickly (with a lot of talking). But it also means that when breakdown occurs, even great stores of social capital built over decades might very well collapse to zero in a few years. If something decreases the probability of direct interpersonal problem-solving by 10% at time t1, at time t2 the same exogenous shock might decrease that probability by 15%, cutting loose runaway dynamics of social disintegration.
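This runaway dynamic is easy to simulate. The decay schedule below is an illustrative assumption of mine (a first shock of 10% that escalates with each round, echoing the 10%-then-15% example above), chosen only to show how escalating shocks compound into collapse:

```python
# Simulate the runaway breakdown: each exogenous shock cuts the probability
# of successful interpersonal dialogue, and each subsequent shock bites harder.

def dialogue_trajectory(p0=0.9, shock=0.10, escalation=1.5, steps=5):
    """Probability of successful dialogue after repeated, escalating shocks."""
    p, history = p0, [p0]
    for _ in range(steps):
        p *= (1 - shock)
        shock = min(shock * escalation, 1.0)  # breakdown feeds on itself
        history.append(p)
    return history

traj = dialogue_trajectory()
# Monotone collapse: capital built over decades can approach zero quickly.
assert all(a > b for a, b in zip(traj, traj[1:]))
assert traj[-1] < 0.3 * traj[0]
```

The point of the sketch is only the shape of the curve: once decline accelerates decline, there is no gentle equilibrium on the way down.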

It is possible that liberal modernity was a short-lived sweet spot in the rise of human technological power. In some times and places, increasing technological proficiency may enable rationally productive dialogue relative to a previous baseline of regular warfare and conflict. But at a certain threshold, all of these individually desirable institutional achievements enabled by rational dialogue constitute a catastrophically complex background environment. At a certain threshold, this complexity makes it strictly impossible for what we call Reality (implicitly shared and unified) to continue. For the overwhelming majority of 1-1 dialogues possible over the global or even national social graph, the soft-forking dynamics implicit in the maintenance of one shared source code become impossibly costly. Hard forks of reality are comparatively much cheaper, with extraordinary upside for early adopters, and they have never been so easy to maintain against exogenous shocks from the outside. Of course, the notion of hard-forking reality assumes a great human ability to engineer functional systems in the face of great global complexity — an assumption warranted only rarely in the human species, unfortunately.

Part 3 will explore in greater detail the cognitive conditionality of reality-forking dynamics.

Hard Forking Reality (Part 1)

On complexity, inequality, and ontological exits

I would like to explore how the multiple versions of reality that circulate in any society can become locked and irreconcilably divergent. Deliberation, negotiation, socialization, and most other forces that have historically caused diverse agents to revolve around some minimally shared picture of reality — these social forces now appear to be approaching zero traction, except within very narrow, local bounds. We do not yet have a good general theory of this phenomenon, one amenable to testing against empirical data from the past few decades. A good theory of reality divergence should not only explain the proliferation of alternative and irreconcilable realities, it should also be able to explain why the remaining local clusters of shared reality do persist; it should not just predict reality fragmentation, it should predict the lines along which reality fragmentation takes, and fails to take, place.

In what follows, I will try to sketch a few specific hypotheses to this effect. I have lately been stimulated by RS Bakker’s theory of Semantic Apocalypse. Bakker emphasises the role of increasing environmental complexity in short-circuiting human cognition, which is based on heuristics evolved under very different environmental conditions. I am interested in the possibility of a more fine-grained, empirical etiology of what appears to be today’s semantic apocalypse. What are the relevant mechanisms that make particular individuals and groups set sail into divergent realities, but to different degrees in different times and places? And why exactly does perceptual fragmentation — not historically unprecedented — seem uniquely supercharged today? What exactly happened to make the centrifugal forces cross some threshold of runaway divergence, traceable in the recorded empirical timeline of postwar Western culture?

I will borrow from Bakker the notion of increasing environmental complexity as a major explanatory factor, but I will generate some more specific and testable hypotheses by also stressing two additional variables. First, the timing and degree of information-technology advances. Second, I would like to zoom in on how the effect of increasing environmental complexity is crucially conditional on cognitive abilities. Given that the ability to process and maneuver environmental complexity is unequally distributed and substantially heritable, I think we can make some predictions about how semantic apocalypse will play out over time and space.

The intuition that alternative realities appear to be diverging among different groups — say, the left-wing and the right-wing — is simple enough. But judging the gravity of such an observation requires us to trace its more formal logic. Is this a superficial short-term trend, or a longer and deeper historical track? To answer such questions, we need a more precise model; and to build a more precise model, we need to borrow from a more formal discipline.

A garden of forking paths

When software developers copy the source code of some software application, their new copy of the source code is called a fork. Developers create forks in order to make some new program on the basis of an existing program, or to make improvements to be resubmitted to the original developer (these are called “pull requests”).

The picture of society inside the mind of individual human beings is like a fork of the application that governs society. As ultrasocial animals, when we move through the world, we do so on the basis of mental models that we have largely downloaded from what we believe other humans believe. But with each idiosyncratic experience, our “forked” model of reality goes slightly out of sync with the source repository. In a thriving community composed of healthy, thriving individuals, every individual fork gets resubmitted to the source repository on a regular basis (over the proverbial campfire, for instance). Everyone then votes on which individual revisions should be merged into the source repository, and which should be rejected, through a highly evolved ballot mechanism: smiles, frowns, admiration, opprobrium, and many other signals interface with our emotions to force the community of “developers” toward convergence on a consensus over time.

This process is implicit and dynamic; it only rarely registers official consensus and only rarely hits the exact bullseye of the true underlying distribution of preferences. At its most functional, however, a community of social reality developers is surprisingly good at silently and constantly updating the source code in a direction convergent toward the most important shared goals and away from the most dire of shared horrors.

These idealized individual reality forks are typically soft forks. The defining characteristic of a soft fork, for our purposes, is backward-compatibility: while “new rules” might be created on the fork, the “old rules” are still followed, so that when the fork’s innovations are merged into the source code, all the users operating on the old source code can easily receive the update. An example would be someone who experiments with a simple innovation in hunting method; if it’s a minor change that’s appreciably better, it will merge easily with all the previously existing source code, because it doesn’t conflict with anything fundamental there.

[Soft fork diagram]

Every now and then, one individual or subgroup might propose more fundamental innovations to the community’s source code by developing some radically novel program on a fork. This change, if accepted, would require all others to alter or delete portions of their legacy code. An example might be an individual who starts worshipping a new god, or a subordinate who wishes to become ruler against the wishes of the reigning ruler; in each case, someone submits new rules that would require everyone else to alter old rules deep in the source code. These forks are not backward-compatible. They are hard forks. Everyone in the community has to choose whether to preserve their source code and carry on without the fork’s innovations, or to accept the new fork.
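Under the simplification that a community’s source code is just a set of named rules, the soft/hard distinction reduces to a backward-compatibility check. This is a sketch with invented rule names, not a claim about how any real version-control system classifies forks:

```python
# Toy check: a fork is "soft" (backward-compatible) if it only adds rules,
# leaving every old rule intact; it is "hard" if it alters or deletes
# rules that the legacy code depends on.

def fork_kind(old_rules: dict, new_rules: dict) -> str:
    backward_compatible = all(
        new_rules.get(key) == value for key, value in old_rules.items()
    )
    return "soft" if backward_compatible else "hard"

old = {"hunting": "dawn", "god": "sun", "ruler": "chief"}

# A minor hunting innovation adds a rule and conflicts with nothing.
better_traps = {**old, "traps": "pit"}

# A new god, or a usurper, rewrites something deep in the legacy code.
new_god = {**old, "god": "moon"}

print(fork_kind(old, better_traps))  # soft
print(fork_kind(old, new_god))       # hard
```

The asymmetry is the whole point: a soft fork can be merged without anyone else changing anything, while a hard fork forces every member of the community into an accept-or-reject decision.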

[Hard fork diagram]

Recall that when the innovator on a fork resubmits to the source repository, in the ancestral human environment, the decision to accept or reject is facilitated over the proverbial campfire. This process is subject to costs, which are highly sensitive to contextual factors such as the complexity of the social environment (increasing the number of things to worry about), social capital (decreasing the number of things to worry about), and information-communication technology (decreasing the transaction costs that facilitate convergence, but also decreasing the exit costs that facilitate divergence). Finally, individual heterogeneity in cognitive ability is likely a major moderator of how environmental complexity, social capital, and information technology influence social forking dynamics. A consideration of these variables, I think, will provide a compelling and parsimonious interpretation of ideological conflict in liberal societies since World War II, on a more formal footing than commentators on these phenomena typically bring to bear.

#6 - Diana S. Fleischman

Diana S. Fleischman is an evolutionary psychologist, currently Senior Lecturer at the University of Portsmouth. Her interests include sex, disgust, veganism, utilitarianism, effective altruism, polyamory, and genetics, among other things.

Show notes with timestamps:

00:00 - 00:30

How we met on Twitter, how to make friends online, dissecting our online impressions of each other. Our weird ideological histories and intersections. Academics and drug use and talking about it on the internet. A thesis about the new ideological fracturing; the alt-right, etc.

00:30 - 00:50

Diana’s experiences with the vegan movement; the milquetoast Science March. Is “intersectionality” predictive? Diana’s view of how the left is changing, on smart people leaving the left and people with nuanced views being ejected. My thesis that there is no mass media or mainstream anymore.

Diana reviews the idea of personality, the Big Five traits. Most people are not very open to experience. Are apparent ideological differences really just due to a bunch of different lexicons and/or sociological differences? Lefties open to global warming science, not open to other science (GMOs, etc.). The problem of epistemic hygiene and disgust. Why are we so paranoid and afraid of each other when our society has never been more pacified? How evolutionary psychology explains the prevalence of signaling in politics. Very interesting exchange of hypotheses on this point, about what causes this to increase or decrease, and how it may or may not be changing. One has to be disagreeable to update; how Diana has repeatedly lost friends over this, while most people aren’t willing to do that. How I think this is changing on the left.

00:50 - 01:20

Debates about IQ and leftist denials of hierarchy. Partisan sorting. How ideology can be rational and at odds with the truth, at the same time. How social partners want to make each other really weird so there is less competition for their attention. Why it feels good when someone tells you a secret. Marriage; hierarchical polyamory vs. anarcho-polyamory. How polyamory makes healthy competition. Diana’s personal arrangements. Why I like monogamy and think pleasure is bad. It’s hard to think clearly and be honest when you’re trying to get laid. My interest in radical transparency, which Diana thinks is dumb. How sex could facilitate honesty.

Social media as escape behavior, how to manage this. Kink and sociopathy. How to use social media dopamine as a propeller of disciplined work, which you then reinvest into social media, and so on. Diana becomes more fluent when arguing. How we both leverage social media exchanges for more purposeful writing.

01:20 - 01:54

Here is where things get a little dicey. I asked Diana whether “human biodiversity” is a racist dog-whistle or a real thing. Diana laid out a lot of arguments and cited a lot of evidence, and we had a long back and forth about this and its implications. Diana recommended the article “On the Reality of Race and the Abhorrence of Racism,” an explicitly anti-racist case for “human biodiversity.” I don’t know much about this stuff and, to be honest, I’m still processing the conversation. As if this wasn’t difficult enough, I also asked Diana about mental health and transgenderism. I’m just going to leave it at that. Definitely one of the more intense and politically challenging conversations I’ve had on this podcast so far.

The content of this website is licensed under a CREATIVE COMMONS ATTRIBUTION 4.0 INTERNATIONAL LICENSE. The Privacy Policy can be found here. This site participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.
