When I first laid out my idea for a neo-feudal techno-communist patch, I only waved my hand at the coming technological pathways to my proposed polity. In that first talk, I just hypothesized that Rousseau's concept of the General Will could be engineered by Internet of Things + Smart Contracts.
But "Internet of Things" is really just a popular shorthand for the deepening integration of our physical and digital worlds. So it's easy to point at such a general class of coming technologies and say "something here is certainly going to solve [insert hitherto unsolvable problem]." One could very well have questioned my original talk on the grounds that what I was describing is not really feasible, or will not be feasible anytime soon.
The technology necessary to make communism game-theoretically stable seems closer than I thought.
One pathway on the sensor front is radar. Google has produced a new sensing device called Soli, which uses miniature radar to measure "touchless gestures." It's basically a tiny chip that holds a sensor as well as an antenna array, in one 8mm x 10mm rectangle:
Though Google's intended applications revolve around hand gestures, some people are already finding more general applications. (A flashy new prototype from a megacorp is one thing; but when some other entity starts tinkering with interesting results, that makes me pay more attention.)
A team of academics at the University of St. Andrews recently used Soli to explore the...
counting, ordering, identification of objects and tracking the orientation, movement and distance of these objects. We detail the design space and practical use-cases for such interaction which allows us to identify a series of design patterns, beyond static interaction, which are continuous and dynamic. With a focus on planar objects, we report on a series of studies which demonstrate the suitability of this approach. This exploration is grounded in both a characterization of the radar sensing and our rigorous experiments which show that such sensing is accurate with minimal training.
Take a minute to watch it in action, before we embark on a little thought experiment.
It's easy to imagine — without much extrapolation — how one could use this technology to enforce collective honesty and ethical performance optimization. Consider a large multi-family compound. One individual in one of the families is, by far, the most productive chopper of firewood. But he's a little dumb, and earns little money on the market. Then some other individual is by far the most productive software developer; he makes a lot of money on the market but he sucks at chopping firewood. Of course, rich software developers can already pay dumb manual laborers to produce their firewood, but currently no smart and rich person can enjoy the much more valuable and scarce luxury good of living in genuine harmony with a manual laborer.
So our wood-chopping expert hooks up some Soli chips to the pile of chopped wood he maintains for the community. Whenever a piece of wood is removed, he gets a ping on his phone, or maybe a digest at the end of each week. It tells him how many pieces of wood were taken, their weight, which person took them, and how many tokens were transferred to him by the associated Firewood Smart Subcontract (subcontracts are like clauses added to the founding Smart Contract of the polity; they can be added and removed by consensus, typically as new people enter or leave the group, or if and when individuals' skills, traits, or needs change substantially). The richer the person taking firewood, the more they pay per piece of wood via the Smart Contract, according to a steeply progressive taxation rate agreed upon and programmed into law beforehand.
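The pricing logic such a subcontract would need is easy to sketch. Here is a minimal illustration in Python; the wealth brackets, rates, and function names are all hypothetical placeholders for whatever the group agrees on, not any real smart-contract API:

```python
# Hypothetical progressive pricing for a Firewood Smart Subcontract.
# Brackets map a taker's wealth (in tokens) to a price per piece of wood.
# All numbers are illustrative placeholders, not real rates.

BRACKETS = [
    (0, 1.0),        # wealth >= 0 tokens: 1 token per piece
    (10_000, 2.5),   # wealth >= 10,000 tokens: 2.5 tokens per piece
    (100_000, 8.0),  # wealth >= 100,000 tokens: 8 tokens per piece
]

def price_per_piece(taker_wealth: float) -> float:
    """Return the steeply progressive price for one piece of firewood."""
    rate = BRACKETS[0][1]
    for threshold, bracket_rate in BRACKETS:
        if taker_wealth >= threshold:
            rate = bracket_rate
    return rate

def settle_removal(taker_wealth: float, pieces: int) -> float:
    """Tokens transferred to the wood-chopper when `pieces` are removed."""
    return price_per_piece(taker_wealth) * pieces
```

The point is only that the "steeply progressive taxation rate agreed and programmed into law" reduces to a small lookup table plus a multiplication, executed automatically on every sensor event.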
On the other hand, if Mr. Bunyan is not keeping the stock replenished, which leads to some individuals suffering very cold evenings, a certain number of tokens are transferred from him to whoever suffered a cold evening. This transfer can be automatically triggered whenever the data show the wood stock to be beneath some threshold, and the temperature data from a particular house to be beneath some threshold, on the same day. And again, these thresholds can be agreed consensually.
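The penalty clause can be sketched the same way. The thresholds and amounts below are invented placeholders standing in for whatever values the group consensually sets:

```python
# Hypothetical penalty clause: if the wood stock and a house's temperature
# both fall below the agreed thresholds on the same day, the wood-chopper
# compensates whoever suffered the cold evening. All values are placeholders.

STOCK_THRESHOLD_KG = 50.0   # agreed minimum wood stock
TEMP_THRESHOLD_C = 15.0     # agreed minimum indoor temperature
PENALTY_TOKENS = 20.0       # agreed compensation per cold evening

def cold_evening_penalties(stock_kg: float, house_temps: dict) -> dict:
    """Map each suffering household to the tokens owed by the wood-chopper."""
    if stock_kg >= STOCK_THRESHOLD_KG:
        return {}  # stock was adequate: no penalty, whatever the weather
    return {
        house: PENALTY_TOKENS
        for house, temp in house_temps.items()
        if temp < TEMP_THRESHOLD_C
    }
```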
Aside: It might seem that this technocommunism sure does require a lot of group decisions — won't it fail like Occupy failed, because democracy is too much work?! Not quite. First, other than the basic preference thresholds defined in the contracts, there is no discussion or deliberation whatsoever. The code is sovereign, and removes the need for regular meetings and debates. My references to consensus only refer to periodic updates. Second, you know what requires a million decisions? The construction of a modern website. And yet it's easier than ever to make one, even with a group. Why? Because code evolves. With code, future people let the smartest and most successful past people make decisions for them. Over time, the larger global community of neo-feudal techno-communist polity hackers will converge on templates: kits containing a variety of sensor devices and a corresponding code repository, containing all the device+subcontract components found in almost all of the most successful patches to date. Groups will add new modules if they enjoy hacking, but many will just use the default settings. Or, upon initiation, each person completes a short survey gauging basic traits and aptitudes, which plugs the optimal values for the various preference thresholds into the template.
Depending on the use case, perhaps a video rig combined with image-detection algorithms would work better than radar. Perhaps multiple, redundant methods leveraging different dimensions (video, radar, sound, etc.) might be used at once in especially tricky and sensitive cases. Perhaps it turns out that 67% of the most destructive community offenses occur in kitchens, so the kitchen is loaded with every method and a heavyweight ensemble model. With some problems our tolerance for false positives might be greater or less than our tolerance for false negatives, so the statistical cutoff for inferring a violation would be set higher or lower accordingly.
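That last point can be made concrete: given estimated costs for a false accusation versus a missed offense, the cutoff that minimizes expected cost can simply be searched for. A small Python sketch, with invented cost figures and function names:

```python
# Choosing a detection cutoff when false positives and false negatives
# carry different costs. A classifier emits a violation probability; we
# flag a violation only above the cutoff. All costs are illustrative.

def expected_cost(cutoff, events, fp_cost, fn_cost):
    """events: list of (violation_probability, actually_violated) pairs."""
    cost = 0.0
    for prob, violated in events:
        flagged = prob >= cutoff
        if flagged and not violated:
            cost += fp_cost   # false accusation
        elif not flagged and violated:
            cost += fn_cost   # missed offense
    return cost

def best_cutoff(events, fp_cost, fn_cost, candidates=None):
    """Pick the candidate cutoff with the lowest expected cost."""
    if candidates is None:
        candidates = [i / 100 for i in range(1, 100)]
    return min(candidates,
               key=lambda c: expected_cost(c, events, fp_cost, fn_cost))
```

If the community fears false accusations ten times more than missed offenses, `fp_cost = 10, fn_cost = 1` pushes the chosen cutoff upward, exactly as described above.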
Meanwhile, while the wood-chopper's system is managing itself, the rich computer programmer might leave a huge stock of old-fashioned USD greenbacks out in the open, available to all for immediate, interest-free cash loans. Why? Because the risk approaches zero: just as you can watch in the video above, all removals and returns are fully identified and recorded with radar, and if anyone fails to repay, the owner of the cash stock will be automatically credited from the taker's account after some agreed time (if the taker doesn't have it, a small portion will be taken from all of the others, all of whom have agreed to guarantee each other).
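The settlement logic for that mutual guarantee is equally simple to sketch: draw first from the taker's account, then split any shortfall equally among the guarantors. The names and numbers here are illustrative only:

```python
# Hypothetical mutual-guarantee clause for the open cash stock: an unpaid
# debt is covered first by the taker's balance, with any remainder split
# equally among the other members who agreed to guarantee each other.

def settle_loan(owed: float, taker_balance: float, guarantors: list):
    """Return (tokens_from_taker, tokens_from_each_guarantor)."""
    from_taker = min(owed, taker_balance)
    shortfall = owed - from_taker
    share = shortfall / len(guarantors) if guarantors else 0.0
    return from_taker, share
```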
The only question right now is, what are currently the best technologies available for getting started? That and, who's game?
In this podcast, Dave gave me a better understanding of the potential — and the limits — of Ethereum smart contracts. I told Dave about my own ideas (see Reality Patchwork and Neo-Feudal Techno-Communism and Aristocracy and Communism) to see how my intuitions bounced off a technical expert. There's also some good stuff in here for anyone curious about learning to develop their own smart contracts.
Big thanks to all the patrons who help keep this podcast going.
When I talk about aristocratic communism — the idea that a functional communism might be achieved by organizing and enforcing respect for the rich, on condition they distribute wealth — many people scoff and say "that's already the hell of neoliberal capitalism!"
But in fact, today, it's increasingly difficult for the wealthy to enjoy their noblesse oblige, in part because it's so mediated by large, sclerotic institutions. Wealthy nobles once upon a time redistributed their wealth as a kind of art form; they were like painters painting on the grandest canvas, and the enjoyment of this creative control, as well as the glory that came from being directly and visibly linked to it, were likely major incentives encouraging redistribution.
Today, the wealthy donate a lot of money through Big Philanthropy, but Big Philanthropy is better thought of as a huge bureaucratic blockage to the real social-psychological attractions of philanthropy.
…the gift is unlikely to take the form of Jeff Bezos dictating terms, even if he is the world’s richest man. Bezos and his team will have to work through many institutions — not just preschools and homeless shelters but other organizations that help them do their work. Even brand new preschools and homeless shelters, funded entirely by Bezos, will have their own charters, missions, staffs and fiduciary responsibilities. Any wealthy person who wants to give away money will find that incentives and the nature of decentralization and bureaucracy impose their own set of checks and balances.
This supports my contention that perhaps the only thing the rich cannot get their hands on today is the invaluable experience of genuine nobility — which comes from generously and creatively supporting others and receiving respect and admiration in return. If we could engineer a way for some rich people to enjoy such true, disintermediated nobility, I think they'd become quite open to supporting a community of common folk in a fashion that approximates the classical communist ideal.
And now that I think about it, who is blocking the rich from exercising their noblesse oblige? Most of the bureaucrats and meddlers working in philanthropic and humanitarian agencies and organizations generally see themselves, and present themselves, as morally progressive agents. If Bezos wants to give $2 billion to solving some big social ill, there will be dozens if not hundreds of groups who already claim to be the nobles "working on it." But these people basically own the poor and working people they seek to represent and "help." If Jane wants to give me 20 bucks but John insists that she must give it to him first, and then John gives me 10 bucks — John is not my helper. He is my owner, and he is using me to make money for himself. In short, modern society is overrun with fake nobles, who do not have resources to distribute but quite the opposite: they push the moral buttons of the populace and pull government levers to extract money from the wealthy, primarily for their own careers and identity, and only secondarily to help others. This ordering of priorities is clearly legible in the balance sheets of these organizations, which generally show most of the money going to staff and overhead. They claim to be promoting redistribution, but they happily place themselves in the way of rich people who would like to be more communist, if only they were allowed.
[Disclosure: I don't actually know that much about European feudalism. Most of my posts contain a fair bit of speculative guesswork and imagination, but after finishing this I felt compelled to make clear it's almost all conjecture. Rather than make every sentence wishy-washy with too many qualifiers, I've kept many of the probably-too-firm sentences but am putting this here to qualify all of them.]
Under European feudalism, normative status hierarchies seem to have been relatively well aligned with objective character qualities and community contributions. For instance, the Lord organized, commanded, and actually fought with the army that protected the patch from external threats, thus earning the premium of admiration and respect (not to mention money) associated with his title. Social facts (the codified power distribution, titles and so on) and social values (what and who gets counted as good), seem to have been more tightly correlated than today, with both relatively well calibrated to their proper objective referents.
On the lower end of the status hierarchy, the most hard-working, responsible, patient, and loyal serfs who cared about their family's future (all normatively positive descriptors) could save money and eventually become freemen — if they were blessed with the abilities necessary to do so. The epoch's techno-scientific inability to distinguish between inherited abilities and the above-listed character virtues was unfortunate and certainly caused much measurement error (giving too much or too little normative credit to individuals for inherited traits), but — discounting for their ignorance on these matters — the relationship between social facts and social values had to be better calibrated than it would be once mass-broadcast deceptiveness became possible. Drunks and brawlers presumably did not transcend their bondage, and those who avoided drinking and fighting would be more likely to gain independence. In short, good adjectives were likely applied to those producing objectively pro-social and self-rewarding effects, and bad adjectives were likely applied to those producing objectively anti-social and self-destructive effects. At least, I would infer, more so than today, when objectively bad people sometimes earn positive admiration from millions, and objectively good people sometimes receive nothing but punishment. Obviously, I'm being highly simplistic; ye olde manor was no rose garden. But while there was much natural suffering and tragedy, and many typical human pathologies, it seems true that the social calibration of normative worthiness with its objective empirical referents was far less vulnerable to the kind of systematic, impervious-to-error-correction divergence dynamics we appear to be living through today.
The philosophical and behavioral backbone of the post-Roman patchwork was Catholicism. It stands to reason that this was the unique condition that allowed a high degree of fragmentation and decentralization alongside a high degree of shared identity, meaning, and purpose (relative to anything we know today, anyway).
The feudal community codifies objectively existing differences in human temperament and ability, which may be natural and hardwired or arbitrary and unjust, but — with the surplus of social goodwill produced by this factual and spiritual attunement — the powerful are genuinely invested in lifting the floor of their most downtrodden subjects. The weak also are genuinely invested in — sincerely praying for, or "rooting for" (to use the contemporary term for "prayer") — the success of their Lord and his army. European feudalism therefore provides some historical evidence for the consistency of what I have previously theorized as noble communism, and it suggests that the Catholic faith may be uniquely effective in solving the coordination problems of the ideal communist model. It was the Catholic faith alone that sustained cohesion, meaning, and collective economic productivity (though admittedly not optimality) in a context that was fundamentally libertarian. It was also unique to this Catholic patchwork that only from here would we observe the intelligence explosion that we now think of as modern capitalism.
One should not ask why the feudal commune failed: it was a genuinely free communism that succeeded in generating growth behavior (to be called capitalism). It was probably only in this fragmented but high-trust context that capitalism could emerge.
In its first few hundred years, it's been like an angry tiger just released from an all-too-small cage, but on the world-historical timeline a few hundred years is nothing.
The pro-growth, libertarian Catholic communism of European feudalism was so successful that, for a short period of time, several proud and arrogant generations thought they could do away with God. They tried; life eventually became unbearably empty, at the same time that scientific rationality came to affirm the likelihood of a creator God at the beginning of our time, and a second coming of God in the near future. All of this is now being realized, and a return to Catholic communism may be the only path forward, on rationalist grounds, aesthetic grounds, and ethical grounds.
Another virtue of Catholic communism was that it sustained itself for hundreds of years without any race consciousness, which had not yet been invented in its modern sense. Thus the Catholic communist model offers a viable and much better alternative to ethnic identity as a principle of cohesion in the West.
So the question is not "How could we make feudalism work today, if it couldn't work in the past?" It did work in the past, all too well! The question is rather whether secular capitalism can last for more than a few hundred years. Feudal communism worked for hundreds of years and was generative of capitalism; secular rational capitalism has worked for a few hundred years, but it's rapidly being looted by rent-seekers within and anti-Western, anti-secular enemies on the outside. The human biomass that is now merely a plaything of the rational secular capitalist super-system has no will to fight for anything other than its own resentments. Any sufficiently aggressive and repressive force on the inside or outside of secular capitalism may very well destroy everything before artificial superintelligence takeoff locks in.
It is not for nothing that the threat of militant Islam again rears its head today, right when the decline of Catholic communist Europe is approaching completion. The reason there was not much innovative art and culture in the Dark Ages is that most human effort went toward the military defense of the Catholic patches against pagan pirates from the north and Islam from the south. It stands to reason that Catholic communism was adaptive for keeping out militant Islam, and that the threat of militant Islam in turn incentivized Catholic communism. We are only being reminded of this today, after a long hiatus of lazy arrogance: if a dignified and meaningful life is not provided to all by the nobles, then the children of Europe will sooner join the Islamic holy war than resist it. And most of them will be indifferent at best.
We are well aware of the ways in which secular communism is typically not a stable game-theoretic equilibrium. We have learned this through many data points. But we are less aware of whether secular capitalism is a stable equilibrium, because it’s a unique world-system experiment with an n = 1. So far though, it's not looking good if you ask me.
[This is a transcript of a talk I gave at the Diffractions/Sbds event, "Wyrd Patchwork," in Prague on September 22, 2018. The video can be found here. My talk begins at around the 2-hour and 6-minute mark. I've added some links and an image.]
I want to talk about patchwork as an empirical model, but also a little bit as a normative model, because there's this idea that capitalism is increasingly collapsing the fact/value distinction. I tend to think that's true. And I think what that means is that that which is empirically true increasingly looks to be normatively true also. Or if you're searching for a true model, you should be searching for models that are at once empirically well calibrated with reality, and you should also be looking for normative or ethical consistency. And you can find the true model in any particular situation by kind of triangulating along the empirical and the normative. That's kind of how I think about patchwork.
I've been thinking about it in both of these dimensions, and that has allowed me to converge on a certain vision of what I think patchwork involves or entails. And I've been writing a lot about that over the past couple months or so. So what I'm going to do in this talk specifically is not just rehash some ideas that I've been thinking about and writing about and speaking about the past couple months, but try to break a little bit of ground — at least in my own weird head, at the very least — and show how some of these different ideas of mine connect, or can be integrated. In particular, I wrote a series of blog posts a few months ago on what I call reality forking (1, 2, 3). "Forking" is a term that comes from the world of software engineering. And so that's going to be one component of the talk.
You'll see it. It's very obvious how that connects to the idea of patchwork. And I'm also going to talk about this vision for a communist patch a lot of us have been interested in. And I've been talking with a lot of people about this idea of the communist patch and soliciting, you know, different people's impressions on it. And I also have written a few blog posts recently talking — kind of sketching, kind of hand-waving, if you will — at what a possibly communist patch might look like. A lot of people think, to this day, that patchwork has a very kind of right-wing connotation. People think primarily of Moldbug and Nick Land when they think of patchwork. But I think it's not at all obvious that patchwork necessarily has a right-wing flavor to it.
I think we can easily imagine left-wing patches that would be as competitive and as successful as more authoritarian patches. And so that's kind of what I've really been thinking a lot about recently. And even Nick Land himself told me that, you know, there's nothing wrong with trying to think about and even build a communist patch — it's all fair play. He's much less bullish on it than I am, but be that as it may. So those two ideas I'm going to discuss basically in turn and then try to connect them in a few novel ways. I have a few points or comments or extrapolations or connections between these two different ideas I've been working on, that I've never really written down or quite articulated yet. So that's what I'm going to try to do here.
So first of all, I was going to start this by talking a little bit about how patchwork I think is already happening in a lot of ways, but I deleted many of my bullet points because Dustin's presentation basically covered that better than I possibly could. So I'm not going to waste too much time talking about that. There's a lot of empirical data right now that looks a lot like fragmentation is the order of the day and there's a lot of exit dynamics and fragmentation dynamics that we're observing in many domains. And yeah, Dustin articulated a lot of them.
One thing I would say to kind of situate the talk, though, is that it's worth noting that not everyone agrees with this, you know... There's still a lot of integrative talk nowadays. There's a lot of discourse about the necessity of building larger and larger organizations. Especially when people are talking about global issues and major existential threats. Often in the educated discourse around preventing nuclear threats, for instance, or AI, things like runaway inhumane genetic testing, things like that. You could probably think of a few others. Climate change would be the obvious big one, right? A lot of these major global issues, the discourse around them, the expert opinions, tend to have a kind of integrative, centralized tendency to them. Actually, just this morning I happened to be listening to a podcast that Sam Harris did with Yuval Harari. This guy who wrote the book Sapiens, this mega global blockbuster of a book, and you know, he seemed like a nice guy, a smart guy of course, but everything he was saying was totally integrated. He was talking about how we need things like international organizations and more global international cooperation to solve all of these different problems, and Sam Harris was just kind of nodding along happily. And that got me thinking, actually, because even if you read people like Nick Bostrom and people who are kind of more hard-nosed and analytical about things like intelligence explosion, you find a lot of educated opinion is the opposite of a patchwork orientation; you find "We need to cooperate at a global level." Anyway, the reason I mentioned this is just to put in context that the ideas we're interested in, and the empirical dynamics that we're pinpointing, are not at all obvious to everyone.
Even though, when you really look at all of the fragmentation dynamics now, I think it's increasingly hard to believe any idea, any proposal having to do with getting all of the nation states to cooperate on something. I just... I just don't see it. For instance, genetic engineering, you know China is off to the races and I just don't see any way in which somehow the US and China are going to negotiate some sort of pause to that. Anyway, so that's worth reflecting on. But one of the reasons I mention that is because I kind of have a meta-theory of precisely those discourses and that's what I'm going to talk about a little bit later in my talk when I talk about the ethical implications, because I think a lot of that is basically lying.
Okay. One of my theses is that when people are talking about how we have to organize some larger structure to prevent some moral problem — nine times out of ten, what they're actually doing is a kind of capitalist selling process. So that's actually just a kind of cultural capitalism in which they're pushing moral buttons to get a bunch of people to basically pay them. That is a very modern persona, that's a modern mold, and that's precisely one of many things that I think is being melted down in the acceleration of capitalism. What's really happening — what's really feasible in so many domains — is this: all you can see for miles, when you look in every possible direction, is fragmentation, alienation, atomization, exits of all different kinds on all different levels.
And then you have people who are like, "Uh, we need to stop this, so give me your money and give me your votes." I think that's basically an unethical posture. I think it's a dishonest, disingenuous posture, and it's ultimately about accruing power to the people who are promoting that — usually high-status cultural elites in the "Cathedral" or whatever you want to call it. So that's why I think there are real ethical implications. I think if you want to not be a liar and not be a kind of cultural snake-oil salesman — which I think a lot of these people are — patchwork is not only what's happening but we're actually ethically obligated to hitch our wagon to patchwork dynamics. If only not to be a liar and a manipulator about the nature of the real issues that we're going to have to try to navigate somehow.
I'll talk a little bit more about that, but I just wanted to kind of open up the talk with that reflection on the current debate around these issues. So, okay.
One dimension of the patchwork dynamics or exit dynamics that we're observing right now, which Dustin didn't talk about so much, is a patchwork dynamic taking place on the social-psychological level. To really drive this point home, I've had to borrow a term from the world of software engineering. I'll make this really quick and simple.
Basically, when you're developing software and you have a bunch of people contributing to this larger codebase, you need some sort of system or infrastructure for how a bunch of people can edit the code at the same time, right? You need to keep that orderly, right? So there's this simple term, it's called forking. So you have this codebase and if you want to make a change to the code base, you fork it. In a standard case, you might do what we call a soft fork. I'm butchering the technical language a little bit; if there are any hardcore programmers in the room, I'm aware I'm painting with broad strokes, but I'll get the point across effectively enough without being too nerdy about it.
A soft fork means that you pull the codebase off for your own purposes, but it ultimately can merge back in — that's the simple idea there. But a hard fork is when you pull the codebase off to edit it, and there's no turning back. There's no reintegrating your edits into the shared master branch or whatever you want to call it. So I use this kind of technical distinction between a soft fork and a hard fork to think about what's actually going on with social-psychological reality and its distribution across Western societies today. The reason I do this is because I think you need this kind of language to really drive home how radical the social-psychological problems are. I really think that we underestimate how much reality itself is being fragmented in different subpopulations.
I think we're talking about fundamental... We are now fundamentally entering into different worlds and it's not at all clear to me that there's any road back to having some sort of shared world. And so I sketched this out in greater detail. The traditional human society, you can think of it as a kind of system of constant soft forking, right? Individuals go off during the day or whatever, they go hunting and do whatever traditional societies do, and at the end of the night they integrate all of their experiences in a shared code base. Soft forks, which are then merged back to the master branch around the campfire or whatever you want to call it, however you want to think about that. But it's only now that, for the first time ever, we have the technological conditions in which individuals can edit the shared social codebase and then never really integrate back into the shared code base.
And so this is what I call the hard forking of reality. I think that is what we're living through right now. And I think that's why you see things like political polarization to a degree we've never seen before. That's why you see profound confusion and miscommunication, just deep inabilities to relate with each other across different groups, especially like the left vs. right divide, for instance. But you also see it with things like... Think about someone like Alex Jones, think these independent media platforms that are just on a vector towards outer space — such that it's hard to even relate it to anything empirical that you can recognize. You see more and more of these kinds of hard reality forks, or that's what I call them. I'm very serious.
I think educated opinion today underestimates how extreme that is and how much that's already taking place. It's not clear to me once this is underway, it's not clear to me how someone who is neck-deep in the world of Alex Jones — and that is their sense of what reality is — how that person is ever going to be able to sync back up with, you know, an educated person at Harvard University or something like that. It's not just that those people can't have dinner together — that happened several decades ago probably — but there's just no actual technical, infrastructural pathway through which these two different worlds could be negotiated or made to converge into something shared. The radicalism of that break is a defining feature of our current technological moment.
And that is an extraordinary patchwork dynamic. In other words, I think that patchwork is already here, especially strong in the socio-psychological dimension, and that's very invisible. So people underestimate it. People often think of patchwork as a territorial phenomenon and maybe one day it will be, but I think primarily for now it's social-psychological and that should not be underestimated because you can go into fundamentally different worlds even in the same territory. But that's what the digital plane opens up to us. So that's one half of what I'm bringing to the table in this talk.
There are a few antecedent conditions to explain, like why I think this is happening now. One is that there's been an extraordinary breakdown in trust towards all kinds of traditional, institutionalized, centralized systems. If you look at the public opinion data, for instance, on how people view Congress in the United States, or how people view Parliament or whatever, just trust in elected leaders... You look at the public opinion data since the fifties and it's really, really on the decline, a consistent and pretty rapid decline.
And this is true if you ask them about the mass media, politicians, a whole bunch of mainstream, traditional kinds of institutions that were the bedrock of modernized societies... People just don't take them seriously anymore at all. And I think that is because of technological acceleration: what's happened is that there is unprecedented complexity. There's just too much information. There's so much information that these modern institutions are really, really unwieldy. They're unable to process the complexity that we are now trying to navigate, and people are seeing very plainly that all of these systems are just patently not able to manage. They're not able to do or give what they're supposed to be giving with this explosion of information that they were not designed to handle. So it's kind of like a bandwidth problem, really. But because of this, people are dropping their attention away from these institutions and they're looking outwards, they're looking elsewhere, they're looking for other forms of reality, because that's ultimately what's at stake here.
These traditional institutions, they supplied the shared reality. Everyone referred back to these dominant institutions because — even if you didn't like those institutions in the 60s or 70s or whatever, even when people really didn't like those institutions, like the hippies or whatever — everyone recognized them as existing, as powerful. So even opposing them, you kind of referred back to them. We're now post- all of that, where people so mistrust these institutions that they're not even referring back to them anymore. And they're taking all their cues for what reality is from people like Alex Jones or people like Jordan Peterson or you name it, and you're going to see more and more fragmentation, more and more refinement of different types of realities for different types of subpopulations in an ever more refined way that aligns with their personalities and their preferences. These are basically like consumer preferences. People are going to get the realities that they most desire in a highly fragmented market. Anyway... So I think I've talked enough about that. That's my idea of reality forking and that's my model of a deep form of patchwork that I think is already underway in a way that people underestimate.
So now I want to talk a little bit more about the ethics of patchwork because I think the observations that I just presented raise ethical questions. And so if I am right that reality itself is already breaking up into multiple versions and multiple patches, well then that raises some interesting questions for us, not just in terms of what we want to do, but in terms of what we should do.
Ethics and Patchwork
What does it mean to seek the good life if this is in fact what's happening? It seems to me that, right now, you're either going to be investing your efforts into somehow creatively co-constituting a new reality or you're going to be just consuming someone else's reality. And a lot of us, I think, do a combination of this. Like all the podcasts I listen to, and all the Youtube videos I watch, that's me outsourcing reality-creation to other people, to some degree. But then the reason I've gotten on Youtube and the reason I've gotten really into all of these platforms and invested myself in creating my own sense of the world is because I don't just want to be a consumer of other people's realities. I want to be... I want to create a world. That would, that sounds awesome. That would be the ideal, right? But the problem is that people are differently equipped to do so, to either create or consume realities and I think that this is difficult and very fraught. This is a very politically fraught problem. The left and the right will have debates about, you know, "the blank slate" versus the heritability of traits and all of that. And I don't want to get into that now, but however you want to interpret it, it is an obvious fact that some people are better equipped to do things like create systems, than other people. To me, this is the ethical-political question space.
The default mode right now is the one that I already described at the top of my talk: it's the moralist. It's the traditional left-wing (more or less) posture. "Here's a program for how we're going to protect a bunch of people. All it requires is for you to sign up and give your votes and come to meetings and give your money and somehow we're going to all get together and we're going to take state power and protect people" or something like that. As I already said — I won't beat a dead horse — but I think that's increasingly revealing itself to be a completely impractical and not serious posture that plays with our... it suits our moral tastebuds a little bit, but it's increasingly and patently not able to keep up with accelerating capitalism.
That's not gonna work. Why I think patchwork is an ethical obligation is because, if you're not going to manipulate people by trying to build some sort of large centralized institution, by manipulating their heartstrings, then what remains for us to do is to create our own realities, basically. And I think that the most ethical way to do that is to do it honestly and transparently, to basically reveal this, to reveal the source code of reality and theorize that and model that and make those blueprints and share those blueprints and then get together with people that you want to get together with and literally make your own reality. I feel like that doesn't just sound cool and fun, but you kind of have to do that or else you're going to be participating in this really harmful, delusional trade. That's my view anyway.
Now I'll just finish by telling you what I think the ideal path looks like ethically and practically. I've called it many different things, I haven't really settled on a convenient phrase to summarize this vision, but I think of it as a neo-feudal techno-communism. I think the ideal patch, the one that will be most competitive, most functional, most desirable and successful as a functioning political unit, but also ethically most reflective of and consistent with the true nature of human beings... It's going to look a little bit like European feudalism and it's going to be basically communist, but with contemporary digital technology.
Let me unpack that for you a little bit. You probably have a lot of questions [laughing]. One thing is that patchwork always sounds a little bit like "intentional communities." And on the Left, the "intentional communities" kind of have a bad rap because they've never really worked. You know, people who want to start a little group somewhere off in the woods or whatever, and make the ideal society, and then somehow that's going to magically grow and take over. It usually doesn't end well. It doesn't have a good historical track record. It usually ends up in some kind of cult or else it just fizzles out and it's unproductive or whatever. I think that the conditions now are very different, but I think if you want to talk about building a patch, you have to kind of explain why your model is different than all the other intentional communities that have failed.
One reason is that the digital revolution has been a game changer, I think. Most of the examples of failed intentional communities come from a pre-digital context, so that's one obvious point. I think the search-space, the solution-space, has not all been exhausted. That's kind of just a simple point.
But another thing I've thought a lot about, and I've written some about, is that, in a lot of the earlier intentional communities, one of the reasons they fail is because of self-selection. That's just a fancy social science term for... There's a certain type of person who historically has chosen to do intentional communities and they tend to have certain traits and I think for many reasons — I don't want to spend too much time getting into it — but it's not hard to imagine why that causes problems, right? If all the people are really good at certain things but really bad at other things, you have very lopsided communities in terms of personality traits and tendencies. I think that that's one of the reasons why things have led to failure. So what's new now, I think, is that because the pressure towards patchwork is increasingly going to be forced through things like climate change and technological shocks of all different kinds, because these are fairly random kinds of systemic, exogenous shocks, what that means is it's going to be forcing a greater diversity of people into looking for patches or maybe even needing patches. And I think that is actually valuable for those who want to make new worlds and make better worlds, because it's actually nature kind of imposing greater diversity on the types of people that will have to make different patches.
So what exactly does neo-feudal techno-communism look like? Basically it would have a producer elite, and this is where a lot of my left-wing friends start rolling their eyes, because it basically is kind of like an aristocracy. Like, look, there's going to be a small number of people who are exceptionally skilled at things like engineering and who can do things that most other people can't. You need at least a few people like that to engineer really sophisticated systems. Kind of like Casey said before, "the mayor as sys-admin." That's kind of a similar idea. You'd have a small number of elite engineer types and basically they can do all of the programming for the system that I'm about to describe, but what they also do is they make money in the larger techno-commercium. They would run a small business, basically, that would trade with other patches and it would make money, in probably very automated ways. So it would be a sleek, agile kind of little corporation of producer elites at the top of this feudal pyramid of a patch society. Then there would be a diversity of individuals, including many poor, unskilled, disabled, etc., people who don't have to do anything, basically. Or they can do little jobs around the patch or whatever, to help out.
The first thing you might be thinking — this is the first objection I get from people — is why would the rich, these highly productive, potentially very rich, engineer types want to support this patch of poor people who don't do anything? Isn't the whole problem today, Justin, that the rich don't want to pay for these things and they will just exit and evade?
Well, my kind of novel idea here is that there is one thing that the rich today cannot get their hands on, no matter where they look. And I submit that it's a highly desirable, highly valuable human resource that most people really, really, really want. And that is genuine respect and admiration, and deep social belonging. Most of the rich today, they know that people have a lot of resentment towards them. Presumably they don't like the psychological experience of being on the run from national governments and putting their money in Swiss bank accounts. They probably don't like feeling like criminals who everyone more or less kind of resents and wants to get the money of, or whatever. So my hypothesis here is that if we could engineer a little social system in which they actually felt valued and desired and admired and actually received some respect for their skills and talents that they do have and the work that they do put in... I would argue that if you could guarantee that, that they would get that respect, and the poor would not try to take everything from them. If you could guarantee those things, then the communist patch would actually be preferable to the current status quo for the rich people. My argument is that this would be preferable; it would be a voluntary, preferable choice for the rich, because of this kind of unique, new agreement that the poor and normal people won't hate them and we'll actually admire them for what they deserve to be admired for. So then the question becomes, well, how do you guarantee that that's going to happen? This is where technology comes in.
The poor and normal people can make commitments to a certain type of, let's call them "good behaviors" or whatever. Then we can basically enforce that through trustless, decentralized systems, namely, of course, blockchain. So what I'm imagining is... Imagine something like the Internet of Things — you know, all of these home devices that we see more and more nowadays that have sensors built in and can passively and easily monitor all types of measures in the environment. Imagine connecting that up to a blockchain, and specifically Smart Contracts, so that basically the patch is being constantly measured, your behavior in the patch is being constantly measured. You might have, say, skin conductance measures on your wrist; there might be microphones recording everyone's voice at all times. I know that sounds a little authoritarian, but stick with me. Stick with me.
Basically, by deep monitoring of everything using the Internet of Things, what we can do is basically as a group agree on what is a fair measure of, say, a satisfactory level of honesty, for instance. Let's say the rich people say, "I'll guarantee you a dignified life by giving you X amount of money each month. You don't have to do anything for it as long as you respect me, you know, you don't tell lies about me, you don't plot to take all of my money" or whatever. So then you would have an Alexa or whatever, it would be constantly recording what everyone says, and that would be hooked up to a Smart Contract. And so if you tell some lie about the producer aristocrat, "He totally punched me the other day, he was a real ignoble asshole," and that's actually not true. Well, all of the speech that people are speaking would be constantly compared to some database of truth. It could be Wikipedia or whatever. And every single statement would have some sort of probability of being true or false, or something like that. That could all be automated through the Internet of Things feeding this information to the internet, and basically checking it for truth or falsity. And then you have some sort of model that says, if a statement has a probability of being false that is higher than — maybe set it really high to be careful, right? — 95 percent, so only lies that can be really strongly confirmed... Those are going to get reported to the community as a whole.
If you have X amount of bad behaviors, then you lose your entitlement from the aristocrat producers. It's noblesse oblige, the old kind of feudal term for basically an aristocratic communism, the [obligatory] generosity of the noble. So that's all very sketchy, just a little sketch of how the Internet of Things and Smart Contracts could be used to realize this idea of a Rousseauean General Will.
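For concreteness, the enforcement loop just described can be sketched in a few lines of code. Everything here is my own illustration, not anything specified in the talk: the `p_false` score stands in for the output of whatever hypothetical truth-checking model compares recorded speech against a reference database, and the 95-percent threshold and three-strikes limit are assumed parameters a patch would agree on.

```python
from dataclasses import dataclass

# Illustrative parameters a patch community might agree on (assumptions).
FALSEHOOD_THRESHOLD = 0.95  # only act on statements very confidently judged false
MAX_BAD_BEHAVIORS = 3       # strikes before the entitlement is revoked

@dataclass
class Resident:
    name: str
    bad_behaviors: int = 0
    entitled: bool = True  # receiving the aristocrats' guaranteed income

def check_statement(resident: Resident, p_false: float) -> None:
    """Record a strike if a recorded statement is confidently judged false.

    `p_false` is a placeholder for the score a truth-checking model would
    assign after comparing the statement to some database of truth.
    """
    if p_false > FALSEHOOD_THRESHOLD:
        resident.bad_behaviors += 1
        if resident.bad_behaviors >= MAX_BAD_BEHAVIORS:
            # In the talk's scheme, a Smart Contract would revoke the
            # entitlement automatically at this point.
            resident.entitled = False
```

In a real deployment the state would live on-chain and the revocation would be executed by the contract itself; this in-memory version only shows the decision logic.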
The reason why this has never worked in history is because of lying, basically. People can always defect. People can always manipulate and say they're going to do one thing but then not deliver. That's on the side of the rich and also on the side of the poor. But what's at least in sight now is the possibility that we could define very rigorously the ideal expectations of everyone in a community and program that in transparent Smart Contracts, hook those up to sensors that are doing all of the work in the background, and in this way basically automate a radically guaranteed, egalitarian, communist system in which people do have different abilities, but everyone has an absolutely dignified lifestyle guaranteed for them as long as they're not total [expletive] who break the rules of the group. You can actually engineer this in a way that rich people would find preferable to how they're currently living. So to me that's a viable way of building communism that hasn't really been tried before. And I think it really suits a patchwork model. I think that this would be something like an absolutely ideal patch, and not just in a productive, successful way. This is the ideal way to make a large group of people maximally productive and happy and feel connected and integrated. Like everyone has a place and everyone belongs, even if there's a little bit of difference in aptitudes. The system, the culture, will reflect that. But in a dignified, fair, and reasonable kind of way, a mutually supportive way. I could say more, but I haven't been keeping time, and I feel like I've been talking enough.
I found out recently that — hat tip to my friend the Jaymo — the town of Toomsboro, Georgia is right now for sale, for only $1.7 million. I think that's a pretty good deal. It comes with a railroad station, a sugar factory, all kinds of stuff and you could easily build a little prototype of the patch that I just described. If you have a bunch of people and it's a major publicized project, it wouldn't be that hard to raise enough for a mortgage on a $1.7 million property. Especially if you have a compelling white paper along the lines that I just sketched. I'm not quite there yet, but that's what I'm thinking about, that's my model or my vision of the communist patch. So I'm going to cut myself off there. Thank you very much.
Communism remains the best conceivable form of human organization, and it can work, but the major catch is that it requires something that smells intolerably fishy to those who are most likely to want communism (left-wing activists). I will try to show that a workable and highly desirable communism is possible on the condition of accurate social valuation of individual characters. Some people are better or worse at different things, including ethical conduct. To the degree a group calibrates itself to these differences, it can have true communism; to the degree it denies or inaccurately assesses these differences, it cannot have communism.
As we'll see, a wondrous feature of this discovery is that individuals and groups can have nearly any combination of stupid scoundrels and beneficent geniuses: a group only has to have some mechanism for ensuring accurate shared knowledge of these relative differences in all members. This might sound simple, but the reason this is a major catch — so far, a prohibitively difficult condition — is because it just sounds so reactionary — so icky — to precisely those ears that are moved by the sweet music of egalitarian revolution.
Communism as we know it always fails or becomes genocidal because its modern form was designed to overlook or deny this engineering requirement, simply because of the unfortunate sociological fact that modern Communist activists have generally been left-wing ideologues.
My hypothesis, that communism relies on accurate social valuation of individual characters, is consistent with a variety of data and explains many otherwise anomalous observations. First, one of the most successful communist arrangements in history is the pre-modern form of aristocratic communism known as noblesse oblige. Second, wherever modern Communism appears (in the genocidal form with which it is now synonymous), it generally corresponds to a collapsing Nobility. Third, contemporary Communists universally reject the possibility of a legitimate Nobility, and despite much effort they are universally unable to create even small patches of lasting communism (explicit communists in the West generally live in milieus of extreme material and spiritual squalor). Fourth, those today who enjoy communities based on shared and realistic group valuations of individual characters tend toward functioning communist relations (e.g., healthy marriages, religious communities). If this essay makes its way into a future book-length project, each of these data points will receive a dedicated section. For this blog post, I'm just going to stipulate them. At present, I would like to focus on the more fun task of thinking about how this insight can be used to engineer a functioning patch of communism. The best test of any hypothesis is, ultimately: can you build something based on it?
The Cyberpositive AI-aligned Communism (CAIC) Protocol
In Atomization and Liberation, I began to outline a vision of small-c communism divorced from the corruption of modern Communism as most people know it. As I've explored many times before, the entire modern tradition of Communism is based on the application of moral claims for instrumental purposes, essentially the opposite of true ethical conduct: Communism is capitalist exploitation raised to a higher degree, adding a new layer of righteous dissimulation and social confusion on top of capitalism's relatively transparent brutality.
In that essay, I focused on where the idea of communism meets the human subject. As modern capitalism increasingly destroys the experience of life as a coherent individual, modern Communism has always been invested in what I call an aggregative strategy: organize collective life in such a way as to liberate the false bourgeois subject into a higher form of being. I argued for a disaggregative strategy: let capitalism split the subject, because one's decomposed sub-personalities can become a commune unto themselves. For shorthand, I called this alternative vision of communism the CAIC protocol (Cyberpositive AI-aligned Communism) because its essential wager is to move with the forces of recursive intelligence escalation rather than against them, however painful those forces might be to the modern subjective ego, and however much such a wager grates against the presumptions of most individuals on the political Left today.
I also suggested that my molecular strategy would be, ironically, more conducive to inter-subjective aggregations than the capital-C Communist instinct toward aggregation. But so far I have only alluded to this expectation.
In this article, I would like to focus more deeply on how the CAIC protocol facilitates inter-personal organization, even despite and across ideological hatred. One of the thorniest questions neglected almost completely by all living communist thinkers I am aware of, is the question of how communism can be achieved despite the protest of majorities who do not want communism, and the smart, rich people who really do not want communism. Probably the most widespread and plausible implicit answer is some kind of electoral pathway (I believe this is basically the model of the Democratic Socialists of America, and Corbynistas in the UK, for instance). More anarchist-inclined communists imagine the bottom-up spread of communist organizations to the point that mass power is achieved outside of electoral politics. What both of these popular mental models have in common ("popular" as the tallest building in Topeka is tall), is that all individuals who today explicitly and strongly dislike communism will, in one way or another, have their preferences over-ridden. There are many ways to phrase this: such individuals currently have false consciousness but eventually they will see the truth; such individuals are wilfully evil and deserve to be over-ridden whether they like it or not; building an electoral majority is a legitimate and justified way to over-ride minority viewpoints. I don't wish to assess the reasonableness of these various defenses, I only wish to highlight that nearly all mental models of how communism might succeed involve over-riding the current stated preferences of millions of people.
CAIC suggests a fundamentally alternative solution. CAIC wagers itself unconditionally on whatever intelligence determines to be true (what else could it mean to truly believe in communism?). This thesis is falsifiable; intelligence might not produce communism, but if communism is true then intelligence will produce communism. It follows that a true communist believes communism will provide some "edge" to humans living within it, for this is implied in an intelligence advantage. One can see this in communist beliefs such as the belief that communism involves greater human flourishing. If this is the case, then humans living under communism should have some extra insight, or time, or energy, or money, or health, or something more/better than humans living under capitalism. Anyone who believes in communism must believe communism does something better than capitalism! If you believe this, then you should also believe that the correct realization of communism would have some kind of "surplus" relative to capitalists, which communists could potentially trade to anti-communists in exchange for anti-communists joining the communist movement. In other words, if correct, communists should be able to "buy out" anti-communists in whatever human value capitalism is uniquely bad at achieving and communism is uniquely good at achieving.
And thus I have realized a new key to achieving communism. In fact, it's so correct that even anti-communist conservatives will like it. Whereas my last article looked at the intra-individual component of CAIC or Atomic Communization, now I will present an informal model of the interpersonal component of CAIC or Truth Communization, for what will be at stake here is optimizing the calibration of interpersonal value-assignments (one of modern capitalism's grandest, most ridiculous incapacities).
The unreasonably powerful mechanism of honesty
Here is the key, the little discovery that has never been tried before but will change everything once even just a few people start to implement it: each person in a community agrees to assign status (i.e. distribute their respect) to all the others according to the others' contributions to the community, however each person honestly evaluates the others' contributions.1
Most modern societies are based on obfuscating (from the Left and Right) certain objective realities about different individuals' contributions, so a shared commitment to simply calibrating intersubjective evaluations correctly is basically the rock from which any small group could reboot all of society.2
You have many questions and objections, I know, but hear me out.
This bombshell of a social engineering insight will produce a new kind of noble Communism, based on Renaissance–era noblesse oblige, feasible immediately at small scale. When this works wonderfully for a few small pilot groups, it will naturally spread and take over all of society.
What does this look like in practice? It's very simple. Find me a few people who are extremely intelligent and productive individuals, who do high-quality work of any kind and make a lot of money. Then find me a much larger group of people (the ideal proportions will be figured out later through trial and error), with average and below-average intelligence and productivity. Their current incomes are variable but they generally cluster around current subsistence levels. The only hard requirement for all of the people in this big group is that everyone promises to give respect wherever they honestly believe or feel it is due, in unconstrained dialogue with the others. "Give" just means to publicly and generously express. This doesn't require too much, it just means if someone does a nice thing you say "Hey, that was really nice of you." And if they do a bad thing, you say, "Hey, that's a bad thing you just did!" Of course there will be some differences in how people estimate these qualities, but that's okay; the stipulation is only that everyone commits to good faith evaluations of the others and honest expressions of those evaluations. Notice this is basically what normal, healthy, and decent humans should already be like. Also notice how this obtains almost nowhere in everyday Western life today. In complex and opaque ways, modern institutions have decimated the sociological foundations of such basic decency. The main point to understand is that in the Aristocratic Commune, if someone does something really good for the community, then on average they are going to feel a lot of love; if they do something really shitty, they are going to feel really shitty.
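The stipulation above — everyone distributes respect according to their honest evaluation of everyone else's contributions — can be illustrated with a toy aggregation. This is my own sketch, not anything from the essay beyond the stipulation itself: names, scores, and the averaging rule are all hypothetical choices.

```python
from collections import defaultdict

def aggregate_respect(evaluations):
    """Compute each member's standing as the mean of others' honest scores.

    `evaluations` is a list of (rater, target, score) triples, where a
    higher score means the rater honestly judged the target's contribution
    more valuable. Self-ratings are ignored: respect is something the
    rest of the commune gives you, not something you assign yourself.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for rater, target, score in evaluations:
        if rater == target:
            continue  # skip self-evaluations
        totals[target] += score
        counts[target] += 1
    return {member: totals[member] / counts[member] for member in totals}
```

The point of the sketch is only that the mechanism requires no central scorer: standing emerges from everyone's good-faith evaluations of everyone else, which is exactly the "basic decency" the paragraph describes.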
I submit that this would be enough to constitute a working model of communism, that optimizes the well-being and flourishing of all its members, "from each according to their ability, to each according to their need."
How accurate social valuation of individual characters generates true communism
But how would this create communism, you ask? Well, first of all, the small number of highly productive people would want to pay for a comfortable and dignified subsistence for everybody else! Why would they do that? For starters, because nobody is forcing them to — so they get the genuine pleasure and ethical satisfaction of choosing to help others. Second, they will choose to provide a dignified basic income to all the peons. (We're being honest, remember? So yes, many of them will be peons, but that's OK, because True Communism will ensure they receive the dignified life they deserve, which is much more than you can say about liberalism, which protects peons from being called peons but ensures that they rot in their "diversity".) Why? Because all the peons will truly love them for their beneficence! Receiving the authentic and earned love of a large group of people is an unsurpassably desirable human experience. When Mr. Moneybags goes out to walk his dog in the morning, literally every person he passes on the street smiles at him glowingly in warm, grateful admiration. Today, he hides in a private mansion, unrecognized on the street at best, hated or belittled at worst. Tomorrow, under Aristocratic Communism, he enjoys the subjective experience of Lorenzo de Medici: deservedly proud, appreciated, and respected, although his opprobrium is feared and avoided. All Mr. Moneybags has to do is give a small chunk of money to a genuinely magnificent cause. He also now gets to think of himself as an avant-garde cultural visionary (something the rich have loved to fancy themselves since time immemorial), because he's also participating in a world-historical socio-political innovation. (Of course, the really significant art is going to come almost exclusively from the rare creative geniuses on his dole, but they're all happy to let Daddy fancy himself artistically relevant.)
I submit that the psychological and behavioral micro-foundations of True Communism are, for the Aristocrat class, not only well-founded but game-theoretically stable. I think actually existing human beings who match this description would choose this alternative to the status quo — if it was available — and they could not do any better by defecting (so long as a true distribution of respect is maintained).
As for the below-average-productivity people, they get what they've always wanted: a communist society where they don't have to work for money to survive. They can do whatever they want, like start a blog (and some would be interesting, instead of all the dumb lifestyle marketing blogs people launch to escape shitty jobs), or they can just sit around doing nothing all day, enjoying life. But the genius of this model keeps giving, because it would be very unlikely that many of these people would do nothing at all. Why? They would not get a lot of respect. A small number of them might not care, holing themselves up in their paid-for house, doing nothing and perhaps being the object of some negative gossip here and there; the real negative opprobrium is naturally reserved for willful evil, which is measured by repeating behaviors that everyone else dislikes.3 If these people are not hurting anyone and not running around spouting stupid shit to disrupt everyone else's flows in the name of "social change" that they're only seeking because they're genuinely alienated, then the rich people will now really love them. After seeing, today, what really happens ultimately when the capitalist underclasses are left to be poor and alienated, rich people would feel such incredible relief. They would feel wondrous admiration for these simple souls even if all they did was cease running around like chickens with their heads cut off.
What would the masses in our True Commune choose to do? What is unique about Aristocratic Communism is that, because it is uniquely indexed to the real objective differences that exist between people, honest effort will tend to be respected even if the person is relatively useless (with respect to productivity). For people with below-average productive abilities, in all modern Western societies there has always been an incentive to under-contribute the effort you are capable of, and over-report your limitations, insofar as what you could win from guilting rich people has typically been greater than what you could win from just trying your best. This is because, for those with below-average productive abilities, your best does not do much in terms of objective value on the open market, but — and here's the rub — in the context of modern alienated anomie, nobody cares whatsoever that you might really be trying your best to contribute to society. You don't get any love or respect or appreciation for that. If you did, you'd much rather try your reasonable best, be respected by your peers, maintain a clean conscience, and not have to do all kinds of deceptive rhetorical gymnastics, to the public and to yourself, just to make your way through life.
Under contemporary anomie, many ethically dubious practices that would have felt excruciatingly shameful to an upright person even 50 years ago (such as publicly inflating your handicaps to get a few extra bucks or respect points as charity)4 do not currently result in very much ego depletion. They should. Descending to such lows should feel like a terrible descent, but it doesn't, in part because such a low baseline has been normalized. Today, nobody really cares that much about each other, so the threat of opprobrium from your peers is a weak and avoidable penalty — just move on to the next group of people who will vaguely but not really care about you. (A dirty little secret in leftist circles is that this kind of cycling is common; people move from one communal house to the next, or from one activist cell to the next, or they even move cities, not because they've done anything especially bad to anyone but because all the little interpersonal hangups accumulate — trivial things, for example, all the missed appointments from everyday flakiness... the conversations where neither of you are really there... It all just becomes intolerably soggy and weighty, so you displace the problem geographically.)5
This is one reason why modernity is such a rapid downward spiral of ethical degeneration, something the left and right both agree on (citing, respectively, the unbounded greediness of modern business people and the unbounded wretchedness of the underclass). Because Aristocratic Communism rewards honest best efforts and punishes ethically dubious practices, I find it extremely likely that, under Aristocratic Communism, the below-average producers would choose to contribute the best they can, within the limits of their ability and temperament.
I submit that the psychological and behavioral micro-foundations of True Communism are, for the Mass class, not only well-founded but game-theoretically stable. I think actually existing human beings who match this description would choose this alternative to the status quo — if it were available — and they could not do any better by defecting (so long as the noblesse oblige is maintained).
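The stability claim can be made concrete with a toy payoff model. The sketch below is purely illustrative: every number in it is an invented assumption (material security guaranteed either way, respect accruing only to honest effort under the proposed arrangement, a small psychic cost to deception), not anything derived from the argument itself. It simply shows that, under those assumptions, honest effort is the best response in the proposed polity while shirking-plus-guilting can dominate under the status quo:

```python
# Toy payoff model for the claimed equilibrium. All payoff values are
# illustrative assumptions, chosen only to encode the essay's premises.

MATERIAL_SECURITY = 10  # guaranteed to everyone under the proposed arrangement


def payoff(strategy: str, status_quo: bool) -> int:
    """Stylized utility for a below-average producer choosing a strategy."""
    # Under the proposal, honest effort earns respect; shirking earns none.
    respect = {"honest_effort": 5, "shirk": 0}[strategy]
    if status_quo:
        # Under the status quo, honest effort by a low producer earns no
        # respect, while guilt-based strategies extract a small gain.
        respect = 0 if strategy == "honest_effort" else 2
    # Deceptive strategies carry a small psychic/conscience cost.
    deception_cost = 0 if strategy == "honest_effort" else 1
    return MATERIAL_SECURITY + respect - deception_cost


# In the proposed polity, honest effort strictly beats defection...
assert payoff("honest_effort", status_quo=False) > payoff("shirk", status_quo=False)
# ...while under the status quo, the incentives invert.
assert payoff("honest_effort", status_quo=True) < payoff("shirk", status_quo=True)
```

Nothing here proves the equilibrium exists in reality; it only makes explicit which assumptions (respect indexed to honest effort, and a nonzero cost of deception) the stability argument depends on.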
Other implications and observations
Now, for all the anti-communist right-wingers out there, you might still strongly doubt all of this. And indeed, you have some good reasons to fear that in such a situation the Masses would degenerate, nonetheless, into laziness and rent-seeking. But all of your data points come from contexts that are emphatically not Aristocratic Communism. As I would be the first to admit, the Left has built its entire modern political strategy on defaming the rich and lying to itself, but you must admit that the modern Right does some fibbing itself. I think the main fib that's relevant here is the idea that poor people are poor because they are lazy. How rude! They’re not poor because they’re lazy; they’re poor because they're stupid. In part. There are other factors causing them to be poor, also, but one reason they're poor is that they happen not to be blessed with the mental hardware that leads others to win all the money games (which only in capitalist secular modernity gets incorrectly equated with their "human worth" or ultimate dignity, and only because capitalist secular modernity has no basis for value other than economic value; in True Communism, differences in intelligence become normatively inconsequential in how people are ultimately valued and treated).
That poor people tend to be dumber is even acknowledged in Marxism; it's just packaged instrumentally for marketing purposes. Anyway, stupidity is not their fault (it’s at least half heritable), so casting normative aspersions on the poor is evil, and it's part and parcel of the ethical death-spiral that is modernity. Just like the Leftist lie gives rich people good reason to leave the whole planet behind in squalor, this Rightist lie gives the masses good reason to select their mouth-noises for how well they pull on the public's heartstrings rather than for how well they bear witness to the truth of reality.
Honestly, it now looks to me like we free-speech-people are serious idiots for imagining large numbers of deeply alienated people would long be capable of caring about the human capacity for distinguishing better models of reality from worse models of reality. How boring and useless is this concern, unless you have the substantial resources necessary to make something of it.
Conservatives have drastically underestimated the oppression of the left-leaning masses. Because leftists lie even in the simplest descriptions of things, conservatives have inferred that all the "oppression" is surely one big, tall tale. It is true leftists often instrumentally twist what they really feel and exactly why (as most people do), but they tell the truth when they say things to the effect that they "can't take it anymore." When lefty activists freak out about some minor political event as if the world is coming to an end, it's very easy for normal people to dismiss them as only confused, but what I am telling you is that the degree of this confusion is so intense and widespread that the magnitude and urgency of the social problem they are pointing to really is as big and bad as the end of the world, not just "for them" but for what they're going to do to society if somebody smarter than them doesn't find a way to alleviate their suffering. That didn't fit on my placard when I was an SJW, so that's one reason why I'm trying to get these thoughts down now. I'm minimally intelligent and disciplined enough to vaguely get by in a middle-class profession, but the truth is that I feel closer to the normal unwashed masses than to the rich producer class. That's why I still write from a left-leaning perspective, despite continuing to venture deeper and deeper down the ghastly red-pilled rabbit hole that is the Left's most hidden and foundational bargains with Lucifer.
You might have already noted that there's a negative version of the hypothesis sketched here: the primary toxic aspect of the modern radical-left drive, which is really becoming a systemic problem for Western society, is not the demand for resource redistribution (in fact, rich people generally want to give, for reasons of personal psychology and social stability) but the correlated tendency to tell lies to themselves and others for instrumental purposes (most saliently, about different individuals’ and groups’ objective abilities). It's understandable why these tendencies are correlated, because one of the most natural ways to argue for more resource redistribution is to say that the Have-Nots deserve more and the Haves deserve less. And this was a damn powerful rhetorical method: if you look at the early Workers' Movements, they succeeded in forcing a whole lot of redistribution in many times and places.
What the Left didn't realize is not simply that differences in objective individual abilities are real, but that technological acceleration would amplify rather than attenuate them. The technologically amplified cognitive superiority of the new rich, and their corresponding mobilities, are now too great (up to and including access to space travel and a monopoly on all the proceeds from intergalactic colonialism). The rich already have everything on such lockdown — in gated communities and private jets and private AI systems and tax havens — that even if there was some democratic social movement able to map this territory and make a concerted effort to pounce on it, it would already be a day late and a dollar short. Capitalism itself sees and acts on the future like no other human organization can, simply by paying whoever happens to be farthest into the future. This is why rich people have all the resources in the first place: they are thinking more steps ahead. It is no exaggeration to say that they live further in the future than normal people. The world Elon Musk lives in includes mental photographs of the future, which might not enter my head until months or years after they enter his. That is why I can't do the things he does. Someone smarter than you is, pretty literally, an alien from the future.
The mainstream Left, including the now mainstream radical variants that see themselves as left of the mainstream Left, is now running purely on fumes, rehearsing routines that only barely worked with the early Workers' Movements. But a game only starts once, and now that these routines are fully intelligible to all the people with money, they make no sense. But because their maps are so divorced from the territory, the Left is doubling down on them with increasingly hilarious adaptations: for instance, if lying about the superior abilities of the rich doesn't get the goods anymore, try lying about the differential abilities of anyone around you! Of course, most people around you don't have many more resources than you to redistribute, hence the often-noted oddity of applying white privilege discourses to poor or even average white people. You can't get blood from a stone, but that doesn't stop someone who's just compulsively rehearsing inherited routines.
Where was I? Right, Aristocratic Communism. It's a viable way to bootstrap a healthy society from the ruins of modern disintegration, and it should be equally attractive to honest and open-minded leftists as well as honest and open-minded conservatives (including super rich ones). Please apply in the comments.
Many objections will arise related to the problem of verifying honesty. I actually think humans are extremely good at identifying honesty; the problem in complex modern societies is that we rarely have the incentive to honestly report our estimations of others' honesty. But if you don't buy that, then: lie-detector tests. If you think those are unreliable, then lie-detector tests plus technology that's probably going to be invented soon. If you don't think that will happen, then lie-detector tests plus blockchain, which is already here. How blockchain will solve the problem I have no idea, but you probably can't explain why it won't solve it, so this will probably stop you from objecting again. Also it could maybe be true, I just don't know if it is. ↩
Readers of Moldbug might sense already that Aristocratic Communism is a competitor to his alternative proposal for a "reboot." His plan involves a legal coup to change the American political system from the top down, which I believe is one of his mistakes. My proposal not only has a wildly different ideological slant, but it's a purely bottom-up project that relies on organic viral contagion as its mechanism for takeover. ↩
You'll see there is a circular definition of the Good here as that which everyone thinks is Good, which might be a problem for a philosophy class but, for a healthy society that removes all excuses for lying, it works. ↩
Respect is, in fact, a matter of life and death. The idea that people need respect or else they'll eventually start killing is dealt with in a branch of political theory on the concept of "recognition," which is a criminally underrated term. And conservatives incorrectly believe lefty academics tend to exaggerate! Rather the problem is nearly the opposite: they twist words so much that they lose their urgency to listeners. ↩
The Marxist geographer David Harvey talks about how capitalism never solves its crises but it deals with them by displacing them geographically. Because activism is an instrumental pursuit, it is a game of social capitalism, which is why activists live amidst perpetual crises of social capital — broken relationships and broken families. Like their hidden master Capital, activists never solve but only displace these problems socio-spatially. ↩