A hierarchy of liberty. What we call European feudalism was the becoming-patchwork of the Roman Empire. As the control structures of the Roman Empire atrophied, and the de facto liberty of its subjects increased, those subjects sorted themselves according to their objective levels of political power. Slaves sorted into Serfdom, mid-tier Roman nobility into Lordship, Roman super-nobility into Overlordship, and so on. Feudalism was obviously hierarchical, but also free in the sense that each member of each level recognized themselves as a rightful member of that level. Hilaire Belloc paints a succinct but colorful portrait of this hierarchy from below, summarizing European feudalism thus:
the passing of actual government from the hands of the old Roman provincial centres of administration into the hands of each small local society and its lord. On such a basis there was a reconstruction of
society from below: these local lords associating themselves under greater men, and these again holding together in great national groups under a national overlord.
In the violence of the struggle through which Christendom passed, town and village, valley and castle, had often to defend itself alone.
For the purposes of cohesion that family which possessed most estates in a district tended to become the leader of it. Whole provinces were thus formed and grouped, and the vaguer sentiments of a larger unity expressed themselves by the choice of some one family, one of the most powerful in every county, who would be the overlord of all the other lords, great and small.
One might raise many good questions about Belloc's historical narratives, which I admit are relatively light on primary sources and heavy on romantic nostalgia. But with respect to the idea that European feudalism was a free hierarchy, this must have been the case: if any member of any level possessed greater power than their assigned status reflected, they could simply have taken their rightful status; that is what the decline of Roman authority implied. If a Serf thought they were incorrectly or unfairly lumped in with the Serfs, they were perfectly free to create and secure their own manor against raiding pirates and crusading Islamic armies. Of course, having recently been slaves, they had no such power. Having no such power themselves, and having no centralized authority to physically protect their survival, it seems fair to presume they genuinely wanted to ally with someone who could raise an army.
Many people assume that coming from parents with high social status is an advantage, because it would appear to increase the probability of gaining high social status for oneself. But what if parental social status is more like a weight on one's shoulders, an obligation heavy enough that, in some cases, it might even be a losing ticket in the lottery of life?
My parents have very low social status. I am a statistical oddity for having become a tenured academic, which is a relatively high status position (although I wager it's falling in the ranks as academia becomes discredited).
But I've been an academic for five years now, and with every passing year it gets harder to understand why my job is worth doing. The volume of patently nonsensical and often ethically dubious make-work is so high that one of the intellectual puzzles that most fascinates me is simply why everyone around me (myself included) is willing to work this job. And people are not just willing to work this job; they continue to eagerly compete for it. That this has become a puzzle to me suggests that something in me is losing the capacity to do it, and yet for the moment at least — I'm still doing it.
In other occupations, the answer to such a question is obvious: people put up with all the nonsense either because they have no other choice, or because the money is worth it. But what is peculiar about academia is that most academics are skilled and connected enough to do many other things, and the money is usually better in private-sector versions of academic fields. So if I am right that academia is becoming less and less worth it, given increasing loads of nonsense, I do think that the continuing passionate interest in either obtaining or maintaining academic careers is indeed a puzzling instance of lemming-like behavioral inertia. But to call it herd behavior is too easy and not really satisfying. How or why does this particular herd dynamic hang together? A good theory would explain why academic investment varies across individuals (e.g., why is it becoming weaker in me, but not others?).
One possible explanation is the drive to meet parental expectations. The rationale is simple. If both of your parents were professors, or had some other high-status occupation, you'll have a higher tolerance for nonsensical make-work, because you don't want to fail in the eyes of your parents. Quitting over a too-high volume of nonsense would be existentially much harder for such a person than it would be for me, as their parents would view it more negatively than mine would. Plus, they would feel their parents' judgment more, because their parents' status gives their judgments greater credence. My parents, on the other hand, basically think I'm a highly successful genius no matter what I do, and if for some reason they were to downgrade their opinion of me, my superior education would blunt the effects of that downgrading. Therefore, for an academic from high-status parents, maintaining an academic position is more rewarding than it is for me. They feel like they are representing something larger and historical, and their parents actually follow what they do. I am doing something that most of my family does not really understand or care about.
For the moment, I'm carrying on. The big question is whether I am carrying on for the right reasons or the wrong reasons. My statistically improbable status background could give me a valuable edge in clarity, allowing me to see things that others can't see and act on them with a greater daring that others cannot access (namely, that perhaps academia is a sinking ship from which one should jump sooner than later). Or, my statistically improbable status background could just make long-term success in a high-status career more difficult, and the correct attitude and behavioral adaptation would be to suck it up and stop rationalizing my weaknesses. I still don't know the answer to this question, but I believe my basic observation about the causal role of parental status may be correct.
Many different people are asking me what's going on with me. In different languages, sometimes gleefully and sometimes worriedly, I have been asked some variant of "what are you doing?" so many times in the past couple of weeks that I figure I should just write one thing that I can give to anyone who asks. The chorus seems to be approaching a crescendo at the moment, with friends, strangers, coworkers, and now even students, and therefore bosses (that was quick!) joining in. So here's what I'm doing, as succinctly as I can put it.
It's not complicated. It's not profound. It's not heroic or impressive. In fact, it's possibly the simplest decision I've made, or action I've taken, in the past eight years. It's very important to me personally, but it's something anyone can do, something many people should do, and something countless people do every day, with no fanfare.
I've never liked carving myself into separate sections, and strategically presenting myself to one audience here and one over there. People will say, "But of course, everyone has to do that!" Maybe that's correct, but maybe it's just a useful fiction for people who have made their life about optimizing something other than the truth (how they are perceived, their status, their income or financial stability, etc.). For my part, I believe that any mature adult who claims to be an intellectual must insist upon the widest possible latitude to think and speak in their own tongue — in a way that they are content to let stand for any interested party. Comfortably accepting any latitude less than the greatest latitude they can force open for themselves is fine — it just means you are living a different kind of life than the intellectual life. To think one thing and say another, or to say one thing to your peers and another thing to your students and another thing to the public, is — I believe — a truly abominable, cardinal sin for anyone who says to the public that they are in the business of truth-seeking. I understand that some people must live like this, because of their own unique web of obligations, which is why I am not judging others — but it doesn't mean I must like it, or live my own life that way. I am relatively young (32) and highly skilled; I don't have kids yet; my wife is even younger, and she supports me 100% in saying and doing whatever I need to do. One reason she supports and even encourages my freedom is because, over the past few years of being a tight-lipped, well-behaved prestigious professional, I have been a boring, stressed, shell of myself. If my vision of the intellectual life is impossible or "impractical," so be it. For the moment, I can afford to take my chances, and so what I am doing now is taking my chances, because it is my honest view that continuing life as a normal, respectable academic feels like a much bigger risk to me. 
I have also been delighted and emboldened by those who value my work enough to throw me money on a monthly basis. It doesn't match my salary from academia, but it's certainly enough to make me wonder what would happen if I pulled out all the stops.
People will think I am being ridiculous because, of course, what I am criticizing is the norm in academia and the intelligentsia more generally. First of all, it is exactly the normalcy of deceptiveness in academia that makes the stakes feel so high to me. Maybe, just maybe, this has something to do with the large-scale semi-international backlash of right-wing populists. Gee, I wonder [scratches head]. Additionally, in the contemporary fragmented media environment, trying to think and write honestly while also pleasing your family, bosses, students, and the public is just prohibitively energy-consuming. As an academic, you can easily spend most of your days strategizing how to present yourself in different spaces, and never get around to thinking or saying anything worthwhile. If you want to seek the truth as a life project, you must, at nearly any cost, find your own language that you can speak to all comers. Otherwise, you'll never get around to finding out anything interesting, let alone sharing it. I'm aware that all of the patterns I'm enumerating here are utterly banal to observe. As I said, I'm not making a genius argument; I am just explaining why I am now refusing to behave as I have behaved in the past few years.
What I am doing is simple. I am just thinking and saying whatever I feel like. I'm no hero and I'm certainly no martyr (academia looks much more vulnerable than I feel). I'm not asking for anyone's permission, I'm not asking for sympathy, and I'm not asking for more freedom. I'm not even defending myself on the grounds that I have something especially valuable or important to say. I am taking what belongs to me, for the trivial and even frivolous reason that I want to enjoy the right to make mistakes, to be rude, to occasionally overshoot and occasionally undershoot, perhaps even wildly — to try different ideas and performances on for size, sometimes for the sheer pleasure of doing so. I believe that such irresponsible leisure is a truly necessary, if not sufficient, condition for the more important forms of intellectual liberty that are easier to market. But I refuse even the obligation to market my liberty-taking as something more noble than it is. I don't want to justify what I'm doing with reference to these larger values, because my whole point is that I don't want to be constantly playing this rearguard game of having always to justify my own freedom. I did not get a PhD to live my life trembling at what a student or bureaucrat might think or feel about whatever it is I feel like saying. My mother always taught me that as long as I'm not hurting anyone, then I should do what I want.
Now that I've mentioned it, my family looms large in what I'm doing now. The bastard brat of an Irish-American roofer, I was never supposed to enter the official cosmopolitan intelligentsia — and when you sneak into a place, it looks very different than it does to those who are supposed to be there. I'm only here because I learned early how to hack social firewalls and I made up for my modest IQ with extra piss and vinegar (two things I did inherit amply). My dad and brother both have what the DSM calls Oppositional Defiant Disorder; I'm pretty sure I'm on that spectrum too, but I was blessed with enough self-control to sublimate my rebelliousness into a patient, longer game. Through intellectual work I could eventually prove that all those institutional authority figures were wrong, so I would do that instead of acting out and getting punished. My dad never finished high school, running away to hitchhike and eventually join the Marines. My mom, also Irish-American, also had no education and little earning power, but that didn't stop them from having four kids. Two of my siblings are recovering heroin addicts.
That's who I am, I am these people — and I'm quite tired of acting like I'm exactly the same as every other rootless hyper-educated citizen of the world. The typical cosmopolitan professor today — if she was giving my mother personal advice in 1986 — would have advised my parents to abort me. She would be disgusted by the latent racism and sexism she would have found embedded unconsciously in their vernacular. If my parents were "smart," they probably would have divorced each other at some point, in search of greener pastures. But they didn't abort me, and they spoke how they spoke, and they didn't break the family, all for reasons I have been too educated to understand. Until lately. The last time I visited my family was in the run-up to the US Presidential election. My grandmother, a former teacher who is educated and fiercely intelligent (and disagreeable), told me she was going to vote for Trump. I articulated my reasons for why that upset me, and she looked me in the eyes like she never had before, with a coldness unlike her, and she said, "I do not care what anybody thinks." I was horrified and upset at the time, but this was one of my best friends growing up, and I never, ever would have become a successful academic without her. I didn't vote for Trump and I'm still no fan, but her words on that day have been echoing in my head like crazy since then. I may have recalled these words every single day since then. All of my own traits and accomplishments that I like and value the most about myself, I got from my family. They have backbones far stronger than most people I've met in my extensive travels among the international intellectual class. I haven't yet made sense of all this, but sometimes life forces you to make broad wagers, on ill-defined questions you don't fully understand. 
I needed to give you all of this background, but in conclusion, all I can really say is that I have already invested far too much into academic respectability, and not enough into honoring my family. And I've never been good at half measures, so now I'm going to see what happens if I bet the farm on "I do not care what anybody thinks."
If my bosses think that any of this is inconsistent with my employment, then I will just infer that their employment is inconsistent with a real intellectual life. I am a highly skilled researcher and lecturer, with good publications, and a fine track record in every aspect of my academic career thus far. If the person I truly am, and aspire to become, does not fit into academia, I would much prefer to learn this now rather than later. In fact, it would be a most profound discovery regarding the real limits of higher education today. That would give me something to think and study and write about for years. For intellectuals, huge surprises are hugely valuable; they're good news, exciting.
If academia can tolerate me, that would also be good to know. But if I can't be truly free to think and say what I want right now, while I have more respectable prestige points than perhaps I ever will, and while I have tenure (the British version, anyway), then I'll certainly never be granted such liberty in the future. I am just going to cease calculating, as much as possible anyway. Sometimes that will mean saying the smartest thing I can think of, sometimes that will mean saying the funniest thing I can think of, and maybe sometimes it will mean saying the dumbest thing I can think of, if in that moment I feel like not bearing the burden of sophistication. As I said, I don't need you to like this, or even understand it, let alone praise or forgive it. But you asked, so here is my answer for now.
[This is a transcript of a talk I gave at the Diffractions/Sbds event, "Wyrd Patchwork," in Prague on September 22, 2018. The video can be found here. My talk begins at around the 2-hour and 6-minute mark. I've added some links and an image.]
I want to talk about patchwork as an empirical model, but also a little bit as a normative model, because there's this idea that capitalism is increasingly collapsing the fact/value distinction. I tend to think that's true. And I think what that means is that what is empirically true increasingly looks to be normatively true also. Or, if you're searching for a true model, you should be searching for models that are at once empirically well calibrated with reality and also normatively or ethically consistent. And you can find the true model in any particular situation by kind of triangulating along the empirical and the normative. That's kind of how I think about patchwork.
I've been thinking about it in both of these dimensions, and that has allowed me to converge on a certain vision of what I think patchwork involves or entails. And I've been writing a lot about that over the past couple of months. So what I'm going to do in this talk specifically is not just rehash some ideas I've been thinking and writing and speaking about over the past couple of months; I'm going to try to break a little new ground, at least in my own weird head, on how some of these different ideas of mine connect, or can be integrated. In particular, I wrote a series of blog posts a few months ago on what I call reality forking (1, 2, 3). "Forking" is a term that comes from the world of software engineering. So that's going to be one component of the talk.
You'll see it. It's very obvious how that connects to the idea of patchwork. And I'm also going to talk about this vision for a communist patch that a lot of us have been interested in. I've been talking with a lot of people about this idea of the communist patch and soliciting, you know, different people's impressions of it. And I've also written a few blog posts recently — kind of sketching, kind of hand-waving, if you will — at what a possible communist patch might look like. A lot of people think, to this day, that patchwork has a very right-wing connotation. People think primarily of Moldbug and Nick Land when they think of patchwork. But I think it's not at all obvious that patchwork necessarily has a right-wing flavor to it.
I think we can easily imagine left-wing patches that would be as competitive and as successful as more authoritarian patches. And so that's kind of what I've really been thinking a lot about recently. And even Nick Land himself told me that, you know, there's nothing wrong with trying to think about and even build a communist patch — it's all fair play. He's much less bullish on it than I am, but be that as it may. So those two ideas I'm going to discuss basically in turn and then try to connect them in a few novel ways. I have a few points or comments or extrapolations or connections between these two different ideas I've been working on, that I've never really written down or quite articulated yet. So that's what I'm going to try to do here.
So first of all, I was going to start this by talking a little bit about how patchwork I think is already happening in a lot of ways, but I deleted many of my bullet points because Dustin's presentation basically covered that better than I possibly could. So I'm not going to waste too much time talking about that. There's a lot of empirical data right now that looks a lot like fragmentation is the order of the day and there's a lot of exit dynamics and fragmentation dynamics that we're observing in many domains. And yeah, Dustin articulated a lot of them.
One thing I would say to situate the talk, though, is that it's worth noting that not everyone agrees with this, you know... There's still a lot of integrative talk nowadays. There's a lot of discourse about the necessity of building larger and larger organizations, especially when people are talking about global issues and major existential threats: the educated discourse around preventing nuclear threats, for instance, or AI, or things like runaway inhumane genetic testing. You could probably think of a few others; climate change would be the obvious big one, right? A lot of these major global issues, the discourse around them, the expert opinions, tend to have a kind of integrative, centralized tendency to them. Actually, just this morning I happened to be listening to a podcast that Sam Harris did with Yuval Harari, the guy who wrote the book Sapiens, this mega global blockbuster of a book. And you know, he seemed like a nice guy, a smart guy of course, but everything he was saying was totally integrative. He was talking about how we need things like international organizations and more global international cooperation to solve all of these different problems, and Sam Harris was just kind of nodding along happily. And that got me thinking, because even if you read people like Nick Bostrom, people who are more hard-nosed and analytical about things like intelligence explosion, you find that a lot of educated opinion is the opposite of a patchwork orientation; you find "We need to cooperate at a global level." Anyway, the reason I mention this is just to put in context that the ideas we're interested in, and the empirical dynamics we're pinpointing, are not at all obvious to everyone.
Even though, when you really look at all of the fragmentation dynamics now, I think it's increasingly hard to believe any idea, any proposal having to do with getting all of the nation states to cooperate on something. I just... I just don't see it. For instance, genetic engineering, you know China is off to the races and I just don't see any way in which somehow the US and China are going to negotiate some sort of pause to that. Anyway, so that's worth reflecting on. But one of the reasons I mention that is because I kind of have a meta-theory of precisely those discourses and that's what I'm going to talk about a little bit later in my talk when I talk about the ethical implications, because I think a lot of that is basically lying.
Okay. One of my theses is that when people are talking about how we have to organize some larger structure to prevent some moral problem — nine times out of ten, what they're actually doing is a kind of capitalist selling process. So that's actually just a kind of cultural capitalism in which they're pushing moral buttons to get a bunch of people to basically pay them. That is a very modern persona, a modern mold, and that's precisely one of many things that I think is being melted down in the acceleration of capitalism. What's really happening is all that's really feasible: in so many domains, all you can see for miles, in every possible direction, is fragmentation, alienation, atomization, exits of all different kinds on all different levels.
And then you have people who are like, "Uh, we need to stop this, so give me your money and give me your votes." I think that's basically an unethical posture. I think it's a dishonest, disingenuous posture, and it's ultimately about accruing power to the people who are promoting it — usually high-status cultural elites in the "Cathedral" or whatever you want to call it. So that's why I think there are real ethical implications. I think if you want to not be a liar and not be a kind of cultural snake-oil salesman — which I think a lot of these people are — patchwork is not only what's happening but we're actually ethically obligated to hitch our wagon to patchwork dynamics. If only not to be a liar and a manipulator about the nature of the real issues that we're going to have to try to navigate somehow.
I'll talk a little bit more about that, but I just wanted to kind of open up the talk with that reflection on the current debate around these issues. So, okay.
One dimension of the patchwork dynamics or exit dynamics we're observing right now, which Dustin didn't talk about so much, is a patchwork dynamic taking place on the social-psychological level. To really drive this point home, I've had to borrow a term from the world of software engineering. I'll make this really quick and simple.
Basically, when you're developing software and you have a bunch of people contributing to a larger codebase, you need some sort of system or infrastructure for how a bunch of people can edit the code at the same time, right? You need to keep that orderly. So there's this simple term: forking. You have this codebase, and if you want to make a change to it, you fork it. In a standard case, you might do what we call a soft fork. I'm butchering the technical language a little bit; if there are any hardcore programmers in the room, I'm aware I'm painting with broad strokes, but I'll get the point across effectively enough without being too nerdy about it.
A soft fork means that you pull the codebase off for your own purposes, but it ultimately can merge back in — that's the simple idea. But a hard fork is when you pull the codebase off to edit it, and there's no turning back: there's no reintegrating your edits into the shared master branch, or whatever you want to call it. So I use this technical distinction between a soft fork and a hard fork to think about what's actually going on with social-psychological reality and its distribution across Western societies today. The reason I do this is that I think you need this kind of language to really drive home how radical the social-psychological problems are. I really think we underestimate how much reality itself is being fragmented across different subpopulations.
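Since I keep leaning on this software metaphor, here is roughly what the distinction looks like in git, the version-control tool where this vocabulary lives. This is a sketch of the metaphor, not a tutorial; all of the paths, branch names, and file contents here are purely illustrative:

```shell
# Illustrative sketch: "soft fork" vs. "hard fork" in git.
# All paths and names below are hypothetical.
set -e
rm -rf /tmp/fork-demo /tmp/fork-demo-hard
mkdir -p /tmp/fork-demo && cd /tmp/fork-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "shared reality" > world.txt
git add world.txt
git commit -qm "the shared codebase"

# "Soft fork": branch off, make your own edits, then merge back in.
git checkout -qb day-trip
echo "what I saw today" >> world.txt
git commit -qam "an individual's experience"
git checkout -q -              # return to the shared branch
git merge -q day-trip          # experiences reintegrated "around the campfire"

# "Hard fork": copy the whole history and diverge, never merging back.
git clone -q /tmp/fork-demo /tmp/fork-demo-hard
cd /tmp/fork-demo-hard
git config user.email "demo@example.com"
git config user.name "Demo"
echo "a different world entirely" > world.txt
git commit -qam "diverged; no road back to the shared branch"
```

The asymmetry is the whole analogy: the soft fork's edits end up back in the shared branch, while the hard fork's edits accumulate in a separate history that the shared branch never sees.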
I think we're talking about fundamental... We are now fundamentally entering into different worlds, and it's not at all clear to me that there's any road back to having some sort of shared world. And so I sketched this out in greater detail. Traditional human society, you can think of it as a kind of system of constant soft forking, right? Individuals go off during the day, they go hunting and do whatever traditional societies do, and at the end of the night they integrate all of their experiences into a shared codebase: soft forks, which are then merged back to the master branch around the campfire, however you want to think about that. But it's only now, for the first time ever, that we have the technological conditions in which individuals can edit the shared social codebase and then never really integrate back into it.
And so this is what I call the hard forking of reality. I think that is what we're living through right now. And I think that's why you see things like political polarization to a degree we've never seen before. That's why you see profound confusion and miscommunication, just deep inabilities to relate to each other across different groups, especially across the left vs. right divide, for instance. But you also see it with things like... Think about someone like Alex Jones, think of these independent media platforms that are just on a vector towards outer space — such that it's hard to even relate them to anything empirical that you can recognize. You see more and more of these kinds of hard reality forks, as I call them. I'm very serious.
I think educated opinion today underestimates how extreme that is and how much it's already taking place. Once this is underway, it's not clear to me how someone who is neck-deep in the world of Alex Jones — and that is their sense of what reality is — is ever going to be able to sync back up with, you know, an educated person at Harvard University or something like that. It's not just that those people can't have dinner together — that happened several decades ago, probably — but there's just no actual technical, infrastructural pathway through which these two different worlds could be negotiated or made to converge into something shared. The radicalism of that break is a defining feature of our current technological moment.
And that is an extraordinary patchwork dynamic. In other words, I think that patchwork is already here, especially strong in the socio-psychological dimension, and that's very invisible. So people underestimate it. People often think of patchwork as a territorial phenomenon and maybe one day it will be, but I think primarily for now it's social-psychological and that should not be underestimated because you can go into fundamentally different worlds even in the same territory. But that's what the digital plane opens up to us. So that's one half of what I'm bringing to the table in this talk.
There are a few antecedent conditions to explain, like why I think this is happening now. One is that there's been an extraordinary breakdown in trust towards all kinds of traditional, institutionalized, centralized systems. If you look at the public opinion data, for instance, on how people view Congress in the United States, or how people view Parliament or whatever, just trust in elected leaders... You look at the public opinion data since the fifties and it's really, really on the decline, a consistent and pretty rapid decline.
And this is true if you ask them about the mass media, politicians, a whole bunch of mainstream, traditional kinds of institutions that were the bedrock of modernized societies... People just don't take them seriously anymore at all. And I think that is because of technological acceleration: what's happened is that there is unprecedented complexity. There's just too much information. There's so much information that these modern institutions are really, really unwieldy. They're unable to process the complexity we're now trying to navigate, and people are seeing very plainly that all of these systems are patently not able to manage: they're not able to give what they're supposed to be giving amid an explosion of information they were not designed to handle. So it's kind of like a bandwidth problem, really. Because of this, people are pulling their attention away from these institutions and looking outwards, looking elsewhere, looking for other forms of reality, because that's ultimately what's at stake here.
These traditional institutions, they supplied the shared reality. Everyone referred back to these dominant institutions because — even if you didn't like those institutions in the 60s or 70s or whatever, even when people really didn't like those institutions, like the hippies or whatever — everyone recognized them as existing, as powerful. So even opposing them, you kind of referred back to them. We're now post- all of that, where people so mistrust these institutions that they're not even referring back to them anymore. And they're taking all their cues for what reality is from people like Alex Jones or people like Jordan Peterson or you name it, and you're going to see more and more fragmentation, more and more refinement of different types of realities for different types of subpopulations in an ever more refined way that aligns with their personalities and their preferences. These are basically like consumer preferences. People are going to get the realities that they most desire in a highly fragmented market. Anyway... So I think I've talked enough about that. That's my idea of reality forking and that's my model of a deep form of patchwork that I think is already underway in a way that people underestimate.
So now I want to talk a little bit more about the ethics of patchwork, because I think the observations that I just presented raise ethical questions. And so if I am right that reality itself is already breaking up into multiple versions and multiple patches, well then that raises some interesting questions for us, not just in terms of what we want to do, but in terms of what we should do.
Ethics and Patchwork
What does it mean to seek the good life if this is in fact what's happening? It seems to me that, right now, you're either going to be investing your efforts into somehow creatively co-constituting a new reality or you're going to be just consuming someone else's reality. And a lot of us, I think, do a combination of this. Like all the podcasts I listen to, and all the YouTube videos I watch, that's me outsourcing reality-creation to other people, to some degree. But then the reason I've gotten on YouTube, and the reason I've gotten really into all of these platforms and invested myself in creating my own sense of the world, is because I don't just want to be a consumer of other people's realities. I want to create a world. That sounds awesome. That would be the ideal, right? But the problem is that people are differently equipped to either create or consume realities, and I think that this is difficult and very fraught. This is a very politically fraught problem. The left and the right will have debates about, you know, "the blank slate" versus the heritability of traits and all of that. And I don't want to get into that now, but however you want to interpret it, it is an obvious fact that some people are better equipped to do things like create systems than other people. To me, this is the ethical-political question space.
The default mode right now is the one that I already described at the top of my talk: it's the moralist. It's the traditional left-wing (more or less) posture. "Here's a program for how we're going to protect a bunch of people. All it requires is for you to sign up and give your votes and come to meetings and give your money and somehow we're going to all get together and we're going to take state power and protect people" or something like that. As I already said — I won't beat a dead horse — but I think that's increasingly revealing itself to be a completely impractical and not serious posture that plays with our... it suits our moral tastebuds a little bit, but it's increasingly and patently not able to keep up with accelerating capitalism.
That's not gonna work. The reason I think patchwork is an ethical obligation is that, if you're not going to manipulate people by building some sort of large centralized institution that tugs at their heartstrings, then what remains for us is to create our own realities, basically. And I think that the most ethical way to do that is to do it honestly and transparently: to reveal the source code of reality, theorize it, model it, make those blueprints and share those blueprints, and then get together with the people you want to get together with and literally make your own reality. I feel like that doesn't just sound cool and fun; you kind of have to do that, or else you're going to be participating in this really harmful, delusional trade. That's my view anyway.
Now I'll just finish by telling you what I think the ideal path looks like ethically and practically. I've called it many different things, I haven't really settled on a convenient phrase to summarize this vision, but I think of it as a neo-feudal techno-communism. I think the ideal patch that will be both most competitive, most functional, most desirable and successful as a functioning political unit, but also that is ethically most reflective and consistent with the true nature of human being is... It's going to look something a little bit like European feudalism and it's going to be basically communist, but with contemporary digital technology.
Let me unpack that for you a little bit. You probably have a lot of questions [laughing]. One thing is that patchwork always sounds a little bit like "intentional communities." And on the Left, the "intentional communities" kind of have a bad rap because they've never really worked. You know, people who want to start a little group somewhere off in the woods or whatever, and make the ideal society, and then somehow that's going to magically grow and take over. It usually doesn't end well. It doesn't have a good historical track record. It usually ends up in some kind of cult or else it just fizzles out and it's unproductive or whatever. I think that the conditions now are very different, but I think if you want to talk about building a patch, you have to kind of explain why your model is different than all the other intentional communities that have failed.
One reason is that the digital revolution has been a game changer, I think. Most of the examples of failed intentional communities come from a pre-digital context, so that's one obvious point. I think the search-space, the solution-space, has not all been exhausted. That's kind of just a simple point.
But another thing I've thought a lot about, and written some about, is that one of the reasons many of the earlier intentional communities failed is self-selection. That's just a fancy social science term for the fact that a certain type of person has historically chosen to do intentional communities, and they tend to have certain traits. For many reasons (I don't want to spend too much time getting into it) it's not hard to imagine why that causes problems, right? If all the people are really good at certain things but really bad at other things, you have very lopsided communities in terms of personality traits and tendencies. I think that's one of the reasons these things have failed. So what's new now, I think, is that the pressure towards patchwork is increasingly going to be forced by things like climate change and technological shocks of all different kinds. Because these are fairly random kinds of systemic, exogenous shocks, they're going to force a greater diversity of people into looking for patches, or maybe even needing patches. And I think that is actually valuable for those who want to make new worlds and make better worlds, because it's nature kind of imposing greater diversity on the types of people that will have to make different patches.
So what exactly does neo-feudal techno-communism look like? Basically it would have a producer elite, and this is where a lot of my left-wing friends start rolling their eyes, because it basically is kind of like an aristocracy. Like, look, there's going to be a small number of people who are exceptionally skilled at things like engineering and who can do things that most other people can't. You need at least a few people like that to engineer really sophisticated systems. Kind of like Casey said before, "the mayor as sys-admin." That's kind of a similar idea. You'd have a small number of elite engineer types and basically they can do all of the programming for the system that I'm about to describe, but what they also do is they make money in the larger techno-commercium. They would run a small business, basically, that would trade with other patches and it would make money, in probably very automated ways. So it would be a sleek, agile kind of little corporation of producer elites at the top of this feudal pyramid of a patch society. Then there would be a diversity of individuals including many poor unskilled, disabled, etc., people who don't have to do anything basically. Or they can do little jobs around the patch or whatever, to help out.
The first thing you might be thinking — this is the first objection I get from people — is why would the rich, these highly productive, potentially very rich, engineer types want to support this patch of poor people who don't do anything? Isn't the whole problem today, Justin, that the rich don't want to pay for these things and they will just exit and evade?
Well, my kind of novel idea here is that there is one thing that the rich today cannot get their hands on, no matter where they look. And I submit that it's a highly desirable, highly valuable human resource that most people really, really, really want. And that is genuine respect and admiration, and deep social belonging. Most of the rich today know that people have a lot of resentment towards them. Presumably they don't like the psychological experience of being on the run from national governments and putting their money in Swiss bank accounts. They probably don't like feeling like criminals whom everyone more or less resents and wants to take money from. So my hypothesis is that if we could engineer a little social system in which they actually felt valued and desired and admired, and actually received some respect for the skills and talents that they do have and the work that they do put in... I would argue that if you could guarantee those things, that they would get that respect and that the poor would not try to take everything from them, then the communist patch would actually be preferable to the current status quo for the rich. My argument is that this would be a voluntary, preferable choice for the rich, because of this kind of unique, new agreement that the poor and normal people won't hate them and will actually admire them for what they deserve to be admired for. So then the question becomes: how do you guarantee that's going to happen? This is where technology comes in.
The poor and normal people can make commitments to a certain type of, let's call them "good behaviors" or whatever. Then we can basically enforce that through trustless, decentralized systems: namely, of course, blockchain. So what I'm imagining is... Imagine something like the Internet of Things, all of these home devices that we see more and more nowadays that have sensors built in and can passively and easily monitor all types of measures in the environment. Imagine connecting that up to a blockchain, and specifically to Smart Contracts, so that the patch, and your behavior in the patch, is being constantly measured. You might have, say, skin conductance measures on your wrist; there might be microphones recording everyone's voice at all times. I know that sounds a little authoritarian, but stick with me. Stick with me.
Basically, by deep monitoring of everything using the Internet of Things, what we can do as a group is agree on what is a fair measure of, say, a satisfactory level of honesty. Let's say the rich people say: "I'll guarantee you a dignified life by giving you X amount of money each month. You don't have to do anything for it as long as you respect me; you don't tell lies about me, you don't plot to take all of my money," or whatever. So then you would have an Alexa or whatever constantly recording what everyone says, and that would be hooked up to a Smart Contract. And if you tell some lie about the producer aristocrat ("He totally punched me the other day, he was a real ignoble asshole") and that's actually not true... Well, all of the speech that people speak would be constantly compared to some database of truth. It could be Wikipedia or whatever. And every single statement would have some sort of probability of being true or false. That could all be automated through the Internet of Things feeding this information to the internet and checking it for truth or falsity. And then you have some sort of model that says: if a statement has a probability of being false that is higher than some threshold (maybe set it really high to be careful, say 95 percent, so only lies that can be really strongly confirmed), those get reported to the community as a whole.
If you have X amount of bad behaviors, then you lose your entitlement from the aristocrat producers. It's noblesse oblige, the old feudal term for basically an aristocratic communism: the [obligatory] generosity of the noble. So that's all admittedly very sketchy, a little sketch of how the Internet of Things and Smart Contracts could be used to create this idea of a Rousseauean General Will.
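The strike logic sketched above can be made concrete in a few lines. This is purely a toy illustration, not any real smart-contract code: it assumes some external fact-checking process (not implemented here) has already attached a falsehood probability to each monitored statement, and all names and thresholds are hypothetical.

```python
# Toy sketch of the entitlement-and-strikes scheme described above.
# Assumption: an external model (not shown) supplies p_false, the estimated
# probability that a recorded statement is false. Confirmed lies become
# strikes; enough strikes revoke the monthly entitlement.

from dataclasses import dataclass

FALSEHOOD_THRESHOLD = 0.95  # only act on statements very likely to be false
MAX_STRIKES = 3             # "X amount of bad behaviors" before revocation


@dataclass
class Resident:
    name: str
    strikes: int = 0
    entitled: bool = True

    def report_statement(self, p_false: float) -> None:
        """Record one monitored statement with its falsehood probability."""
        if p_false >= FALSEHOOD_THRESHOLD:
            self.strikes += 1
        if self.strikes >= MAX_STRIKES:
            self.entitled = False


r = Resident("alice")
for p in [0.10, 0.97, 0.50, 0.99, 0.96]:  # three statements exceed the threshold
    r.report_statement(p)

print(r.strikes, r.entitled)  # -> 3 False
```

In an actual Smart Contract this counter and threshold would live on-chain, so neither party could quietly change the rules after agreeing to them; the sketch only shows the state machine itself.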
The reason why this has never worked in history is because of lying, basically. People can always defect. People can always manipulate, saying they're going to do one thing but then not delivering. That's true on the side of the rich and also on the side of the poor. But what's at least in sight now is the possibility that we could define very rigorously the ideal expectations of everyone in a community, program those into transparent Smart Contracts, hook those up to sensors that are doing all of the work in the background, and in this way basically automate a radically guaranteed, egalitarian, communist system in which people do have different abilities, but everyone has an absolutely dignified lifestyle guaranteed for them as long as they're not total [expletive] who break the rules of the group. You can actually engineer this in a way that rich people would find preferable to how they're currently living. So to me that's a viable way of building communism that hasn't really been tried before. And I think it really suits a patchwork model. I think this would be something like an absolutely ideal patch, and not just in a productive, successful way. This is the ideal way to make a large group of people maximally productive and happy, and to make them feel connected and integrated. Everyone has a place and everyone belongs, even if there's a little bit of difference in aptitudes. The system, the culture, will reflect that, but in a dignified, fair, and reasonable kind of way, a mutually supportive way. I could say more, but I haven't been keeping time, and I feel like I've been talking enough.
I found out recently (hat tip to my friend the Jaymo) that the town of Toomsboro, Georgia is right now for sale for only $1.7 million. I think that's a pretty good deal. It comes with a railroad station, a sugar factory, all kinds of stuff, and you could easily build the little prototype patch that I just described. If you have a bunch of people and it's a major publicized project, it wouldn't be that hard to raise enough for a mortgage on a $1.7 million property, especially if you have a compelling white paper along the lines that I just sketched. I'm not quite there yet, but that's what I'm thinking about; that's my model, or my vision, of the communist patch. So I'm going to cut myself off there. Thank you very much.
I have recently been assigned to an Ethics Reviewer position, and I just had my first training. One of the lecture slides for this training was quite audacious: it placed the UK's current academic ethics initiatives in a glorious history, beginning with the Nuremberg Code of 1947. The Nuremberg Code came after the famous Nuremberg trials; it sought to codify ethical research guidelines in response to the atrocities carried out as "research" by Nazi doctors. It was thrilling to learn that my new administrative position was only the latest episode in a grand story of moral enlightenment. I thought I was just taking on a new bureaucratic responsibility, so I was relieved and quite inspired to learn that I would really be fighting fascism.
The reason I describe this particular lecture slide as audacious is because — although my excellent training leader forgot to mention this — the Nazi doctors had been subject to an ethics code from the beginning: the 1931 Guidelines for Human Experimentation (see this 2011 article in Perspectives in Clinical Research, which argues that the Nuremberg Code plagiarized the 1931 Guidelines). When the doctors were later tried in the Nuremberg Trials, one of the defenses put forward by the doctors' lawyers was that the doctors were acting in accordance with the guidelines!
There is little doubt, then, that contemporary academic ethics review systems have some kind of relationship with the horrors of mid-twentieth century fascist totalitarianism. The only question is whether we are the good guys or the bad guys. Is the Ethics Review System (henceforth ERS) of the modern university a 180-degree turn away from the Third Reich's fake, evil system of research ethics, now functioning to protect people from harm? Or is the Ethics Review System of the modern university like the ethics system of the Third Reich, in a more sophisticated form, functioning primarily to protect the interests of research institutions while harming some other subpopulation?
To figure that out, we need to ask what exactly this system is doing. Is it doing something that looks more like "preventing horrific behaviors" or does it look more like "a state-sponsored system to promote a certain group of humans over others?" I will submit that it looks much more like a state-sponsored system to promote some humans over others. But I should admit that I am biased. If I chose the first option, that would not make for a very good blog post.
First, the reasons why it doesn't look like a system dedicated to preventing harm.
For starters, I've not been made aware of any cases in which some horror was prevented by the ERS. That doesn't mean much, because of course the ERS might have stopped some horrible researchers from even attempting to conduct some evil research they would have otherwise conducted. Still, even granting some effect here, my sense is that this counterfactual quantity of prevented harm is very small as a percentage of total research activity, if only because I've met a lot of academics. Most of them don't even do the types of research that can really hurt people. Most of the ethics approval applications are from undergraduate students, and most of those students are seeking to do the easiest and simplest research they can get away with. They want good grades, often in a short time frame, so typically they steer away from elaborate experiments injecting racial minorities with strange chemicals or whatever. It's just not really in their wheelhouse. Even social scientists analyzing public, secondary datasets are now being asked to submit ethics applications. When was the last time that harmed someone?
The really dangerous types of research, on the other hand, such as biomedical research, are not even strongly constrained by the ERS, because if the ERS says no to anything, that research will just be conducted in the private sector. I don't know the details so I can't confirm this, but I've been told (in my initial training session, as a matter of fact) that the Cambridge lecturer who created the psycho-graphic Facebook app that would later be used by Cambridge Analytica to force the victory of Trump and Brexit (lol) was denied academic ethics approval. So he just went the commercial route. Another reason I doubt the ERS prevents harm is that, even when ethics reviewers identify potential "ethical problems," the result is usually nothing more than some superficial language changes. Then it's approved. The ERS rarely gives a verdict of "you are absolutely not allowed to do anything like this, do not even try to reapply;" it usually just commands linguistic modifications to how people frame their research plans. Finally, there's no actual enforcement of the research conduct itself, which is a huge reason to doubt the ERS prevents harm. If I'm evil enough to conduct an experiment, say, covertly injecting a novel synthetic hormone into the testicles of non-consenting senior citizens, I'm probably evil enough to obtain ethics approval by simply omitting the part where I plan to secretly stab senior citizens in the balls.
Next, the reasons why the ERS looks more like a state-sponsored system to promote some human lives over others.
The key thing to understand is that — and you'd be amazed how quickly and frankly they will admit this explicitly, if you ask them, as I did! — "ethics" really means a kind of "quality control" for the purpose of university image-maintenance, in order to ensure the flow of money from government research councils. My trainer told me that, straight up.
The examples they gave us of ethics violations that have actually occurred recently under our system's monitoring — rather than legendary historical cases like the Stanford Prison Experiment — are not primarily ethical violations. They are intellectual 'quality' violations. For instance, one case was of a student who emailed out a bunch of survey questions written with very poor grammar. This was brought to the attention of the university because it reflected poorly on the university's brand as an education provider. This could lower the status of the university, which could lower the likelihood of government councils giving money to our university instead of others. Now it starts to make sense why so much time, energy, and manpower are invested in these "ethics" review systems. Is it well known that this is the real purpose of these systems? I have not read this anywhere else...
Another case they gave us was a case where a student sent their survey to the email address of someone who is now deceased. The wife of the deceased man was upset that a student would send an email to her deceased husband. Is it an ethical violation to send a letter to someone who you did not realize is now dead? Could anyone say with a straight face that this is an example of unethical research practice? I don't think so. The only problem here is that someone in the public was upset about something they associated with the university. It's a PR problem, and that's about it. There was no principle given for what would distinguish a case of mere subjective dislike of the study from an unethical study. This isn't even seen as a relevant question, and I'm afraid to say that the appearance of unquestioning conformity in this system does not bode well for the ERS's promise that it is totally not the Third Reich.
Therefore, ethics review bureaucracies in contemporary universities are systems the primary purpose of which is to keep money pumping from taxpayers into the coffers of high-IQ people shielding themselves from economic competition. It is the PR wing of a massive fleecing system.
This is also a reminder that education, manners, and aesthetic refinement (e.g. the grammar in a research survey questionnaire) are moral performances. And moral performance is essentially status competition, and money flows to the winners of status competitions.
In other words, the relationship between the state-sponsored genocidal research systems of the totalitarian regimes of the twentieth century and the state-sponsored research systems of the liberal democracies in the 21st century is more like a parent-child relationship than an ethically-enlightened-opposition relationship.
Anyone who's ever been to an administrative meeting in a contemporary university will likely find my interpretation to have much more face validity than the other one...
I was recently wondering whether countries with more centralized executive and legislative powers (fewer checks and balances) might have more status-intensive cultures, or in some way a qualitatively different type of status culture. My hypothesis is inchoate, but here it goes.
When a government has few checks and balances (e.g. the UK is known for having a pretty centralized, unified government), the flow of public funds into civil society is highly conditional on the subjective status-estimates of a small set of people (those in government). By subjective status estimates I mean the personal impressions of the rulers regarding what people and projects out in the world are good, valuable, desirable, attractive, etc. When a government has a lot of checks and balances, the flow of public funds into civil society is not as conditional on the subjective status-estimates of one small set of agents — it's conditional on many different socially separated sets of agents.
The two countries I have the most experience living in, the US and the UK, occupy opposite poles with respect to the centralization of power in a unified government. And it seems to me that status-signaling activity in these countries differs in a noticeable but predictable way. These are just impressions and could be totally wrong, but here's what it looks like to me. Much of UK civil society revolves around satisfying the whims of some superior, who is mostly concerned to satisfy the whims of some other superior, and so on upward... But at the top of almost all of these different chains of deference, in different subspaces of civil society, is the whim of one group: the government in parliament at that time. This is why, I think, there is a lot of volatility in the priorities of civil society organizations in the UK (the "strategic plan" of a university can easily change once a year for some stretches), and yet the volatility seems strangely correlated (some change in a university's "strategic plan" sounds oddly like some new messaging you encounter from some other Arts organization). There are weird lags and interactions as you descend the pyramid from parliament to civil society, of course, but the diverse civil society organizations all seem roughly attuned to the one fickle center at the top. So all the status-games feel, to me, weirdly and claustrophobically entrained.
In the US, it's obviously not that status competition is less prevalent, but the many status games of different civil society actors don't all trace upward to one master at the top of the pyramid. It's much more fractured in terms of whom different civil society actors are trying to impress. But because it's more fractured, individuals have a relatively wider choice of which particular status game they want to play. Those who are immersed in one don't pay as much attention to those who are playing another. If you find one status game really irksome, you can potentially switch into another (relative to a country such as the UK).
One can then speculate about what types of people are selected for by these different contexts. I find the overly synchronized, centrally entrained status games of the UK kind of creepy, personally. I think it might make people marginally more delusional. It seems to me that people in the UK, including really smart people, are more likely to take arbitrary government directives as anchors of reality, whereas Americans have more mental leeway to, as it were, take 'em or leave 'em. When everyone else is attuned to the same center, it makes sense that you might mistake what's coming from the center as a vector of reality itself, rather than one contingent possibility among others. Americans often seem "kind of crazy" to non-Americans, and this might help to explain it. The highly fractured nature of government power in the US might make the American individual feel and act kind of like a free agent navigating many contingent possibilities of what reality even is.