Ethics of Shoplifting From Self-Checkout Kiosks With Xenogothic

From the rap sheet that nearly got me fired. I thought I made a very good case!

"Tonight's livestreamed disquisition will be on the ethics of shoplifting from self-checkout kiosks. Hint: I'm in favor. But I often feel guilty, so I need to think this through clearly and explicitly. Then we will be joined by @Xenogothic, a denizen of 'acc twitter,' 'cave twitter,' and the batshit blogosphere. I believe Xenogothic is an anti-leftist leftist type, maybe kind of like me, but I'm not really sure. Only one way to find out... Xenogothic's blog is at https://xenogothic.com"

Big thanks to all the patrons who help me keep the lights on.

If you'd like to discuss this podcast with me and others, suggest future guests, or read/watch/listen to more content on these themes, request an invitation here.

This conversation was first recorded on July 3, 2018 as a livestream on YouTube. To receive notifications when future livestreams begin, subscribe to my channel with one click, then click the little bell.

Click here to download this episode.

Fascism over yourself is called autonomy

When I recently sketched out a system for bootstrapping a libertarian communist society from a combination of AI and blockchain, I was genuinely surprised to receive so many indignant accusations to the effect that I'm an authoritarian. I was called a Duginist, a neoliberal, and even a fascist.

Of course, in retrospect, I can understand the optics. Anything that involves the use of technology to monitor behavior is, in some sense, quite invasive — so a proposal to do this intensely, with a distribution of resources conditional on it, sounds pretty authoritarian.

The reason I was surprised by these accusations, and the reason I'm still unconvinced by them, is that my proposal involves a purely voluntary protocol. The parameters are decided by the individuals involved. All individuals are free to exit at any time. How fascist could a proposal be if it meets all these criteria? Perhaps the most charitable I can be to these accusations is to say that, if my proposal is somewhat fascist, then these crucial, libertarian design features effectively remove the undesirable aspects of fascism. The main reason why fascism is now synonymous with horrific evil is that, historically, it is highly correlated with a drive to impose a program on a large number of people, often at the nation-state level, and often violently.

Given that my proposal is decidedly not imposing anything on anyone against their will, and given that it features benign failure modes, the accusations of fascism suggest to me only that my proposal sounds overly harsh, rigid, or controlling, to a degree that people find undesirable or offensive. If someone just dislikes my idea, then of course that's fine, they'll never be forced or even pressured to join (although I do fear that life outside of novelly engineered communitarian lifeboats will soon be the most horrifying place to be...).

When it comes to one's own will over oneself, I would submit that harshness and rigidity are necessary for the kind of human constitution that is capable of saying no to fascism. It seems possible to me that fascism at aggregate levels (ethnic groups, nation-states, etc.) is a pathological reaction to modern humans becoming insufficiently constituted at the individual level. Fascism rails against the modern weakness of will, and seeks to solve the problem at a higher level of social organization. I rail against the modern weakness of will, but I want to engineer solutions at the level of individuals' component parts. The components of an individual constitution are the other people in one's primary group and one's own drives or sub-personalities. When individuals exercise sufficient authority over themselves, they will be less likely to submit to intoxicating herd behaviors, and there will be less demand for violent over-compensations at higher levels of organization. 

If you dislike the idea of enforcing your own will on yourself, the algebra can be rearranged to say that you like the wide margin of ethical slothfulness you are afforded under contemporary postmodern relativism and social anomie. Today, nobody really minds if you say one thing and do another; you are permitted and even encouraged to have goals or ideals that you do not work your hardest to embody. It is hard work to become who you are, and liberalism is the political philosophy that holds nobody should be forced to do it.

It is certainly desirable that centralized political institutions do not enforce overly strict discipline according to overly regimented criteria — such as patriotism or ethnicity or religion — for purposes of statecraft. But that does not mean we should not seek to enforce strict discipline on ourselves, by ourselves, according to whatever we believe to be the truest ethical principles. There is no other method of soulcraft; there is no method for constituting a true life other than the ethical work of self-discipline (askēsis). Just because the infamous slogan upon the gates of Auschwitz said that "work sets you free" does not mean that certain forms of work cannot, in fact, set you free. If I say that I am a Catholic, it is in part because I believe that the truth is what sets one free, and the truth is produced through the work of frank speech (parrhesia), a form of askēsis. If I say that I am a communist, it is because I believe that everyone is intrinsically and equally valuable, and anything that inhibits anyone from becoming who they are must be destroyed in the same way and for the same reason that a philosopher or scientist seeks to destroy all errors and all mistakes.

Perhaps under contemporary liberalism we have become so "antifascist" that we would gladly choose to die if only enough people brought to our attention that fascists once sought to live. If the Nazis ever stated that work will set you free, then the refined cosmopolitan of 2018 will never work to be set free. That'll show 'em.

If I am a fascist over my own soul, so be it: fascism over oneself is called autonomy.

[The second installment of the Diffractions/Sdbs workshop on patchwork just took place yesterday. You can watch it here.]

Eichmann in Oxford

I have recently been assigned to an Ethics Reviewer position, and I just had my first training. One of the lecture slides for this training was quite audacious: It placed the UK's current academic ethics initiatives in a glorious history, beginning with the Nuremberg Code of 1947. The Nuremberg code came after the famous Nuremberg trials; it sought to codify ethical research guidelines, in response to the atrocities carried out as "research" by Nazi doctors. It was thrilling to learn that my new administrative position was only the latest episode in a grand story of moral enlightenment. I thought I was just taking on a new bureaucratic responsibility, so I was relieved and quite inspired to learn that I would really be fighting fascism.

The reason I describe this particular lecture slide as audacious is that — although my excellent training leader forgot to mention this — the Nazi doctors had been subject to an ethics code from the beginning: the 1931 Guidelines for Human Experimentation (see this 2011 article in Perspectives in Clinical Research, which argues that the Nuremberg Code plagiarized the 1931 Guidelines). When the doctors were later tried in the Nuremberg Trials, one of the defenses put forward by their lawyers was that the doctors were acting in accordance with the guidelines!

There is little doubt, then, that contemporary academic ethics review systems have some kind of relationship with the horrors of mid-twentieth century fascist totalitarianism. The only question is whether we are the good guys or the bad guys. Is the Ethics Review System (henceforth ERS) of the modern university a 180-degree turn away from the Third Reich's fake, evil system of research ethics, now functioning to protect people from harm? Or is the Ethics Review System of the modern university like the ethics system of the Third Reich, in a more sophisticated form, functioning primarily to protect the interests of research institutions while harming some other subpopulation?

To figure that out, we need to ask what exactly this system is doing. Is it doing something that looks more like "preventing horrific behaviors" or does it look more like "a state-sponsored system to promote a certain group of humans over others?" I will submit that it looks much more like a state-sponsored system to promote some humans over others. But I should admit that I am biased. If I chose the first option, that would not make for a very good blog post.

First, the reasons why it doesn't look like a system dedicated to preventing harm.

For starters, I've not been made aware of any cases in which some horror was prevented by the ERS. That doesn't mean much, because of course the ERS might have stopped some horrible researchers from even attempting to conduct some evil research they would have otherwise conducted. Still, even granting some effect here, my sense is that this counterfactual quantity of prevented harm is very small as a percentage of total research activity, if only because I've met a lot of academics. Most of them don't even do the types of research that can really hurt people. Most of the ethics approval applications are from undergraduate students, and most of those students are seeking to do the easiest and simplest research they can get away with. They want good grades, often in a short time frame, so typically they steer away from elaborate experiments injecting racial minorities with strange chemicals or whatever. It's just not really in their wheelhouse. Even social scientists analyzing public, secondary datasets are now being asked to submit ethics applications. When was the last time that harmed someone?

The really dangerous types of research, on the other hand, such as biomedical research, are not even strongly constrained by the ERS, because if the ERS says no to anything, that research will just be conducted in the private sector. I don't know the details so I can't confirm this, but I've been told (in my initial training session, as a matter of fact) that the Cambridge lecturer who created the psychographic Facebook app later used by Cambridge Analytica to force the victory of Trump and Brexit (lol) was denied academic ethics approval. So he just went the commercial route. Another reason I doubt the ERS prevents harm is that, even when ethics reviewers identify potential "ethical problems," the result is usually nothing more than some superficial language changes. Then it's approved. The ERS rarely gives a verdict of "you are absolutely not allowed to do anything like this, do not even try to reapply"; it usually just commands linguistic modifications to how people frame their research plans. Finally, and this is the biggest reason I doubt the ERS prevents harm, there is no actual enforcement of the research conduct itself. If I'm evil enough to conduct an experiment, say, covertly injecting a novel synthetic hormone into the testicles of non-consenting senior citizens, I'm probably evil enough to obtain ethics approval by simply omitting the part where I plan to secretly stab senior citizens in the balls.

Next, the reasons why the ERS looks more like a state-sponsored system to promote some human lives over others.

The key thing to understand is that — and you'd be amazed how quickly and frankly they will admit this explicitly, if you ask them, as I did! — "ethics" really means a kind of "quality control" for the purpose of university image-maintenance, in order to ensure the flow of money from government research councils. My trainer told me that, straight up.

The examples they gave us of ethics violations that have actually occurred recently under our system's monitoring — rather than legendary historical cases like the Stanford Prison Experiment — are not primarily ethical violations. They are intellectual 'quality' violations. For instance, one case was of a student who emailed out a bunch of survey questions written with very poor grammar. This was brought to the attention of the university because it reflected poorly on the university's brand as an education provider. This could lower the status of the university, which could lower the likelihood of government councils giving money to our university instead of others. Now it starts to make sense why so much time, energy, and manpower are invested in these "ethics" review systems. Is it well known that this is the real purpose of these systems? I have not read this anywhere else...

Another example they gave us was a case in which a student sent their survey to the email address of someone who had died. The wife of the deceased man was upset that a student would send an email to her late husband. Is it an ethical violation to send a letter to someone you did not realize is dead? Could anyone say with a straight face that this is an example of unethical research practice? I don't think so. The only problem here is that someone in the public was upset about something they associated with the university. It's a PR problem, and that's about it. There was no principle given for what would distinguish a case of mere subjective dislike of a study from an unethical study. This isn't even seen as a relevant question, and I'm afraid to say that the appearance of unquestioning conformity in this system does not bode well for the ERS's promise that it is totally not the Third Reich.

Therefore, ethics review bureaucracies in contemporary universities are systems the primary purpose of which is to keep money pumping from taxpayers into the coffers of high-IQ people shielding themselves from economic competition. It is the PR wing of a massive fleecing system.

This is also a reminder that education, manners, and aesthetic refinement (e.g. the grammar in a research survey questionnaire) are moral performances. And moral performance is essentially status competition, and money flows to the winners of status competitions.

In other words, the relationship between the state-sponsored genocidal research systems of the totalitarian regimes of the twentieth century and the state-sponsored research systems of the liberal democracies in the 21st century is more like a parent-child relationship than an ethically-enlightened-opposition relationship.

Anyone who's ever been to an administrative meeting in a contemporary university will likely find my interpretation to have much more face validity than the other one...

Are the greatest beneficiaries of Effective Altruism its proponents?

[I’m not sure how much I believe this, this just barely passed my threshold of post-worthiness.]

It seems to me that the greatest beneficiaries of Effective Altruism might very well be its proponents, who help others in a linear fashion but help themselves in a non-linear fashion. This might be justified in a Rawlsian way, such that Effective Altruists should be allowed to enjoy their generosity greatly so long as it helps others sufficiently.

Effective Altruism is recursive self-satisfaction. The Effective Altruist is doing good, which feels good, which helps them do more good, which makes them feel even better, and so on. But the altruistic upshot of their exponentially positive experience is only additive, because recipients of altruism at best feel neutral about charity and at worst feel guilty, embarrassed, or ashamed. Malaria nets are wonderful things, and saving a life is no small feat, but one life is worth one life: if a malaria net saves one life, then ten malaria nets save ten lives. For the practitioner of Effective Altruism, however, giving one malaria net is a potentially never-ending well of eudaemonia. None of this is to deny the value of Effective Altruism; it is only to observe where a large proportion of the psychological gains are really enjoyed.

The net-negative utility of Utilitarianism in the long term

Utilitarianism, taken as a world-historical arrival at the level of civilization itself, might very well have a net-negative utility. It is perfectly plausible that an overly refined awareness of, and sensitivity to, utility would have unintended consequences tending toward catastrophically negative outcomes. Deontological ethical systems have often issued from this intuition, I think. Below are the moving parts to this argument.

Our awareness of all the suffering that might be alleviated has recently exploded due to the information revolution. We still have no idea what this will do to human beings in the long run, but it sure seems plausible that it increases the prevalence of guilt feelings and anxiety about a nearly infinite number of global problems. Incentives exist to highlight and report these negative stimuli, and incentives exist to publicly feel bad about them; it strikes me as perfectly reasonable to imagine that globalized modern civilization is already headlong into an irrecoverable spiral of collective depressive delusion. While a utilitarian spirit is obviously not the only or even main driver of this dynamic, it is a necessary condition for it. The counterfactual — large numbers of individuals switching to a deontological worldview in which their only felt sense of obligation is to a small number of categorical local rules — would almost certainly increase global utility to an extraordinary degree. Unless, that is, you think the utilitarian reflections of the average person cause them to non-trivially improve the world. With respect to the overwhelming majority of people, this strikes me as unlikely.

Our intuitions and institutions get updated slowly. We suddenly understand many sources of suffering far better than ever, but nobody can change all of our institutions to solve these problems at anywhere near the rate our conscience would need to be at peace. Therefore, the contemporary global village is plagued with a necessary temporal gap between the suffering that exists and our ability to reduce it. If you consider the fact that our sense of ethically problematic suffering increases rather than decreases with progress, it is possible that this temporal gap widens even as we make technical progress closing it. Again, the more effectively utilitarian we are, the more felt suffering might increase, precisely as we objectively decrease suffering and increase net utility with respect to most direct measures.

How great is the suffering caused by this gap? That’s anybody’s guess, but it does not seem implausible that it is greater than the entire history of utility gained by human activity heretofore.

If you doubt that the suffering caused by hyper-awareness of suffering could be so large, here are some reasons why you might not want to dismiss this idea. Human experience is recursive, which makes it potentially exponential and non-linear. If you're depressed, you feel guilty for being depressed, then you feel stupid for feeling guilty for feeling depressed, all of which makes you more depressed, and so on. Human experience can rapidly approach infinities of low and high. I see no reason why human suffering could not potentially skyrocket toward infinity given media-driven negative information glut, instant interconnectivity at large scales, and economic incentives to spread and express sad affects, not to mention cognitive bugs such as negativity bias. None of this dismisses the wealth of data marshaled by people such as Steven Pinker, showing that in so many ways, markers of human suffering are decreasing. Felt perceptions might be wildly miscalibrated with objective data about world trends, and still veer off in an explosive detachment from reality.

Additionally, even if you don't think that's possible, a small number of highly suffering people can still wreak untold havoc on society at large. Trends such as anti-natalism and anti-civilizational thought more generally, often promoted by sad people who want to wind down life itself, are to some degree children of utilitarian progress. They look at the costs and benefits of humanity thus far and (however miscalibrated) they decide none of it is really worth it, and they speak and act accordingly. This is perhaps because, in the long run, there are no costs and benefits, and thus the validity of deontology gets revealed with particular clarity in end times. Regardless, if anti-life intellectual currents were to produce future policy changes, or some new, crazier version of these thought-patterns were to take hold in the form of the next big moral panic, which in turn led to centralized policies with negative systemic effects, some portion of these consequences would have to be counted as causal effects of a short-circuiting utilitarianism. You can say that such ridiculous deductions from the utilitarian starting point are unfounded, and you might be right; but you still have to chalk up such consequences as effects of the utilitarian memeplex's diffusion into the postmodern polity.

The utilitarian ethical defaults of modern western individuals are in meltdown from overheated inputs they do not have the capacity to process. Cooling innovation always follows hot invention, but we live in a unique historical period where the time lag between new inventions is less than the time lag between one invention and the secondary technologies that make it work over time. Fires are no longer put out, but displaced by new fires, which burn only long enough to sustain a feeling of continuity before the next fire arrives. Calculating net effects seems reasonable when it is possible to imagine a shared world; as human worlds divide, collapse, and revivify differentially, efforts to calculate overall effects on a shared world will be increasingly painful. Deontological ethics receives its final vindication on consequentialist grounds.

Utilitarianism incentivizes suffering, or victim culture as a child of rationalism

Insofar as people live according to its suggestions, Utilitarianism strangely incentivizes suffering. In a society where utilitarianism operates as the governing philosophy, the accommodation you receive from others will be a function of your propensity to suffer. If a society is maximizing its net utility, then it will effectively care more about solving the problems of those who suffer the most. Does this not select for people who suffer more? Does it not make extreme suffering a viable pathway to survival? Especially if technological change makes it impossible to survive through economic competition, the propensity to suffer could become increasingly adaptive for some groups.

I am not referring to merely strategic exaggerations of suffering (although there will be plenty of that, too, of course). More deeply, individuals who genuinely suffer more from one unit of negative stimuli would fare better than those who genuinely suffer less from that unit, at least within one of multiple equilibria, in one pocket of society. Everyone can exaggerate, but the truly sensitive would exaggerate more convincingly. Moderate sufferers wither away from redistributive neglect while lacking the steeliness necessary for productivity, dying young and having no kids, while only the super-sufferers have what it takes to win a basic income and other survival support, living longer and having more kids. Victim culture is a child of modern rationalism, a perverse but inevitable life-path within an economic system that finds its chief ethical defenses in utilitarian or consequentialist frameworks.

The content of this website is licensed under a CREATIVE COMMONS ATTRIBUTION 4.0 INTERNATIONAL LICENSE.