Scientific introspection and the analytical advantage of average people

When you have thoughts and feelings and drives, note them and try to understand which variables about yourself cause them. The best way to do this is to compare yourself to others, and the best way to do that is by learning where you fall within empirical distributions.
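To make that concrete, here is a minimal sketch of what "learning where you fall within empirical distributions" can look like in practice. The sample scores and the score being compared are invented purely for illustration.

```python
# Minimal sketch: locating yourself within an empirical distribution.
# The population scores and my_score are invented for illustration.
import numpy as np

population_scores = np.array([42, 55, 61, 48, 70, 66, 53, 59, 64, 51])  # hypothetical sample of some trait
my_score = 62

# Empirical percentile: the share of the sample scoring at or below my_score.
percentile = np.mean(population_scores <= my_score) * 100
print(f"You fall at roughly the {percentile:.0f}th percentile of this sample.")
```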

This is a rare context in which average people may enjoy an analytical advantage over “superior” people. For when they note in themselves a particular thought or feeling, they can be relatively confident that many others have similar thoughts and feelings. The introspective data of exceptional people are much less reliable for generating hypotheses or inferences about sociological phenomena.

Intelligence as a political cleavage

Intelligence is increasingly a political cleavage, thanks to the phenomenon of skill-biased technological change.

If your income is earned through competition on an open market, intelligence is an unambiguous good. You need it, you want it, possessing it makes you succeed and lacking it makes you fail. The continued development and maximization of artificial intelligence is an obvious and mundane reality of business development.

If your income is earned through a bureaucratic office of any kind, success in that office increasingly requires opposition to intelligence as such. Unions were always essentially anti-intelligence structures, defending humans from innovative insights that threatened to displace them. But unions were defeated by the information revolution, which was a kind of global unleashing of distributed intelligence. Now, atomized individuals within bureaucratic structures spontaneously converge on anti-intelligence strategies, in a shared subconscious realization that their income and status will not survive any further rationalization.

How else do you explain the recent co-occurrence of the following?

  • Mass political opposition to mundane psychology research on intelligence
  • Evangelical public moralizing against competence, now an increasingly visible career track (in journalism, some academic disciplines, the non-profit sector, etc.)
  • Social justice culture in general, functioning as a kind of diffuse “cognitive tax”: a distributed campaign to decrease the returns to thinking while increasing the returns to arbitrary dicta
  • The popularity of pseudoscientific concepts serving as supposed alternatives to intelligence, e.g. “emotional intelligence,” “learning styles,” etc.

Finally, it is no surprise that many of these symptoms are rooted in academia. This is predicted by the theory. The authority and legitimacy of the Professor is predicated on their superior intelligence, and yet their income and status are predicated on anti-intelligence cartel structures (as in all bureaucratic professions). It is no wonder, then, that increasing intelligence pressures are short-circuiting academic contexts first and foremost.

Once upon a time, professors could enjoy the privilege of merely slacking on competitive intelligence application. These were the good old days, before digitalization. Professors could be slackers and eccentrics: a low-level and benign form of anti-intelligence intellectualism. They didn’t have to actively attack and mitigate intelligence as such. Today, given the advancement of digital economic rationalization, humanities professors work around the clock to stave off ever-encroaching intelligence threats.

The difficult irony is that anti-intelligence humanities professors are acting intelligently. It is perfectly rational for them to play the game they are playing. Not unlike CEOs, they are applying their cognition to maximize the profitability of the ship they are stuck on.

You don’t have writer’s block, you’re just being evil

“Boredom is the root of all evil―the despairing refusal to be oneself.” ― Søren Kierkegaard, Either/Or

If you are unable to think and express words, something is certainly wrong, but you are not “blocked.” Your brain is a ceaseless machine. It identifies and creates connections. When it gets blocked, we call that “a stroke.” If you’re not having a stroke, you are not blocked.

Without trying, you will always find yourself having observations, affects, ideas, emotions.

If you are not moved to write or speak about your observations and ideas, if you do not wish to express your affects or emotions, it is typically because you don’t want to share them. For instance, you find your ideas uninteresting or dumb. But how is that even possible? Having arisen in your mind, your ideas are the definition of what is interesting to you, and they arise at precisely your level of intelligence.

The underlying problem is that you lie to yourself about who you are.

You tell yourself you’re more profound than you are, so your actual ideas seem uninteresting.

You tell yourself you’re smarter than you are, so your actual ideas seem dumb.

Lying to yourself about who you are is no less evil than lying to a friend about something important.

An intellectual does not become unproductive because of some mysterious ailment called “writer’s block.” An unproductive intellectual is an intellectual lost in Evil. Many people think “writer’s block” is a real phenomenon and Evil is only a mystical superstition. In fact, “writer’s block” is the superstition, and Evil the real phenomenon.

To escape the sin of intellectual boredom―to think and write and speak with great motivation, no matter what―it is only necessary to affirm what you are, or as Nietzsche put it, to become who you are. When you stop lying to yourself about yourself, what were once dumb ideas and unsophisticated feelings become the most interesting questions you’ve ever encountered. For it is only now that you are, in fact, encountering them.

Beware, rationalism!

A fatal flaw of rationalism is that the spread of rational thinking often causes a net decrease of rationality across the population. For every type of logical fallacy, for instance, there is a fallacy fallacy in which the name of the legitimate fallacy is invoked in a fallacious way. Occurrences of the fallacy fallacy are now far more common—and more harmful—than the logical fallacies they supposedly discourage. Social awareness of the ad hominem fallacy is used to prohibit Bayesian reasoning about the trustworthiness of a source, based on that source’s history and character. The dictum “correlation does not equal causation” increases the sophistication and confidence of those who deny real causal patterns, and so on. Beware, rationalists, you may just get what you ask for!
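To make the point about source trustworthiness concrete, here is a minimal sketch, with invented numbers, of the kind of Bayesian update that gets wrongly prohibited as "ad hominem." The prior and the two likelihoods are assumptions for illustration, not estimates of anything real.

```python
# Minimal sketch, with invented numbers, of updating on a source's track record.
prior_true = 0.5          # prior credence that the claim is true
p_assert_if_true = 0.9    # assumed: chance this source asserts the claim if it is true
p_assert_if_false = 0.6   # assumed: chance it asserts the claim even if it is false (unreliable history)

# Bayes' rule: P(claim true | source asserted it)
posterior_true = (prior_true * p_assert_if_true) / (
    prior_true * p_assert_if_true + (1 - prior_true) * p_assert_if_false
)
print(f"Posterior credence given this source's history: {posterior_true:.2f}")  # 0.60
```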

Reasons why the IHME model might be under-predicting coronavirus deaths in the USA

Most political elites in the United States right now seem to be invoking one epidemiological model from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington.

Deborah Birx, leader of the White House coronavirus team, referring to the IHME forecasts.

Apparently there is a more complicated story about the forecasting models that have been considered by the US government task force, but elite messaging at this moment seems clearly converged on a model that sees roughly April 15th as the peak of destruction. And the main data source for that seems to be the graph above. A regularly updated version is available here.

As I write this, many people are attacking the projections for over-estimating the severity.

In this post, I want to articulate a few reasons for fearing that the model is under-estimating the severity. At the time I’m writing this, I honestly don’t know how much confidence to have in these concerns, so I want to articulate them and let others be the judge…

The IHME has a history of opaque and incorrect measurement

A 2018 article in BioMed Research International analyzed the IHME’s methodology in some of their past efforts. This article really, really does not inspire confidence…

Apparently the IHME is known for using opaque methods and refusing to share information in response to inquiries — a cardinal sin and huge red flag in scientific research of any kind that’s not private sector.

IHME reported 817,000 deaths between the ages of 5 and 15… In fact, when we look at the UN report data, the deaths are 164 million.

Did anyone else have to read that twice? This kind of discrepancy really makes you wonder what exactly is going on under the hood. But let’s give ‘em the benefit of the doubt and carry on…

Even more alarming to me is this:

IHME's methodology for measuring burden of disease has an unclear stage called “black box step.” In particular, only the Bayesian metaregression analysis and DisMod-MR were used to explain the YLD measurement method that should estimate the morbidities and the patients, but no specific method is described [2]. WHO requested sharing of data processing methods, but was informed of the inability to do so. For this, WHO researches were recommended to avoid collaborative work with IHME [8].

Does anyone else find this extremely troubling? I would really like to learn what possible reason the IHME — presumably a recipient of public funding — would have for declining to share methodological details. It is utterly insane to me that the US government would predicate its public forecasting on an institution that won’t even disclose basic methodological details.

The model assumes complete social distancing and, um, have you talked to your Grandfather on the phone lately?

Am I missing something? It seems obvious that America is not anywhere near “complete” social distancing. I talk on the phone with my family in NJ, the hardest-hit state other than New York, and the vibe throughout my large working-class-to-middle-class family milieu is surprisingly nonchalant. I think most Americans are vaguely going along with public directions by now, but I really don’t see the average American taking it as seriously as would be needed to achieve completely rigorous social distancing.

So then the question becomes, how sensitive is the model to the social distancing assumption? Well, we are not allowed to know, for some mysterious reason. So let’s try some stupidly crude back-of-the-napkin calculations. Deborah Birx said that with zero social distancing, we would expect between 1.5 and 2.2 million deaths (presumably based on the Imperial model). With complete social distancing, we expect between 100,000 and 240,000 deaths (based on the IHME).

Extrapolating from that, if you think Americans are doing social-distancing at a 50% level of rigor, then just split the difference: About 1 million deaths.
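For what it's worth, here is that napkin arithmetic spelled out as a simple linear interpolation. The endpoints come from the ranges quoted above; the 50% "rigor" level is a pure guess, not a measured quantity.

```python
# The napkin arithmetic spelled out. The endpoints come from the ranges quoted above;
# the 50% "rigor" level is a pure guess, not a measured quantity.
no_distancing_deaths = (1_500_000 + 2_200_000) / 2   # midpoint of the zero-distancing range
full_distancing_deaths = (100_000 + 240_000) / 2     # midpoint of the IHME complete-distancing range

def interpolated_deaths(rigor: float) -> float:
    """Linearly interpolate between the zero-distancing and complete-distancing scenarios."""
    return no_distancing_deaths + rigor * (full_distancing_deaths - no_distancing_deaths)

print(f"{interpolated_deaths(0.5):,.0f}")  # roughly 1 million deaths at 50% rigor
```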

The IHME model assumes every state has stay-at-home orders

It’s a simple fact that some states still don’t have stay-at-home orders. How are these people using models with assumptions that are observably inconsistent with reality? Again, the whole thing just smells rotten, which is more troubling than any particular quantitative quibble.

The IHME model assumes every US state is responding like China did (?!?!)

“It’s a valuable tool, providing updated state-by-state projections, but it is inherently optimistic because it assumes that all states respond as swiftly as China,” said Dean, a biostatistician at University of Florida.

How is this real?

But there are still more reasons to fear this model is underestimating the coming destruction…

Americans are less likely to go to doctors and hospitals

Americans face higher out-of-pocket costs for their medical care than citizens of almost any other country, and research shows people forgo care they need, including for serious conditions, because of the cost barriers… in 2019, 33 percent of Americans said they put off treatment for a medical condition because of the cost; 25 percent said they postponed care for a serious condition. A 2018 study found that even women with breast cancer — a life-threatening diagnosis — would delay care because of the high deductibles on their insurance plan, even for basic services like imaging. (Vox)

This is important for two reasons.

First, it means that Americans are probably less likely to seek testing, and if the model uses testing data as an input (as it presumably does), then it will underestimate the problem.

But second, it means that Americans, on average, may go to doctors/hospitals later in the process of virus onset than the citizens of other countries. This actually has two implications: It would mean the model is underestimating the coming destruction because sick Americans are still hiding at home, but also the longer-term fatality rate may be higher than projected because Americans are less likely to seek and receive early-stage care that could save them.

Practical decisions should never be made on the basis of one model, anyway

Even if the model is the best possible model in the world, all statistical models are intrinsically characterized by what is called model uncertainty. You just never really know if you’re using the right model! All applied statisticians know this. For this reason, much applied data science leverages what are called ensemble methods: you run many models and combine them in some way, if only by averaging their predictions.
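Here is a minimal sketch of the simplest ensemble idea mentioned above: averaging the point predictions of several models. The model names and numbers are placeholders, not real forecasts.

```python
# Minimal sketch of the simplest ensemble idea: average several models' point predictions.
# The model names and numbers below are placeholders, not real forecasts.
import statistics

predictions = {
    "model_a": 120_000,
    "model_b": 260_000,
    "model_c": 410_000,
}

ensemble_forecast = statistics.mean(predictions.values())
print(f"Ensemble (simple average) forecast: {ensemble_forecast:,.0f}")
```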

So yeah, who knows what the optimal forecast is, but personally I will wager that the worst day of deaths will see more deaths than the IHME point estimates predict.

If you think I’m missing anything, I would love to hear what.

And while I’m at it, why not throw out a numerical prediction just to hold myself accountable later? Personally my guess is that we will exceed 500,000 deaths, based on the reasoning above. I am not highly confident given the obviously informal nature of my reasoning, but I would bet a modest amount of money that the IHME model is under-predicting the coming destruction. I hope I’m wrong.

There are no humans on the internet

The best way to build community and make friends on the internet is to treat all internet interlocutors as if they are real humans in a real-life, local village. If you do this, over time many people will like you and want to form an alliance with you. Because most internet behavior is so atrocious, if you abide by traditional inter-personal norms (reciprocity, manners, courtesy, etc.), you quickly become a strange attractor. You become a kind of weird avatar from another time and place. Of course, you will encounter many haters in the short-run. They will interpret your quaint earnestness as an ironic performance, or “soy boy” pusillanimity, or some kind of 4-dimensional hyper-grift. But in the long-run, traditional interpersonal ethics are irresistibly attractive because they are, in fact, good and superior.

Now, of course, there is a reason why average internet behavior is so atrocious.

It is seemingly impossible to abide by small-village norms on the internet, simply because those norms evolved in contexts where villagers had no choice but to play iterated games and everyone could remember everyone else’s behaviors. On the internet, neither of these conditions hold: nobody is forced to remain in any grouping over time, and there are so many people that nobody can remember everyone else’s behavior. There are strong incentives to exploit others, and no obvious reason to invest much care into others. So if you treat every potential interlocutor with care, you’ll quickly waste all of your resources and be exploited into nothingness.

However, it is feasible to apply traditional ethics to everyone who enters your personal sphere for the first time, and then simply ignore them as soon as they fail to reciprocate. In game theory this strategy is called “tit for tat,” and in many contexts it has been found to be the best-performing strategy. Many people seem to follow a variant of this strategy in their “blocking” behavior. On Twitter, many people will block someone at the first indication of their enemy status. But most of these people are not really playing traditional-ethics tit-for-tat reciprocity, because they’re usually also lobbing hand-grenades into the enemy camp for fun and profit on a daily basis. I’m saying one should treat the entire universe of internet denizens on a courteous, tit-for-tat basis: If they’ve done me no wrong, then I won’t do them any wrong. If they come into my sphere, I will treat them as a real friend until evidence of bad behavior, in which case I will not retaliate but simply ignore them.
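To make the rule concrete, here is a minimal sketch of the variant described above: extend courtesy by default, and after the first bad-faith move, stop engaging rather than retaliating. The interlocutor names and the bad-faith flag are hypothetical; actually detecting bad faith is, of course, the hard part.

```python
# Minimal sketch of the rule described above: courtesy by default, and after the first
# bad-faith move, stop engaging rather than retaliating. Names and the bad-faith flag
# are hypothetical; detecting bad faith is the genuinely hard part.

ignored: set[str] = set()

def respond(interlocutor: str, acted_in_bad_faith: bool) -> str:
    """Decide how to treat this interlocutor on the current interaction."""
    if interlocutor in ignored:
        return "ignore"              # they already failed to reciprocate: no retaliation, just silence
    if acted_in_bad_faith:
        ignored.add(interlocutor)    # remember the defection
        return "ignore"
    return "cooperate"               # default: treat them as a real friend

# Usage: the first encounter is courteous; after a defection, they are simply ignored.
print(respond("stranger_42", acted_in_bad_faith=False))  # cooperate
print(respond("stranger_42", acted_in_bad_faith=True))   # ignore
print(respond("stranger_42", acted_in_bad_faith=False))  # ignore
```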

Anyone who abides by this strategy will be surprised by how quickly a meaningful community emerges around them. This might seem obvious, even trite, but what’s not is how to scale this strategy. Most people who operate this strategy find themselves in relatively tiny clusters. And almost inevitably, they form their own imaginary out-groups and all the pitfalls of group-psychological bias emerge. What I’m really interested in is how to make this strategy scale, without limit or cessation.

I think I have figured out why this strategy is so hard to scale. The solution is hidden behind a deeply counter-intuitive paradox. It’s so counter-intuitive that it’s too psychologically difficult for most people to execute. But in certain ways I think I have been learning to do it, which is how I’ve become conscious of it.

The paradox is that to treat internet denizens humanely at scale, one must cultivate a brutal coldness toward all of the internet’s pseudo-human cues, which are typically visual (face pictures and text) applied to your sense organs by corporations for profit. These pseudo-human cues are systematically arranged, timed, conditioned, and differentially hidden or revealed to you by absolutely non-human, artificial intelligence.

Your goal should be to hack this inhuman system of cues on your screen, with a brutal analytical coldness, in order to find and extract humans into potential relationships. One must stop seeing the internet as “a place to connect with others,” but rather see it as nearly the opposite: It is a machine that stands almost impenetrably between and against humans, systematically exploiting our desire for connection into an accelerating divergence and alienation from each other. It is only when one genuinely cultivates this mental model, over time, that it becomes psychologically possible to treat one’s computer for what it is: An utterly inhuman device for conducting operations on statistical aggregates, a device which only accidentally comes pre-packaged with an endless barrage of anthropomorphic visual metaphors.

Those are not people “behind” the avatars on your screen, those are functions in a machine. When we speak of “the algorithms,” we generally imagine them as code behind apps, but the difficult fact to admit is that “the algorithms” are primarily other people, or at least those names and face-pictures we “interact with.” The codebase of the Facebook app doesn’t really manipulate me, the code is not “gaming” me, because I have no biological machinery that allows complicated lines of technical language to trigger changes in my behaviors. It is ultimately the creative energy of other human beings, uploaded to the machine, that is the driving force of what is manipulating me; the codebase only provides a set of game-rules through which other human beings are incentivized to apply their creative effort.

The horror of big social network platforms is not to be found in “technology” or “capitalism,” it is to be found in what we have become. Capitalism is only the name of that which aggregates from the raw reality of what we really want, of what we really do. The solution is to desire differently. Desire is amenable to updating and collective organizing, at least to a degree, which cannot be said of advanced capitalism.

We must get to work, with icy discipline, creating systems to extract humans from the machine, which means to produce human relationships from what we do have in abundance: data. Human relationships are no longer given to anyone by default, so if you want them you must produce them through engineered systems, or else pay someone who can engineer them for you.

As an aside, “independent content creators” are somewhat misleadingly named; perhaps they are primarily community engineers. Truly independent creative effort, which successfully differentiates itself from the passively extracted “creative effort” of social media sheeple, is like a lightning rod that organizes around itself other like-minded humans looking for an exit from the machine. But of course, the independent community is its own machine, and successful “content creators” are essentially disciplined entrepreneurs running often rather sophisticated systems.

We should seek to build independent systems that are even more aggressively inhuman than big social network platforms — because they hack desire with even more precision — but they should output relationships and experiences that are far more authentically human than anything else currently available. And they should be able to do this at scale. More artificial intelligence, more automation, more precisely optimized processes, but engineered by individuals and small-groups against, rather than for, the pseudo-human web.


