RSS Feeds
Algorithms and prayers

The mild-mannered socialist humanist says it's evil to use algorithms to exploit humans for profit, but the articulation of this objection is itself an algorithm to exploit humans for profit. Self-awareness of this algorithm may vary, but cultivated ignorance of one's own optimizing functions does not make them any less algorithmic or exploitative. The opposite of algorithmic exploitation is not moralistic objection but probably prayer, which is only — despite popular impressions — attention, evacuated of instrumental intentions. One point of worshipping God is that, by investing one's desire into an abstraction of perfection, against which all existing things pale in comparison, one may live toward the good and still live as intensely as possible. Secular "good people" often make themselves good by eviscerating their desire, de-intensifying their vitality to ensure their mundane algorithmic optimizing never goes too far. But a life of weak sin is not the same as a good life. Prayer, the practice of de-instrumentalizing attention, does not feign superiority to the sinful, exploitative tendencies of man (as socialist humanism does). Prayer is code. Prayers have never hidden their nature as exploitative algorithms — "say these words and it will be Good" — but they exploit our drive to exploit, routing it into a pure and abstract circle, around a pure and abstract center. Secular solutions to the problem of evil typically involve lying about human behavior, whereas a holy life is the application of one's wicked intelligence to the production of the good and the true.

If education is signaling, does moral signaling become a viable major?

In a recent post, I discussed an interesting empirical fact about the college wage premium accruing to low-ability college grads over the period 1979-1994. Looking at a 2003 article by Tobias, I wrote: "There is a lot of temporal volatility for the class of low-ability individuals. In fact, for low-ability individuals there is not even a consistent wage premium enjoyed by the college-educated until 1990."

I have begun to wonder if this pattern has anything to do with the non-linear relationship between ability and earnings. If the low-ability college entrants feel they are much less certain to enjoy a wage premium over the "townie losers" they left behind, what better strategy than to invest their college-specific word games with extreme moral significance? That way, even the dumbest college grad can be confident that they will remain distinguished from the more able among the non-college-grads.

[Hat tip to a few high-quality comments on this blog recently. I don't recall exactly, but I think someone may have made a similar point; the seed of this post might have been planted there. Thank you.]

Although this last point is only conjecture, it is curious that the wage premium for low-ability college grads arrives right when the first wave of campus political correctness kicks off — the early 1990s. Especially if you buy Caplan's signaling theory of education, it's not at all implausible that low-ability college grads secure their wage premium primarily through a specialization in moral signaling.

The non-linear effect of ability on earnings in the computer age

A reader/watcher/listener has brought to my attention another paper, which shows that, for college-educated individuals, earnings are a non-linear function of cognitive ability or g — at least in the National Longitudinal Survey of Youth from 1979-1994. The paper is a 2003 article by Justin Tobias in the Oxford Bulletin of Economics and Statistics.

There may be other studies on this question, but a selling point of this article is that it uses the least restrictive assumptions possible, namely by allowing for non-linearities. In the social sciences, there is a huge bias toward finding linear effects, because most of the workhorse models everyone learns in grad school are linear models. Non-linear models are trickier to estimate and harder to interpret, so they are used much less, even in contexts where non-linearities are very plausible.
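To see the difference concretely, here is a minimal sketch on simulated data (my illustration, not Tobias's estimation): when the true log-wage function is convex in ability, a linear fit reports one averaged slope, while a quadratic fit recovers a marginal effect that rises with ability.

```python
# Toy illustration of linearity bias on simulated data. The true
# log-wage function is convex in ability g; a linear model hides this.
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(5_000)                     # standardized ability
log_wage = 2.5 + 0.10 * g + 0.05 * g**2 + rng.normal(0.0, 0.3, g.size)

slope, intercept = np.polyfit(g, log_wage, deg=1)  # linear fit
b2, b1, b0 = np.polyfit(g, log_wage, deg=2)        # quadratic fit

print(f"linear slope (one number for everyone): {slope:.3f}")
for ability in (-1.0, 0.0, 1.0):
    # Under the quadratic fit, the marginal effect is b1 + 2*b2*g.
    print(f"g = {ability:+.0f}: marginal effect = {b1 + 2*b2*ability:.3f}")
```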

A common motif in "accelerationist" social/political theories is the exponential curve. Many of us have priors suggesting that, at least for most of the non-trivial tendencies characterizing modern polities, there are likely to be non-linear processes at work. If the contemporary social scientist using workhorse regression models is biased toward finding linear effects, accelerationists tend to go looking for non-linear processes at the individual, group, nation, or global level. So for those of us who think the accelerationist frame is the one best fit to parsing the politics of modernity, studies allowing for non-linearity can be especially revealing.

The first main finding of Tobias is visually summarized in the figure below. Tobias has more complicated arguments about the relationship between ability, education, and earnings, but we'll ignore those here. Considering college-educated individuals only, the graph below plots on the y-axis the percentage change in wages associated with a one-standard-deviation increase in ability, across a range of abilities. Note that whereas many graphs show you how some change in X is associated with some change in Y, this plot is different: it shows the marginal effect of X on Y at different values of X.

Tobias 2003, p. 13.

The implication of the above graph is pretty clear. It just means that the earnings gain from any unit increase in g is greater at higher levels of g. An easy way to summarize this is to say that the effect of X on Y is exponential or multiplicative. Note also there's nothing obvious about this effect; contrast this graph to the diminishing marginal utility of money. Gaining $1000 when you're a millionaire has less of an effect on your happiness than if you're at the median wealth level. But when it comes to earnings, gaining a little bit of extra ability when you're already able is worth even more than if you were starting at a low level of ability.
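To put the contrast in symbols (my own illustration, assuming a simple quadratic specification rather than Tobias's actual model):

```latex
% Convex log wages: the marginal return to ability rises with ability.
\log w(g) = \beta_0 + \beta_1 g + \beta_2 g^2, \qquad \beta_2 > 0
\quad\Longrightarrow\quad
\frac{\partial \log w}{\partial g} = \beta_1 + 2\beta_2 g .
% Contrast the diminishing marginal utility of money:
% u(m) = \log m gives u'(m) = 1/m, which falls as m grows.
```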

The paper has a lot of nuances, which I'm blithely steamrolling. My last paragraph is only true for the college educated, and there are a few other interesting wrinkles. But this is a blog, and so I mostly collect what is of interest to me personally. Thus I'll skip to the end of the paper, where Tobias estimates separate models for each year. The graph below shows the size of the wage gap between the college-educated and the non-college-educated, for three different ability types, in each year. The solid line is one standard deviation above the mean ability, the solid line with dots is mean ability, and the dotted line is one standard deviation below the mean ability.

Tobias 2003, p. 23.

An obvious implication is that the wage gap increases over this period, more or less for each ability level. But what's interesting is that the slope looks a bit steeper, and is less volatile, for high-ability than for average and low-ability. There is a lot of temporal volatility for the class of low-ability individuals. In fact, for low-ability individuals there is not even a consistent wage premium enjoyed by the college-educated until 1990.

Anyway, file under runaway intelligence takeoff...

Genetic research disrupts racist views of welfare

Following on my post from yesterday, I've been thinking about how the widespread and often racist views of "welfare" in the United States — especially among poor whites — fester on top of the educated-progressive party line that heritable IQ differences are bunk.

An interesting wrinkle from the study I cited yesterday (Papageorge and Thom 2018) is that the genetics-earnings link is conditioned by family SES. In other words, children with strong genetic endowments for abstract intelligence will not reach their full earnings potential if they are hampered by a poor family environment.
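In regression terms, "conditioned by" is just an interaction effect. Here is a minimal sketch on simulated data (hypothetical variable names and coefficients; not Papageorge and Thom's actual model or estimates):

```python
# Gene-by-environment sketch: the return to a polygenic score (pgs)
# is allowed to depend on childhood family SES. All data simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
pgs = rng.standard_normal(n)          # polygenic score, standardized
ses = rng.standard_normal(n)          # family SES, standardized
log_earnings = (0.08 * pgs + 0.10 * ses
                + 0.04 * pgs * ses    # return to pgs rises with SES
                + rng.normal(0.0, 0.5, n))

X = sm.add_constant(np.column_stack([pgs, ses, pgs * ses]))
fit = sm.OLS(log_earnings, X).fit()
print(fit.params)  # a positive third slope = SES-conditioned returns
```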

This is consistent with the left-hereditarian position that the normalization and de-stigmatization of IQ differences and IQ testing would, on net, help poor and stereotyped minorities the most. There are highly gifted children in poor and/or minority communities who are not meeting their potential, and we should do everything we can to support them, including the use of IQ tests to fast-track their selection into new opportunities. One could also argue on this basis that redistributive support for such communities is more necessary and/or more "deserved." I'm not personally interested in gradations of desert as a framing for the ethical necessity of egalitarian arrangements, but others might be.

Some of the anti-welfare and anti-black political sentiment of whites is based on the belief that poor black communities should be written off as hopeless in general. This impression is at least partially due to the fact that a lot of government redistribution over the past few decades has been based on truly naïve and false blank-slate ideology, so people now infer that no amount of redistribution could possibly help poor black communities, if it hasn't yet. They come to think we should stop "throwing good money after bad," when they might well be open to throwing good, smarter money after all the bad, dumb money of past efforts. Understanding the reality of how genetic endowments affect economic outcomes, and how those endowments are distributed, promises more than one way to shake up the whole reactionary, conventional framing of welfare politics in general.

Study finds the relationship between genes and earnings increased after 1980

Someone sent me a recent NBER working paper by Nicholas W. Papageorge and Kevin Thom on polygenic scores and educational attainment/earnings. Most pertinent to my theoretical interests is that the link between genes and income appears to have strengthened over recent decades.

In my lectures on the politics of media (really about the politics of technology more generally), I dedicate a session to the topic of skill-biased technical change (SBTC). While the econometrics and specific interpretations are debated, there is a literature in economics suggesting that certain technological innovations (e.g., computing) increase the earnings of the highly skilled relative to the less skilled. I would sometimes wonder to what degree "skills," which sound like primarily acquired things, in fact reflect heritable traits. Or if one could separate these out...

Papageorge and Thom provide one of the first efforts to study this question explicitly. "This is the first study to estimate the returns to genetic factors associated with education using micro genetic data and disaggregated measures of earnings and job tasks across cohorts."

Here is their summary of the genetic effect, conditional on time period:

The returns to these genetic endowments appear to rise over time, coinciding with the rise in income inequality after 1980. Accounting for degree and years of schooling, a one standard deviation increase in the score is associated with a 4.5 percent increase in earnings after 1980. These results are consistent with recent literature on income inequality showing not only an increase in the college premium, but also a rise in the residual wage variance within educational groups (Lemieux, 2006). We also find a positive association between the score and the kinds of non-routine job tasks that benefited from computerization and the development of more advanced information technologies (Autor, Levy, and Murnane, 2003). This provides suggestive evidence that the endowments linked to more educational attainment may allow individuals to either better adapt to new technologies, or specialize in tasks that more strongly complement these new technologies.

Basically, they observe what you would expect to observe if the computerization that begins around 1980 allowed the escape and takeoff of "non-routine analytic" power or abstract intelligence by those most genetically blessed with it. Implicitly, individuals less genetically blessed with "non-routine analytic" powers begin to be left behind around 1980.

Their findings cannot explain the entire postwar dynamic of increasing inequality and relative stagnation of the lower classes, however, because the flatlining of median wages begins around 1973, if I recall correctly. The study seems somewhat coy about naming or even labeling the polygenic score, but my non-expert intuition is that it would have to be something quite akin to what is called the "g-factor" or general intelligence, right?

One limitation of the study is that it uses a dummy variable for the period after 1980. I would be curious to see what happens if one re-runs their models with a continuous variable for year. My intuition is that individual-level economic outcomes are more skill-biased/g-loaded today than in the 1980s, but I'm not yet aware of studies precise enough on that particular question.
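Concretely, the re-specification I have in mind would look something like this (simulated data and hypothetical variable names; the paper's actual specification is richer):

```python
# Compare a post-1980 dummy interaction with a smooth continuous-year
# interaction. All person-year data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20_000
year = rng.integers(1968, 1995, n)
pgs = rng.standard_normal(n)
log_earn = 0.002 * pgs * (year - 1968) + rng.normal(0.0, 0.5, n)
df = pd.DataFrame({"log_earn": log_earn, "pgs": pgs, "year": year})

df["post1980"] = (df["year"] >= 1980).astype(int)
df["year_c"] = df["year"] - 1980                  # centered at 1980

dummy_fit = smf.ols("log_earn ~ pgs * post1980", data=df).fit()
trend_fit = smf.ols("log_earn ~ pgs * year_c", data=df).fit()

print(dummy_fit.params["pgs:post1980"])  # one discrete jump at 1980
print(trend_fit.params["pgs:year_c"])    # a smooth per-year drift
```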

Now Wars Start Themselves

Major wars have become less frequent, but a curious feature of the wars we still observe is that almost nobody starts them. When wars occur today, they appear to start themselves, or are started by some unknown entity. I learned this from a new article by Hathaway et al. Here are selections from the abstract:

This Article is the first to examine “war manifestos,” documents that set out the legal reasons sovereigns provided for going to war from the late fifteenth through the mid-twentieth centuries. We have assembled the world’s largest collection of war manifestos—over 350—in languages as diverse as Classical Chinese, German, French, Latin, Serbo-Croatian, and Dutch...

Examining these previously ignored manifestos reveals that states exercised the right to wage war in ways that would be inconceivable today. In short, the right to intervene militarily could be asserted in any situation in which a legal right had been violated and all peaceful channels had been explored and exhausted. This Article begins by describing war manifestos. It then explores their history and evolution over the course of five centuries, explains the purposes they served for sovereigns, shows the many “just causes” they cited for war...

Hathaway, Oona A., William Holste, Scott J. Shapiro, Jacqueline Van De Velde, and Lisa Lachowicz, "War Manifestos" (September 15, 2017). University of Chicago Law Review, Vol. 85 (2018, forthcoming); Yale Law School, Public Law Research Paper No. 617. Available at SSRN: https://ssrn.com/abstract=3037538

Self-defense is the most popular justification for war throughout the period studied, but it's interesting that its prevalence steadily grew from the middle of the twentieth century. Most traditional justifications for warring have become obsolete. Religion was once a fairly common reason for going to war, but now explicitly religious wars among states are virtually extinct.


Don't be fooled into thinking that interstate aggression has been humanized. Quite the contrary: these data suggest that war is an increasingly algorithmic process, increasingly devoid of human agents. When every player in the game invokes "human rights" to blame it on some other guy, this is not evidence that human rights have been normalized; it is evidence that humanity has been evacuated from the underlying process, through the cold and calculated manipulation of human emotions for ulterior purposes.
