Over the past week, I’ve conducted more than ten private Skype interviews with a diverse group of internet intellectuals, “content creators” of the higher-brow variety, cancellation-vulnerable professionals, and lesser-known upstarts aspiring to be one of these…
The reason I organized these Skype calls is that I spent the past month studying all the best practices that have emerged from lean/agile tech-startup culture. After nearly exhausting all the relevant Y Combinator and Indie Hackers content, it became apparent that one of the most important ways to succeed in building something effective and financially sustainable on the internet is to talk with the people for whom you plan to build it.
In my case, all I know is that I seem to find myself at the center of something quite new (conducting a financially sustainable academic career purely on the internet), and a decent number of people are now contacting me for various forms of advice. This seems to suggest I am in a position to create something of value for people, but I’ve never seen myself as an entrepreneur or “founder” and I’ve never really had any visionary business ideas.
But I really need to start making money lol. I’m now fully 6 months out from exiting academia and, while Patreon and freelancing odd-jobs are currently enough to pay the bills, it would be nice to put some caviar on my nachos.
So I figured I would learn everything about how and why startups succeed/fail, and then transfer that knowledge to the “content creator” game.
I still don’t know what, exactly, I’m going to build. So I’m just doing the one thing that everyone in-the-know says you should absolutely do first: I’m having one-on-one conversations with people in my orbit about their “pain points” (I know you like that business-speak baby). I’m trying to figure out the problems encountered by other internet-based intellectuals, cancelled or cancellable academics, and higher-brow “content creators,” and then I’ll try to solve them with something that people want to pay for.
I have no idea if this will work. After talking with people, I honestly now feel like I’m starting to see a vision of something that could really work, but entrepreneurs are notorious for their irrational over-confidence. Discounting for that, I feel utterly clueless about whether I’m really onto something.
So I’m just going to keep moving forward, in very small steps, trying to converge on an objectively data-driven idea. I’ll keep you posted, of course.
One positive result that’s already emerged from this exploration is that I’ve come upon a possible catch-phrase to summarize this weird, pregnant-but-not-yet-born niche I’ve been theorizing. It’s simple, natural, short, and unpretentious. It is at least 10x better than all the awkward and cringey phrases I’ve been using until now, for lack of any better options. Instead of repeatedly saying things like “internet intellectuals, content creators of the higher-brow variety, and cancellation-vulnerable professionals,” from here on out I’m just going to refer to us all as indie thinkers.
By the way, if you feel this describes you, I’m still conducting interviews. Contact me if you’d like to set up a short Skype call. I'll just ask you a few questions about your problems. Who doesn't want to vent about their problems?
When I first laid out my idea for a neo-feudal techno-communist patch, I only waved my hand at the coming technological pathways to my proposed polity. In that first talk, I just hypothesized that Rousseau's concept of the General Will could be engineered by Internet of Things + Smart Contracts.
But "Internet of Things" is really just a popular shorthand for the deepening integration of our physical and digital worlds. So it's easy to point at such a general class of coming technologies and say "something here is certainly going to solve [insert hitherto unsolvable problem]." One could very well have questioned my original talk on the grounds that what I was describing is not really feasible, or will not be feasible anytime soon.
The technology necessary to make communism game-theoretically stable seems closer than I thought.
One pathway on the sensor front is radar. Google has produced a new sensing device called Soli, which uses miniature radar to measure "touchless gestures." It's basically a tiny chip that holds a sensor as well as an antenna array, in one 8mm x 10mm rectangle:
Though Google's intended applications revolve around hand gestures, some people are already finding more general applications. (A flashy new prototype from a megacorp is one thing; but when some other entity starts tinkering with interesting results, that makes me pay more attention.)
A team of academics at the University of St. Andrews recently used Soli to explore the...
counting, ordering, identification of objects and tracking the orientation, movement and distance of these objects. We detail the design space and practical use-cases for such interaction which allows us to identify a series of design patterns, beyond static interaction, which are continuous and dynamic. With a focus on planar objects, we report on a series of studies which demonstrate the suitability of this approach. This exploration is grounded in both a characterization of the radar sensing and our rigorous experiments which show that such sensing is accurate with minimal training.
Take a minute to watch it in action, before we embark on a little thought experiment.
It's easy to imagine — without much extrapolation — how one could use this technology to enforce collective honesty and ethical performance optimization. Consider a large multi-family compound. One individual in one of the families is, by far, the most productive chopper of firewood. But he's a little dumb, and earns little money on the market. Then some other individual is by far the most productive software developer; he makes a lot of money on the market but he sucks at chopping firewood. Of course, rich software developers can already pay dumb manual laborers to produce their firewood, but currently no smart and rich person can enjoy the much more valuable and scarce luxury good of living in genuine harmony with a manual laborer.
So our wood-chopping expert hooks up some Soli chips to the pile of chopped wood he maintains for the community. Whenever a piece of wood is removed, he gets a ping on his phone, or maybe a digest at the end of each week. It tells him how many pieces of wood were taken, their weight, which person took them, and how many tokens were transferred to him by the associated Firewood Smart Subcontract (subcontracts are like clauses appended to the Smart Contract established at the founding of the polity; they can be added and removed by consensus, typically as new people enter or leave the group, or when individuals' skills, traits, or needs change substantially). The richer the person taking firewood, the more they pay per piece of wood via the Smart Contract, according to a steeply progressive taxation rate agreed upon and programmed into law beforehand.
On the other hand, if Mr. Bunyan is not keeping the stock replenished, which leads to some individuals suffering very cold evenings, a certain number of tokens are transferred from him to whoever suffered a cold evening. This transfer can be automatically triggered whenever the data show the wood stock to be beneath some threshold, and the temperature data from a particular house to be beneath some threshold, on the same day. And again, these thresholds can be agreed consensually.
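To make the mechanics concrete, here is a minimal sketch of how such a Firewood Subcontract might be encoded, in plain Python rather than an actual smart-contract language. Every name, tax bracket, and threshold below is an invented illustration of the two clauses described above (progressive pricing on withdrawal, and the cold-evening penalty), not a real protocol:

```python
# Hypothetical sketch of the Firewood Subcontract described above.
# All names, tax brackets, and thresholds are illustrative assumptions.

def price_per_piece(taker_wealth, base_price=1.0):
    """Steeply progressive pricing: the richer the taker,
    the more tokens transferred per piece of wood."""
    if taker_wealth < 1_000:
        return base_price
    elif taker_wealth < 10_000:
        return base_price * 3
    else:
        return base_price * 10

class FirewoodSubcontract:
    def __init__(self, chopper, stock_threshold=20, temp_threshold=15.0,
                 cold_evening_penalty=5.0):
        self.chopper = chopper                  # account of Mr. Bunyan
        self.stock = 0                          # pieces currently in the pile
        self.stock_threshold = stock_threshold  # pieces
        self.temp_threshold = temp_threshold    # degrees C
        self.cold_evening_penalty = cold_evening_penalty
        self.balances = {}                      # token balances by member

    def restock(self, pieces):
        self.stock += pieces

    def take_wood(self, taker, pieces, taker_wealth):
        """Triggered by the radar sensor when wood leaves the pile."""
        self.stock -= pieces
        cost = pieces * price_per_piece(taker_wealth)
        self.balances[taker] = self.balances.get(taker, 0) - cost
        self.balances[self.chopper] = self.balances.get(self.chopper, 0) + cost
        return cost

    def nightly_check(self, house_temps):
        """If the pile is low AND a house was cold the same day,
        the chopper compensates whoever suffered."""
        payouts = {}
        if self.stock < self.stock_threshold:
            for member, temp in house_temps.items():
                if temp < self.temp_threshold:
                    self.balances[member] = (
                        self.balances.get(member, 0) + self.cold_evening_penalty)
                    self.balances[self.chopper] = (
                        self.balances.get(self.chopper, 0) - self.cold_evening_penalty)
                    payouts[member] = self.cold_evening_penalty
        return payouts
```

The point is not the particular numbers but that both clauses reduce to sensor events triggering deterministic transfers: no meeting required, only a periodic consensus update if someone wants different thresholds.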
Aside: It might seem that this technocommunism sure does require a lot of group decisions — won't it fail like Occupy failed, because democracy is too much work?! Not quite. First, other than the basic preference thresholds defined in the contracts, there is no discussion or deliberation whatsoever. The code is sovereign, and removes the need for regular meetings and debates. My references to consensus only refer to periodic updates. Second, you know what requires a million decisions? The construction of a modern website. And yet it's easier than ever to make one, even with a group. Why? Because code evolves. With code, future people let the smartest and most successful past people make decisions for them. Over time, the larger global community of neo-feudal techno-communist polity hackers will converge on templates: kits containing a variety of sensor devices with a corresponding code repository, containing all the device+subcontract components found in almost all of the most successful previous patches to date. Groups will add new modules if they enjoy hacking, but many will just use the default settings. Or upon initiation, each person completes a short survey gauging basic traits and aptitudes, which plugs into the template optimal values for the various preference thresholds.
Depending on the use case, perhaps a video rig combined with image detection algorithms would work better than radar. Perhaps multiple, redundant methods leveraging different dimensions (video, radar, sound, etc.) might be used at once, in especially tricky and sensitive cases. Perhaps it turns out that 67% of the most destructive community offenses occur in kitchens, so the kitchen is loaded with every method and a heavyweight ensemble model. With some problems our tolerance for false positives might be greater or less than our tolerance for false negatives, so the statistical cutoff for inferring a violation would be set higher or lower accordingly.
Meanwhile, as the wood-chopper's system manages itself, the rich computer programmer might leave a huge stock of old-fashioned USD greenbacks out in the open, available to all for immediate, interest-free, cash loans. Why? Because the risk approaches zero: as the video above shows, all removals and returns are fully identified and recorded with radar, and if anyone fails to repay, the owner of the cash stock will be automatically credited from the taker's account after some agreed time (if the taker doesn't have it, a small portion will be taken from all of the others, all of whom have agreed to guarantee each other).
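The settlement clause for that cash stock is simple enough to sketch too. Assuming (purely for illustration) that an unpaid shortfall is split equally among the co-guarantors:

```python
# Hypothetical sketch of the cash-stock loan clause: after the agreed
# deadline, the taker repays what they can, and any shortfall is split
# equally among the co-guarantors. Equal splitting is an assumption.

def settle_loan(amount, taker_balance, guarantors):
    """Return (paid_by_taker, per_guarantor_share), in tokens."""
    paid = min(amount, max(taker_balance, 0.0))
    shortfall = amount - paid
    share = shortfall / len(guarantors) if guarantors else 0.0
    return paid, share
```

The lender never has to chase anyone; the contract either debits the taker in full or spreads the small remainder across the group that agreed to mutual guarantee.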
The only question right now is, what are currently the best technologies available for getting started? That and, who's game?
A reader/watcher/listener has brought to my attention another paper, which shows that, for college-educated individuals, earnings are a non-linear function of cognitive ability or g — at least in the National Longitudinal Survey of Youth from 1979-1994. The paper is a 2003 article by Justin Tobias in the Oxford Bulletin of Economics and Statistics.
There may be other studies on this question, but a selling point of this article is that it tries to use the least restrictive assumptions possible, namely by allowing for non-linearities. In the social sciences, there is a huge bias toward finding linear effects, because most of the workhorse models everyone learns in grad school are linear models. Non-linear models are trickier and harder to interpret and so they're just used much less, even in contexts where non-linearities are very plausible.
A common motif in "accelerationist" social/political theories is the exponential curve. Many of us have priors suggesting that, at least for most of the non-trivial tendencies characterizing modern polities, there are likely to be non-linear processes at work. If the contemporary social scientist using workhorse regression models is biased toward finding linear effects, accelerationists tend to go looking for non-linear processes at the individual, group, nation, or global level. So for those of us who think the accelerationist frame is the one best fit to parsing the politics of modernity, studies allowing for non-linearity can be especially revealing.
The first main finding of Tobias is visually summarized in the figure below. Tobias has more complicated arguments about the relationship between ability, education, and earnings, but we'll ignore those here. Considering college-educated individuals only, the graph below plots on the y-axis the percentage change in wages associated with a one-standard-deviation increase in ability, across a range of abilities. Note that whereas many graphs will show you how some change in X is associated with some change in Y, this plot is different: It shows the marginal effect of X on Y, but for different values of X.
The implication of the above graph is pretty clear. It just means that the earnings gain from any unit increase in g is greater at higher levels of g. An easy way to summarize this is to say that the effect of X on Y is exponential or multiplicative. Note also that there's nothing obvious about this effect; contrast this graph with the diminishing marginal utility of money. Gaining $1000 when you're a millionaire has less of an effect on your happiness than if you're at the median wealth level. But when it comes to earnings, gaining a little bit of extra ability when you're already able is worth even more than if you were starting at a low level of ability.
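The distinction between plotting Y against X and plotting the marginal effect of X on Y is easy to illustrate with a toy model. Using made-up coefficients (not Tobias's estimates): if log wages are quadratic in standardized ability g with a positive squared term, then the percentage wage gain from a one-standard-deviation increase itself grows with g, which is exactly the upward-sloping pattern in the figure.

```python
import math

# Toy illustration with invented coefficients (NOT Tobias's estimates):
# log wages quadratic in standardized ability g, positive squared term.
B1, B2 = 0.10, 0.04

def log_wage(g):
    # Intercept omitted; it cancels when we take differences below.
    return B1 * g + B2 * g ** 2

def pct_gain_one_sd(g):
    """Percentage wage change from a one-SD increase in ability, starting at g."""
    return (math.exp(log_wage(g + 1) - log_wage(g)) - 1) * 100
```

Evaluating `pct_gain_one_sd` at g = -1, 0, and 1 gives a strictly increasing series of percentage gains: the same one-SD step is worth more the higher up the ability distribution you start.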
The paper has a lot of nuances, which I'm blithely steamrolling. My last paragraph is only true for the college educated, and there are a few other interesting wrinkles. But this is a blog, and so I mostly collect what is of interest to me personally. Thus I'll skip to the end of the paper, where Tobias estimates separate models for each year. The graph below shows the size of the wage gap between the college-educated and the non-college-educated, for three different ability types, in each year. The solid line is one standard deviation above the mean ability, the solid line with dots is mean ability, and the dotted line is one standard deviation below the mean ability.
An obvious implication is that the wage gap increases over this period, more or less for each ability level. But what's interesting is that the slope looks a bit steeper, and is less volatile, for high-ability than for average and low-ability. There is a lot of temporal volatility for the class of low-ability individuals. In fact, for low-ability individuals there is not even a consistent wage premium enjoyed by the college-educated until 1990.
Anyway, file under runaway intelligence takeoff...
Have you ever wondered how and why The Life of Samuel Johnson is so damn long (and influential), even though he's just rambling like a livestreamer on adderall? Some thoughts on the current frontiers of idea production.
Organic conversation is one of the most effective ways to generate thoughts; and passive audio recording of organic conversation is one of the most effective ways to convert thoughts into an external output. But audio is highly sub-optimal for searching, arranging, or creatively aggregating recorded fragments into higher-level projects. This is one of the major bottlenecks that has, so far, prevented the explosive production efficiency of podcasting/livestreaming technology from flowering into a proportionately explosive renaissance of independent book publishing in the more sophisticated intellectual domains.
Whoever can solve this bottleneck, or I should say, whoever is at the front of iteratively solving this bottleneck right now, may very well enjoy a unique and substantial intellectual-political edge, perhaps not unlike that enjoyed by Luther. Or so it seems to me, at least — which is why I've been investing some time into testing the current frontiers of speech-to-text technologies (here, and here).
I recently used YouTube's editing tool to split off a 5-minute clip from a recent conversation I had with Michael James, just because it felt like a not-so-bad draft of something I've been thinking about recently but never yet even tried to jot down. Then, for the trivially low cost of $0.10, I had Temi transcribe it. It took me about 10 minutes to edit it, and post it as a blog post. For now, it's not yet worthwhile to transcribe every audio/video conversation I conduct, but as the transcription gets ever more accurate, it will be trivially easy and cheap to make perfect full-text versions of any recording. Put them in a folder, tag the sections, identify higher-order patterns, cut the chaff; repeat until something substantial emerges, concentrate where necessary, extend where necessary, and extraordinary things might be produced more efficiently than ever.
The implications are potentially profound for intellectuals and creative folks. It's also a potential opportunity for internet upstarts to achieve a substantial edge over legacy establishment intellectuals. Those people will be very late to this game. Three crazy people and a little bit of adderall could easily produce a damn interesting book in one weekend, plus a few days of editing and arranging, or just pay a freelance editor on fiverr or upwork...
Thus, here is my proposed answer to the original question, namely, how and why is The Life of Samuel Johnson so damn long (and influential)? It's because Samuel Johnson just rambled for days on end, used James Boswell as a transcription AI, and self-published an 18-volume monstrosity on Amazon.com. Although it was probably only read by .001% of the people who claimed to read it, everyone nonetheless was forced to concede, "Wow, this guy must be freakin' legit!"
Utilitarianism, taken as a world-historical arrival at the level of civilization itself, might very well have a net-negative utility. It is perfectly plausible that an overly refined awareness of, and sensitivity to, utility would have unintended consequences tending toward catastrophically negative outcomes. Deontological ethical systems have often issued from this intuition, I think. Below are the moving parts to this argument.
Our awareness of all the suffering that might be alleviated has recently exploded due to the information revolution. We still have no idea what this will do to human beings in the long run, but it sure seems plausible that it increases the prevalence of guilt feelings and anxiety about a nearly infinite number of global problems. Incentives exist to highlight and report these negative stimuli, and incentives exist to publicly feel bad about them; it strikes me as perfectly reasonable to imagine that globalized modern civilization is already headlong into an irrecoverable spiral of collective depressive delusion. While a utilitarian spirit is obviously not the only or even main driver of this dynamic, it is a necessary condition for it. The counterfactual — large numbers of individuals switching to a deontological worldview in which their only felt sense of obligation is to a small number of categorical local rules — would almost certainly increase global utility to an extraordinary degree. Unless you think the utilitarian reflections of the average person cause them to non-trivially improve the world. With respect to the overwhelming majority of people, this strikes me as unlikely.
Our intuitions and institutions get updated slowly. We suddenly understand many sources of suffering way better than ever, but nobody can change all of our institutions to solve these problems at anywhere near the rate our conscience would need to be at peace. Therefore, the contemporary global village is plagued with a necessary temporal gap between the suffering that exists, and our ability to reduce it. If you consider the fact that our sense of ethically problematic suffering increases rather than decreases with progress, it is possible that this temporal gap increases even as we make technical progress closing it. Again, the more effectively utilitarian we are, the more felt suffering might increase, precisely as we objectively decrease suffering and increase net-utility with respect to most direct measures.
How great is the suffering caused by this gap? That’s anybody’s guess, but it does not seem implausible that it is greater than the entire history of utility gained by human activity heretofore.
If you doubt that the suffering caused by hyper-awareness of suffering could be so large, here are some reasons why you might not want to dismiss this idea. Human experience is recursive, so it seems to me that this makes it potentially exponential, non-linear. If you’re depressed, you feel guilty for being depressed, then you feel stupid for feeling guilty for feeling depressed, all of which makes you more depressed, and so on. Human experience can rapidly approach infinities of low and high. I see no reason why human suffering could not potentially skyrocket toward infinity given media-driven negative information glut, instant interconnectivity at large scales, and economic incentives to spread and express sad affects, not to mention cognitive bugs such as negativity bias. None of this dismisses the wealth of data marshaled by people such as Steven Pinker, showing that in so many ways, markers of human suffering are decreasing. Felt perceptions might be wildly miscalibrated with objective data about world trends, and still veer off in an explosive detachment from reality.
Additionally, even if you don’t think that’s possible, a small number of highly suffering people can still wreak untold havoc on society at large. Trends such as anti-natalism and anti-civilizational thought more generally, often promoted by sad people who want to wind down life itself, are to some degree children of utilitarian progress. They look at the costs and benefits of humanity thus far and (however miscalibrated) they decide none of it is really worth it, and they speak and act accordingly. This is perhaps because, in the long run, there are no costs and benefits, and thus the validity of deontology gets revealed with particular clarity in end times. Regardless, if anti-life intellectual currents were to produce future policy changes, or some new, crazier version of these thought-patterns were to take hold in the form of the next big moral panic, which in turn leads to centralized policies with negative systemic effects, some portion of these consequences would have to be counted as causal effects of a short-circuiting utilitarianism. You can say that such ridiculous deductions from the utilitarian starting point are unfounded and you might be right; but you still have to chalk up such consequences as effects of the utilitarian memeplex’s diffusion into the postmodern polity.
The utilitarian ethical defaults of modern western individuals are in meltdown from overheated inputs they do not have the capacity to process. Cooling innovation always follows hot invention, but we live in a unique historical period where the time lag between new inventions is less than the time lag between one invention and the secondary technologies that make it work over time. Fires are no longer put out, but displaced by new fires, which burn only long enough to sustain a feeling of continuity before the next fire arrives. Calculating net effects seems reasonable when it is possible to imagine a shared world; as human worlds divide, collapse, and revivify differentially, efforts to calculate overall effects on a shared world will be increasingly painful. Deontological ethics receives its final vindication on consequentialist grounds.
…modern political history has a characteristic shape, which combines a duration of escalating ‘progress’ with a terminal, quasi-punctual interruption, or catastrophe – a restoration or ‘reboot’. Like mould in a Petri dish, progressive polities ‘develop’ explosively until all available resources have been consumed, but unlike slime colonies they exhibit a dynamism that is further exaggerated (from the exponential to the hyperbolic) by the fact that resource depletion accelerates the development trend.
Economic decay erodes productive potential and increases dependency, binding populations ever more desperately to the promise of political remedy. The progressive slope steepens towards the precipice of supreme radicality, or total absorption into the state…