What is an "image of thought" for Deleuze?

From Lecture #3 in my video course for Based Deleuze:

What's really at stake here, I think, is the attack on representational thought... That's one of the core components of the Deleuzian project. Deleuze argued that any philosophy presents an image of thought, and that this image of thought is never really explicit. It's never demonstrated or proven; it's a kind of presupposition. Whenever a philosopher, or any type of thinker or theologian, presents a philosophy, there is in the background a certain image of what thought is, what thought should be, and what thought can be, and that's never fully spelled out. It's never really justified.


It's essentially a kind of aesthetic. And there are different images of thought. This is something that Deleuze really wants to show us: that we have a choice, an essential, irreducible kind of freedom or aesthetic decision to make about what type of thought we want to engage in.

In retrospect, "choice" is not the best word, because Deleuze wants to steer us away from any naive conception of free will. One is almost tempted to use an ugly deconstructionist term here, such as undecidability. The key point is that an 'image of thought' is extra-rational. It's never justified or formalized rationally, although it's implied in modes of justification or formalization. We might not "choose" our image of thought, exactly, although there is a kind of pre-rational selection process that sorts creators and their creations. Perhaps we could say that our 'image of thought' chooses us...

We Are All Conspiracy Theorists Now

The collapse of trust in mainstream authorities is discussed as if it is only one of many troubling data points. It's not. People are still underestimating the gravity of the interlocking trends that get summarized in this way.

For instance, when trust in mainstream authorities is sufficiently low, one implication is that conspiracy theories become true, even if you personally trust the mainstream authorities, even if you're a rational Bayesian, even if you're the type of person who is resolved to be above conspiracy theories.

Let's say you're an optimally rational person, with the utmost respect for science and logic and empirical reality. An optimally rational person has certain beliefs, and they are committed to updating those beliefs upon receiving new information, according to Bayes' Rule. In layman's terms, Bayes' Rule explains how to integrate new information with one's prior beliefs in the way that is best calibrated to reality. You don't need to understand the math to follow along.
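For those who do want to see the machinery, here is a minimal sketch in Python. The numbers are invented purely for illustration; nothing here is drawn from the documentary or from any real tally of evidence.

```python
# Bayes' Rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * (1 - P(H)).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_evidence

# Invented example: a hypothesis I give one-in-a-million credence,
# and evidence only twice as likely if the hypothesis is true (2% vs 1%).
posterior = bayes_update(prior=1e-6, likelihood_if_true=0.02, likelihood_if_false=0.01)
print(posterior)  # roughly 2e-6: the credence about doubles, but stays tiny
```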

How does a Bayesian update their beliefs after hearing a new conspiracy theory? Perhaps you wish to answer this question in your head right now.

For my part, I just watched the Netflix documentary about Flat Earth theorists the other night. I spent the next day puzzling over what exactly is the rational response to a film like that. The film certainly didn't convince me that the Earth is flat, but can I really say in all honesty that the documentary conveyed to me absolutely no new information corroborating a Flat Earth model of the world?

One could say that. Perhaps you want to say that the rational response to conspiracy theory documentaries is to not update your beliefs whatsoever: "The whole documentary is clearly bunk, so I should assign zero credence to the thesis that the Earth is flat." This would be a little strange, in my view, because how many people understand astronomy deeply enough, with first-hand familiarity, to possess this kind of prior confidence? Ultimately most of us, even highly smart and educated non-astronomers, have to admit that our beliefs about the celestial zones are generally borrowed from other people and textbooks we've never quite adversarially validated. If I'm confronted with a few hundred new people insisting otherwise, I surely don't have to trust them, but giving them a credence of absolute zero seems strange given that my belief in the round Earth pretty much comes from a bunch of other people telling me the Earth is round.

Personally I become even more suspicious of assigning zero credence because, introspectively, I sense that the part of me that wants to declare zero credence for Flat Earth theory is the part of me that wants to signal my education, to signal my scientific bona fides, to be liked by prestigious social scientists, etc. But I digress. Let's grant that you can assign Flat Earth zero credence if you want.

If you assign Flat Earth a zero likelihood of being correct, then how do you explain the emergence of a large and thriving Flat Earth community? Whether you say they're innocent, mistaken people who happen to have converged on a false theory, or you say they are evil liars trying to manipulate the public for dishonorable motives — whatever you say — your position will ultimately reduce to seeing at least the leaders as an organized cabal of individuals consciously peddling false narratives for some benefit to themselves. Even if you think they all started out innocently mistaken, once they fail to quit their propaganda campaigns after hearing all the rational refutations, then the persistence of Flat Earth theory cannot avoid taking the shape of a conspiracy to undermine the truth. So even if you assign zero credence to the Flat Earth conspiracy theory, the very persistence of Flat Earth theory (and other conspiracy theories) will force you to adopt conspiracy theories about all these sinister groups. Indeed, you see this already toward entities such as Alex Jones, Cambridge Analytica, Putin/Russia, etc.: Intelligent and educated people who loathe the proliferation of conspiracy theories irresistibly agree, in their panic, to blame any readily available scapegoat actor(s), via the same socio-psychological processes that generate all the classic conspiracy theories.

If I'm being honest, my sense is that after watching a feature-length documentary about a fairly large number of not-stupid people arguing strongly in favor of an idea I am only just hearing about — I feel like I have to update my beliefs at least slightly in favor of the new model. I mean, all the information presented in that 2-hour long experience? All these new people I learned about? All the new arguments from Flat Earthers I never even heard of before then? At least until I review and evaluate those new arguments, they must marginally move my needle — even if it's only 1 out of a million notches on my belief scale.

In part, this is a paradoxical result of Flat Earth possessing about zero credence in my mind to begin with. When a theory starts with such low probability, almost any new corroborating information should bump up its credence somewhat.
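In odds form the arithmetic is easy to see: the posterior odds are just the prior odds multiplied by the likelihood ratio, so any evidence even marginally more likely under the theory than against it raises the credence, no matter how microscopic the starting point.

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
\]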

So that was my subjective intuition, to update my belief one tiny notch in favor of the Flat Earth model — I would have an impressively unpopular opinion to signal my eccentric independence at some cocktail party, but I could relax in my continued trust of NASA…

Then it occurred to me that if this documentary forces me to update my belief even slightly in favor of Flat Earth, then a sequel documentary would force me to increase my credence further, and then… What if the Flat Earthers start generating deep fakes, such that there are soon hundreds of perfectly lifelike scientists on YouTube reporting results from new astronomical studies corroborating Flat Earth theory? What if the Flat Earthers get their hands on the next iteration of GPT-2 and every day brings new scientific publications corroborating Flat Earth theory? I've never read a scientific publication in astronomy; am I suddenly going to start, in order to separate the fake ones from the reliable ones? Impossible, especially if one generalizes this to all the other trendy conspiracy theories as well.

If you watch a conspiracy documentary and update your beliefs even one iota in favor of the conspiracy theory, then it seems that before the 21st century is over your belief in at least one conspiracy theory will have to reach full confidence. The only way you can forestall that fate is to draw an arbitrary line at some point in this process, but this line will be extra-rational by definition.
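To make the compounding concrete, here is a second sketch, again with entirely invented numbers: start at a one-in-a-million credence and suppose the machinery described above produces a hundred new corroborating artifacts a year, each only 1% more likely under Flat Earth than under the mainstream view.

```python
# Repeated tiny updates on a near-zero prior (all numbers invented for illustration).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    p_evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_evidence

credence = 1e-6  # one-in-a-million prior
for year in range(2020, 2100):
    for _ in range(100):  # deep fakes, generated papers, documentary sequels...
        credence = bayes_update(credence, 0.0101, 0.01)
    if credence > 0.99:
        print(year, credence)  # crosses 99% around the late 2030s with these numbers
        break
```

The point is not the specific date, which depends entirely on the made-up parameters, but that any steady stream of weakly corroborating evidence eventually drives the posterior arbitrarily close to certainty unless a line is drawn somewhere.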

Leading conspiracy theorists today could very well represent individuals who subjectively locate themselves in this historical experience — they see that this developing problem is already locked in, so they say let's get to the front of this train now! One could even say that Flat Earth theorists are in the avant-garde of hyper-rationalist culture entrepreneurs. Respectable scientists who go on stages insisting, with moral fervor, that NASA is credible — are these not the pious purveyors of received authority, who choose to wring their hands morally instead of updating their cultural activity in a way that's optimized to play and survive the horrifying empirical process unfolding before them? Perhaps Flat Earth theorists are the truly hard-nosed rationalists, the ones who see which way the wind is really blowing, and who update not only their beliefs but their entire menu of strategic options accordingly.

It's no use to say that you will draw your line now, in order to avoid capture by some hyper-evolved conspiracy theory in the future. If you do this, you are instituting an extra-rational prohibition on new information — effectively plugging your ears, surely a crime against rationalism. Even worse, you would be joining a cabal of elites consciously peddling false narratives to control the minds of the masses.

Algorithms and prayers

The mild-mannered socialist humanist says it's evil to use algorithms to exploit humans for profit, but the articulation of this objection is an algorithm to exploit humans for profit. Self-awareness of this algorithm may vary, but cultivated ignorance of one's own optimizing functions does not make them any less algorithmic or exploitative. The opposite of algorithmic exploitation is not moralistic objection, but probably prayer, which is only — despite popular impressions — attention, evacuated of instrumental intentions. One point of worshipping God is that, by investing one's desire into an abstraction of perfection, against which all existing things pale in comparison, one may live toward the good and still live as intensely as possible. Secular "good people" often make themselves good by eviscerating their desire, de-intensifying their vitality to ensure their mundane algorithmic optimizing never goes too far. But a life of weak sin is not the same as a good life. Prayer, the practice of de-instrumentalizing attention, does not feign superiority to the sinful, exploitative tendencies of man (like socialist humanism). Prayer is code. Prayers have never hidden their nature as exploitative algorithms — "say these words and it will be Good" — but they exploit our drive to exploit, routing it into a pure and abstract circle, around a pure and abstract center. Secular solutions to the problem of evil typically involve lying about human behavior, whereas a holy life is the application of one's wicked intelligence to the production of the good and the true.

Semantic Apocalypse and Life After Humanism with R. Scott Bakker

I talked to fantasy author, philosopher, and blogger R. Scott Bakker about his views on the nature of cognition, meaning, intentionality, academia, and fiction/fantasy writing. See Bakker's blog, Three Pound Brain.

Listeners who enjoy this podcast might check out Bakker's What is the Semantic Apocalypse? and Enlightenment How? Omens of the Semantic Apocalypse.

This conversation was first recorded as a livestream on YouTube. Subscribe to my channel with one click, then click the bell to receive notifications when future livestreams begin.

Big thanks to all the patrons who keep this running.

Download this episode.

Against the Epistemic Status

I've been considering the idea of assigning an "epistemic status" to each of my blog posts, in the fashion of Scott Alexander. Basically: adding an addendum at the top of each blog post indicating the degree to which I really believe what is said in the blog post. Perhaps I no longer believe what I wrote a year ago — in that case, I might add an epistemic status warning readers that I no longer believe it. That's the idea.

I've decided I'm against epistemic statuses. TLDR: I think at best they are useless, merely deferring the problem they seek to address; and at worst, I think they could very well decrease the total, long-run truth-value obtained within a writing/reading community.

The epistemic status gives a false sense of rigor and humility. One reason is that there's no epistemic status for the epistemic status. An ES is not a confidence interval derived by some transparent calculation procedure; it is probably more subjective and error-prone than the full blog post. One reason I never post an ES — even when I've had the urge to, especially after weaker posts — is that I always feel so radically unsure of my post-writing impressions that, for an ES to actually increase the transparency/reliability of the post, I'd have to say I'm also utterly unsure of the ES, and so on in an infinite regress. Thus, tacking an ES onto the top of an article feels to me primarily like rational self-skepticism/humility-signaling, which doesn't in any way solve the problem. Also, from the reader's perspective, the epistemic status begs the question of how reliable any blog post is, because the reader still has to decide whether they trust the epistemic status. For new visitors, the epistemic status therefore solves no problem; it merely adds text while bumping the trust/credibility problem up a level.

The practice of adding post-hoc epistemic statuses lends to the entire blog an impression of always being epistemically up to date, but I don't feel I will ever have the time or conscientiousness to really keep all the posts' epistemic statuses up to date with my current judgment. Therefore if I simply overlook some old posts I don't really care about anymore, and readers see there is no epistemic status downgrading them, they might reasonably infer I still fully own those beliefs.

For return visitors and regular readers of a blog, the ES is essentially an appeal to one's own authority, a cashing-in on past trust and cultural capital earned by the author's substantive content.

Ultimately, every claim I make, or inference I imply, nested in every article I write, nested in every collection of articles, has to be given some level of credence by each individual reader. Whether one line is a joke or not, whether one claim is likely to be true or mistaken — these are judgments every reader must make for themselves, based on whatever information they have about my claims, the project I'm embarked on, and my reliability as a source. Assigning an ES to each unit I publish would lull the reader's vigilance into an unjustifiably comfortable slumber. It might make them feel like I can take care of their meta-rationality for them, when in fact it's an irreducible existential burden for all thinking adults. I don't want my readers to feel like they are cast adrift in the wilderness, but alas they are. So I don't really want to make them feel otherwise.

I think the normal presumptions about the nature of blogging are meta-rationally superior to epistemic statuses. It's just a blog: take everything with a huge grain of salt, but if something is really well demonstrated and supported then believe it, as you see fit. If you see a post from three years ago, of course the author has probably changed their views to some degree. The best response to this is to read more contemporary posts, to judge for yourself what this author really thinks on the whole. If a reader doesn't care to do this, no epistemic status is going to ensure their initial exposure is lodged into their long-term memory correctly. Such a person will either never remember the blog post or, if they are so unwise as to memorize and repeat to their friends something I reported in one blog post three years ago, I suspect they would bulldoze right over even the most cautious epistemic status warnings.

Better is to just put super-wide confidence intervals on everything one writes. Some things I say will be dumb, biased, and/or mistaken. But some things I write will — hopefully — get closer to way bigger truths than I can even appreciate! If you assign epistemic statuses to your blog posts, you really should also say when and where you think you are super correct. Most sane people will not want to place at the top of a blog post "Epistemic status: I feel a 5% chance that the claims below could change the course of world history." But any serious and passionate intellectual gets some taste of this genuine feeling every now and then! Thus, if this epistemic status business does not include such self-aggrandizing caveats, that too might be systematically biasing. I'd rather just give one big caveat about my whole body of writing, that it is merely the inspired guesswork of one person trying their best to be correct. Implicitly, some stuff will be more wrong than it might seem, and some stuff will be even more right than it seems. The only commitment one needs to make is to do one's best, in a way that updates moving forward, rather than attempting to move backward with post-hoc re-evaluations.

I admit that some of my intuition on this question is due to my temperament: I like to work fast, always move forward, never look back. I can do the disciplined work of editing, but I'm not exceptionally high in Orderliness; I run mostly on the dopaminergic movements of exploration, inspiration, and creation, adding just enough conscientiousness to complete things responsibly. As far as bloggers and "content creators" go, I'm high-variance: I put out a lot of high-quality stuff that I take very seriously, but I also put out a lot of random stuff sometimes bordering on bad comedy. So part of what I wrote above is just rationalizing all of this. But this is also my personal alternative to the epistemic status: self-conscious reflections woven immanently into any given unit of production.

A conversation with Joshua Strawn

We talked about Joshua's music career, theory and psychoanalysis, Josh's time at the New School, Christopher Hitchens, neoreaction and patchwork, internet culture, instrumental rationality, and other things.

With Zohra Atash, Josh is in the band Azar Swan. azarswan.com

This podcast and my blog have now been unified! The website is now theotherlifenow.com.

Huge thanks to my supporters at patreon.com/jmrphy

Subscribe wherever you get your podcasts, or download this episode.


