Rationality and Doubt
I've been reading Joseph Weizenbaum's Computer Power and Human Reason. It's a bit surprising I haven't read it before, given my whole deal, but I'm glad I'm finally getting to it. The book is primarily concerned with the ways that the rigid logic of computers reinforces the allure of behaviorism and physicalism (as well as making it easier for a certain type of compulsive person to come to conceive of themselves as godlike). The logic runs like this: at its lowest level, a Turing machine is a symbolic system that, given enough time and tape, can compute anything that is computable. From there it is a short, seductive leap for people who think in a computer's logic to believe that all problems are inherently solvable, that everything has a right or a wrong answer, and, given the immense self-assurance the prior two assumptions require, that they are the person who can solve them.
I have spent a lot of time thinking about the ways Programmer Thought has negatively affected society. I made the mistake of studying analytic philosophy rather than computer science, and thus came into the technology world convinced mostly that I knew nothing and that everything was underdetermined. Ironically, a decision I made in the hopes of coming to a better understanding of the universe led me to the irreversible certainty that I will never understand anything.
The computer scientist - indeed, most scientists who have not read Karl Popper - is prone to fall on the opposite end of the spectrum. These are people who think that because they can write a compiler they can solve the hard problem of consciousness. The philosophical umbrella under which programmers who fancy themselves philosopher-kings tend to gather is Rationalism: a group of mostly online, mostly Silicon Valley-based weirdos who attempt to use a poorly defined concept of "reason" to understand the universe. These are fundamentally unserious people. They make blogs with names like Qualia Computing, asserting that phenomena which are by definition undefinable are actually computable. They get scared of an imaginary future robot. And they fall pretty easily into weirder, more extreme beliefs, from the fascist and technocratic to the outright insane. The problem is that a lot of these inmates are running the asylum, especially in the AI field, which is full of rationalists who may well have joined out of fear of Roko's Basilisk.
Weizenbaum was writing in the '70s, well before the advent of LessWrong or the Zizians, but he saw the problem clearly even then. The fact of the matter is that, even though science and reason constantly operate under the drunkard's search principle, they instill in their practitioners a confidence in their models that is akin to religious faith. If a statistician's model fails, they are certain the prediction would have been accurate if they had simply had more data. The programmer is certain that if his program fails, it's human error, or it just needs more compute. The world, seen through this lens, is a Turing machine: a fundamentally predictable input-output system that can be solved given the right tools and resources.
The problem, of course, is that this is unprovable. I am not here to say that the scientific method isn't useful, or that the many manifestations of the physicalist philosophy I'm critiquing haven't contributed a ton to the world - they have! But confidence in the rationality of these heuristics can lead to horrific outcomes. Longtermism taken far enough can justify a genocide now for an Eden later. The largely unchallenged faith that if we can just get to AGI we'll be able to solve, say, poverty or climate change is being used to justify the ruinous effect of AI slop on the internet and on people's minds today. All of this sits under a veneer of science that, examined even a little closely, is functionally no different from a religious belief that A will lead to B.
A funny thing about computing is that in environments that are fundamentally deterministic, randomness is often desirable. A huge number of the problems computing tries to address are intractable in practice without introducing unpredictability - randomized algorithms, Monte Carlo methods, simulated annealing (see the sketch below). I think this is an informative little metaphor for how we ought to think when it comes to computing, reason, and how the world works. The rigid assumption that the world will behave like a computer rarely leads to good results. It's easy to fall into the trap of believing that if we can simply achieve A, B will follow - hell, it's the basis of formal logic systems - but these are just models, streetlights in the dark that could be blocks away from where our keys are.
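To make that concrete, here's a minimal sketch in Python (mine, not Weizenbaum's; the bumpy function, step sizes, and cooling schedule are all invented for illustration). A purely deterministic hill climber gets stuck in the nearest dip of a lumpy function, while simulated annealing, which deliberately accepts some bad moves at random, usually lands much closer to the true minimum. The randomness isn't noise to be engineered away; it's the thing doing the work.

```python
import math
import random

def bumpy(x):
    # Toy objective with many local minima; the global minimum sits near x = -0.5.
    return x * x + 10 * math.sin(3 * x)

def hill_climb(x, step=0.01, iters=10_000):
    # Fully deterministic: only ever accept moves that improve the score,
    # so the search can never climb back out of whatever dip it falls into.
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if bumpy(candidate) < bumpy(x):
                x = candidate
    return x

def anneal(x, iters=10_000):
    # Randomized: sometimes accept a *worse* move, with a probability that
    # shrinks as the "temperature" cools. The unpredictability is exactly
    # what lets the search escape local minima.
    for i in range(iters):
        temp = max(1e-3, 1.0 - i / iters)
        candidate = x + random.gauss(0, 0.5)
        delta = bumpy(candidate) - bumpy(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
    return x

random.seed(0)  # pinned only so the demo is reproducible
start = 4.0
deterministic = hill_climb(start)
randomized = anneal(start)
print(f"hill climbing: x = {deterministic:.3f}, f = {bumpy(deterministic):.3f}")
print(f"annealing:     x = {randomized:.3f}, f = {bumpy(randomized):.3f}")
```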
While my tendency to second-guess everything, including myself, is paralyzing at times, I also think it's the only way to live in the world. Doubt is the force that prevents us from turning other people into problems to be solved or bugs to be fixed. We know so little about how the self or the world or the universe actually works. All you can really be certain of is that you exist. A Turing-complete system is a beautiful one: given enough time, the same input will always produce the same output, and any result can be recreated. It is also, tragically, a model. No model can contain reality. Introducing reasonable doubt, expecting that the unpredictable will happen: this is how the world actually works. It's a lot messier than a Turing machine, but it's also what we actually have.
(Ed. note: I am taking enough time between posts now that I'm not sure I can call this a hundred days project in earnest, but I am still aiming to get to 100 blog posts this year. This is 77/100).