Between Curiosity and Despair
A friend asked me recently what I'd do with my time if money and the endless obligations of attempting to maintain a middle-class lifestyle weren't a problem anymore. I'd had a few drinks and surprised myself by saying that I'd just like to spend time learning things. This is one of those good conversation starters that actually cuts to the core of somebody, and while at a surface level I'd say something like "make the world a better place" or "make great art" or whatever, on some instinctual level the thing I like most in the world is learning. I am really, really curious about the world, and the millions of different ways people live in it, and the little things I'm not even aware of that people center their lives around. This is why, as a rule, I tend to be happiest when I'm in an educational environment. Most of the best times in my life were either in college or grad school. Not having a full-time job and being surrounded by people with similar interests who want to make friends probably helped there, too, but I feel fulfilled on a personal level when I'm in school in a way I never really have anywhere else.
The problem with that curiosity, and again, why I tend to do best in a school environment, is that it's hard to stay passionate about something when you see what it actually does in the world, at least if your curiosity leans in a technical direction like mine. I've always been curious about machine learning, but over the course of my work with it, I went from excited to pessimistic about its prospects: I still think it's interesting, but for the most part it's being used to fill the internet with slop, make shitty products people don't need, or for nefarious military-industrial purposes. I couldn't even really maintain my enthusiasm for being critical of it: over the course of working with a lot of artists and organizations attempting to be critical of machine learning, or to develop "ethical" frameworks, I became pretty skeptical of the whole endeavor. A lot of people are AI critics solely because it gets them speaking fees, not because they actually understand the problems or even have any curiosity about it. A lot of attempts at doing things ethically are, at best, largely focusing on a small subset of the larger problem or, at worst, cynically providing cover for bad actors.
I feel much the same about the broader art world. After working in it for years, I went from feeling like art was something meant to change minds and give life value to seeing it as either an investment vehicle (in the private sector) or a combination of tax writeoff and image rehab (in the public sector). I was faced with the harsh reality that most artists who don't have a trust fund have to beg, borrow, scrape, and steal if they want to do non-commercial work, and most artists who do commercial work, well, aren't very good. I couldn't really maintain my passion in the face of that reality. Being an artist professionally sucks, and takes some type of psychotic drive or willingness to sacrifice that I don't have. Working in the art world outside of that, as I did while running programs at an arts nonprofit, tends to mean having to fight for every inch of freedom and help you want to give artists. I went to the new media art world as a way to get out of startup work, and found the experience, for the most part, to be as bad or worse.
Beautiful things tend to stop being beautiful once they run into the messy realities of capital. And people who embrace their curiosity without recognizing how that curiosity will be twisted once market forces have their way with it tend to end up creating monsters. I think a lot about Joseph Redmon, the creator of You Only Look Once (a groundbreaking one-shot object detection algorithm). He could have gotten a pretty cushy ML research job, but chose to quit research entirely because his work would always be used by the military. I think about the quieter architects of web 2.0 like Pierre Omidyar, who became a billionaire by making a thing that would go on to help ruin the world, but has since dedicated his time to funding progressive media and political causes. I think about Robert Oppenheimer, who spent his later life arguing against diplomacy by coercion after inventing the greatest tool of coercion mankind has ever known. I think about all the AI researchers making absurd amounts of money at OpenAI and Anthropic, who are probably wealthy enough to be insulated from the consequences of their work. Do they feel bad about what their work is doing to the internet and to people's brains?
Curiosity without critical thought for its consequences is no different in practice from acting maliciously. No matter how little they intended it, Redmon and Oppenheimer and so many other researchers made things in their pursuit of knowledge that were used to harm people on a massive scale. But they recognized the harm their work had done and either withdrew or worked to fix it. Will today's AI researchers do the same?
A lot of these people are adjacent to (or fully bought into) the longtermist community, which argues that if you make enough money doing something evil now, you can use it to make the world that much better of a place later. That seems like a stupid gamble to me! If Oppenheimer had spent his entire life investing in cancer research, would that make up for making the bomb that could kill the world?
I guess the title of this post gets at the problem for me. When I think about what effect the things I make could have, I often wind up despairing at the very idea of making them. It troubles me that so many people can still be curious and goofy in the tech world without giving thought to the broader circumstances around this: decent people who mean well are still working at Meta. People like Neal Agarwal are still making goofy websites. But even these whimsical little web projects, in the absence of critical thought, often serve as unwitting advertisements for AWS or Google or all the other companies killing the internet and gnawing on the bones of American democracy.
I don't know what the right answer is. I don't think I should let this sense of despair stop me from making things entirely. I think curiosity is a good thing, in general. I am still, in my own little way, trying to be curious when and where I can, and to share what I learn with the world through writing and making games and open-sourcing my work. But I can't really stomach the idea of being curious about technical or academic topics without being clear-eyed about what they actually do in the world. People are working on the next hydrogen bomb because they think there are interesting math problems involved. How are they not despairing?