100days

29 Jan

Programmatically Creating Styled Cells In Jupyter Notebooks

I've been playing around with the idea of using Jupyter notebooks as a narrative game interface for a while, and finally got around to doing some prototyping this week. One thing I really wanted to do was require users to run code cells in Jupyter to unlock story beats: Jupyter notebooks are usually presented as a step-by-step tutorial, which is perfectly fine if you're trying to teach someone how to run a linear regression in sklearn, but less so if you're trying to build suspense over the course of a story.

It's pretty easy to get output from just running

Read more
3 min read
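The excerpt above is truncated, but the core idea of unlocking a styled story beat when a reader runs a code cell can be sketched with IPython's display machinery. This is a minimal illustration, not code from the post; the `render_beat_html` and `unlock_beat` helpers, and the inline CSS, are my own assumptions.

```python
# Sketch: rendering a styled "story beat" from a Jupyter code cell.
# The helper names and styling below are illustrative, not from the post.

def render_beat_html(text: str, color: str = "#2b2b2b") -> str:
    """Build an HTML fragment with inline styling for a story beat."""
    return (
        f'<div style="padding:1em;border-left:4px solid {color};'
        f'background:#f5f0e6;font-family:serif;">{text}</div>'
    )

def unlock_beat(text: str) -> None:
    """Display the styled beat when run inside a Jupyter notebook."""
    try:
        from IPython.display import HTML, display  # present in notebook kernels
        display(HTML(render_beat_html(text)))
    except ImportError:
        print(text)  # plain-text fallback outside Jupyter

unlock_beat("The door creaks open...")
```

Because the styling lives in a returned HTML string rather than in the display call itself, the same helper can be reused for different beat types by swapping the accent color.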
28 Jan

The Future Is Too Easy

It is rich cynics trying to make something lifeless grow in the way that living things do, and lock the dying present they rule in for the foreseeable future by effectively removing everyone from it but them. They are impatient not just because they are high-handed and avaricious, but because they know that the only future they can rule in the way they want is one that is passive, stupid, small and shrinking.

David Roth, excellent as always, on CES in Defector. I do think it's easy to get myopic about AI as someone who exists mostly in the spaces where it does have genuine utility (programming, games, and digital art), and where it is therefore something to fear. This stuff is still mostly useless in practice! Roth also raises a point I think isn't made enough (because I have made it a bunch and no one listens to me). Most of the run-of-the-mill uses of AI, as it currently exists, are for the various and sundry tedium of life in late capitalism, things we shouldn't have to do in the first place: filling out forms, writing cover letters, desperately trying to get someone to fix your medical bills.

A point Roth doesn't make, but one I think ties into the quote above, is that by putting their own offal into the well of data they're drawing from, AI companies are effectively freezing usable data at around the year 2023. So much of the internet is already generated slop, and it's so difficult to actually determine which data is usable, that bot-free datasets can't stay up to date. The rich are freezing AI's knowledge of the world at the dying present, functionally preventing it from growing by the same means that created it. I gave a somewhat tongue-in-cheek talk about this last year, and while I'm not sure how well some of the points I made there will hold up, I do think this is something to watch. Synthetic data and curated datasets may be a way out of this hole, and maybe the slop will get good enough that models can train on their own output (model distillation is a big thing right now), but I can't help but question how far that can take us. How useful is an LLM that can't grow at the pace human culture does? What will come of models endlessly consuming their own output? Or will the proliferation of LLMs prevent us from growing at all?

27 Jan

The Great Decentralization Debate: Bluesky vs. Mastodon

I've been meaning to write a bit about this for a while, but finally getting into Bluesky last week has given me the push I needed to get to it. Back in November, Christine Lemmer-Webber, one of the original authors of the ActivityPub protocol behind software like Mastodon, wrote an excellently researched piece called How Decentralized Is Bluesky Really? A lot of the technical ins and outs of the essay go a little over my head, but I think it's a worthwhile read if you, like me, are enough of a nerd to want to understand what's happening technically under the hood of

Read more
4 min read
26 Jan

DeepSeek-R1 Initial Notes

While I've been traveling, little-known Chinese research lab DeepSeek released an open-source model that can compete with the best closed-source products OpenAI and Anthropic have to offer. Everyone appears to be freaking out about it.

The reasons for this are its astonishingly low training cost relative to its performance (it reportedly cost $5 million to train), the fact that it's open-source, and that even if you choose to pay for its API, it's about 90% cheaper than its American competitors. The model is also available to try for free immediately. This is all possible because of how much cheaper the

Read more
5 min read
25 Jan

What Comes After Post-Modernism?

I saw a talk at MAGfest today by Rym Decoster and Emily Compton which attempted to sketch a theory of current post-post-modern cultural output. It's very much a work in progress, but their concept of "metacontextualism" is the idea that after postmodern deconstruction and irony, we are starting to see a trend emerge towards building meaning through choice out of its ashes: being aware of the fractured contexts of a postmodern world, aware that there's no single way of knowing or being, but choosing a forking path and going down it. They argue that art is how a society engages

Read more
2 min read