
17 Apr

The Death of Learning

I've been thinking more about vibe coding lately and getting pissed about the whole concept. I think there's something deeply wrong with the mindset behind "prompt, don't read the code, don't think about it." I have written in the past about how artificial intelligence is a tool that accelerates the worst impulses of capitalism, and vibe coding is the latest iteration on what is possibly the one I hate the most: sacrificing understanding for efficiency.

Here's the thing. I like learning. I know it's very east coast liberal arts elitist of me but I think there is inherent value in

03 Apr

Thou shalt not make a machine in the likeness of a human mind

Today I am questioning even my minimal attempts to give the benefit of the doubt to AI. A thing is ultimately what it does and dear god there is so little good coming out of this technology right now and so much bad.

As far as anyone can tell, the tariffs announced yesterday (which will, if upheld, most likely result in a recession at best) were literally just calculated based on asking a chatbot how to eliminate trade deficits. Gemini even prefaced its response with "this is a super bad idea, but if you want to, here's how you'd do

02 Apr

Vibe Coding Has Weak Aura

This was exactly two months ago!

Vibe coding is all the rage right now. Apparently Andrej Karpathy only coined this term back in February, which is wild to me because I feel like I've been seeing takes about it for the last five hundred years. Probably if you're reading this I don't need to define it for you, but for friends who are less stuck in the programmer suffering mines: vibe coding is when you write your code entirely with LLMs and it mostly works.

Plenty of people have pushed back on the concept of vibe coding because, fundamentally, it

27 Mar

Living in different worlds

If you're online at all you've probably seen the flood of Studio Ghibli-style images from ChatGPT's new image model update. It is technically impressive, ethically pretty gross, and most likely something that will fade in usage after this viral moment but continue to contribute to the critical mass of slop on the internet.

Beyond "this sucks", I don't know how to feel about it. This is a weird moment where a lot of what's being said seems to lack context on one side or the other. A few things are true at the same time:

1) Lots of people are

28 Jan

The Future Is Too Easy

It is rich cynics trying to make something lifeless grow in the way that living things do, and lock the dying present they rule in for the foreseeable future by effectively removing everyone from it but them. They are impatient not just because they are high-handed and avaricious, but because they know that the only future they can rule in the way they want is one that is passive, stupid, small and shrinking.

David Roth, excellent as always, on CES in Defector. I do think it's easy to get myopic about AI as someone who exists mostly in the spaces where it does have genuine utility (programming, games, and digital art), and where it is thus something to fear. This stuff is still mostly useless in practice! Roth also raises a point I think isn't made enough (because I have made it a bunch and no one listens to me): most of the run-of-the-mill uses of AI, as it currently exists, are for the various and sundry tedium of life in late capitalism, things we shouldn't have to do in the first place: filling out forms, writing cover letters, desperately trying to get someone to fix your medical bills.

A point Roth doesn't make, but one I think ties into the quote above, is that by dumping their own offal into the well of data they're drawing from, AI companies are effectively freezing usable data at around the year 2023. So much of the internet is already generated slop, and it's so difficult to determine which data is actually usable, that bot-free datasets can't stay up to date. The rich are freezing AI's knowledge of the world at the dying present, functionally preventing it from growing by the same means that created it. I gave a somewhat tongue-in-cheek talk about this last year, and while I'm not sure how well some of the points I made there will hold up, I do think this is something to watch. Synthetic data and curated datasets may be a way out of this hole, and maybe the slop will get good enough that models can train on their own output (model distillation is a big thing right now), but I can't help but question how far that can take us. How useful is an LLM that can't grow at the pace human culture does? What will come of models endlessly consuming their own output? Or will the proliferation of LLMs prevent us from growing at all?