lesser daemon
27 Jan 2025

DeepSeek FAQ

Just came across this nice technical breakdown of what, exactly, R1 accomplished. News to me was that they had to bypass CUDA entirely and write their optimizations in PTX, NVIDIA's low-level instruction set. Good complementary reading if you're here from my DeepSeek post.
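For context, PTX is the virtual assembly that CUDA C++ normally compiles down to behind the scenes; "bypassing CUDA" means writing that layer by hand where the compiler's output isn't good enough. A minimal, hypothetical sketch of what dropping to PTX looks like in practice (this isn't from the breakdown, just the standard inline-asm mechanism):

```cuda
// Hypothetical illustration: CUDA lets you embed raw PTX instructions
// via inline asm, giving instruction-level control over a hot path.
// Here we emit a fused multiply-add (fma.rn.f32) directly instead of
// trusting the compiler to generate one.
__device__ float fma_ptx(float a, float b, float c) {
    float d;
    asm("fma.rn.f32 %0, %1, %2, %3;"   // d = a * b + c, round-to-nearest
        : "=f"(d)                      // output: float register
        : "f"(a), "f"(b), "f"(c));     // inputs: float registers
    return d;
}
```

Real-world uses go well beyond a single instruction; the point is that PTX exposes hardware behavior (register allocation, memory ordering, warp-level primitives) that the CUDA C++ layer abstracts away.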

Tags: link, deepseek, ai, llms

Published by:

Brent

You might also like...

22 May

So Far It's All Still AI

People online collectively lost their minds yesterday over this video a Reddit user generated with Veo 3, Google's new multimodal model that can generate video and audio from a prompt or a source image. Depending on who you talk to, this is the newest frontier in hellish AI slop and/or the future of video content writ large. I try to avoid goalpost-moving too much, so I'll be the first to admit this is wild. As a flashpoint moment it's comparable to the original Dall-E release. We're lightyears from Will Smith eating spaghetti: the

Read more
3 min read
08 May

The ChatGPT Cheating Crisis Was Inevitable

Once again I find myself writing about the latest AI outrage cycle, this one sparked by an NYMag article about the rampant abuse of ChatGPT in higher education. It should surprise no one that kids are using AI to cheat constantly and we are about to see a generation of functionally illiterate college graduates hit the workforce. Seems bad!

I feel obligated to point out that the article is more than a little sensationalist and leans on a not particularly rigorous study claiming that 90% of students are using ChatGPT. Its narrative also nicely cherry-picks a truly despicable guy

Read more
6 min read
25 Apr

Just Because It's Useful Doesn't Mean It's Good

Independent of yesterday's dive into a "useful" application of AI, discourse erupted on Bluesky over Hank Green saying he didn't think the "useless" critique holds water anymore. For the most part people are very angry at the perception of ceding any rhetorical ground to Big AI, which is a fair position to take. Green also put it in a fairly condescending "just asking questions" kind of way, which doesn't necessarily help his case. I don't love the phrasing but there is something to his point: people are finding utility in these tools whether we like it or not, and I

Read more
5 min read
24 Apr

Research Models and the Future of Search

In my self-appointed role as moral and practical judge of artificial intelligence and its uses I have rarely come across a product that I didn't end up finding boring or uninteresting after its novelty wore off. Most base or chatbot models are unreliable. Code models can be nice for tabbed autocomplete or boilerplate but I usually don't find them very helpful in the context of the large, complex, and often legacy codebases I work with professionally. I believe people who say they've found ways to make these useful in their workflows, but between the time it takes to establish a

Read more
4 min read
17 Apr

The Death of Learning

I've been thinking more about vibe coding lately and getting pissed about the whole concept. I think there's something deeply wrong with the mindset behind "prompt, don't read the code, don't think about it." I have written in the past about how artificial intelligence is a tool that accelerates the worst impulses of capitalism, and vibe coding is the latest iteration on what is possibly the one I hate the most: sacrificing understanding for efficiency.

Here's the thing. I like learning. I know it's very east coast liberal arts elitist of me but I think there is inherent value in

Read more
4 min read
lesser daemon © 2025