All I have to say is this post warmed my heart. I'm sure people here associate him with Go and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and with the amazing contributions he has made to computing overall.
To associate him purely with Google is a mistake, one that (ironically?) the AI actually didn't make.
The most surprising part of uv's success to me isn't Rust at all, it's how much speed we "unlocked" just by finally treating Python packaging as a well-specified systems problem instead of a pile of historical accidents. If uv had been written in Go or even highly optimized CPython, but with the same design decisions (PEP 517/518/621/658 focus, HTTP range tricks, aggressive wheel-first strategy, ignoring obviously defensive upper bounds, etc.), I strongly suspect we'd be debating a 1.3× vs 1.5× speedup instead of a 10× headline — but the conversation here keeps collapsing back to "Rust rewrite good/bad." That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
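To make the "HTTP range tricks" concrete: a resolver can read a wheel's METADATA (and thus its dependencies) without downloading the whole archive, because a zip's central directory sits at the end of the file. Here's a minimal Python sketch of the idea; uv's actual Rust implementation is far more careful, and the helper names here (`RangeReader`, `wheel_metadata`) are mine, not uv's. It assumes the file server answers HEAD with a Content-Length and honors Range requests, as PyPI's CDN does.

```python
import io
import urllib.request
import zipfile


class RangeReader(io.RawIOBase):
    """Seekable file-like object that fetches bytes over HTTP on demand,
    so zipfile only pulls the central directory and the members it opens."""

    def __init__(self, url):
        self.url = url
        self.pos = 0
        # One HEAD request to learn the total size; zipfile seeks relative
        # to the end of the file to locate the central directory.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            self.size = int(resp.headers["Content-Length"])

    def seekable(self):
        return True

    def readable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        elif whence == io.SEEK_END:
            self.pos = self.size + offset
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        if n == 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + n, self.size) - 1
        req = urllib.request.Request(
            self.url, headers={"Range": f"bytes={self.pos}-{end}"}
        )
        with urllib.request.urlopen(req) as resp:  # expects 206 Partial Content
            data = resp.read()
        self.pos += len(data)
        return data


def wheel_metadata(wheel_url):
    """Return the METADATA text of a remote wheel, transferring only the
    zip tail and the one compressed member instead of the whole wheel."""
    with zipfile.ZipFile(RangeReader(wheel_url)) as zf:
        name = next(n for n in zf.namelist() if n.endswith(".dist-info/METADATA"))
        return zf.read(name).decode("utf-8")
```

Point `wheel_metadata` at any wheel URL on an index that supports range requests and you get the Requires-Dist lines for pennies of bandwidth. And with PEP 658, indexes can serve the METADATA file directly, making even this trick unnecessary — which is exactly the kind of standards work that mattered more than the implementation language.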
My dad was a busy construction contractor. One summer he tore himself away from work and took the family on a week-long boat campout next to a big, beautiful lake. It turned out that our campsite was actually in the lake by a few inches at high water, but Dad saw a way to dam it off and keep it dry, so he grabbed the shovel and started digging trenches, building walls, and ordering us around.
About an hour into that, pouring sweat, he stopped cold and said, "What the hell am I doing?" The flooded camp was actually nice on a hot day, and all we really had to do was move a couple of tents. He dropped the shovel and spent the rest of the week sunbathing, fishing, snorkeling, and water skiing as God intended. He flipped a switch and went from Hyde to Jekyll on vacation. I've had to emulate that a few times.
I think this post does a really good job of covering how multi-pronged performance is: it certainly doesn't hurt uv to be written in Rust, but it benefits immensely from a decade of thoughtful standardization efforts in Python that lifted the ecosystem away from needing `setup.py` on the hot path for most packages.
Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I assume his point is that the cost-benefit ratio of how LLMs are often used is out of whack.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
I don’t really understand the hate he gets over this. If you want to thank someone for their contribution, do it yourself. Sending a thank-you from an ML model is anything but respectful. I can only imagine that if I got a message like that, I’d be furious too.
This reminds me of a story from my mom’s work years ago: the company she was working for announced salary increases to each worker individually. Some, like my mom, got a little more, but some got a monthly increase of around 2 PLN (about $0.50). At that point it feels like a slap in the face. A thank-you from an AI gives off the same vibe.
Sending an automated thank you note also shows disdain for the recipient's time due to the asymmetry of the interaction. The sender clearly sees the thank you note sending as a task not worthy of their time and thus hands it off to a machine, but expects the recipient to read it themselves. This inherently ranks the importance of their respective time and effort.
This seems like a tragedy of the commons -- GitHub is free after all, and it has all of these great properties, so why not? -- but this kind of decision making occurs whenever externalities are present.
My favorite hill to die on (externality) is user time. Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery, not user interaction time. Yet if I spend one hour making my app one second faster for my million users, I save about 278 user-hours per year. But since user hours are an externality, such optimization never gets done.
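The arithmetic, as a back-of-the-envelope sketch (assuming each user hits the one-second saving once a year; a daily-use app would multiply this accordingly):

```python
users = 1_000_000
seconds_saved_per_user_per_year = 1  # assumption: the saving is hit once a year

# One million seconds, converted to hours.
hours_saved = users * seconds_saved_per_user_per_year / 3600
print(f"{hours_saved:.0f} user-hours per year")  # -> 278 user-hours per year
```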
Externalities lead to users downloading extra gigabytes of data (wasted time) and waiting for software, all of which is waste that the developer isn't responsible for and doesn't care about.
I'm a T1 diabetic, have worked on open source diabetes-tech (OpenAPS), and have used a number of different CGMs (though not this one specifically). This story... does not make very much sense.
CGMs (of any brand) are not, and have never been, reliable in the way this story implies people want them to be. The physical biology of CGMs makes that sort of reliability infeasible. Where T1s are concerned, patient education has always included the need to check with fingerstick readings sometimes, and to be aware of mismatches between sensor readings and how you're feeling. If a brand of CGM has an issue that sometimes causes false low readings, then fixing it (if it's fixable) is great, but that sort of thing was very much expected, and it doesn't seem reasonable to blame it for deaths. Moreover, there are two directions in which readings can be inaccurate (false low, false high) with very asymmetric risk profiles, and the report says the errors were in the less dangerous direction.
The FDA announcement doesn't say much about what the actual issue was, but given that it was linked to particular production batches, my bet is that it was a chemistry QC fail in one of the reagents used in the sensor wire. That's not something FOSS would be able to solve because it's not a software thing at all.
To be clear, this email isn't from Anthropic, it's from "AI Village" [0], which seems to be a bunch of agents run by a 501(c)3 called Sage that are apparently allowed to run amok and send random emails.
At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
So, about one mushroom species in five is poisonous. Why is the ratio so low, why are there lots of edible ones? Without hard-shelled seeds to spread, why be eaten? And the poisonous ones apparently don't use color as a warning signal, and don't smell all that bad, and some of the poisons have really mild effects, like "gives only some people diarrhea" or "makes a hangover worse". Meanwhile three of the deadliest species seemed to need their toxin (amanitin) so much that they picked it up through horizontal gene transfer. Why did just those ones need to be deadly? In addition to which we have these species that don't even make you sick, just make you trip out, a function which looks to have evolved three times over in different ways. What kind of half-assed evolutionary strategies are these? What do mushrooms want?
Somebody has to be the brave experimenter that tries the new thing. I'm just glad it was these folk. Since they make no tangible product and contribute nothing to society, they were perhaps the optimal choice to undergo these first catastrophic failed attempts at AI business.
>"For myself, the big fraud is getting public to believe that Intellectual Property was a moral principle and not just effective BS to justify corporate rent seeking."
If anything, I'm glad people are finally starting to wake up to this fact.
Not as hugely generous as this story, but throughout his whole career as a college professor, starting in the '70s, my father always took care that none of his students spent any major holiday alone and away from home, so we always ended up having 2 or 3 of them around for Christmas, the New Year, Easter...
They were from everywhere around the country and the world, and it was so very enriching for me and my siblings. I had a huge postage-stamp collection from the ever-increasing well-wishing mail that arrived.
It's also kind of comforting to think that anywhere in the world, you are not that far from someone who remembers you fondly.
The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)3.
No one gets to disclaim ownership of sending an email. A human had to accept the email gateway's Terms of Service, and a human's credit card paid for the gateway. This performance art does not remove the human, no matter how much they want to be removed.
> I then decided to contact Insulet to get the kernel source code for it, being GPLv2 licensed, they're obligated to provide it.
This is technically not true. It is an oversimplification of the common case; what normally should happen is this:
1. The GPL requires the company to send the user a written offer of source code.
2. The user uses this offer to request the source code from the company.
3. If the user does not receive the source code, the user can sue the company for not honoring its promise, i.e. the offer of source code. This is not a GPL violation; it is a straight contract violation, the contract in this case being the explicit offer of source code, not the GPL.
Note that all this is completely off the rails if the user does not receive a written offer of source code in the first place. In this case, the user has no right to source code, since the user did not receive an offer for source code.
However, the copyright holders can immediately sue the company for violating the GPL, since the company did not send a written offer of source code to the user. It does not matter if the company does or does not send the source code to the user; the fact that the company did not send a written offer to the user in the first place is by itself a GPL violation.
The set of toys I spent the most time playing with was a big bag of wooden blocks my grandfather gave me when I was very small. They are well designed, with a good selection of different shapes, e.g. it has cylinders and arches and thin planks as well as cuboids. They got a lot of use because they're so flexible in combining with other toys, e.g. you can build roads and garages for toy cars, or obstacle courses for rolling marbles. The edges and corners are rounded and the wood tough enough that clean-up was just dropping them back into the bag.
I've since given them to a nephew and I'm happy to see he gets just as much entertainment out of them as I did. Plain wooden blocks can represent almost anything. There are no batteries or moving parts to fail. Mine got a little surface wear, but they still work just as well as they did when they were new, and small children don't care about perfect appearance. I wouldn't be surprised if they end up getting passed down to another generation and continue to provide the same entertainment. I highly recommend this kind of simple toy for young children.
Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.
Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.
The AI boom has completely changed that. Data center power usage is now rocketing upwards; by some estimates it will exceed 10% of all US electricity usage by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
The authors report that restoring NAD+ balance in the brain -- using a compound called P7C3-A20 -- completely reversed Alzheimer's pathology and recovered cognitive function in two different transgenic mouse models (one amyloid-based, one tau-based). The mice had advanced disease before treatment began.
- There's room for skepticism. As Derek Lowe once wrote: "Alzheimer's therapies have, for the most part, been a cliff over which people push bales of money. There are plenty of good reasons for this: we don't really know what the cause of Alzheimer's is, when you get down to it, and we're the only animal that we know of that gets it. Mouse models of the disease would be extremely useful – you wouldn't even have to know what the problem was to do some sort of phenotypic screen – but the transgenic mice used for these experiments clearly don't recapitulate the human disease. The hope for the last 25 years or so has been that they'd be close enough to get somewhere, but look where we are."
- If the drug's mechanism of action has been correctly assigned, it's very plausible that simply supplementing with NMN, NR, or NADH would work equally well. The authors caution against this on, IMO, extremely shaky and unjustified grounds. "Pieper emphasized that current over-the-counter NAD+-precursors have been shown in animal models to raise cellular NAD+ to dangerously high levels that promote cancer."
I wish they'd let me recover my original account -- I lost my TOTP generator, and the codes I'd written down in a paper notebook were rejected. I even hunted down the electronic copy in case there was a transcription error -- it seemed like some failure in their systems was causing me to lose access despite having followed proper procedures.
Lost a decade and a half of correspondence dating back to my teenage years. I had imported the phone number I'd had since I was 16 into Voice, and it doubled as my Signal number. I even had a G Suite subscription so I could use their (admittedly decent) UI to power my firstname @ lastname dot com email address.
I will never use their services again; I was really disgusted by this failure.
- I read the entire “Frog & Toad” collection. Probably about 30 times, some stories more.
- “Little Shrew’s Day”… probably 25 times.
- Many of the “Construction Site” series books, especially the OG “Goodnight, Goodnight, Construction Site”. The “Garbage Crew” and “Airport” books featured heavily.
- Started to mix in some “Pete the Cat” titles.
- “Detective Dog Nell” got a lot of airplay.
Lots of others, but those are definitely the frequent fliers.
It is nice to hear someone so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
> To associate him purely with Google is a mistake, one that (ironically?) the AI actually didn't make.

Just the haters here.