Well hello there.

A blue heron on a grassy slope, water and a covered pier in the background.

AI companies can’t endlessly subsidize their AI products by charging users less than it costs to actually run them.

The AI Compute Crunch Is Here (and It

We have built a working simulacrum of knowledge work.

The incentives almost guarantee we are in big trouble. Many workers, quite rationally, want to do well on whatever dimension they are being measured on. If they are judged by the surface-level quality of their work, then it’s no surprise most of “their” output will be written by LLMs.

Simulacrum of Knowledge Work | One Happy Fellow - blog

Ask anyone who has ever put a slide deck in front of me: I judge those proxy measures. To me, they have always been a combination of two questions: how done do you think this is, and how much do you care about the audience you are putting it in front of?

If an institution — or an industry — is declining, adding AI won’t magically make it better. In the cases that these Cornell researchers highlight in this piece, there were only meaningful improvements when the underlying systems were working well and the human infrastructure around the software was well-developed.

AI is not a magic wand and it won’t fix your problems

Looking forward to reading the study referenced in this article.

The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.

The AI industry is discovering that the public hates it | Hacker News

We call this taxes.

These costs could be ignored, or even accepted, if there was a clear idea of how precisely AI would streamline and improve the workplace—or offer any tangible public benefit significant enough to make these underlying trade-offs acceptable. But the answers to these questions remain extremely tenuous. According to a February 2026 paper by the National Bureau of Economic Research, 80 percent of companies that have begun actively using AI have reported no impact on company productivity. A separate, widely cited 2025 MIT study revealed that 95 percent of corporate AI pilot programs received zero return.

The AI Industry Is Discovering That the Public Hates It | The New Republic

I’m not sure the calculations are the same for nonprofits because they have never been fully staffed. How to get the benefits while mitigating the costs is still a very big question.

Even companies with the biggest IT budgets will need to prove returns on AI spending over time, especially if they’re answering to shareholders on quarterly earnings calls.

AI can cost more than human workers now

Nonprofits have an opportunity to get the headcount they have NEVER been able to afford, but only if we have a new delivery model to scale adoption and use.

If the value in AI is shifting from models, which are commoditising rapidly, to the agentic infrastructure that connects models to real-world actions, then the open-source project that defines that infrastructure layer is the most important thing in venture capital that cannot be invested in.

Sequoia distributes 200 engraved Mac Minis at AI event as OpenClaw becomes the infrastructure layer VCs cannot own

What is the new moat? Organized context and the skills of real people.

These failures are the result of leaders who skipped the boring, hard, necessary work of bringing their organizations along on AI initiatives.

Why can’t newsroom leaders just be normal about AI? - Poynter

Lessons to be learned for nonprofits as well.

I believe these are used by people who prefer intentionally limited visual choices, for low-key diagramming to put in source code, and – increasingly – as an entry point to gen AI.

“Plain text has been around for decades and it’s here to stay.” – Unsung

More and more I’ve been using the terminal and markdown files to interact with my notes and GenAI. It’s changing my actual workflow for the better.

Compared with earlier models, you can often use shorter, more outcome-oriented prompts: describe what good looks like, what constraints matter, what evidence is available, and what the final answer should contain.

Prompt guidance | OpenAI API

Still need to figure out where the judgement on the path forward comes from.

It’s unusual, to be clear, for federal authorities to take this kind of action when federal funding is not involved, and the SPLC does not accept government grants. That’s because the attorney general for the relevant state normally handles litigation against charities suspected of wrongdoing.

And it’s atypical for federal or state authorities to step in on behalf of a nonprofit’s donors without citing any complaints from specific donors.

Trump administration’s indictment of the Southern Poverty Law Center breaks with norms

Good explanation of what is different about this case.

There is no sustained follow-through. No durable narrative. No accumulation of scrutiny that forces consequence. The story appears, registers, and dissolves—replaced by the next spectacle, the next outrage, the next carefully timed distraction.

The Quiet That Erases | dangerousmeta!

In a note this week, Moody’s Ratings flagged the disruptive effects from a shortage of helium, critical for chip production.

Supply chains don’t always work

So on the one hand, Anthropic itself is the one describing Mythos as a dangerous national security threat. On the other hand, their own security is so sloppy that rando hooligans on Discord have had access to Mythos since the day it was announced, and regularly access other unreleased Claude models.

Daring Fireball: Unauthorized Users in Discord Group Had Weekslong Access to Anthropic’s Supposedly-Super-Dangerous Claude Mythos Model

Security indeed.

The right historical parallel is the assembly line. It didn’t replace workers. It restructured what they did, removing the overhead of moving parts between stations so each person could focus on the work only they could do. What disappeared was the wasted motion between the valuable work.

AI isn’t the product, context is | HackerNoon

I have to think about whether I agree with this statement. However, much of the article – especially making the context explicit – covers a lot of what I’ve been trying to think through.

I also want to delegate work and get different model perspectives on the same draft, spec, or implementation, instead of relying on a single vendor’s subagents.

A lightweight way to make agents talk without paying for API usage

This is interesting. I often use the same kind of process when writing. I will work on a draft with one agent and then ask another to review it. I wonder if I could copy this so I could more easily do that process from the command line.

Leafy wooded scene in California’s Muir Woods.

In Muir Woods.

I think the second organizational casualty is “the system”. When speed is the priority, there’s no incentive to improve or invest in the shared system (e.g. a design system or codebase) under a tight deadline.

When moving fast, talking is the first thing to break - daverupert.com

As a manager of teams of teams, the balance between speed and synthesis is something I struggle with all the time. How do you get people to drive forward together? What is the OKR, metric, or magic word that makes this happen?

Finished reading: Things Become Other Things by Craig Mod 📚 I was expecting a lot from this book. And it was so much more.