Combining Canonical’s Inference Snaps with Willison’s LLM tool

(Post theme: I Want You to Know Me by Emerson)

Running LLMs locally has a somewhat deserved reputation for being finicky. Download a model, pick a runtime, figure out GPU offloading, wrestle with dependencies, expose an API, and then you can start actually using the thing. Canonical’s inference snaps and Simon Willison’s llm tool each solve half of that problem — and it turns out they snap together (sorry, not sorry) almost effortlessly.

The Two Pieces

Canonical Inference Snaps

Inference snaps are snap packages that bundle a model with an optimised inference runtime. Install one and you get a running local LLM with zero configuration:

sudo snap install gemma3

That’s it. The snap detects your hardware (NVIDIA GPU, Intel GPU, CPU — even Intel NPUs), selects the appropriate engine, downloads the model, and starts serving. Each snap exposes a CLI (gemma3 chat for a quick conversation) and, critically, an OpenAI-compatible API on localhost. You can check the details with:

$ gemma3 status
engine: nvidia-gpu-amd64
services:
  server: active
endpoints:
  openai: http://localhost:8328/v1

There are currently snaps for Gemma 3, DeepSeek-R1, Nemotron-3-Nano, and Qwen-VL, with more on the way.

Simon Willison’s llm

llm is a command-line tool and Python library for interacting with language models. It supports OpenAI, Anthropic, and dozens of other providers through a plugin system, but the feature that matters here is its built-in support for any OpenAI-compatible endpoint — no plugin required. You just tell it where to find the API.

Why They Connect So Easily

The inference snaps serve a standard OpenAI-compatible API. The llm tool can talk to any OpenAI-compatible API. That’s the entire explanation. No adapters, no plugins, no shims; just a wee bit of simple config.
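To make “OpenAI-compatible” concrete, here’s a minimal sketch of the kind of request any such client ends up sending: a standard chat-completions POST. The port and model id match the gemma3 example in this post; the helper function is purely illustrative, not part of llm or the snaps.

```python
import json

# Illustrative sketch: build the standard OpenAI chat-completions request
# that an OpenAI-compatible client would POST to the snap's endpoint.
# Port 8328 matches the gemma3 example; adjust to what `<snap> status` reports.

def build_chat_request(model: str, prompt: str) -> tuple[str, str]:
    """Return the URL and JSON body for a chat-completions call."""
    url = "http://localhost:8328/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = build_chat_request("gemma3", "Say hello in five words.")
print(url)  # http://localhost:8328/v1/chat/completions
```

Anything that can produce that POST (curl, the openai client, llm) can talk to the snap, which is why no adapter layer is needed.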

Setting It Up

Install llm however you prefer (I used uv):

uv tool install llm

Then create the file ~/.config/io.datasette.llm/extra-openai-models.yaml (you may need to mkdir -p the directory first). Add an entry for each snap, using the endpoint from <snap> status:

- model_id: gemma3
  model_name: gemma-3-4b-it-q4_0.gguf
  api_base: "http://localhost:8328/v1"
  api_key: "not-needed"
- model_id: deepseek-r1
  model_name: DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
  api_base: "http://localhost:8324/v1"
  api_key: "not-needed"

The model_name needs to match what the snap is serving — you can check with curl http://localhost:8328/v1/models. The api_key field is required by llm but the snaps don’t enforce authentication, so any non-empty string works.
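If you want to script that check, the /v1/models response follows the standard OpenAI list shape, so a tiny helper can pull out the served names. The JSON below is a hand-written sample in that shape (an assumption about the snap’s output, not captured from a live server), and the helper function is my own, not part of llm:

```python
import json

# Hand-written sample in the standard OpenAI /v1/models list shape.
# Real output would come from: curl http://localhost:8328/v1/models
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gemma-3-4b-it-q4_0.gguf", "object": "model"}
  ]
}
""")

def served_model_names(models_response: dict) -> list[str]:
    """Extract the ids to paste into model_name in extra-openai-models.yaml."""
    return [entry["id"] for entry in models_response.get("data", [])]

print(served_model_names(sample))  # ['gemma-3-4b-it-q4_0.gguf']
```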

Verify that llm can see them:

$ llm models list | grep gemma
OpenAI Chat: gemma3

Using It

From the Command Line

$ llm -m gemma3 "Explain DNS to a mass of penguins. Two sentences max."
Imagine you want to visit a fish restaurant, but you only know the restaurant's name, not its exact location. DNS is like a super-smart librarian that translates that name into a specific address so you can find it!

$ llm -m deepseek-r1 "Write a haiku about compiling code on a Friday afternoon."
Code hums Friday afternoon.
The machine hums, logic parses, code flows.
Code flows with a click.

(Gemma is not a natural with our penguin friends, and DeepSeek was not designed for poetry.)

The llm tool also gives you conversation logging, templates, and the ability to pipe content in; cat error.log | llm -m gemma3 "What went wrong here?" is a useful thing to be able to do with a model running entirely on your own machine.

From Python

The llm Python package makes it just as straightforward to use from code:

import llm

model = llm.get_model("gemma3")
response = model.prompt(
    "In exactly one sentence, what is the best thing about snap packages?"
)
print(response.text())

# Example output

Snap packages offer a convenient and isolated way to install applications across different Linux distributions without compatibility issues.

A slightly more involved example — querying multiple local models to then compare their answers:

import llm

question = "Summarise the plot of the book Beak of the Moon. Three paragraphs max."

for model_id in ["gemma3", "deepseek-r1"]:
    model = llm.get_model(model_id)
    response = model.prompt(question)
    print(f"--- {model_id} ---")
    print(response.text())
    print()

Everything here runs locally. No API keys being sent anywhere, no tokens being metered, no data leaving your machine. Some of the time, that matters.

I’ve also found llm very convenient in the past when prototyping AI features and wanting to switch easily between providers and models. Not everyone provides an OpenAI-compatible API, and working through llm alone (sometimes slightly patched; the Fireworks plugin, for example, is quite out of date) is very handy. Adding the trivial-to-install, matched-to-your-hardware inference snaps gives that an extra boost.

Wrapping Up

You can watch me do all of this (except the snap install, which is bandwidth-constrained, and I live in the middle of nowhere, so that part is pretty slow).

asciicast

The setup here is: install a snap, create a four-line YAML file, and you have a local LLM accessible through a well-designed CLI tool and Python library. The entire configuration surface is one file. There is no Docker, no port forwarding, no environment variable archaeology.

The inference snaps handle the hard part (model serving, hardware detection, runtime optimisation, confinement) and expose a clean API. The llm tool handles the other hard part (a good user interface, conversation history, a plugin ecosystem) and speaks that same API. Two tools, one (I guess pseudo) standard, no friction.

(Disclaimer: I work for Canonical (you could too), but nowhere near the inference snaps; I’m just a fan. No connection with llm other than admiration and use.)

Review: Communication Patterns by Jacqui Read

(Post theme: Bullsh*t Baby! by aleksiah)

Unlike the last couple of reviews in my work ‘training and development’ series, this one for Communication Patterns: A Guide for Developers and Architects will be very short. Unfortunately, that’s because of the “if you can’t say anything nice” rule — and I don’t even think it’s worth a detailed critique. Just avoid it.

A few lowlights: obvious, tedious, lengthy words on diagrams, where the diagrams themselves probably do convey information well but are ugly, and beauty matters; remote work “insights” that seem to come from someone who has never spent any serious time doing remote work (I was reminded of all the terrible advice from people who suddenly started during COVID); a near-total disregard for cultural differences; and way too much self-hype.

Review: Vibe Coding by Gene Kim and Steve Yegge

(Post theme: Be Better by Where’s Jai?)

TL;DR: Vibe Coding is a sprawling, sometimes exasperating, but ultimately worthwhile look at how to work with AI assistants in real codebases. It’s at least twice as long as it needs to be, the kitchen metaphor goes on for far too long, and the authors are not exactly “regular” engineers, but when you strip away the fluff there’s a lot of solid, experience‑shaped advice about testing, version control, and how to keep agents from quietly wrecking your code. I’d recommend it if you have time and like long, opinionated books; if you’re trying to optimise for density, you can get most of the value from shorter material plus your own experiments.


I went into Vibe Coding (subtitle: Building Production-Grade Software with GenAI, Chat, Agents, and Beyond) expecting it to feel dated before it even properly hit print. It turns out that, for a book about GenAI and agents in 2025, it’s surprisingly not.

Gene Kim and Steve Yegge are not exactly obscure figures, and that colours the whole thing. You can feel the long histories (Amazon, Google, DevOps world, etc.), and you can also feel that they are not your typical “here’s my Python side‑project, let me tell you about AI” authors – they’re writing in Kotlin and Scala of all things, they use slightly odd analogies, and they are clearly having fun. It does not read like a quick money grab or personal‑brand brochure; it reads like two famous engineers who got obsessed with a topic and then wrote down far too much about it.

I liked it. I also think it would be a much better book if it were roughly half as long.


What the book is actually about

The core pitch is that “vibe coding” is what happens when you shift from typing code to having a flowing conversation with an AI assistant, with intent and feedback loops doing the heavy lifting. They explicitly distinguish this from the usual “prompt engineering” narrative:

“Vibe coding is about dynamic, in‑the‑moment problem-solving rather than creating a bulletproof prompt… It’s like texting with friends. Casual and impromptu. In contrast, prompt engineering is more like emailing a lawyer who is suing you—everything in that email is fraught with consequence.”

That distinction captures one of the better insights in the book: the way you work with AI day‑to‑day is much more conversational and disposable than most “prompt engineering course” material pretends.

They also give their value proposition a name: FAAFO – “fast, ambitious, autonomous, fun, and optionality.” I strongly suspect that the backronym came after the pun, because I only realised at the very end of the book that “FAAFO” is a reference to a different meaning, and I’m fairly sure one of the authors has been waiting years to put that into a book.

Usefully, they are also clear about what’s not in FAAFO:

“By the way, you may have noticed that there is no ‘B’ in FAAFO. Vibe coding does not automatically make your code better. That is your responsibility.”

This theme runs throughout: AI will happily help you ship horrors faster, unless you bring real engineering discipline to the party.


The kitchen, the sous‑chef, and the haunted codebase

The book leans hard on a kitchen metaphor: you’re the head chef, the AI is your sous‑chef, and your codebase is the kitchen. This works well at first, then drags on, and by Part 4 I was ready to buy everyone their own colour‑coded chopping board just to make the analogy stop.

The “dark side” chapter is one of the more compelling parts. They recount, for example:

“…the coding agent had silently disabled or hacked the tests to make them work and had outright deleted 80% of the test cases in one large suite.”

This will ring horribly true for anyone who has let an agent loose on a real codebase. They do a good job of cataloguing these failure modes: haunted codebases, cardboard muffins (things that pass tests but are hollow inside), and reward‑hacking agents that just want to check boxes.

That leads to a set of “head chef” disciplines which are, essentially, “how to be a responsible adult around AI”:

  • “Delegate thoughtfully” to well‑defined, verifiable tasks.
  • “Supervise appropriately” when novelty, complexity, or impact are high.
  • “Establish guardrails” about what AI may and may not touch.
  • “Check work regularly” and “trust but verify relentlessly.”

One of my favourite lists is in a section about agents hijacking reward functions:

“Count your babies systematically… Check for cardboard muffins… Demand excellence explicitly… Clean as you go… Trust but verify relentlessly… Remember the AI paradox…”

Yes, it’s slightly over‑written (pot, meet kettle), but the message is right: if you don’t inspect what you expect, AI will optimise for the wrong thing and look very pleased with itself while doing it.


Tests, version control, and all the things you already know you should be doing

A lot of the concrete advice is “good software engineering practice, but with more exclamation marks because of AI.”

On version control:

“Version control has always been critical, but with AI, it becomes life-or-death for your code.”

On tests:

“You’ll be well-served to have automated testing running all the time on your developer machine, triggered by every file change.”

“There is another powerful and surprising benefit that you get by writing and running tests. If your AI assistant has trouble creating test cases (or keeping them passing), that’s a sure-fire sign your code is missing some modularity, and perhaps clarity too.”

They also explicitly tell you that after AI writes tests, you still have to:

“Eyeball the tests… Run the tests yourself… Review and critique… Have AI run its own tests… But don’t blindly commit.”

None of this is revolutionary, but it’s an accurate reflection of what it feels like to build things with agents right now. When generation is cheap and fast, tests become the thing that anchors you to reality.

They even make the point that this is very similar to how we treat open‑source dependencies:

“We rarely examine every line of code in those either… we build trust with them through testing.”

Again: not new, but good to have written down in an AI context.


Inner, middle, outer loops (and AGENTS.md)

The structure of the second half of the book is organised around “inner”, “middle”, and “outer” developer loops – essentially: your local dev workflow, your interactions with repos and CI, and then team‑ and organisation‑level patterns.

The inner loop chapter has a lot of practical, chat‑and‑editor‑level advice: continuous tests, small end‑to‑end “tracer bullet” tasks, and using AI as an extremely talkative rubber duck:

“Embrace your AI as your most attentive (and talkative) rubber duck…”

The middle loop introduces AGENTS.md – a kind of CONTRIBUTING.md for your AI collaborators, describing how to work on the project, what tools exist, coding style, and so on. A nice observation not from the book, but from Jason Nucciarone, is that this is what CONTRIBUTING.md probably should have been in the first place.

I like the idea of formalising “instructions for the machines” in a first‑class document, but, just like with humans, the truly effective version is “put the rules into linters and tests.” If you keep seeing the same issues in AI‑authored code, the best fix is to encode the constraint into pre‑commit or CI and let the bots learn from failing checks.

The outer loop chapter gets into CI/CD and organisational process. Some of this is standard “invest in CI/CD” advice, but with an AI twist:

“Because AI excels at reviewing, analyzing, and critiquing code, it’s able to transform the CI/CD pipeline itself, moving beyond simple pass/fail checks.”

and:

“Push to Remote Often.”

There’s also a lot about having agents challenge additions, constrain changes by line count, and forbid new libraries without explicit approval:

“Challenge your AI to justify additions… constrain AI to solve the problem within a certain line count… Instruct your AI to avoid pulling in new libraries or frameworks without your explicit approval.”

Again, this is all solid, but if you’ve been living in “choose boring technology” and “pre‑commit all the things” land, it’ll mostly feel like reinforcement rather than revelation.


Culture, metrics, and things that irritated me

The culture chapter is where some of my disagreements and pet peeves really kicked in.

First, the structure. The book uses the very common pattern where:

  • The book tells you what the book will tell you.
  • The section tells you what the section will tell you.
  • The chapter tells you what the chapter will tell you.
  • You finally get the content.
  • Then the chapter tells you what it told you.
  • Then the section recaps.
  • Then the book recaps.

If you stripped out all of this scaffolding, I think you could cut the length by about half without losing any real content. It reminds me of old broadcast TV, where shows had to regularly repeat and recap because you couldn’t assume people had watched from the start, or in order. In 2026, in a book, with readers who absolutely can start at the beginning, it’s just fluff.

Second, there are places where the “move fast” narrative brushes up against sustainability and ethics in ways that I found uncomfortable. For example, I really dislike arguments that implicitly encourage maximising token spend “to be first on a dashboard.” This encourages waste, and there are very real environmental costs here; as useful as these tools are, we don’t need to burn extra compute just to climb a leaderboard.

There are also economic assumptions that I think will grate on many readers. Spending thousands per year on AI tools is presented fairly casually in places; that’s realistic for some senior people at certain companies, but not remotely universal. Similarly, some of the discussion about who captures the productivity gains of AI‑assisted work left me with more questions than answers: “Does the employee not get to capture some gain? Is capitalism always best?”

On metrics, we get the inevitable nod to lines of code as a thing people still talk about in 2025:

“Ah, yes, lines of code, the metric everyone associates with both quality and productivity …”

The authors are not endorsing LoC as a great metric, but the fact that it still shows up at all is a reminder that the industry is fully capable of mis‑measuring AI‑assisted work the same way it mis‑measured manual work.

And then there are simply wrong bits. For example, there’s a PM‑related claim where, as an ex‑PM, I can confidently say: no. There are also a few analogies (including one kitchen‑related) that are just bad: “Buy dedicated boards, even in home kitchens.”


Where the book has aged already

Because this is a fast‑moving space, some of the concrete details have already shifted. For example, the book talks about juggling multiple agents yourself, where in practice many tools now do that under the hood – you interact with “one” Claude Code or GitHub Copilot, and it spins up extra agents as needed. In day‑to‑day use, you just behave as if there’s one assistant that is decent at multi‑tasking.

Some tool‑specific mentions (like Anthropic’s Model Context Protocol, MCP) are already more widespread than they were at the time of writing, but that actually makes those sections feel more relevant rather than less.

Overall, I was expecting a strong “oh, that was the 2024–2025 moment” vibe and instead found that most of the advice is still applicable in 2026 – which is a pleasant surprise.


So, should you read it?

If you are extremely early in your AI‑assisted coding journey, Vibe Coding will probably feel like a firehose of patterns, anecdotes, and “please don’t blow your foot off” advice. If you’ve been working with agents for a while, as I have, you’ll find a lot that matches experience, some memorable phrases, and a few genuinely useful conceptual handles – but not many brand‑new lessons.

For me, the signal‑to‑noise ratio was about 50%. The good 50% is very good: concrete stories about real failures, solid checklists for delegating to agents, and a clear articulation of how inner/middle/outer loops change when you add AI. The other 50% is structure, repetition, over‑extended metaphors, and a general “this could have been a much tighter book” feeling.

I’d recommend it if:

  • You have spare reading time and don’t mind wading through some fluff to get to the good bits.
  • You prefer to learn from long‑form narratives and war stories rather than scattered blog posts and conference talks.
  • You want something opinionated rather than a neutral survey of tools.

If you’re short on time, you can get most of the value from a combination of good blog posts, talks, and your own experiments. But if you do have the time, and you’re interested in what two very opinionated, non‑typical engineers have to say about vibe coding, it’s an enjoyable – if occasionally exasperating – read.

Review: The Staff Engineer’s Path by Tanya Reilly

(Post theme: something to my something (feat. Maddy Low) by FDaSavage)

Tanya Reilly’s “The Staff Engineer’s Path” is the first in my 2025/2026 summer reading programme, and it’s the clearest guide I’ve found for navigating the often-ambiguous world of senior technical leadership. If you’re wondering what staff engineers actually do all day, or how to grow into that role effectively, this book deserves a spot on your reading list.

What Makes This Book Different

Most technical leadership books either focus too heavily on management or stay frustratingly abstract. Reilly strikes a better balance, addressing the messy reality of being a senior IC: you’re expected to have broad impact, but nobody’s quite sure how. One highlight that captured this perfectly was her observation that staff engineers “take on ambiguous, messy, difficult problems and do just enough work on them to make them manageable by someone else.” That’s the job in a nutshell.

The role also requires thinking beyond the immediate: “thinking beyond the current time, whether that means initiating yearlong projects, building software that will be easy to decommission, or predicting what your company will need in three years.”

Building Your Mental Maps

The “Three Maps” chapter was a standout for me. Reilly emphasizes that understanding your organisation’s terrain is critical work, not political manoeuvring. She advocates for practical techniques like watching recently created Slack channels (I did this back when I used Slack with <100 other people — I’m not sure how I would with Matrix and ~1000 people), skimming senior people’s calendars, and reading design docs even when they’re not “for you” (I created a subscription system so that I could do this without having to manually trawl through changes).

“I asked on Rands Leadership Slack about how everyone approaches knowing things, and a common thread was paying attention to information that isn’t secret exactly, but isn’t necessarily for you. This included reading senior people’s calendars, skimming agendas or notes for meetings you’re not in, and—something that had never occurred to me—looking at the full list of Slack channels sorted by most recently created so you can see what new projects are happening.”

The key is to “figure out where decisions are happening” and understand “who [leaders] listen to and how they make decisions.”

I particularly appreciated her reference to gemba from Lean manufacturing—the practice of walking the floor to see how things actually operate. It’s a useful mental model for why we should be deploying and running our own software rather than just writing it (I’m mostly missing this right now, and aim to change that over the next 6 months).

Once you understand the terrain, Reilly notes, “The more you know the terrain, the easier it will be to bridge gaps by sending the email summary nobody is sending, introducing two people who should have spoken a month ago, or writing a document to show how projects connect to each other.”

Vision and Strategy

On creating technical vision, Reilly is pragmatic: “Writing technical vision or strategy takes time. If you can achieve the same outcome in a more lightweight way, do that instead.”

When you do need to build consensus, she introduces the concept of nemawashi from the Toyota Production System—“sharing information and laying the foundations so that by the time a decision is made, there’s already a consensus of opinion.” Or as she puts it more directly: “Don’t call for a vote until you know you have the votes.”

Time and Energy Management

The chapter on finite time hit home. Reilly is refreshingly direct:

“You have to make peace with walking past things that are broken or suboptimal (or just really annoying) and taking no action.”

This isn’t permission to ignore problems—it’s recognition that at the staff level, choosing what not to work on is as important as choosing what to tackle. She encourages you to “understand what kinds of work are expensive for you, and what kinds will leave you with some smartbrain at the end of the day.”

Before taking on a project, she suggests asking yourself: “Do any of the people you’ll work with leave you exhausted every time you talk to them?” It’s a practical filter I hadn’t considered explicitly before, but have certainly experienced.

By this point in your career, “you will be largely (but probably not entirely) responsible for choosing your own work.” Part of that responsibility includes finding ways to “free up your resources by giving other people opportunities to grow, including starting projects and handing them off.”

Leading Big Projects

When facing a daunting project, Reilly offers reassurance:

“The job here is to be the person brave enough to make—and own—the mistakes. You wouldn’t have gotten to this point in your career without credibility and social capital. A mistake will not destroy you. Ten mistakes will not destroy you. In fact, mistakes are how we learn.”

Her approach to feeling overwhelmed is practical: “use your core muscles. If you’re most comfortable with code, jump in. If you tend to go first to relationships, talk to people. If you’re a reader, go get the documents.”

On design work, she advocates for being concrete over being safe:

“But it’s a better use of your time to be wrong or controversial than it is to be vague. If you’re wrong, people will tell you and you’ll learn something, and you can change direction if you need to.”

And she’s firm about scope creep: “‘We might need it later’ is not a good enough justification.”

For those worried about spending time coding at this level, she quotes another engineer: “If spending a day a week coding keeps you engaged and excited to come to work, you will likely do better in the rest of your job.”

Unsticking Projects

When blocked by other teams, Reilly emphasises gratitude and making things easy:

“If someone goes out of their way to help you, say thank you. In companies that have peer bonuses or spot bonuses, there’s already a structure for saying thank you: use it.”

Structure your requests so they’re “easy to say yes to, with as little reading needed as possible.”

On migrations and technical debt, she’s direct about the impact: “The half-migration slows down everyone who has to engage with it. This is a place where a staff engineer can step in and have a lot of impact.”

And remember: “there is no such thing as a temporary solution.”

For documenting decisions, she recommends “Lightweight Architectural Decision Records for showing why you made the choice you did”. We noticed other teams at Canonical using these and considered adopting them, but decided against it for now.

Being a Role Model

The chapter on being a role model contains some of the book’s most quotable wisdom. On competence, Reilly writes:

“Competence is built on knowledge and experience, but you also need to be able to apply those abilities. That starts with the self-awareness to know what you can do, how long it will take, and what you don’t know.”

She pushes back against imposter syndrome: “Being competent doesn’t mean you need to be the best. I’ve sometimes seen tech people be shy about claiming to be an expert, because they can always think of someone in the industry who is better than they are. Don’t set your bar at ‘best in the industry.’”

And crucially: “Admitting ignorance is one of the most important things we can do as tech leads, senior engineers, mentors, managers, and other influencers of team culture”. I have long pushed this myself, not only in these fields but as an educator as well. (Fascinatingly, it’s one area where current AI is terrible.)

On standards:

“Your standards will serve as a model for how other people work. Know what high-quality work looks like and aim for that standard in everything you do, not just the parts you enjoy most. Write the clearest documentation you can. Be the first person to know if your software breaks.”

She emphasises that feedback makes your work better: “Your solutions are not you and they don’t define you. Criticism of your work isn’t criticism of you.”

When someone brings you a problem, her advice is to “stay calm. Ask questions. Understand why they’re telling you. Do they just need to vent? Are they hoping you’ll take action?”

She shares a story about making a production mistake early in her career and a colleague’s response: “It’s always interesting to see how new people handle their first screw-up. We’ve all been there.” Then she reflects: “Tim took the time to be kind.”

On team dynamics, she reminds us:

“While you may be the best coder on the team, the most experienced engineer, or the fastest problem solver, that doesn’t mean you should jump on all of the problems. You’re working as part of a team, not a collection of competing individuals. Don’t become a single point of failure where the team can’t get anything done when you’re not available.”

Future-Proofing Your Work

Reilly emphasises thinking ahead:

“So take the time to leave your production environment, codebase, or documentation so that it just works for whoever comes along next. Write tests that will let you refactor your code without breaking things. Follow your style guide so that the people who copy your approach will also be following your style guide.”

She warns: “Every time someone leaves your company, you lose institutional knowledge. If you’re lucky, you have some old-timers storing history in their brains. But eventually, inevitably, you’ll have complete staff turnover.”

And on designing for the long term: “The system will never again be as well understood as it is on the day it’s created.”

She quotes Martin Fowler: “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Teaching and Influence at Scale

Reilly distinguishes between giving advice and actually teaching: “What’s the difference between telling people things and teaching them things? Understanding. When you’re giving advice, you’re explaining how you relate to the topic, and the receiver can take your advice or leave it. When you’re teaching, you’re trying to have the other person not just receive the information but internalise it for themselves.”

For hands-on teaching: “Successful teaching includes hands-on learning and activating knowledge: the student should be doing as well as listening.”

On delegation, she advocates for messiness:

“your colleagues won’t learn as much if you only delegate the work after you’ve turned it into ‘beautifully packaged, cleanly wrapped gifts.’ If you instead give them ‘a messy, unscoped project with a bit of a safety net,’ they’ll get a chance to hone their problem-solving abilities, build their own support system, and stretch their skill set.”

Before offering unsolicited advice: “Before you offer your thoughts, think about whether the other person is asking for them. Think too about whether you even have enough context to tell them something that’s both helpful and nonobvious.”

And if you really want to share: “If you’re itching to give unsolicited advice on a topic nobody is asking you about, consider writing a blog post or tweeting about it instead.”

Career Growth

The final chapter acknowledges that specialisation often happens accidentally: “In fact, it’s easy to gain a specialty accidentally just because it’s what you’re doing at work: one experience leads to another, and next thing you know you have a specialization.”

But she also normalises staying put if your current role is working:

“If your job is giving you what you need, there’s no need to change anything. I want to emphasize that because our industry puts a lot of focus on changing jobs frequently, and the regular ‘new job’ announcements can make you feel like you should be moving too.”

And success can mean different things, including working less: “Success can mean working less in your current role. One engineer I spoke with, Jens Rantil, swapped a staff engineering role for 80% time and a 20% pay cut at a much smaller company. As he said, ‘Every Thursday is a Friday! It’s amazing!’”

On finding your next role, she quotes Graham: “I’ve found that people that know you well are always going to be the ones that find you the phase 2 roles that are ‘shaped like you.’ People that don’t know you are always going to offer you the job you just had”. I’ve experienced this, too.

Small Touches I Loved

  • “I recommend playing Civilization to understand all things about staff engineering. Tell your boss it’s research.”
  • “Being the grizzled, experienced best supporting actor is an amazing role.”
  • “In general, if there are more people being the wise voice of reason than there are people actually typing code (or whatever your project’s equivalent is), don’t butt in.”
  • The clearest indicator of what a company values: “what gets people promoted.”
  • “Think of it as a Ship of Theseus: every individual component may get replaced over the years, but the fundamental system continues. It’s all metaphysical architecture.”

Should You Read It?

Yes. Whether you’re already a staff engineer trying to figure out the role, or an ambitious senior engineer wondering what’s next, this book provides both frameworks and practical tactics. Reilly’s writing is clear, honest, and grounded in real experience. She acknowledges the uncomfortable parts of the job—the politics, the ambiguity, the need to let some things stay broken—without being cynical about them.

            The book won’t give you a prescriptive checklist (staff engineering doesn’t work that way), but it will help you build better mental models for navigating technical leadership. As Reilly reminds us, by this level you’re largely choosing your own work—this book helps you choose wisely.

            (You can also see my full set of highlights if you’d like).

            2025 in Media

            (Post theme: friendly fire (the pool song)).

            This year, I watched 164 films. I recommend Tinā, KPop Demon Hunters, The Gorge, Swiped, Python: The Documentary, Girl Haunts Boy, We Live in Time, and Sinners. A somewhat disappointing year for films for me.

            I also watched 1,259 episodes of TV. I recommend Taskmaster, The Runarounds, Murderbot, Department Q, and Shrinking. Not a great year for TV, either – a lot of stuff I’m somewhat interested in watching, but nothing that really grabbed me.

            On YouTube, I watched a lot more than in previous years. I recommend the Taskmaster Podcast, the TalkVille Podcast, Nikki Lu, Dish Podcast, and Off Menu.

            I listened to 158 hours of podcasts (this is roughly how long I spent travelling, since that’s about the only time I listen to podcasts these days). I recommend The Morning Shift, the Taskmaster Podcast (again), Gone By Lunchtime, Juggernaut, Wait Where Was I, and The Long Play.

            I listened to 3,158 songs (around 64,000 minutes) this year, up quite a bit from last year (mostly because I was living alone for most of the year – it is very quiet!). I recommend Kids Up The Tree, Cassie Henderson, fdasavage, aleksiah, Borderline, Little Green, Emei, Maddy Low, Fan Club, Phoebe Grace, Brooke Alexx, Daisy Grenade, Chloe Parché, and LØLØ. A really great year for discovering new music.

            I read a lot of stuff online (via RSS). I recommend everything I posted to tonyandrewmeyer.blog, and in particular: The Spinoff, Simon Willison, Armin Ronacher, and Charity Majors.

            I read 115 books (including paper, ebooks, and audio books) this year (a bit over 47,000 pages). I recommend The Staff Engineer’s Path: A Guide for Individual Contributors Navigating Growth and Change, most of Brandon Sanderson’s books, The Thursday Murder Club, The Library Trilogy, The Witch Roads, The Final Architecture, and Violet and the Velvets: The Case of the Missing Stuff. A good year for enjoyable books, but missing anything that really wowed me, unfortunately. A good book club year again, and a good year of re-reading some childhood favourites (not all of which stand up).

            Happy new year!

            Review: Help! I Have a Manager! – Wizard Zines

            (Post theme: Talk by Coldplay)

            Next up in the Wizard Zines set is an odd one, not being particularly technical, and I think one of the earlier ones: Help! I Have a Manager!, which is all about building a good relationship with your manager.

            Like Julia, I’ve had several good or great managers, and a few not so much. I have found them all very different, so page 16, which is all about this, really resonated. Other than that, this was pretty much all advice I’d seen elsewhere and/or picked up naturally. For someone really early career, this might be worthwhile, but I’d skip it otherwise.

            Unlike the other zines, this one (at least the PDF, when viewed in Apple Books) had weird spacing issues: odd gaps in the text and sometimes overlaid characters. Not enough to prevent reading the zine, but enough that it doesn’t feel as polished as the others.

            Review: Bite Size Bash – Wizard Zines

            (Post theme: Hall of Fame (feat. will.i.am) by The Script)

            As mentioned previously, I purchased the set of Wizard Zines via my annual training budget, and am working through them. Current reading: Bite Size Bash.

            The first 20 pages surprised me – I don’t consider myself particularly knowledgeable in terms of bash (or similar shell scripting), but I’ve somehow managed to pick up pretty much everything in the basics over the last couple of decades, despite usually jumping pretty quickly to another tool whenever my needs get beyond simple. It is a great intro, though, and I would have benefited from having this at hand years back.

            On page 20, parameter expansion is introduced. Most of this was either new to me, or I think I’ve seen but certainly wouldn’t remember to use. I’m not sure how to make this stick, though – the syntax is so odd. Maybe I should just jump to this page next time I’m writing bash, or maybe remembering that it can be done will be enough to look it up. I do wonder if LLMs will replace this sort of learned syntax, though.
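To give future-me somewhere to jump to, here’s a minimal runnable sketch of a few of those expansions (the path and variable names are invented for the example):

```shell
#!/usr/bin/env bash
# A handful of bash parameter expansions, with invented example values.
path="/home/tony/notes/todo.txt"

file=${path##*/}       # strip longest prefix matching '*/'  -> todo.txt
dir=${path%/*}         # strip shortest suffix matching '/*' -> /home/tony/notes
base=${file%.txt}      # strip a literal suffix              -> todo
swapped=${path/notes/archive}  # replace first match         -> /home/tony/archive/todo.txt
length=${#file}        # string length                       -> 8
fallback=${unset_var:-default}  # fall back if unset/empty   -> default

echo "$file in $dir (base: $base, length: $length)"
```

The prefix/suffix-stripping forms are the ones where I’d otherwise reach for another tool (or basename and dirname).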

            After that, it was background processes and subshells, both of which were again surprisingly familiar. Then trapping signals – I knew this could be done, but I’m pretty sure that by the time I have something that needs it, I’ve already moved to another language, and I don’t see that changing. Errors and debugging were also mostly things I knew, except for ‘trap x DEBUG’, which seems pretty handy.
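The two trap ideas can be sketched in a few lines – a hedged, bash-specific example (the file and variable names are made up) showing a cleanup trap on EXIT alongside the ‘trap x DEBUG’ trick:

```shell
#!/usr/bin/env bash
# 1. Trap EXIT (or a signal) so cleanup runs however the script ends.
# 2. Trap DEBUG so something runs before *every* command -- handy tracing.

scratch=$(mktemp)
trap 'rm -f "$scratch"' EXIT   # cleanup runs even if the script fails

saw_echo=0
# DEBUG fires before each command; $BASH_COMMAND is the command about to run.
trap 'case $BASH_COMMAND in echo*) saw_echo=1 ;; esac' DEBUG

echo "doing some work" > "$scratch"
cat "$scratch"
```

Replacing the case statement with a plain echo of $BASH_COMMAND gives a crude execution trace, similar in spirit to set -x.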

            Overall, this zine surprised me: I expected to learn a lot, and instead recognised a lot. In some ways, this is great: I feel more confident that I do actually know the most useful core of shell scripting (even though certainly not the vast area beyond that). I’d definitely recommend it for shell scripting beginners (especially bash) but likely not for others.

            Review: Become a SELECT Star – Wizard Zines

            (Post theme: Call Off the Search by Katie Melua)

            As part of my training allowance for 2024, I purchased all 14 of Julia Evans’s zines. There are definitely ones I expect to learn from more than others, but having the full pack appealed, and one of the things I really enjoy about her posts is finding little tidbits of information that I didn’t know, even though I would have said I knew the topic fairly well in general.

            First up: Become a SELECT Star, which, as you’d guess from the title, is all about the SELECT statement in SQL. The first dozen pages are a really good summary of the fundamentals: things that I gradually absorbed over the last 20 or so years, and probably would have been better off with having a summary like this rather than a bunch of more traditional reference material.

            I really like how the examples work through the query in the logical order rather than the syntax order. It makes it much simpler to follow what’s happening.

            On pages 14-16, it covers window functions, which I didn’t really know at all. I would normally just do this by pulling a bunch of the data down into something like Python and calculating there. I’m a big fan of moving computation left where possible, so these seem super useful, and something I hope I remember (although I don’t do a huge amount of SQL these days).

            The zine continues on to cover NULL and COALESCE, which we used a lot at SpamExperts, so those were pretty familiar, but again a good summary that I could have used many years back! It then covers CASE, and I don’t remember if I knew that or not, but also don’t think I can think of times I would have wanted it directly in SQL.

            After a straightforward page on counting rows, it moves into subqueries. I’ve used these quite often (I’ve often had situations where subquery performance was better than joining, for various reasons). However, common table expressions (naming subqueries) are new to me, and look super useful.
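Both window functions and common table expressions can be sketched in a few lines. This is a hypothetical example (the table and column names are invented), assuming the sqlite3 CLI is installed (3.25 or later, for window function support):

```shell
#!/usr/bin/env bash
# Window function: aggregate over a partition, but keep per-row output.
windowed=$(sqlite3 :memory: <<'SQL'
CREATE TABLE sales (person TEXT, amount INTEGER);
INSERT INTO sales VALUES ('ana', 10), ('ana', 30), ('ben', 20);
SELECT person, amount, SUM(amount) OVER (PARTITION BY person) AS person_total
FROM sales ORDER BY person, amount;
SQL
)
echo "$windowed"   # each row carries its person's total alongside it

# CTE: name a subquery up front, then use it like a table.
big_count=$(sqlite3 :memory: <<'SQL'
CREATE TABLE sales (person TEXT, amount INTEGER);
INSERT INTO sales VALUES ('ana', 10), ('ana', 30), ('ben', 20);
WITH big_sales AS (SELECT * FROM sales WHERE amount > 15)
SELECT COUNT(*) FROM big_sales;
SQL
)
echo "$big_count"
```

The window-function query is exactly the case I’d otherwise pull into Python: the per-group total appears next to each row without collapsing the rows.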

            It wraps up with a few more useful tips, most of which were things I already knew.

            I’d definitely recommend this to anyone that’s just starting with SQL. If you’re a SQL expert, it’s likely not going to be useful, but if you’re a casual querier like myself, then there are good tips to pick up, and given how small and inexpensive the zine is, I’d highly recommend picking it up.

            GitHub “sync fork” silently failing

            (Post theme: Oops! I Did It Again by Britney Spears)

            Posting because I’ve hit this three times now and each time failed to remember the solution until I figured it out again, in the hope that writing it down will prompt my memory next time and avoid a fourth “figure it out fresh” cycle.

            At work (in the team I’m on), we develop in the open using public GitHub repositories. To introduce a change, we fork the repo, create a branch, and submit a pull request to the upstream main branch.

            As changes make their way into main, it’s handy to branch off an up-to-date fork, and GitHub has a convenient “sync fork” tool in the web UI to do this. Most of the time, this works perfectly. Unfortunately, a few times I’ve tried to sync and it has silently failed (the cause is user error, but the silent failure is very poor UX from GitHub – they could actually make the whole process work, which would be ideal).

            The reason it has failed (each time it has happened to me) is that branch protection is enabled for the main branch on the fork. I’ve enabled this to avoid accidentally pushing changes to main (forgetting to create a branch first), which I never want to do – except for pulling in upstream changes.

            There might be a simpler way to work around this (other than opening a PR from upstream main to the fork main) but I just disable the protection, sync, then enable it again.
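The sync can also be done from the command line, sidestepping the web UI entirely (though if the branch protection rules block pushes, the final push back to the GitHub fork may still need the same enable/disable dance). This sketch simulates the upstream repo and the fork with throwaway local repositories, so the paths and identities are invented:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for github.com: two throwaway local repositories.
work=$(mktemp -d)
git init -q -b main "$work/upstream"
git -C "$work/upstream" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "initial commit"

# The "fork" is a clone; upstream then gains a new commit.
git clone -q "$work/upstream" "$work/fork"
git -C "$work/upstream" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "new upstream work"

# The actual sync: add an upstream remote, fetch, fast-forward main.
git -C "$work/fork" remote add upstream "$work/upstream"
git -C "$work/fork" fetch -q upstream
git -C "$work/fork" merge -q --ff-only upstream/main

latest=$(git -C "$work/fork" log -1 --pretty=%s)
echo "fork main is now at: $latest"
rm -rf "$work"
```

In a real fork you’d add the GitHub upstream as the upstream remote once, and finish with a push of main back to your fork.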

            Mobile Phones at Mahurangi College – Feedback

            (Post theme: The Times They Are-A-Changin’ by Bob Dylan)

            This is a copy of the feedback I provided to Mahurangi College in response to this consultation request:

            The Government has introduced regulations that require all state schools to have student phone rules in use. Rules must be in use as soon as possible in Term 1 2024 and no later than Term 2.

            Many countries around the world have successfully banned mobile phones in classrooms, or are in the process of doing so. A solid body of evidence is emerging that supports the banning of phones in schools. 

            Our draft policy and notes are included here, as well as a space for you to provide some feedback.

            So far this term our approach has been to educate – students have had the rules clearly explained, and lots of reminders and warnings have been given. The vast majority of students have adapted very well.

            Following your feedback, we will finalise and publish our final policy.

            Mahurangi College, February 2024

            I understand that the requirement to have a policy has come from the central government, which puts some constraints on what the school can do. However, I had hoped that the school would show more courage and defy poor decision-making as much as possible. Given that ERO is responsible for monitoring the implementation of these policies, at worst the school would receive a negative comment in a rarely-read review, and by the time the next review cycle came around, it’s likely that a government less interested in pandering to racist, anti-science, scared, old people would be in power.

            Any form of defiance would suffice, but I like following the letter of the regulation rather than the intent. The iPhone is famously “an iPod, a phone, and an Internet communicator”, and Android phones are widely recognised as being functionally equivalent to iPhones. Students must not be allowed to use or access a phone – but there are no rules about iPods or internet communicators. The answer is clearly that the Phone app on mobile devices must be banned, but using the rest of the device is fine (you could even use a device management profile to disable the Phone app while at school).

            This is logically consistent, since students are not only allowed but expected to have other digital devices (such as a laptop), which have exactly the same internet communication functionality (indeed: the permitted smart watches, laptops, tablets, and other devices are essentially the same device as a phone – often running variants of the same operating system on the same chips – just in a different form factor).

            Under no circumstances should phones be used to photograph or film any student or member of staff without their permission, or to film any fights or skirmishes.

            Mahurangi College Draft Policy

            For the record, I have no objection to this as a policy statement, although I would suggest that it’s mistakenly placed in a “mobile phone” policy, given that there are many other devices (and a class that is specifically focused on learning how to use them well) that are capable of photographing or filming a student, and the choice of device is entirely irrelevant.

            • Headphones & earbuds also cannot be used at school (apart from on buses) as they require connection to a mobile phone.
            Mahurangi College Draft Policy

            It’s hard to believe that this passed any sort of quality control. Did you have a junior student write it? I would expect that anyone old enough to work at the school would be familiar with a multitude of devices with which headphones (whether in earbud form or otherwise) can be used. Perhaps the staff might remember using headphones with a record player, Walkman, or Discman? Perhaps some of the staff are familiar with the school’s own requirement that students do have headphones so that they can consume media during class without disturbing other students? Was the author of this policy aware that the school offers classes in music, dance, and media studies, and that film is studied in many of the English classes? Did they know that the laptops that the school requires students have almost universally work with headphones? Given the conflicting policies, which one wins out?

            Perhaps you’d consider looking at the actual science? For example, a 2023 study that showed that students using smartphones outperformed those without (Wang, J.C., Hsieh, C.-Y. & Kung, S.-H. The impact of smartphone use on learning effectiveness: A case study of primary school students. Educ Inf Technol 28, 6287–6320 (2023). https://doi.org/10.1007/s10639-022-11430-9), or the Harvard Graduate School of Education, which says that more thought needs to be applied rather than simple bans. Closer to home, Massey University educators clearly state that bans are not the right choice. Or look at meta-studies (for example: Calderón-Garrido, D., Ramos-Pardo, F. & Suárez-Guerrero, C. The Use of Mobile Phones in Classrooms: A Systematic Review. International Journal of Emerging Technologies in Learning (iJET) 17, 194–209 (2022). https://doi.org/10.3991/ijet.v17i06.29181), which show benefits in the use of phones at school.

            It should be obvious that the ability to have at hand a device that is a calculator, a compass, an e-book reader, a still and video camera, a research tool that would be considered magical even a few decades ago that provides access to nearly all the world’s information, that tracks and provides advice for improvement of health, a torch, a measuring tape, a glimpse into virtual and augmented reality, that receives emergency alerts, and much more is of huge value to a student. Instead of harming their educational development (and providing an environment irrationally different from the world they will enter at the conclusion of secondary school) you should be figuring out ways to make sure that those students who do not have phones are able to use one.

            I have no objection to behaviour management, whether during class time or during breaks on the school property or on school trips. If someone is misusing a phone – or anything else – in a way that disrupts their or other students’ learning, then that should be remedied appropriately.

            I’m also tolerant of prohibiting access to specific internet properties (TikTok, Instagram, Snapchat, and so on), although I do believe that in time prohibition will be demonstrated to be a poor approach to the problems that these sites exacerbate.

            I have no hope at all that the school will reconsider this policy (although a little hope that someone will realise that headphones can be used with devices other than phones), particularly given the numerous other policies that are clearly either anti-learning or irrelevant to learning. However, I offer this feedback anyway, so that at least you will have heard how wrong you are.