GitHub “sync fork” silently failing

(Post theme: Oops! I Did It Again by Britney Spears)

Posting because I’ve hit this three times now and each time haven’t remembered the solution until I’d figured it out again – hopefully writing it down will jog my memory next time and avoid a fourth “figure it out fresh” cycle.

At work (in the team I’m on), we develop in the open using public GitHub repositories. To introduce a change, we fork the repo, create a branch, and submit pull requests to the upstream main branch.

As changes make their way into main, it’s handy to be branching off an up-to-date fork, and GitHub has a convenient “sync fork” tool in the web UI to do this. Most of the time, this works perfectly. Unfortunately, a few times I’ve tried to sync and it has silently failed (the cause is user error, but the silent nature of the failure is very poor UX from GitHub, and I feel like they could actually make the whole process work, which would be ideal UX).

The reason it has failed (each time it has happened to me) is that branch protection is enabled for the main branch on the fork. I’ve enabled this to avoid accidentally pushing changes to main (forgetting to create a branch first), which I never want to do – except for pulling in upstream changes.

There might be a simpler way to work around this (other than opening a PR from upstream main to the fork main) but I just disable the protection, sync, then enable it again.
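
For future-me, the same dance can also be scripted against the GitHub REST API. This is a rough sketch rather than something I actually use – the owner/repo names are placeholders, it assumes the requests library and a token with admin rights on the fork in GITHUB_TOKEN, and re-enabling protection via the API needs the full settings payload, so I’d still do that last step in the web UI:

import os

import requests

API = "https://api.github.com"
OWNER, REPO, BRANCH = "my-user", "my-fork", "main"  # placeholders
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# 1. Temporarily drop the protection rule on the fork's main branch.
requests.delete(
    f"{API}/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection", headers=headers
).raise_for_status()

# 2. Sync the fork's branch with upstream (the same operation as the web UI button).
requests.post(
    f"{API}/repos/{OWNER}/{REPO}/merge-upstream", headers=headers, json={"branch": BRANCH}
).raise_for_status()

# 3. Re-enable protection: the PUT .../protection endpoint needs every setting spelled
#    out again, so in practice it's easier to just re-tick the box in the web UI.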

Mobile Phones at Mahurangi College – Feedback

(Post theme: The Times They Are-A-Changin’ by Bob Dylan)

This is a copy of the feedback I provided to Mahurangi College in response to this consultation request:

The Government has introduced regulations that require all state schools to have student phone rules in use. Rules must be in use as soon as possible in Term 1 2024 and no later than Term 2.

Many countries around the world have successfully banned mobile phones in classrooms, or are in the process of doing so. A solid body of evidence is emerging that supports the banning of phones in schools. 

Our draft policy and notes are included here, as well as a space for you to provide some feedback.

So far this term our approach has been to educate – students have had the rules clearly explained, and lots of reminders and warnings have been given. The vast majority of students have adapted very well.

Following your feedback, we will finalise and publish our final policy.

Mahurangi College, February 2024

I understand that the requirement to have a policy has come from the central government, which puts some constraints on what the school can do. However, I had hoped that the school would show more courage and defy poor decision-making as much as possible. Given that ERO is responsible for monitoring the implementation of these policies, at worst the school would receive a negative comment in a rarely-read review, and by the time the next review cycle came around, it’s likely that a government less interested in pandering to racist, anti-science, scared, old people would be in power.

Any form of defiance would suffice, but I like following the letter of the regulation rather than the intent. The iPhone is famously “an iPod, a phone, and an Internet communicator”, and Android phones are widely recognised as being functionally equivalent to iPhones. Students must not be allowed to use or access a phone – but there are no rules about iPods or internet communicators. The answer is clearly that the Phone app on mobile devices must be banned, but using the rest of the device is fine (you could even use a device management profile to disable the Phone app while at school).

This is logically consistent, since students are not only allowed but expected to have other digital devices (such as a laptop), which have exactly the same internet communication functionality (indeed: the permitted smart watches, laptops, tablets, and other devices are essentially the same device as a phone – often running variants of the same operating system on the same chips – just in a different form factor).

Under no circumstances should phones be used to photograph or film any student or member of staff without their permission, or to film any fights or skirmishes.

Mahurangi College Draft Policy

For the record, I have no objection to this as a policy statement, although I would suggest that it’s mistakenly placed in a “mobile phone” policy, given that there are many other devices (and a class that is specifically focused on learning how to use them well) that are capable of photographing or filming a student, and the choice of device is entirely irrelevant.

  • Headphones & earbuds also cannot be used at school (apart from on buses) as they require connection to a mobile phone.
Mahurangi College Draft Policy

It’s hard to believe that this passed any sort of quality control. Did you have a junior student write it? I would expect that anyone old enough to work at the school would be familiar with a multitude of devices with which headphones (whether in earbud form or otherwise) can be used. Perhaps the staff might remember using headphones with a record player, Walkman, or Discman? Perhaps some of the staff are familiar with the school’s own requirement that students do have headphones so that they can consume media during class without disturbing other students? Was the author of this policy aware that the school offers classes in music, dance, and media studies, and that film is studied in many of the English classes? Did they know that the laptops that the school requires students to have almost universally work with headphones? Given the conflicting policies, which one wins out?

Perhaps you’d consider looking at the actual science? For example, a 2023 study that showed that students using smartphones outperformed those without (Wang, J.C., Hsieh, C.Y. & Kung, S.H. The impact of smartphone use on learning effectiveness: A case study of primary school students. Educ Inf Technol 28, 6287–6320 (2023). https://doi.org/10.1007/s10639-022-11430-9), or the Harvard Graduate School of Education, which says that more thought needs to be applied rather than simple bans. Closer to home, Massey University educators clearly state that bans are not the right choice. Or look at meta-studies (for example: Calderón-Garrido, D., Ramos-Pardo, F. & Suárez-Guerrero, C. The Use of Mobile Phones in Classrooms: A Systematic Review. International Journal of Emerging Technologies in Learning (iJET) 17, 194–209 (2022). https://doi.org/10.3991/ijet.v17i06.29181), which show benefits in the use of phones at school.

It should be obvious that having at hand a device that is a calculator, a compass, an e-book reader, a still and video camera, a research tool (one that would have been considered magical even a few decades ago) providing access to nearly all the world’s information, a health tracker and advisor, a torch, a measuring tape, a glimpse into virtual and augmented reality, a receiver of emergency alerts, and much more is of huge value to a student. Instead of harming their educational development (and providing an environment irrationally different from the world they will enter at the conclusion of secondary school), you should be figuring out ways to make sure that those students who do not have phones are able to use one.

I have no objection to behaviour management, whether during class time or during breaks on the school property or on school trips. If someone is misusing a phone – or anything else – in a way that disrupts their or other students’ learning, then that should be remedied appropriately.

I’m also tolerant of prohibiting access to specific internet properties (TikTok, Instagram, Snapchat, and so on), although I do believe that in time prohibition will be demonstrated to be a poor approach to the problems that these sites exacerbate.

I have no hope at all that the school will reconsider this policy (although a little hope that someone will realise that headphones can be used with devices other than phones), particularly given the numerous other policies that are clearly either anti-learning or irrelevant to learning. However, I offer this feedback anyway, so that at least you will have heard how wrong you are.

Writing as a metaphor for coding

At work, our book club is reading Code Complete, and we had our first discussion session today. I first read this in early 2011 – roughly 6 years into professional software development – and remember thinking quite positively of it at the time. We’re only three chapters in, but I’m decidedly more mixed on it this time around – I’ve had a lot more varied experience with development projects than in those early years, and a lot of the book lands quite differently (I do suspect that the later chapters will have stood the test of time better).

The second chapter of the book is focused on the importance of choosing a metaphor for software development – it ends by strongly suggesting that building construction is an elegant metaphor. I agree that metaphor can be useful (but is also easily overused, and people get fixated on the imperfections of the chosen metaphor – “But if you extend them too far or in the wrong direction, they’ll mislead you”, as the author says) and that construction can work as one for software development.

However, the author says:

For an individual’s work or for small-scale projects, the letter-writing metaphor works adequately, but for other purposes it leaves the party early—it doesn’t describe software development fully or adequately.

Steve McConnell, Code Complete, Chapter 2 “Software Penmanship: Writing Code”

This struck me as deeply ironic. Writing is an excellent metaphor for software development, and the author of a book – especially a large technical one – should be extremely aware of this.

The author seems to consider the metaphor only as “writing a casual letter”, rather than writing in general. This is unfair to the metaphor, and the same narrowing is not applied to construction (which covers all sizes, not just flat-pack assembly). Writing ranges from a succinct social media post to a collection of textbooks or a connected story that spans dozens of books by multiple authors (Wild Cards, Cosmere, Marvel, Riftwar, to name a few). This is hardly unique to writing – construction, farming, and the other metaphors the chapter explores all range in size.

Writing is usually a one-person activity, whereas a software project will most likely involve many people with many different responsibilities.

Steve McConnell, Code Complete, Chapter 2

This quote was particularly interesting, as one of my colleagues brought it up in our discussion as an example of how McConnell was correct about writing being a poor metaphor, whereas I read it as an example of how wrong he was.

The “many people with many different responsibilities” absolutely applies to (non letter!) writing – you have editors, alpha and beta readers, illustrators, publicists, and many more. A lot of these work well in the software development metaphor – early readers are code reviewers, illustrators are designers, publicists are, well, publicists, and so on. This is the aspect that I was focused on when I read it.

My colleague, however, saw the truth in the “one-person activity”. It’s true that a lot of writing is done by one person. There are a lot of examples where this is not the case (co-authoring, anthologies, shared universes, ghostwriting, a lot of journalism, etc), but it’s probably safe to say that there’s less cowriting (particularly of the same block of text, however you define “block”) in code than other writing. I think where we differ is that I see this as one area where the metaphor is weak, but the rest is so strong that it outweighs that.

I also do suspect that there’s an awful lot of software that is written by a single person. This isn’t just sole-developer products, but also components of large, multi-person systems where a single person does nearly all the writing and maintenance of that one component (or module or library or micro-service or …). This isn’t good software development, but in my experience it is, unfortunately, fairly common. Often there are other people involved, but only lightly, in the same way that an editor or sub-editor might be with other writing.

In writing, a high premium is placed on originality. In software construction, trying to create truly original work is often less effective than focusing on the reuse of design ideas, code, and test cases from previous projects.

Steve McConnell, Code Complete, Chapter 2

I’m not sure that originality is that much of a premium in letter writing – but then I never did much of that. More broadly, originality has a “premium” in some forms of writing, but not in others – people are generally looking for the same broad style in a new romance novel, they find it easiest to read news articles that are structured in the same way, the latest Reacher novel is not that different from the previous one (or the last Will Robie, or the last Alex Cross, and so on)1.

There’s a concept in software development that you get three “innovation tokens”. Similarly, when writing a novel or series of novels, you can have a truly original character, or world, or plot, or theme, but you almost always need to have the majority of the work be fairly familiar to the reader (particularly a reader who is familiar with the genre). Just as with software development, you can go wild with innovation and originality, but it’s high risk and makes it harder for readers/users.

When you finish writing a letter, you stuff it into an envelope and mail it. You can’t change it anymore, and for all intents and purposes it’s complete.

Steve McConnell, Code Complete, Chapter 2

This is true of a letter, but untrue of most modern writing. Pushing the irony more, this is a quote from the second edition of Code Complete. Writing that ends up in print often is in a final form – but there’s also a lot of software out there that just runs without any updates.

To be fair to McConnell, 30 years ago there was a lot less revision of writing – no editing Wikipedia, pushing out ebook updates, rapidly revised news articles, or tweaked blog posts. On the other hand, there was also a lot of “finalised” software back then too, where you’d buy a box with a disk or cartridge and, unless you did buy an updated version, you’d have that same version forever.

But extending the metaphor of “writing” software to a plan to throw one away is poor advice for software development, where a major system already costs as much as a 10-story office building or an ocean liner.

Steve McConnell, Code Complete, Chapter 2

I’m very much in favour of a reasonable level of well-executed planning and design, but beginning a software project without being willing to throw away any dead ends would strike me as a huge red flag. Defining the role of a software architect is complicated (see also this great Charity Majors post) but I maintain that the core task of an architect is working with people to minimise the number of decisions that are regretted in the future. That might be a wonderful design where almost nothing is thrown away. It might also be setting things up so that throwing things away is inexpensive and experimentation leads to market success.

I’d argue that with the really huge projects, we have plenty of examples (Novopay, INCIS, and others just in Aotearoa New Zealand) that show that big upfront planning with a reluctance to throw anything away leads to terrible cost overruns and missed deadlines.

Building software implies various stages of planning, preparation, and execution that vary in kind and degree depending on what’s being built.

Steve McConnell, Code Complete, Chapter 2

This is exactly what writing is like. Just as almost no planning goes into a small “throw-away” software script, almost no planning goes into a social media post. When writing a novel, there’s a huge amount of planning, preparation, and execution (here’s a series going into it from one author – there are many, many others – books on how to write books are almost as common as podcasts about making podcasts!). I find it difficult to believe that this was not the case for writing Code Complete itself.

McConnell extends that argument to compare the cost of making a mistake on a small project with one on a large project (ignoring any learning that may have come from making the mistake) – this equally applies in writing. If I end up rewriting a section of a blog post, that’s fairly trivial. If I have to restructure the entire third act of a novel, that’s a bigger issue, and if I have already finished two books in a trilogy and need to make changes to them to make the final volume complete, that’s very expensive to fix.

In building a house, you won’t try to build things you can buy already built. You’ll buy a washer and dryer, dishwasher, refrigerator, and freezer. Unless you’re a mechanical wizard, you won’t consider building them yourself. You’ll also buy prefabricated cabinets, counters, windows, doors, and bathroom fixtures. If you’re building a software system, you’ll do the same thing. You’ll make extensive use of high-level language features rather than writing your own operating-system-level code.

Steve McConnell, Code Complete, Chapter 2

When writing a novel, you don’t start from first principles. You’ll probably have a three-act structure, your story probably fits – at least somewhat – into one of seven basic plots, and you’ll build on top of whatever tropes are appropriate to the world you’re writing about. You don’t need to explain enemies-to-lovers or the misdirects in a mystery story, and you can call your Hobbits “halflings” and most readers will immediately have the right characteristics in mind. Just as in McConnell’s example of luxury housing having some of these fittings custom-made, the highest quality novels often deviate from the norms in a few specific areas (an imaginative magic system, an innovative take on local history, an unusual prediction about the future, and so on).

Finally, the construction analogy provides insight into extremely large software projects. … [Builders] build in margins of safety; it’s better to pay 10 percent more for stronger material than to have a skyscraper fall over. A great deal of attention is paid to timing. When the Empire State Building was built, each delivery truck had a 15-minute margin in which to make its delivery. If a truck wasn’t in place at the right time, the whole project was delayed.

Steve McConnell, Code Complete, Chapter 2

Cost and timing are important in extremely large writing projects too. Delays in editing or proofing or illustration or printing or many other things can delay an entire book launch (the Potter books have apparently reached nearly US$8B in sales; the Empire State Building cost under US$1B to build, adjusted for inflation). When a book is very high profile, it’s worth spending extra to have additional early readers and editors, just as you build with stronger material (obviously an unsuccessful book is less significant than a collapsing building).

Overall, I do agree with some of the core takeaways of the second chapter – that there’s value in metaphors, that they are “a little sloppy”, and that you should use the best (mix of) metaphors that work in the context. I just can’t agree that construction is a better metaphor for writing code than writing (non-code) is.

This chapter might have hit the sweet spot for me – the combination of reading/writing and software development – but I’m hopeful that the upcoming ones will also be as thought-provoking and enjoyable to discuss with my colleagues. If you don’t have a work book club, I highly recommend starting one! (Non-work book clubs are great too!).

  1. To be clear: I really enjoy a good romance novel and all of the book series listed there. The familiarity has the same comfort as a good TV procedural or film sequel. ↩︎

Inconsistent public service efficiency

(Post theme: drivers license by Olivia Rodrigo)

I renewed my driver’s license last week (having turned 45 recently) and it was striking how different the experience was from the other common government-issued identification: a passport. New Zealand’s passport service, Te Mata Uruwhenua (run by Te Tari Taiwhenua, the Department of Internal Affairs), is remarkably efficient – everything from the application process through to delivery is handled extremely well. They promise reasonable processing times, and almost always better them.

Last Friday, on the other hand, despite arriving less than 15 minutes after they opened, I had a 30-minute wait for someone to spend less than five minutes reviewing a form, taking a photo, and capturing an image of a signature.

There are a lot of flaws in the system, such as:

  • Why not have the form available online? Presumably someone is going to be entering the relevant details into a digital system anyway. The passport renewal process is entirely digital.
  • What benefit is there in requiring physical possession of the expiring license? They have all this data in the system, obviously, so if they wanted to check that I looked like the 10-year-old photo, they have that already. No-one checked my signature. Having the card doesn’t prove identity – I could have picked it up off the street. I didn’t have to go anywhere or show my old passport to renew it. Sadly, an elderly lady in front of me in the queue, who had travelled almost an hour to get there, had forgotten her old license and was turned away at the counter.
  • Why take the photo there rather than just providing one? That works for passports, where the requirements are much more strict. This could easily be part of the submission process.
  • I’m not particularly convinced that the written signature provides much value, but even if it’s needed, this is done on some sort of digital system anyway, so could also easily be done as part of a submission.
  • My expiring license was invalidated (it had about 6 weeks left) and replaced with a paper temporary license that lasts for 3 weeks (which will be tight to get the new one in the post, but that’s a separate rural post issue). Why not just leave me with the old one until the new one arrives?
  • Why is someone manually checking a form anyway? The address checks were making sure they matched the ones on file (both inaccurate anyway, with the regular address wrongly including the rural delivery number, and the mailing one not matching the form NZ Post likes), which is trivially automated. The other details would similarly be simple to check by a system.

It’s strange to have one service work so well and one so poorly but easily improved. Can’t we just loan some people from Te Mata Uruwhenua to Waka Kotahi NZTA to get things improved?

Asking ChatGPT for help, take 2

(Post theme: With a Little Help From My Friends by Joe Cocker)

People (at lunch) at Kiwi PyCon were praising ChatGPT as a tool to help write code more efficiently, so I had another go.

I recently moved a Django project to use WhiteNoise, and more specifically to use manifest static files, where a content hash is added to the collected file names. This all works great, except that there are a few files that users download, and instead of getting, for example, template.xlsx, they were getting template.{md5hash}.xlsx.
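
(For context, the relevant configuration is roughly the following – assuming Django 4.2+ with its STORAGES setting (the older STATICFILES_STORAGE setting works too), and that it’s WhiteNoise’s manifest storage doing the hashing:)

# settings.py (sketch): use WhiteNoise's manifest storage for collected static files.
STORAGES = {
    "default": {"BACKEND": "django.core.files.storage.FileSystemStorage"},
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}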

I asked how to have the file downloaded with the original name, rather than the one with the version hash. In retrospect, I realise I asked the wrong question, because this is an HTML question not a Django staticfiles one, but I hadn’t thought about it enough at the time, and, anyway, surely the assistant is meant to pick up on that.

The answer told me to use the ‘as’ functionality of the static tag, but that’s completely wrong. That’s to get the URL into a variable to use in the template.

To be fair, it then did give the correct answer, that the download attribute of the a tag should be used. So I guess I’d give this a B- grade, for getting the solution but confusing it with a bunch of irrelevant info first.
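
For my own future reference, the fix ends up being a one-line change in the template – something like this, where the static file path is a placeholder:

{% load static %}
<a href="{% static 'files/template.xlsx' %}" download="template.xlsx">Download template</a>

The href still points at the hashed template.{md5hash}.xlsx that the manifest storage serves, but the download attribute tells the browser to save it with the original name.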

Python return styles

I was reading through the charmed tech ops code today and noticed a bunch of code like:

def f():
    if check:
        return val
    else:
        return None

I don’t particularly love this (although I do feel you should match the style of an existing code base and should avoid changes that don’t meet a reasonable level of value-add). I would generally leave out the else (and dedent the second return). It seemed likely that these generate the same opcodes, so this is purely about readability, but it was worth checking:

>>> def f1(a):
...   if a == 1:
...     return 'A'
...   else:
...     return None
... 
>>> def f2(a):
...   if a == 1:
...     return 'A'
...   return None
... 
>>> def f3(a):
...   return 'A' if a == 1 else None
... 
>>> def f4(a):
...   if a == 1:
...     return 'A'
... 
>>> dis.dis(f1)
  1           0 RESUME                   0
  2           2 LOAD_FAST                0 (a)
              4 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
             12 POP_JUMP_FORWARD_IF_FALSE     2 (to 18)
  3          14 LOAD_CONST               2 ('A')
             16 RETURN_VALUE
  5     >>   18 LOAD_CONST               0 (None)
             20 RETURN_VALUE
>>> dis.dis(f2)
  1           0 RESUME                   0
  2           2 LOAD_FAST                0 (a)
              4 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
             12 POP_JUMP_FORWARD_IF_FALSE     2 (to 18)
  3          14 LOAD_CONST               2 ('A')
             16 RETURN_VALUE
  4     >>   18 LOAD_CONST               0 (None)
             20 RETURN_VALUE
>>> dis.dis(f3)
  1           0 RESUME                   0
  2           2 LOAD_FAST                0 (a)
              4 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
             12 POP_JUMP_FORWARD_IF_FALSE     2 (to 18)
             14 LOAD_CONST               2 ('A')
             16 JUMP_FORWARD             1 (to 20)
        >>   18 LOAD_CONST               0 (None)
        >>   20 RETURN_VALUE
>>> dis.dis(f4)
  1           0 RESUME                   0
  2           2 LOAD_FAST                0 (a)
              4 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
             12 POP_JUMP_FORWARD_IF_FALSE     2 (to 18)
  3          14 LOAD_CONST               2 ('A')
             16 RETURN_VALUE
  2     >>   18 LOAD_CONST               0 (None)
             20 RETURN_VALUE
>>> 

Interestingly, there is one difference out of those four variants, where instead of a second RETURN_VALUE op, there’s a JUMP_FORWARD to the single RETURN_VALUE. I assume that means that it’s slower in a way that’s so imperceptible that it’s not worth caring about, but let’s check:

>>> timeit.Timer("f(random.random())", setup="import random\ndef f(x): 'a' if x > 0.5 else 'b'").timeit(10000000)
1.4031157046556473
>>> timeit.Timer("f(random.random())", setup="import random\ndef f(x):\n if x > 0.5:\n  return 'a'\n else:\n  return 'b'").timeit(10000000)
1.326032117009163

I feel this does land in the expected place, where it’s about style & readability. I suppose f1() explicitly shows that it’s making a choice between two cases, but I feel that’s countered by f2() and f3() showing that the function returns a value. In general, it’s nice to have the smallest amount of indentation, so I would not choose to use f1().

Between f2() and f3(), I would generally choose the single-line approach of f3() if the line didn’t wrap (or exceed whatever the wrapping length is for the project, e.g. 100 characters). I would not do this:

def f3_alt():
    return (
        really_really_really_really_really_really_really_long_val
        if another_very_very_very_very_long_condition
        else also_quite_long_here_and_now_we_wrap
    )

If you’re doing this, then there’s no benefit over the f2() style (and, as it turns out, there is a very tiny performance penalty).

Python has an implicit return None at the end of a function that doesn’t otherwise return. This means that for this specific case, where one of the values is None, there’s also the choice between f2() and f4(). In reality, f4() would be slightly better than it is here, in that it would probably have a return type (with |None) and a docstring that explains that None is a possible return value. However, I still (weakly) prefer f2(), where it’s called out explicitly, rather than f4(), where you need to know that this is what Python does.
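
For illustration, an f4()-style function with the implicit None made visible in the signature and docstring might look something like this (hypothetical names):

def label_for(a: int) -> str | None:
    """Return the label for ``a``, or None if there is no matching label."""
    if a == 1:
        return 'A'
    # No explicit return: Python implicitly returns None here.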

Kiwi PyCon 2023 – Waihōpai Invercargill – 15th-17th September 2023

(Post theme: ‘Not Given Lightly’ by Chris Knox)

TL;DR

Kiwi PyCon had two days of scheduled talks followed by one day of lightning talks and unconference sessions, held over a Friday plus weekend in mid-September in Waihōpai Invercargill. I’ve attended Kiwi PyCon once before, but ten years ago.

The talks were almost all at a beginner level, with a few reaching intermediate. I would have preferred a mix with at least a couple at a more expert level. Some were well done, but even for novice-level talks, not all were of great quality.

I thought these talks were good (details below):

  • Moving from unittest to pytest
  • A practical guide to using OpenTelemetry in Python
  • Using React as a template language in Flask
  • DuckDB: Supercharging Your Data Crunching
  • Sometimes this is a pipe, or close enough
  • You’re Doing Great! The underappreciated art of appreciation
  • Automating Victory: Beating browser games with accessible Python
  • Building Kubernetes Operators with Python and Kopf
  • Several lightning talks

The venue was good, and the conference was generally well organised (despite several hurdles), although there were significant microphone issues. There was minimal organisation of any social activity (just a quiz night), although obviously the usual meal-break socialising took place. There were strong anti-COVID measures in place, and a lot of work on diversity & inclusion (but not great gender/ethnic diversity of speakers). They have a good Code of Conduct, but ruined that by ignoring it.

I’m glad I attended for non-conference reasons (the surrounding travel and an induction session I tacked on), and to re-evaluate my opinion of the conference. However, I can’t recommend attending it, unless you’re very early in your Python journey, or are attending strictly for the socialising/networking aspect.

Note that for the last 14 years, Kiwi PyCon has moved around AoNZ (normally alternating between the north and south) but from 2024 will be permanently held in Wellington.

Overall comments

This was the second time I’ve attended Kiwi PyCon (the first was 10 years ago in Auckland), and the first time I’ve travelled for one.

I re-read my notes from attending in 2013 and my recollections of being underwhelmed seem accurate. A lot of this is that conferences in general aren’t that appealing to me (much of the value is in the social aspect, and that is extremely unappealing for me, particularly attending solo), rather than anything specific to Kiwi PyCon. This is one of the reasons that I haven’t attended one since, but it’s also partly that it’s typically on dates that are personally difficult (close to my son’s birthday, right at the end of the sports season, not coinciding with school holidays or any other natural holiday point). Working more on the product side in the last few years also made it a more difficult sell, although I’m pretty sure I could have managed (or just taken personal time) if I had felt it would be worth it.

(I see that all the links in that post to the schedule are, sadly, now invalid, although the wayback machine appears to have them.)

This year, when I initially decided to attend, I wasn’t sure what I would be doing in September – I knew I would not be with N-able any more, but I didn’t know at that point that I’d have started with Canonical. There was some risk, since asking for time off just after starting a job isn’t a straightforward proposition, but the ticket was inexpensive and I held off booking accommodation and travel until after I’d accepted the offer from Canonical and discussed attending with the hiring lead and hiring manager in advance of actually starting.

(Canonical has a really great policy around travel to relevant conferences, so even though I’m only just finishing up my second week, I didn’t need to use up my brand-new leave allotment to attend, and probably could have even claimed some of the cost (I felt that wasn’t necessary, since I made the decision to attend prior to even applying at Canonical)).

Last time, I wrote up my experience in prose; this time it’s more bullet-pointed – I’m not sure what that says about how I have changed over the last decade 😀

The conference is more polished than (what I recall of) the last time, which you’d expect given this is the 12th iteration (Auckland was maybe the 4th or 5th?), although it was still pretty good in 2013. In my post at the time, I wrote a lot about how inaccurate I thought the “intended audience” labels were. Interestingly, those do not exist now (I’m not sure for how long). I wonder if that was because of the accuracy issues, or for other reasons. I do think a label (if accurate against some fairly agreed scale) does provide value.

(Amusingly, in 2013 I was keen on being able to attend virtually. That’s very dated now, given both the rise of streaming talks and making videos available, and the last few COVID years).

(Also amusingly: I applied for a position with Kraken earlier in the year, and literally never heard anything back, not even a “thanks but you’re not what we’re after”. So while they had a lot of cute swag on offer, I recommend against applying to work for them).

Friday was more lightly attended than Saturday. I assume this must be people that can’t/don’t want to arrange time off from a Monday-Friday job, but can/will do a weekend. It’s interesting because I would much rather have the entire conference during the Monday-Friday period and ‘save’ my weekend (even if I was using a general leave allowance rather than conference leave). Sunday morning was the lightest of all (at least the half I was there for) – probably to be expected given that it was the unstructured part of the conference.

Good

Less Good

  • Waihōpai Invercargill seems like a nice place, but it’s not just far away from the rest of the world, it’s far from pretty much all of AoNZ (interestingly this is changing from 2024 – more on that below) – bad weather can happen anywhere, but this was extra noticeable at the end of the conference when many people got stuck after flights were cancelled and diverted
  • Single track: some talks are very basic, and it’s awkward to skip
  • Not a very diverse group of speakers (in terms of gender & ethnicity), with quite a lot from sponsoring organisations
  • Whoever was doing the AV was very impatient with the speakers and spoke condescendingly and briskly to them – some people are giving a talk for the first time or inexperienced with mics, and it doesn’t take much effort to be patient and a bit kind
  • Poor mic’ing
  • The Saturday start almost clashed with the second pool game for the All Blacks in the men’s Rugby World Cup 2023 (I left my room less than a minute after the final whistle, walked through to the main room, and arrived 2 minutes before the opening). Maybe the expectation is that the percentage of people wanting to watch a rugby game is lower at a Python conference than for the wider population, and that this wasn’t likely to be an amazing match, given the history, but this is AoNZ, and it’s the All Blacks, and it’s a World Cup game. There was a Slack ‘#rug-py-con’ channel but there were only two of us there – some sort of communal viewing and starting 30 minutes later would have been nice (there was also a critical Warriors game on the Saturday, with kick-off 30 minutes after the end of the day’s sessions, so I suppose shifting to be a little later would have hit that)

Bad

  • Good CoC, social/employment stickers, etc. But the CoC was ignored!

The CoC is a bit messy (it’s confused about what colour t-shirts people will be wearing, and I don’t believe there was a phone number offered as it says) but generally ok. However, I reached out to the CoC team via the provided email, and never received any response at all. It was a minor issue (although from an influential person) but if they ignore reports that are minor, I have to assume they ignore everything. A “thanks for your report – we don’t agree this is a CoC breach, but please discuss more if you wish” type response would have been sufficient, but complete silence is unacceptable.

There are many possible reasons:

  • Deliberately ignoring messages – this seems very unlikely
  • Ignoring this specific message because it didn’t seem important – this seems possible, and if so it’s not ok
  • Ignoring this specific message because of who the subject was – this seems possible, and if so it’s not ok
  • Getting distracted with the conference and forgetting about the safety channels – this seems most likely to me, and if so it’s maybe the least bad, but still not ok

The best action I could take would be to volunteer to help in 2024 and ensure that it was handled better. If I was considering attending then I would do this. However, since I’m not, I have to recommend that unless you are willing to do this yourself, or unless the organisers publicly address the shortcomings and how they will fix them, you don’t attend.

Value

(For reference, attending Kiwi PyCon 2013 cost me around $400 – roughly $509 in 2023 dollars – although I had no accommodation costs since I lived locally.)

My travel was a bit indirect – I took the opportunity to spend 3 days in Christchurch with my new colleague & manager Ben on my way, so flew to Christchurch, then flew to Invercargill the day before the conference (those 3 days were really great). Rather than immediately flying home, I took up a kind offer from Bernie to drive me to Queenstown (checking off another one of the scenic AoNZ drives I had on my list) and stayed a night there before flying home from Queenstown. Canonical kindly paid for my flight from Christchurch to Invercargill (approximately the delta of going directly) as well as some of my accommodation cost in Christchurch.

  • Flights: $650 (I actually have a lot of AirNZ credit to consume, so in some ways this was ‘free’, and also meant I didn’t take much care in selecting cheaper flights)
  • Accommodation: $510 (I paid for a nice room; not really necessary given how much it was used, but since I was paying rather than a company I figured I might as well)
  • Food: $31.50 (dinner Thursday), $24.50 (breakfast Friday), $31 (dinner Friday), $24.50 (breakfast Saturday), $33.20 (dinner Saturday), $24.50 (breakfast Sunday). Morning/afternoon tea and lunch were included with the conference ticket, although I generally skipped morning & afternoon tea (if I was being cost-conscious, I would have skipped buying breakfast and eaten the included food instead). I ate from the restaurant at the accommodation (Thursday it was too late in the evening to be trying to find something else, and Friday I was too tired to bother and the weather wasn’t great) – I’m sure I could have found much better value meals elsewhere
  • Airport travel: none in Invercargill (courtesy shuttle when arriving, a lift from a friend when departing); I drove to/from the Auckland Airport, roughly 80km/75min (it would take 30 minutes to drive to the closest PT, and I would have multiple transfers, so it’s not particularly viable, unfortunately) – at IRD rates this would be roughly $150; parking at Auckland Airport $107 (this is cheaper than having someone drop me off and pick me up, as well as being considerably more convenient); I was given a lift to Queenstown Airport
  • Ticket: $296 (early pricing, including a t-shirt – late was $330 plus ~$70 for a shirt)

In total, approximately: $1,890 (note that this was entirely a personal expense, not paid by Canonical or anyone else)

Conference swag: t-shirt (included in early ticket purchase price); socks & a mini octopus plush, and a large octopus plush (from Octopus Energy); a “gentle steam eye mask” (Japan has such weird cool things that we don’t have!), foldable hand fan, ‘erasable’ pen, and screen & keyboard cleaning tool from Hennge; a small booklet about Te Reo & one promoting Waihōpai Invercargill & Murihiku Southland; post-its and note paper from Hype & Dexter; (cheap) pens from Google; assorted stickers; and a pack of COVID RATs & a mask (not really swag – intended to replace the ones used getting to the conference, although I ended up 5 RATs ahead and 2 masks down).

Venue

As a hotel, the venue was nice – perhaps a little dated, and not amazing, but certainly comfortable. My room had two nicely sized desks, a comfortable bed, and plenty of space. The conference room was large and looked nice, and was generally well set up. The food (both conference and food service) was average at best.

The mic’ing setup was very problematic (surprisingly poor for 2023). There were considerable issues with the signal dropping out (to a receiver that was a few metres away, so that really shouldn’t have happened), almost always a light echo, and several talks where it was even worse. I’m not sure how this will turn out in the videos.

Session Summaries

Opening 0907-0920

  • Started 7 minutes late 😦
  • Paired experienced (Tom) and inexperienced (Jordan?) MCs (and also throughout the conference) – good sustainability/training approach
  • Conference Director (Carlos) got sick 2 days before the event, so it was taken over by a new volunteer (someone volunteering at a conference for the first time) at that point!

Keynote 1: Robin McNeill, CEO Space Operations New Zealand 0920-1020

  • spaceops is one of the conference sponsors (presumably that came with this speaking opportunity)
  • Nice tying of space to the (Earth) environment (and environmentalism, climate, etc)
  • Bit of general space history, NZ space history (William Pickering onwards through to Rocket Lab etc)
  • Covered what spaceops do (owned by local govt, indirectly), and so forth, tangentially related to dev (they have devs, but only offhand mentions)
  • Good speaker
  • Several oblique references to Apple 🙂
  • Some talk about the Warkworth satellite station (very close to where I live)
  • Very good introduction to space (particularly radio & space) and particularly where AoNZ and especially Southland fit into that picture
  • Nice ‘soft’ introduction to the conference with a talk generally interesting for people who are into science, without delving deep into technical details

Interesting note: Member of the New Zealand Order of Merit, for conservation work

Python Code Organisation Without Tears, Grant Paton-Simpson (2degrees) 1047-1112

  • Essentially an introduction to an opinionated way to handle (project-local) module/package imports
  • Very introductory (for example, I would expect everyone @ Canonical would already know this and/or have their own opinions on it)
  • Argues against using relative imports (this seems counter to my experience, where absolute imports run into path collisions)
  • Core is an argument that everything should be “import core.module.submodule”, where ‘core’ is the top-level code folder (see the sketch after this list)
  • Grant feels that the issue is that existing documentation on this is generally outdated, unclear, or delves deeply into internals instead of giving a “how to”
  • Good speaker
  • Unfair comparison with his talk in 2013: I don’t remember the 2013 one in great detail, and my notes on it are minimal, but I believe I liked this one more
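
A minimal reconstruction of the layout being advocated, as I understood it (the names are my own, not from the talk):

# Project layout with a single top-level code folder, here hypothetically called `core`:
#
#   core/
#       __init__.py
#       module/
#           __init__.py
#           submodule.py
#
# Imports are then always absolute and rooted at `core`, from anywhere in the project:
import core.module.submodule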

Interesting note: prior to his Python work, worked in social sciences on interesting topics like alcohol and gambling issues. All the papers seem locked away (as academic papers often are, sadly).

Moving from unittest to pytest, Tim Penhey (Stacklet) 1021-1200

  • Had the same realisation as I did (in the shuttle ride to the accomodation) that some of the younger people may have been born after we started using Python!
  • Talked about initial dislike of pytest because of the implicit magic (conftest.py, and fixture naming matching) (agreed!)
  • Not entirely convincing arguments about why he changed his mind – basically this came down to “the good outweighed the bad”, which I suppose is also where I have landed, but it seems like you could have most of the good without having to have all the magic. Maybe there’s some other testing framework that does that, although it seems like everyone is standardising on pytest (for these use cases, rather than others like Hypothesis, Playwright or Robot)
  • Nice examples from real-world (slightly simplified) code unittest->pytest
  • Several examples of fixtures from real-world code that they find useful (lots of composition – a toy illustration is sketched after this list)
  • Medium level, expects understanding of non-trivial code, but would all be known to anyone that knows pytest moderately well (aimed more at people familiar with unittest and not familiar with pytest)
  • Very good speaker
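
A toy illustration of the fixture-composition style (my own sketch, not code from the talk):

import pytest


@pytest.fixture
def config():
    return {"greeting": "kia ora"}


@pytest.fixture
def greeter(config):  # the "magic": pytest injects `config` by matching the parameter name
    return lambda name: f"{config['greeting']}, {name}!"


def test_greeter(greeter):  # tests request fixtures the same way
    assert greeter("Tim") == "kia ora, Tim!"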

Interesting note: previously worked for Canonical (on Launchpad)

Automate business processes for all your companies with Odoo, Danny Adair & Graeme Gellatly (Roofing Industries) 1333-1417

  • Automating business process (from a roofing material company, not a software company)
  • Graeme is from the business side, Danny is the tech side (he was core to the early Kiwi PyCons, started NZPUG, etc)
  • Considerable amount of LoB automation (seems like a huge amount for a roofing supplies company, but perhaps that’s just ignorance on my side). For example, email comes from supplier with an invoice attached, automatically extract that, match to a purchase order, raise issues if there are differences, otherwise automatically approve and pay, with no human involved
  • A “this is a tool we use and really love and this is why” talk
  • General introduction to Odoo and how it’s used
  • Deep dive into the way that you customise Odoo, with a system that builds classes (and figures out an MRO) on the fly. Interesting, although complex (in implementation, but simple to actually use)
  • Interesting talk, although not sure that it was to the right audience?
  • Good speakers, handled the dual speaking well
  • Some mic issues 😦

Interesting note (Danny): Event Director for Kiwi PyCon 2013, when I previously attended

Interesting note (Graeme): (Previous?) Board member at Waitakere Primary School

A practical guide to using OpenTelemetry in Python, Tom Eastman (Kraken) 1423-1455

  • Very good introduction to OpenTelemetry (probably too basic if you already know OpenTelemetry, but as someone who has some experience with observability and not much with OpenTelemetry it was good, particularly since OpenTelemetry has improved a lot recently and is a fairly young project)
  • A “this is something I use and really like and this is why” talk
  • Very good speaker
  • Inferior mic’ing
  • Unfair comparison with his talk in 2013: both good quality, but I liked the fun aspect of the 2013 one more than the practical aspect of the 2023 one

Interesting note: source of the fairly viral quote about the internet being 5 sites filled with screenshots of text from each other (CW: Twitter/X link)

Using React as a template language in Flask, Paul Hallett (Sharesies) 1457-1525

  • Replacing server-side HTML with React, but page-by-page rather than as a SPA. In particular, keeping the routing done with Flask. At the same time, wanting to have strong types that are shared between the Python and Typescript code. Created tooling to automate generation of Typescript that had type definitions based on the type definitions in Python.
  • Then expanded this to also generate Typescript that provides types (and generally code-completion in an IDE) for API use, but without using (e.g.) OpenAPI since everything is in one local space.
  • It seems like using something like Vue would have been a simpler way to solve the “TypeScript/JS Framework but page by page” problem, although the coupled typing in the API would still be an improvement over that.
  • Explained the Sharesies use, but also has a similar OSS implementation
  • Interesting, assumed an ability to keep up with examples without much context, which generally worked (started out with some overly basic explanations, like “this is how decorators work”, but jumped quickly past that)
  • Very good speaker

Interesting note: has a (very new) blog reviewing (HTTP) APIs

DuckDB: Supercharging Your Data Crunching, Richard Wesley (DuckDB) 1550-1624

  • Significant mic issues 😦
  • General introduction to DuckDB and where it fits in the DB space, and what the advantages it offers are
  • Good level (would be known to people that use DuckDB, but e.g. if you use something like Clickhouse and don’t know DuckDB this is a good intro)

Interesting note: worked for Tableau for a long time

Sometimes this is a pipe, or close enough, David Hood 1628-1657

  • Non-technical talk
  • Aimed at where he (a data journalist) sees developers needing help when moving into working with data
  • Important to always consider and document the context and the bias, how valuable the data is depends on what you are wanting to do with it
  • Can do much with very simple analysis – complicated models are really needed more when figuring out ranges
  • Very interesting, even though it seems like it maybe would have been better suited to a different conference
  • Talked about examples I have seen previously on Mastodon (e.g. time of day/week driving injuries/fatalities, changes in COVID incidence after changes in legal requirements)
  • Important to understand what you are doing with the data and the context
  • Good speaker

Interesting note: yes, it’s the David Hood

Using Python to build gateways to space, Chris Bull (Space Operations New Zealand) 1703-1739

  • Book-ending the first day with spaceops
  • MIMO scheduling (multiple ground stations, multiple spacecraft, need to maximise the communication, choosing which station to talk to which craft)
  • He mentioned that he put together the talk the day before, and it was a little apparent. He spoke well and knew his material, but it didn’t have a lot of structure – some of it was praising GNU Radio, some was about what spaceops does, some was (overly trivial) why Python is good/bad, some was about how they’re using Python – it was like 2 good and one less good lightning talks merged into one
  • Good speaker

Interesting note: interned in Whangārei – maybe from there?

Saturday Conference Opening 0906-0915

  • No notes

Keynote 2: You’re Doing Great! The underappreciated art of appreciation. Loren Crary, Director of Resource Development of the Python Software Foundation 0917-1007

  • Very good speaker
  • Brief intro to the PSF
  • Brief speaker bio
  • Why expressing gratitude (written, spoken) matters, tips on how to get better and more regular at doing this
  • Really good (non-technical) talk
  • This is something I am also super passionate about (particularly in work, where I think it’s forgotten or overlooked more than elsewhere, and where ‘forced’ appreciation is common and not useful). Hopefully this changes how people behave

Interesting note: was asked to speak at Kiwi PyCon 2023 in late 2022, just days after starting work at the PSF!

Python: from the perspective of an applied mathematician, Indranil Ghosh (Massey University) 1041-1107

  • A speaker in shorts & a t-shirt, many thumbs up!
  • Generally covering his journey to Python, and what he uses it for as a PhD mathematics student
  • Simple to understand although some mentions of mathematical concepts

Interesting note: a poet!

Refactoring for fun and profit, Evan Kohilas 1117-1158

  • Polished talk (given previously at PyConAU I think)
  • Good speaker
  • Advocating using formatters (black), linters (ruff), IDEs (PyCharm), annotation (monkeytype, mypy), enums/databases, function signatures, documentation, DRY/SOLID, tests, profiling (scalene)
  • Very introductory (but very good as an introduction)
  • Would likely have been better as a 15 minute shorter talk (or maybe just not as the slot before lunch)

Interesting note: interesting talk from 2020 on hacking playable Ads

Panel: A conversation for inclusion in tech, Samantha Nickson, Loren Crary, Kelsey Vavasour, Daisy Brenecki, Tom Eastman, and Christopher Neugebauer (replacing Carlos Cordero) 1220-1302

  • (I’m cis, straight, white, middle-aged, male, and work in a well-paid industry (the privilege jackpot), so that and my lack of experience on the minority side may bias this)
  • Held during lunch, quite lightly attended
  • No intro of the panelists 😦
  • Weird having the moderator stand while everyone else sat
  • Moderator too quiet
  • Very low energy
  • Loren was great, although didn’t say a lot; Kelsey was the strongest panelist; Daisy had notes, which seemed odd for a panel
  • No opportunity for questions from audience, which seems odd for a panel
  • Speaker diversity at the conference was noticeably lacking PoC, was light on women (every single session on Friday was given by someone that presented as male and ~white (I realise that there are issues with assuming gender & ethnicity) – Saturday was better, but not hugely). It would have been interesting to have introspection on that

Delivering Customer Projects in a Rapidly Evolving Python Landscape, Romi Dexter, Benoit Chabord 1331-1412

  • Talk from Integration Glue and Hype & Dexter (sponsors), these are the founder/co-founders of the two businesses (which are connected in some way I didn’t fully get)
  • Practical suggestions particularly around improving devex
  • Quite a practical talk for a sponsor talk
  • Talked about the specific tools they use (Ruff, Pyright, Black, pytest, coverage.py, Github Actions, Sentry, GCP, Clickup, Google Docs, Cookiecutter)
  • An introductory level talk
  • Good speakers
  • A little on the long side

Interesting note (Romi): was previously CIO for GrabOne

Interesting note (Benoit): first AoNZ role was at Yellow (Pages)

Building Kubernetes Operators with Python and Kopf, Sam Bishop (Runaway) 1422-1450

  • Much more interesting level (it felt intermediate, although if you know k8s then probably more novice) and nice speed and consistency working through it
  • Nice slides
  • Exactly what the topic says (not that much else to say: exactly what was on the label)
  • Good speaker
  • Easily the best talk
  • Recommended packages: pendulum for datetimes (I believe I’ve seen this recommended before), deltaman for parsing human-readable time intervals, dynaconf for configuration management
  • Recommended tool: Lens

Interesting note: had previously seen his very good talk on timekeeping

Robust Data Workflows Made Easy: Classes with Pandera and Pydantic, Nathan McDougall 1500-1522

  • Essentially exactly what the topic says (not much else to say: exactly what was on the label)
  • Good speaker
  • Entry-level to Pandera/Pydantic, but very good for that

Interesting note: interesting project for citation checking

Building an OAuth2 Authorisation Server with Flask and Authlib, Sam Watson (Runaway Play) 1548-1611

  • Walkthrough as per topic (not much else to say: exactly what was on the label)
  • Intro level, technical, assumes can keep up with code

Interesting note: the OAuth2 auth server described went live a ~week before the conference (and is working well so far)

Automating Victory: Beating browser games with accessible Python, Jon Gaul (Hennge) 1620-1651

  • Very good speaker
  • Project management, useful I/O libraries
  • Reminded me of Tom’s talk from 2013 – fun but practical
  • Technical issues with live demo (screenshoting/GUI automation with projector screen secondary) but had backup videos
  • Close second for best talk of the conference
  • Basically: scripting an app by taking screenshots, using (basic) computer vision to recognise the board, some (simple) rules to decide on moves, and then GUI automation to make the move, aiming to beat the speed-run times (the game, Mamono Sweeper, is like Minesweeper but with levels of dangerous squares, so much more maths) – roughly the loop sketched below
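
(My rough reconstruction of the loop described – not the speaker’s code; the library choice and the helper functions passed in are guesses/placeholders:)

import pyautogui  # one example of the kind of accessible I/O library highlighted


def play(recognise_board, choose_move, solved):
    """Hypothetical sketch: screenshot -> recognise -> decide -> click, until solved."""
    while not solved():
        screen = pyautogui.screenshot()   # grab the browser window
        board = recognise_board(screen)   # simple computer vision: pixels to cells
        x, y = choose_move(board)         # rule-based move selection
        pyautogui.click(x, y)             # GUI automation plays the move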

Interesting note: wrote a kid’s book (my guess is that it’s this one)

The Complexity of Simplicity, Christopher Neugebauer 1655-1725

  • A lot of words to not really say anything
  • Basically delved into “simple is better than complex”, pointing out that often “simple” is achieved by moving complexity elsewhere, and that it’s best when complexity is managed by those who are most familiar with the domain

Interesting note: obnoxious to audience members but received an award from NZPUG; pfft.

Quiz night

  • Python trivia knowledge (teams of 8)
  • I skipped this. I needed a break by this point, and I didn’t believe the extra COVID risk was worth it for taking part in a quiz. I also lacked the energy to find 7 other people looking for an extra team member (it seems like the conference could have facilitated something here, even if it was just a whiteboard where you could post teams looking for people, or a Slack channel – except that most of the attendees seemed missing from Slack)

(Interestingly, this was the only organised social activity – e.g. there was no conference dinner).

Sunday Opening and Awards 0906-0927

  • Typical conference thanks etc – done early to accommodate people needing to get a limited number of flights out of Waihōpai Invercargill
  • General AoNZ Python history & update
  • Dropping the practice of holding the conference in the organiser’s location (rotating North & South Island); it will be held in Wellington every year instead

Changing where the conference is located going forward is an interesting choice. I’m sure there must have been much debate about this (although I’m a NZPUG member and don’t recall seeing it). I can see both advantages and disadvantages:

Advantages:

  • I assume some logistics are easier, particularly if you tend to stick with a single venue and so forth
  • The ‘average travel’ probably decreases, given that Wellington is located in the (rough) centre of the country, although this sort of calculation is tricky because it really depends on how the origins are distributed
  • Travel is likely much simpler for international travellers, since you can potentially arrive in AoNZ in Wellington, or at worst would have one connecting flight from Auckland (and there must be many Auckland/Wellington flights every day), rather than travelling to more out-of-the-way locations like Waitangi or Invercargill

Disadvantages:

  • A drawcard for me (and I assume for others as well) is seeing other places. This was certainly the case this year. I’ve been to Wellington many times (likely most people have) so there’s no appeal there, and even if you haven’t been, you will have after attending once.
  • Either the organiser needs to be located in Wellington, or have members of their team in Wellington, or a lot of the organising has to be done remotely – this is probably not too difficult these days, and if you’re re-using pieces (e.g. a venue) then it matters less
  • There are no cheap travel/accommodation years for anyone outside of Wellington (although every year is a cheap travel year for those in Wellington). Not having to pay for accommodation or flights was definitely a drawcard the first time I attended, and I imagine it still would be if I was early in my career – maybe a lot of this is handled by companies, but it still hits students for sure
  • Development/boosting the local Python communities (outside of Wellington) is lost

It’ll be interesting to see how this goes.

Lightning Talks, Christopher Neugebauer 0933-1055

I enjoyed many of these, although a lot were good as lightning talks rather than something where I’d want to see a 30-45 minute version.

  1. Jonas, Promo for PyCon APAC 2023 and other PyCons in the APAC region
  2. David, Live Updating the COVID Info site
  3. Grant, Opinion that Python (and Python-like, e.g. Mojo) will become the norm for development and other languages all exceptions (by 2045)
  4. Sarah Greeff, Issue of low uptake of programming by females in high school education
  5. Sam, “Loss & Grief” (the Tamagotchi Effect, need to be careful about downstream consequences of development)
  6. Simon “DevOps for Highly Regulated Environments” – comes up against arguments that devops practices conflict with security regulations, but argues that it’s entirely possible
  7. Kristina Photos of the conference – Kiwi PyCon is about people, and not only developers, basically thanking everyone through photos (started with te reo & a mihi :tada:)
  8. Steve Baker Wavetable Synthesis, live demo playing an instrument (electric ukulele) (music and live demos always very popular)
  9. Kelsey Vavasour How to bring more people into programming (started with a te reo greeting :tada:)
  10. Kesara PyPI Supply Chain Attacks (brief history of improvements that PyPI has made recently, and where there is still work to be done)
  11. Warwick ?Walter? Basic introduction into binary/logic gates for addition (half/full adders), live demo of a home-built 1-bit adder with transistors (again live demo with hardware always popular)
  12. Chelsea Finnie Learn A Sea Shanty (audience participation, learning a sea shanty), mostly just an excuse to get the room to sing a song together (audience participation and music popular with a subset)
  13. Hugh Davenport Showing a toy OS he has written (in x86 assembly), how he debugs issues with it using Python, live demo (one-handed), includes extremely minimal Python in the OS
  14. Vicki McKay What to do when you don’t have a plan (decide to do something, follow instructions, listen and learn, follow the most passionate person), plan things, unclear on the message of the talk (maybe it was that people should participate in groups?)
  15. Joelle Maslak How Networks Surprise Developers – works for Netflix, examples of how physical networks aren’t always as expected, how distance matters to speed
  16. Daisy Talked far too fast (not the point of lightning talks), history of supermarket development in Australia and AoNZ (!)

Unconference

During the first two days of the conference attendees could suggest topics for ‘unconference’ sessions, and vote on suggested topics. My ride for Queenstown was leaving just before this, and none of the topics were ones I was particularly wanting to take part in, so I skipped this.

I think the rise of social media over the last few years has significantly decreased my interest in unconferences. It (possibly unfairly) feels like this would just be 30 minutes of ‘arguing with someone on the internet’, but in person. If there was a practical point, that would be different, but if it’s just a discussion, then I don’t feel like it’s going to give me value, and it will just get me annoyed. Again, this could definitely be unfair, particularly since I didn’t try these.

The topics were:

  • Can the Python Community Support Python Education in NZ? (answer: yes – and for the details this really relies on having people from education here, and I don’t think there were enough)
  • How Do We Improve the Python Packaging Experience, Especially for Beginners? (this seems like a talk for a bigger PyCon or elsewhere)
  • What are the Most Fun Things You’ve Done with Python (seems like this should just be lightning talks)
  • What Should Change in the Python Language Over the Next 10 Years / What Are Python’s Biggest Regrets or Mistakes? (this seems like it would drive me crazy)
  • Southern Tech: Is There Interest in Setting Up A Tech Advocacy Group Across Invercargill/Dunedin/Lakes District (probably good, but entirely out of my scope)
  • Is ChatGPT/Copilot etc Ruining Programming (this would also drive me crazy)
  • What Does Best Practice Python Data Science Look Like in 2023? / Do Notebooks Rot the Brain? (just not interesting for me personally)
  • What Can Python Learn from Other Languages? (another drive me crazy one)
  • Linters: Who Should Use Them? Which one? (Everyone, whichever one suits you/your team)
  • GNU Radio Live Demo (probably better during lunchtime)

Conference Closing

I obviously skipped this as well.

Closing Thoughts

I had three reasons for attending Kiwi PyCon this year:

  • To re-evaluate my now 10-year-old conclusion that attending Kiwi PyCon was not high value for me
  • To continue exploring areas of AoNZ that I haven’t travelled to
  • Originally, I was potentially not going to have a full-time development job at this time, and thought it could possibly have some job-search value; instead the value came from combining the trip with working in Christchurch for a few days

I did get to see more of Christchurch, visited Invercargill for the first time, and did the Invercargill->Queenstown road trip that I was keen on, so from that perspective it was a win. The tacked-on induction visit in advance was really great (and also very valuable) and I doubt I would have done that without this additional reason (Christchurch isn’t that far from Auckland, but it’s far enough, compared to Christchurch and Invercargill when already travelling), particularly since Ben & I would be meeting in Riga in a few more weeks anyway. It’s interesting that most of the positives from my trip are in this area, which vanishes once the conference becomes anchored in Wellington.

I did get a chance to re-evaluate attending Kiwi PyCon, and that was valuable. My conclusion ends up the same: I don’t get enough value out of it to attend. This is even more the case than in 2013, since I could watch any of the talks when the videos are available (fairly soon, I expect), so it’s really the experience and the networking that’s providing value. There were plenty of interesting and smart people, and I had some interesting conversations, but nothing that was so amazing (or so unable to be replicated online) that I would pay >$1000 and consume 3 days for it.

If the conference had more of the really great talks, and especially more aimed at people who have been using Python for 10+ years, then that might change my opinion (this is something I thought would have changed since 2013), although there’s still the “videos are all online” counter. I would like a mix of something like 10% beginner, 80% intermediate, 10% expert; roughly 20% ‘fun and interesting’ rather than entirely practical; and roughly 30% soft, 70% technical talks. Obviously, other people will have other tastes.

If you really enjoy the social & networking aspects (I don’t), are looking specifically to be hired by one of the sponsors, are very early in your Python journey, or want somewhere to practice a talk you’re going to give elsewhere, then it could be worth it for you. However, I have to recommend that everyone avoid attending Kiwi PyCon because of the Code of Conduct issue. That’s a complete blocker to attending, and should be for anyone in 2023 and beyond (unless it does get publicly addressed).

Thanks to Ben for reviewing a draft of this post.

Top 5 Books That Should Be Movies

(Post theme: Storybook Love by Mark Knopfler and Willy DeVille)

More Filmspotting catch-up. Hard to resist a Top 5 list that involves books, obviously!

I’m using roughly the same criteria as Josh & Adam (and Kristen) – in particular: anything that already has a film (even if it’s terrible) is excluded, and it has to be a full-length book. The latter is particularly challenging: in my opinion, short stories & novellas make the best page-to-film adaptations. Full-length books that would translate well to the screen are almost always going to be better as TV series. I’ve also avoided cheating this by suggesting a trilogy or series of films, although some of these could obviously have sequels.

There’s quite a bit of recency bias here. I’ve read a lot of books, but find it difficult to think back over which ones would work particularly well as films. I skimmed over my Goodreads list, but only recent reads have (my) ratings, and it’s tricky relying on average ratings – plus until recently I didn’t add books that I read but didn’t own (library, friends, family, etc).

5. Pastwatch: The Redemption of Christopher Columbus by Orson Scott Card. I’m generally wary of time travel stories (but am a sucker for time loops) although there are some great film examples. Ender’s Game was a terrible adaptation, so there’s a lot of risk here. (I’m aware of Card’s politics and religion, and my disagreement with most of it doesn’t change my enjoyment of his writing). There are some elements to the story that would play very well today: particularly, the environmental aspect, and the reality of Columbus’s impact on the people already living in the Americas. However, the titular redemption might be an issue. Gina Prince-Bythewood to direct (or maybe Roseanne Liang?), introducing new actors for the main cast, with either Justin Baldoni or Chris Evans to star as Columbus.

4. The Lies of Locke Lamora by Scott Lynch. This would need a careful screenwriter, to emphasise the heist/con story and ensure the fantasy element is almost incidental (until it’s critical to the story). It would have to be a long film, but you could probably sacrifice some parts of the story that lead towards the sequels without losing what makes it a great story. Apparently Warner Bros. were working on an adaptation, and after that fell through someone else was, but as far as I know there is nothing announced, so it’s up for grabs in this list. Ajay Friese as Locke, Ian Samuels to direct.

3. The 7½ Deaths of Evelyn Hardcastle by Stuart Turton (first published as The 7 Deaths of Evelyn Hardcastle). Apparently Netflix were going to make a series based on this, but gave up, so that frees it up for the list. I mentioned my love of time loops already, and this would be a great addition, mixed in with the Christie-esque aspects. I didn’t love the ending of the book, so I feel this could be one of the rare examples of a film that’s better than the source material. Maybe Mark Ruffalo as the protagonist, but I don’t have a director in mind.

2. The Loneliest Girl in the Universe by Lauren James. I love the genre twist in the middle of this story, and it would translate well to the screen, although you’d need to ensure that marketing (especially trailers, which have become awful in recent years) didn’t ruin it. There’s a lot of written communication in the story, and not seeing one of the characters is important, but a good screenwriter could solve that. Lauren James fancast this herself, so it’s hard to argue for anyone else to star. To direct: Gary Ross (bringing elements from Pleasantville, The Hunger Games, and Ocean’s 8).

1. Beak of the Moon by Philip Temple. It’s one of my favourite stories and definitely my favourite NZ novel. It would make an amazing animated film, made entirely in Aotearoa with local talent. There are two huge challenges: firstly, it’s a long book – but I think a good screenwriter (keep Walsh and Jackson as far from this as possible!) could solve that and the imagery would do a lot of the work. More challenging is that all the main characters are male, and this is central to the story (and, in my limited understanding, to kea behaviour). I think you’d want relatively unknown actors to lead (Māori, in their 20s), although I could see cameos by Rhys Darby, Bret McKenzie and Jemaine Clement as the kākā, Taika Waititi as the kākāpō, Sam Neill as the albatross, and maybe Cliff Curtis as Glintamber. I have no idea who could direct – it should be someone local, and someone who can do animation well, which means someone new.

Honourable mentions (in no particular order):

  • The Crystal Cave by Mary Stewart. I’m not sure the world really needs another Arthurian film, even if it is based on one of the best adaptations. It’s also probably too long to make a good film and The Hollow Hills would really need to be made as well. I’d still love to see it done well, and I think we’re at a point where there are enough people who can do historical fantasy on screen very well. This did have a TV series adaptation, which probably excludes it (I haven’t seen that).
  • Daughter of the Empire by Janny Wurts and Raymond E. Feist. Easily the best trilogy in the Riftwar Cycle, and Mara’s story, particularly in this first book, would easily be as compelling as Katniss, Tris, and Teresa were in the 2010s – and we’re due for a resurgence in great YA films (with less dystopia this time). However, I’m not sure you could do justice in a film, and it might have to be a TV series.
  • Project Hail Mary by Andy Weir. Better than The Martian, which was a decent film. However, this is apparently going into production in 2024.
  • Gideon the Ninth, by Tamsyn Muir. I think the fantasy/sci-fi mash-up, the LGBTQ+ aspects, the humour, and the action would all combine to make a great film. I wonder if it would be too confusing to tell in under 2 hours and that you’d need to make this a TV series instead. It also feels like it would really require also making the other books in the series (once they’re all finished) and some of those would be even more difficult, and that also disqualifies it from this list.
  • Faerie Tale by Raymond E. Feist. Problem #1: I remember loving this story and re-reading it several times, but I haven’t read it in over 10 years, so it’s hard to recollect enough details to evaluate whether it would translate to the screen well. Problem #2: I watch hardly any supernatural thriller films, so don’t have any idea what it would be competing with. My gut feeling is that it would be great, though.
  • Memory Man, by David Baldacci. This would clearly make a great film – it was only bumped off my top 5 because there are already a lot of great crime films and I’m not sure that this would add anything truly new.
  • Hunting Party by Elizabeth Moon. This would be a great action/sci-fi story (much of the best sci-fi on film is space opera). I don’t have any reason for not including this, except that I couldn’t figure out which of my top 5 I would bump to make room for it. It’s a clear #6.

End of an Era

(Post Theme: Lost the Breakup by Maisie Peters)

Last Friday (30th June, 2023), was my last day working for N-able after 18 years (a bit over 12 years of that for SpamExperts, then after the acquisition a bit under 4 years for SolarWinds MSP, then after the split the last almost two years for N-able).

I’m more able to write about things now, and have a lot of ideas of topics I’d like to cover, and hope to get to a bunch of those before starting my new job in September. For now, mostly for my own record, a brief summary of how it all ended.

Scaled Agile & the introduction of a Product Owner role

In 2020, SolarWinds decided to adopt the Scaled Agile framework (SAFe), under the name “SolarAgile”. The plan was to introduce this within the MSP side of the business first, and then extend it to “Core” (the traditional, large, part of SolarWinds) and “Cloud” (Pingdom, AppOptics, Loggly, Papertrail). I was part of the pilot group helping to shape this in the initial adoption, which was actually pretty great.

One aspect of this project was the introduction of a Product Owner role. There were a few POs across different products, but no real consistency about what they did, and most teams didn’t have a PO. For Mail Assure & SpamExperts, I handled the PO role in the initial pilot (one of the requirements was someone that was both extremely familiar with the product and also very experienced with Agile), but the intention was that I would move to PM after the pilot period.

By this time, the Mail Assure & SpamExperts engineering team had shrunk in size quite a bit (down to two teams), and wasn’t expected to grow again. Similarly, from a product perspective, the goal was high retention rather than a lot of growth (especially new logo growth). I argued that we did not need both a PO and a PM for a product of Mail’s size (a “practicality beats purity” type argument). However, TPTB went ahead and hired someone for the PO role anyway.

In some ways, the acquisition (August 2017) was the beginning of the end – particularly since SolarWinds and then N-able were both very focused on people in offices (covid disrupted this as it did everywhere, but even now there is strong encouragement to be back at an office at least part of the time). However, I feel like adding in the PO role to the Mail team was the real beginning of the end, because it was always clear to me that we were ‘overprovisioned’ in product people for the nature of the product.

Everything went well enough for almost a year – a lot of that time was our new PO learning the ropes while I learnt more about the parts of PM that I hadn’t done before, and tried to extend out our strategic planning.

Reorganisation and Product Owners transformed to Product Managers

In late 2021, after the split from SolarWinds, N-able had another one of its frequent leadership changes, with a new CTO followed not long after by a new VP of engineering and a new VP of product. This (mostly) spelt the end of SolarAgile, and the decision was made to completely remove the PO position, with the actual PO work being brought under the responsibility of the PMs.

Essentially overnight, all the POs were now either PMs or moved elsewhere within the company (e.g. into an engineering role) – many have since left N-able. This transition was handled very poorly, with the news of the change arriving to at least some of the POs at the same time it arrived to the company as a whole.

Most relevant to my journey, this meant that Mail Assure & SpamExperts now had two PMs, the former PO and me. I already felt like both a PO and a PM was too much ‘product’ for the team, and this obviously made it that much worse.

Again, everything went ok for some time – while we were both PMs, we did still tend to split up the work in similar ways as before, with me focusing on the higher level strategy and prioritisation and my fellow PM working on more of the operational aspects.

Interlude: Promotion and Pay

During 2022, I was promoted to Senior Product Manager (although as a contractor I technically didn’t have any title at all). This had a reasonable pay bump, which was particularly welcome in a year where inflation was suddenly something that was relevant again.

This was the third significant pay increase that I received in my SolarWinds & N-able journey. The first was after SpamExperts was acquired – this was basically adjusting to be closer to ‘market’ rates (SpamExperts, being fairly small, was generally at the lower end, although it had certainly improved and in the last 5 or so years there I had no complaints about how much I was paid), and also essentially for retention purposes (ensuring as many key people as possible stayed after the acquisition). The second was a couple of years later, after the former SpamExperts CEO & CTO had left, and I was very unhappy in my architecture role and with the direction of the product. This was again basically a retention play (which worked – I also got other changes, which helped, but for the most part I was willing to stick around for a while because of the comparatively high remuneration).

It was never made super explicit, although it came up in some conversations, but I feel these increases actually ended up contributing to the end (of course, in the meantime, they contributed plenty to my financial security). If you looked at the cost of PMs for the Mail team, I was the bulk of that, and my salary was somewhere in the ballpark of 2-3% of the product’s revenue. When I moved product (more on that below) this would have been even more noticeable, because the strong retention motivation was no longer there (or at least no longer visible to the people making the decisions).

This isn’t a complaint about being well paid for a few years, by any means. But I do think that it was one factor in the eventual end of my tenure.

Moving Products

Around August 2022, it was clear that people had looked at the budgets for Mail and decided that having two PMs was not justified. I don’t disagree with this (and indeed had argued it all along), although I think things would have played out pretty differently if we’d never had a PO at all (which is certainly not her fault, and is not to denigrate any of the excellent work she did).

Either I would need to move or the other PM would need to move. It was made clear to me that the preference was for me to move – as a senior PM who was well regarded by many in N-able across several products (although certainly not universally; you can’t please everyone), the expectation was that it would be simpler to put me in a new product and have the other Mail PM continue with Mail Assure and SpamExperts.

I didn’t like this plan. I did have, in my performance planning documentation, some statements around either joining a new product or building something new within the same product & team. However, those were in the 3-5 year range, and I was pretty clear about having some work that I really wanted to finish with Mail first.

(As an aside: I never really got the chance to properly implement a strategy as Mail PM. The first 12-24 months were taken up with (a) work on security as a result of the SolarWinds Hack, (b) work to adjust as a result of the SolarWinds/N-able split, and (c) finishing off and cleaning up after the previous PM, who really didn’t understand the product at all. After that, we were just starting to get underway with some new work, and then I was moved away).

However, it was clear to me that me moving was going to be best – it would hopefully turn out well for me (wrong, as I came to find out), and for the PM who would stay (somewhat correct), and for everyone else as well (unclear). So I accepted that this was going to happen, and was moved to Cloud User Hub, which was a product born of another acquisition (spinpanel), and had just a month or two earlier launched. More on how that was sold to me and what the reality was another time.

Another Reorganisation

The first couple of months on Cloud User Hub were rough (mostly because of the state of the product and the disastrous launch), but by the middle of October were improving and things were looking up.

At this point, N-able joined many other tech companies and laid off a large chunk (maybe 15%?) of employees, and did a bunch of restructuring as a result. I was kept on, but my immediate manager was gone, and the product was moved to a separate group as well, under the leadership of someone completely new to N-able.

Looking back, from this point it feels like a foregone conclusion that this would be it. The product was in a huge mess, and although I worked on improving that and some progress was made, and although I joined after the mess was made, you couldn’t look at my work on Cloud User Hub and see anything like success. In addition, I was now reporting to someone (my manager’s manager in particular) who had no history with me at all, so there was no existing goodwill or understanding of the quality work I had done in the past.

Final 4 Months

On February 28th, I was told that I was no longer required and would have 4 months of notice, finishing up at the end of June.

The official reason was the challenges around location and time zone. To be fair, this was much harder in Cloud User Hub than it had been with the Mail team. The majority of the engineering team were in India (good overlap with NZ), some in the Netherlands (good overlap with NZ, long history of working with people in that time zone), and some in the UK (reasonable overlap with NZ, several years of working with people in that time zone). However, N-able has almost all of its senior management in the US, and combining the US (or Canada, where there are other teams) time zones with the Europe/Asia ones leaves no good times for working with NZ.

For all of the 18 years I was with SpamExperts, then SolarWinds, then N-able, I was extremely flexible around working times (anyone who worked with me would attest to this). Until Cloud User Hub, this was the occasional meeting during the night, and working around 4-6 hours in my evening (which suited me well for many years anyway). After moving to Cloud User Hub, I would regularly have meetings at 3 a.m., 4 a.m., and so on – at least weekly, generally multiple times a week. I made this work, but it wasn’t good for me (or for N-able, really).

Ironically, this was much worse in the December-February period (excepting the week I vanished in February thanks to Cyclone Gabrielle) than later, when there was less need to be meeting with senior leadership and more work with the actual teams, where time zones aligned well enough. Travel to do in-person work (to Edinburgh, for example) was expensive for someone in NZ, though (none of the engineers in India would be doing that travel either).

More to say about my adventures over the last 18 years, but that’s essentially how it all came to an end!

Reply: Designing Pythonic library APIs

(Post theme: Code Monkey by Jonathan Coulton)

Ben Hoyt has a great post on designing Pythonic library APIs (itself a written version of a talk he gave). I have some thoughts in response:

Style

I love PEP 20, the Zen of Python (I used to have a set of t-shirts I had made that had one koan on each), and I think it’s actually applicable more widely than just code (Python or otherwise). I certainly agree that following its guidelines is a great start.

Ben suggests following PEP 8 (the style guide); I would go further than that:

  • Assuming that there’s some sort of CI pipeline, that should include enforcement of a style (ideally auto-correcting to one). Black is the obvious choice here, and it’s (for the most part) following PEP 8, but the most important thing is to have a consistent style where a tool does all the work.
  • Shift work ‘left’ of the CI pipeline, and make it easy for contributors, by having your style format of choice part of pre-commit or some similar workflow, and have an appropriate configuration file for that in the source repository.
  • Follow PEP 257 for your docstrings (and obviously have good docstrings). IDEs sometimes shove in a bunch of text around return values/types and enforce conventions – I’m less bothered about those, and I think generally they (and the related recommendations in PEP 257) have been supplanted by type annotations in many cases. When other people are using the library, they’ll see these docstrings, and they’re probably also part of your automated reference documentation.
  • While on the topic of docstrings, put an example or two in them anywhere it makes sense, and use doctest to make sure that they stay correct (there’s a small sketch of this after the list).
  • Have a style for imports (and use isort or something similar) to automate/enforce that as well. I personally prefer sorting by (length, normalised case alphabetical) with groupings for, from top to bottom, the standard library, third-party libraries (with a blank line between each, ordered approximately by how well established they are), internal libraries, and then finally imports from within the same package. But again, it’s the consistency that matters most. (This one isn’t really about API design).
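
As a minimal sketch of the doctest suggestion above (the fishmath module and its add() function are invented purely for illustration):

# fishmath.py – hypothetical module, just to show the pattern
def add(a: int, b: int) -> int:
    """Return the sum of a and b.

    >>> add(1, 1)
    2
    """
    return a + b


if __name__ == "__main__":
    # Run the examples embedded in the docstrings as tests.
    import doctest

    doctest.testmod()

In CI you could run this with python -m doctest fishmath.py, or let pytest collect the doctests via --doctest-modules.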

“Pythonic”

In addition to the items that Ben mentions, I think it’s important to design the API so that it works well with Python idioms. Exactly what this entails depends a lot on the specifics of the API, but for example:

  • Functions & methods should be designed so that they can easily be used with the functools module (Ben has an example of this).
  • Provide generators rather than returning a tuple or list when possible. These should work well with the itertools module, with yield from, etc.
  • Work well with the standard library logging module (but don’t be noisy when someone isn’t using it). The logging module is an example of an API in the standard library that is poorly designed (or perhaps just is not particularly Pythonic), in my opinion, but it’s the default choice for logging and utilised by tools like Sentry.
  • Context managers. For example, if your object connects to something (a file, over a network, etc) then have the cleanup done in a method called close() so that you can use contextlib.closing (but ideally also provide your own __enter__/__exit__ to handle this) – see the sketch after this list.
  • Where appropriate, make it easy to serialise data. This might include supporting pickling objects, but might also be to other formats (JSON, YAML, etc).
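
Here’s a rough sketch of the context manager point – the Connection class and its host argument are made up, but the pattern (a close() method plus __enter__/__exit__) is the general one:

import contextlib


class Connection:
    """Hypothetical object that holds an external resource."""

    def __init__(self, host):
        self.host = host  # pretend a socket is opened here

    def close(self):
        pass  # release the underlying resource

    # Implementing the context manager protocol directly means callers
    # can write `with Connection(...) as conn:` without any wrapper.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()


# Both of these clean up properly:
with Connection("example.org") as conn:
    pass

with contextlib.closing(Connection("example.org")) as conn:
    pass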

Async

The Python releases that I really like are the ones that focus on improving performance (sometimes CPython-specific), usability (like the improved tracebacks in 3.11), and the standard library. In my opinion, for the most part, the Python language itself does not need regular changes, and sometimes these can be to the detriment of the aspects of Python that make it great (like readability and ease of learning).

I’m not (yet?) a fan of the walrus operator or pattern matching, for example. I have mixed opinions about type annotations. However, one change to the language over the last decade that I feel is definitely worthwhile is the addition of async & await. It was possible to use coroutines in Python previously, and understanding how async works does add complexity to learning the language, but I feel it expands what can be easily done with the language, in the same way that you can use Python in a very object-orientated way, or a very functional way, and so on.

One catch with async & await is that they have a tendency to spread throughout your code. You can ‘collapse’ async code into a synchronous action by adding an event loop and waiting until everything is done, but for the most part if you’re using an async library then you’re probably building your entire app in an async fashion. It’s definitely much simpler to make use of a synchronous call inside of an async method than vice-versa.
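
A small sketch of both directions, assuming Python 3.9+ for asyncio.to_thread (fetch and slow_sync_call are stand-ins, not real library calls):

import asyncio
import time


async def fetch():
    await asyncio.sleep(0.1)  # stand-in for an async library call
    return "data"


def slow_sync_call():
    time.sleep(0.1)  # stand-in for blocking, synchronous work
    return "data"


async def main():
    # The easy direction: run a blocking call from async code without
    # stalling the event loop.
    print(await asyncio.to_thread(slow_sync_call))


# 'Collapsing' async into a synchronous entry point: run an event loop
# until the coroutine completes.
print(asyncio.run(fetch()))
asyncio.run(main())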

There are libraries that have added async functionality after originally being synchronous (e.g. Django) but from what I’ve heard that has been complicated to get right and the API is less natural than otherwise (e.g. compare with FastAPI).

Whether or not a library should have predominately async methods, or both async and sync versions, or avoid async entirely depends a lot on what it’s doing and how it’s expected to be used. However, it’s definitely something to think a lot about in advance, rather than try to adjust mid-flight.

Exception Chaining

Ben has a good section on errors and exceptions. The only thing I would add is that you can and should explicitly chain exceptions so that it’s clearer to the caller what’s happening. I think Ben is referring to that here, but doesn’t call it out explicitly:

For example, if your library can raise ssl.SSLError when it’s calling an HTTP API, it’s probably best to catch that and re-raise as fishnchips.NetworkError.

Ben Hoyt

Instead of doing this:

try:
    ...
except ssl.SSLError:
    raise fishnchips.NetworkError()

You should do this to show that the SSLError was the “direct cause” of the NetworkError:

try:
    ...
except ssl.SSLError as e:
    raise fishnchips.NetworkError() from e

Related to errors & exceptions, you should make use of the warnings module and the built-in Warning categories (DeprecationWarning and friends) – a small sketch follows.
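
For example, something like this (order_fish and the fishnchips-style naming are just for illustration):

import warnings


def order_fish(kind="snapper"):
    # Tell callers this API is going away, pointing the warning at their
    # code rather than at this frame.
    warnings.warn(
        "order_fish() is deprecated; use order(fish=...) instead",
        DeprecationWarning,
        stacklevel=2,
    )
    ...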

In this section, Ben also says:

APIs should be designed so that it’s hard to make mistakes.

He doesn’t have this as one of the takeaways, but I would ‘promote’ it to one.

The standard library unittest package

Ben has unittest as an example of a poor standard library API. I agree with this, and I think it provides a good example of where API design can be challenging. The main problems with the unittest API (such as assertEqual(a, b) not being assert_equal(a, b) or a plain assert a == b) come from the API being an implementation of the xUnit API (originally SUnit and popularised by Java’s JUnit but extremely widely used).

The question here is how closely the Python xUnit implementation should match the implementations in other languages (or, put another way, whether the Python standard library should have an xUnit implementation versus a library that supports unit tests that isn’t necessarily xUnit). If you’re coming to Python’s unittest module from Java (and JUnit) then these are comfortingly familiar:

import static org.junit.jupiter.api.Assertions.assertEquals;

import example.util.Calculator;

import org.junit.jupiter.api.Test;

class MyFirstJUnitJupiterTests {

    private final Calculator calculator = new Calculator();

    @Test
    void addition() {
        assertEquals(2, calculator.add(1, 1));
    }

}

import unittest

# Probably a relative import in practice.
import example.util


class MyFirstUnittestTests(unittest.TestCase):
    def setUp(self):
        self.calculator = example.util.Calculator()

    def test_addition(self):
        # In Python 2, this could have been assertEquals()
        self.assertEqual(2, self.calculator.add(1, 1))

There are lots of other examples of APIs like this, where some underlying functionality is being exposed in many different languages, or where similar functionality is being implemented in many different languages. Sometimes, you get both, like with the python-mysql library:

import contextlib

import MySQLdb

# Fairly Pythonic, and aligns with the DB API (PEP 249)
# The Connection object should have an __exit__ that closes.
with contextlib.closing(MySQLdb.connect(**[connection args])) as db:
    with db.cursor() as c:
        c.execute("SELECT col1, col2 FROM tbl")
        for col1, col2 in c.fetchall():
            pass

# Low level API that essentially exposes the MySQL C API.
import _mysql as mysql

conn = mysql.connect(**[connection args])
conn.query("SELECT col1, col2 FROM tbl")
result = conn.store_result()
for col1, col2 in result.fetch_row(maxrows=0):
    pass
conn.close()

In general, I believe it’s better to design your API to match the language, and copy the intentions and outcomes from the source, rather than try to completely match the API. This is one reason why pytest is superior to the standard library unittest.
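
For comparison, here’s roughly what the same test looks like with pytest (example.util.Calculator is the same placeholder as above):

# pytest style: a plain function and a bare assert, no xUnit base class.
import example.util


def test_addition():
    calculator = example.util.Calculator()
    assert calculator.add(1, 1) == 2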

The standard library csv package

Ben has the csv module as an example of a good API, and I generally agree, and I think it’s particularly so given how poorly defined the CSV format is, which makes working with CSV files much more challenging. The one nit I have is:

import csv

with open("some.csv", newline="") as f:
    reader = csv.reader(f)
    for row in reader:
        pass

I don’t love that you have to know/remember to open the file with newline="" (in Python 2, you had to remember to open the file in binary mode). Most of the time it won’t make any difference, but when you have a CSV with a newline in a quoted field it’ll break if you don’t do this (I’d argue that if you have that sort of CSV you perhaps are using the wrong serialisation format, but often that’s out of your control).

It’s more Pythonic to create objects from files than from filenames (more technically: pass something file-like that supports the file protocol rather than something that supports the string protocol). Passing a string to csv.reader is generally wrong anyway (you’ll get each character of the string as a one-column row), and the first argument to csv.reader can already be a list (or similar) or a file (or similar), so perhaps a string could be taken to mean a filename. However, neither csv.reader_from_filename nor csv.DictReader.from_filename seems Pythonic, and having csv.reader call reconfigure() on the passed object is probably a bit too magic (explicit is better than implicit!).

In summary, this is a (very small part) of the csv API that I don’t like, but I don’t have a good suggestion for solving it, either.

from library import something

Ben says:

Takeaway: Design your library to be used as import lib ... lib.Thing() rather than from lib import LibThing ... LibThing().

Ben Hoyt

I agree with the majority of his post, but a huge 100% to this one specifically. Namespaces are one honking great idea!

Global configuration and state

In this example code of Ben’s, he argues against having a module-level DEFAULT_TIMEOUT:

DEFAULT_TIMEOUT = 10

def order(..., timeout=None):
    if timeout is None:
        timeout = DEFAULT_TIMEOUT
    ...

This is an interesting argument, and goes against the common refrain (probably originating from languages like C) that you shouldn’t have ‘magic’ numbers in your code and should define them at the top level instead.

If your intention is that people should be able to change the default, then you should definitely do this differently (e.g. as Ben describes). If your intention is that this default is never to be changed, then you can make this a little better in modern Python (3.8+):

import typing

_DEFAULT_TIMEOUT: typing.Final[int] = 10

def order(..., timeout=_DEFAULT_TIMEOUT):
    ...

However, you’ll need some sort of type checker to validate that it’s actually not changed (which does go along with Python’s “consenting adult” type approach to things like private variables).

Type Annotations

Ben has a good section on type annotations, which pretty much exactly matches my feelings. I don’t love them, or feel that they are themselves really Pythonic (although their use in libraries like Pydantic somehow feels Pythonic). I completely agree with Ben’s takeaway:

On balance, I definitely think it’s the right thing to do in 2023 to ship your library with type annotations.

And of course, don’t just use them, but run Pyright or MyPy over your library’s code on every commit.

Ben Hoyt

Ben’s last example of a positive of type annotations is:

They help your IDE provide better navigation and auto-completion.

I agree that this is the case, and one of the more compelling reasons to use type annotations, and also why there’s – to a certain extent – an obligation on library/package developers to provide them. However, I find it generally disappointing. I strongly feel that this functionality should be provided by the IDE without the user needing to put in all the manual work of explicitly typing everything. I wish we had solved this need with better tools rather than by putting a heap of additional work on developers – and especially without adding a heap of boilerplate to Python code. I understand that the dynamic nature of Python makes this hard, but hard problems are good ones to solve.

Overriding Operators

Ben gives a rule of thumb:

Only override math operators like a+b if you’re creating a number type.

Ben Hoyt

I agree with this for almost all math operators, except perhaps + (for example, using + to concatenate strings is more natural than str.join, and I think it was right to make that implementation faster rather than focus all energy on getting people to call join).

I think the standard library has a perfect example of a mistake here, which is pathlib and the division operator. I believe this is a cute hack that is maybe ok in a third-party library, but definitely does not belong in the standard library:

>>> import pathlib
>>> p = pathlib.Path("~")
>>> conf = p / ".mypackage" / "config.ini"
>>> str(conf)
'~/.mypackage/config.ini'

There was a lot of controversy about this back in 2012 (the PEP has a reference to some of it), and maybe the right decision was made, but it’s unPythonic and distasteful in my view.

Keyword arguments

Ben makes an argument for keyword arguments helping with backwards-compatibility, which I agree with. He has this example:

def order(chips=None, fish=None):
    """Place an order.

    Args:
        chips: number of scoops of chips
        fish: number of fish
    """

I would argue that these should be keyword-only arguments. If I’m writing a call to order, I’ll never remember whether fish or chips comes first (I would even argue that they are backwards here, because – at least where I am – people say “fish and chips”, not “chips and fish” – an irreversible binomial if you’re a linguist). An IDE might help out when writing, but when reading the code, you’re not necessarily going to have that context made available. A two character change, but it prevents easy mistakes:

def order(*, chips=None, fish=None):
    """Place an order.

    Args:
        chips: number of scoops of chips
        fish: number of fish
    """

I also worry a bit about how this would scale. A fish’n’chip shop probably has at least a dozen items on their menu, and that’s a lot of arguments. It could be generalised, something like:

def order(**kwargs):
    """Place an order.

    The arguments should be the name of the item, with the value: either the number of the item,
    a tuple of (quantity, type), or a list of such tuples.

    For example:

    >>> fishnchips.order(chips=1, fritters=(4, 'potato'), fish=[(1, 'crumbed'), (1, 'battered')])
    """

The main concerns I have about this are:

  • If you want to have arguments that are not items in the order (like the timeout one Ben has as an example), it feels messy for that to be mixed in with the items. Using **kwargs helps, because you’d make timeout a keyword-only explicit argument and that would distinguish it, but it still feels untidy to mix order items and order configuration into what is essentially one dictionary.
  • The item names are limited to what’s supported by Python names. That means it can’t start with a number, can’t have spaces, can’t have punctuation, and so on. For the most part this is probably fine – you can have onion_rings instead of ‘onion rings’ and the like. It feels like it might get challenging to remember the rules for converting from ‘real name’ to ‘argument name’, though. I also suspect that eventually the method will need the ‘real name’ for logging or display or similar.
  • Related to the issue of remembering the rules is remembering what can be ordered. With Ben’s version, the signature tells you that you can have fish or chips. With my kwargs one, it does not – presumably there is some validation of the arguments in the function, but that doesn’t help the caller prior to runtime.
  • I’m not sure how well this would continue to scale. Say that after adding crumbed fish, we decide to sell both Snapper and Tarakihi. Do we now support an argument like fish=(3, 'crumbed', 'snapper')? How do I remember that it’s not fish=(3, 'snapper', 'crumbed')? How are we going to handle burgers, which have lots of ingredients that people might want to customise?

I realise Ben was using this as an example of how keyword args and dynamic typing help backwards compatibility (which they do), and not trying to design the ultimate fish&chips library API. However, I feel like you’d need to move to a single collection of objects (probably dataclasses) before long.
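
Something like this sketch is what I have in mind – the OrderItem fields and the order signature here are invented for illustration, not anything from Ben’s post:

import dataclasses
from typing import List, Optional


@dataclasses.dataclass
class OrderItem:
    """One line of a hypothetical fish'n'chips order."""

    name: str                      # the 'real' display name, e.g. "onion rings"
    quantity: int = 1
    variant: Optional[str] = None  # e.g. "crumbed" or "snapper"


def order(items: List[OrderItem], *, timeout=None):
    """Place an order for the given items (order configuration stays separate)."""
    ...


order(
    [
        OrderItem("chips"),
        OrderItem("fish", quantity=2, variant="crumbed"),
    ],
    timeout=5,
)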