I don’t particularly love this (although I do feel you should match the style of an existing code base and should avoid changes that don’t meet a reasonable level of value-add). I would generally leave out the else (and dedent the second return). It seemed likely that these generate the same opcodes so this is purely about readability, but it was worth checking that:
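For reference, the four variants discussed below look roughly like this, and the dis module shows the opcodes each compiles to (a sketch of the check, not the original disassembly output):

import dis

def f1(x):
    if x > 0.5:
        return 'a'
    else:
        return 'b'

def f2(x):
    if x > 0.5:
        return 'a'
    return 'b'

def f3(x):
    return 'a' if x > 0.5 else 'b'

def f4(x):
    if x > 0.5:
        return 'a'
    # Falls through to Python's implicit `return None`.

for f in (f1, f2, f3, f4):
    print(f.__name__)
    dis.dis(f)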
Interestingly, there is one difference out of those four variants, where instead of a second RETURN_VALUE op, there’s a JUMP_FORWARD to the single RETURN_VALUE. I assume that means that it’s slower in a way that’s so imperceptible that it’s not worth caring about, but let’s check:
>>> timeit.Timer("f(random.random())", setup="import random\ndef f(x): return 'a' if x > 0.5 else 'b'").timeit(10000000)
1.4031157046556473
>>> timeit.Timer("f(random.random())", setup="import random\ndef f(x):\n    if x > 0.5:\n        return 'a'\n    else:\n        return 'b'").timeit(10000000)
1.326032117009163
I feel this does land in the expected place, where it’s about style & readability. I suppose f1() explicitly shows that it’s making a choice between two cases, but I feel that’s countered by f2() and f3() showing that the function returns a value. In general, it’s nice to have the smallest amount of indentation, so I would not choose to use f1().
Between f2() and f3(), I would generally choose the single-line approach of f3() if the line didn’t wrap (or exceed whatever the wrapping length is for the project, e.g. 100 characters). I would not do this:
def f3_alt():
    return (
        really_really_really_really_really_really_really_long_val
        if another_very_very_very_very_long_condition
        else also_quite_long_here_and_now_we_wrap
    )
If you’re doing this, then there’s no benefit over the f2() style (and, as it turns out, there is a very tiny performance penalty).
Python has an implicit return None at the end of a method that doesn’t otherwise return. This means that for this specific case, where one of the values is None, there’s also the choice between f2() and f4(). In reality, f4() would be slightly better than it is here, in that it would probably have a return type (with |None) and a docstring that explains that None is a possible return value. However, I still (weakly) prefer f2() where it’s called out explicitly rather than f4() where you need to know that this is what Python does.
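For illustration, an improved f4() along those lines might look something like this (my own sketch, following the post’s numbering):

def f4(x: float) -> str | None:
    """Return 'a' when x is above the threshold.

    Implicitly returns None otherwise.
    """
    if x > 0.5:
        return 'a'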
Kiwi PyCon had two days of scheduled talks followed by one day of lightning talks and unconference sessions, held over a Friday plus weekend in mid-September in Waihōpai Invercargill. I’ve attended Kiwi PyCon once before, but ten years ago.
The talks were almost all at a beginner level, with a few reaching intermediate. I would have preferred a mix that included at least a couple at a more expert level. Some were well done, but even judged as novice-level talks, not all were of great quality.
I thought these talks were good (details below):
Moving from unittest to pytest
A practical guide to using OpenTelemetry in Python
Using React as a template language in Flask
DuckDB: Supercharging Your Data Crunching
Sometimes this is a pipe, or close enough
You’re Doing Great! The underappreciated art of appreciation
Automating Victory: Beating browser games with accessible Python
Building Kubernetes Operators with Python and Kopf
Several lightning talks
The venue was good, and the conference was generally well organised (despite several hurdles), although there were significant microphone issues. There was minimal organisation of social activity (just a quiz night), although obviously the usual meal-break socialising took place. There were strong anti-COVID measures in place, and a lot of work on diversity & inclusion (but not great gender/ethnic diversity of speakers). They have a good Code of Conduct, but ruined that by ignoring it.
I’m glad I attended for non-conference reasons (the surrounding travel and an induction session I tacked on), and to re-evaluate my opinion of the conference. However, I can’t recommend attending it, unless you’re very early in your Python journey, or are attending strictly for the socialising/networking aspect.
Note that for the last 14 years, Kiwi Pycon has moved around AoNZ (normally alternating between the north and south) but from 2024 will be permanently held in Wellington.
This was the second time I’ve attended Kiwi Pycon (the first was 10 years ago in Auckland), and the first time I’ve travelled for one.
I re-read my notes from attending in 2013 and my recollections of being underwhelmed seem accurate. A lot of this is that conferences in general aren’t that appealing to me (much of the value is in the social aspect, and that is extremely unappealing for me, particularly attending solo), rather than anything specific to Kiwi PyCon. This is one of the reasons that I haven’t attended one since, but it’s also partly that it typically falls on dates that are personally difficult (close to my son’s birthday, right at the end of the sports season, not coinciding with school holidays or any other natural holiday point). Working more on the product side in the last few years also made it a more difficult sell, although I’m pretty sure I could have managed (or just taken personal time) if I had felt it would be worth it.
(I see that all the links in that post to the schedule are, sadly, now invalid, although the wayback machine appears to have them.)
This year, when I initially decided to attend, I wasn’t sure what I would be doing in September – I knew I would not be with N-able any more, but I didn’t know at that point that I’d have started with Canonical. There was some risk, since asking for time off just after starting a job isn’t a straightforward proposition, but the ticket was inexpensive and I held off booking accommodation and travel until after I’d accepted the offer from Canonical and discussed attending with the hiring lead and hiring manager in advance of actually starting.
(Canonical has a really great policy around travel to relevant conferences, so even though I’m only just finishing up my second week, I didn’t need to use up my brand-new leave allotment to attend, and probably could have even claimed some of the cost (I felt that wasn’t necessary, since I made the decision to attend prior to even applying at Canonical)).
Last time, I wrote up my experience in prose; this time it’s more bullet-pointed – I’m not sure what that says about how I have changed over the last decade 😀
The conference is more polished than (what I recall of) the last time, which you’d expect being the 12th iteration (Auckland was maybe 4th or 5th?), although it was still pretty good in 2013. In my post at the time, I wrote a lot about how inaccurate I thought the “intended audience” labels were. Interestingly, those do not exist now (I’m not sure for how long). I wonder if that was because of the accuracy issues, or for other reasons. I do think a label (if accurate against some fairly agreed scale) does provide value.
(Amusingly, in 2013 I was keen on being able to attend virtually. That’s very dated now, given both the rise of streaming and making video available and then also the last few COVID years).
(Also amusingly: I applied for a position with Kraken earlier in the year, and literally never heard anything back, not even a “thanks but you’re not what we’re after”. So while they had a lot of cute swag on offer, I recommend against applying to work for them).
Friday was more lightly attended than Saturday. I assume this must be people that can’t/don’t want to arrange time off a Monday-Friday job, but can/will do a weekend. It’s interesting because I would much rather have the entire conference during the Monday-Friday period and ‘save’ my weekend (even if I was using a general leave allowance rather than conference leave). Sunday morning was the lightest of all (at least the half I was there for) – probably to be expected given that it was the unstructured part of the conference.
Good
Single track: no need to choose which talks to attend, no awkward talks with minimal attendance, no distractions from people moving from room to room (particularly when tracks get out of timing sync)
Less Good
Waihōpai Invercargill seems like a nice place, but it’s not just far away from the rest of the world, it’s far from pretty much all of AoNZ (interestingly, this is changing from 2024 – more on that below) – bad weather can happen anywhere, but this was extra noticeable at the end of the conference when many people got stuck after flights were cancelled and diverted
Single track: some talks are very basic, and it’s awkward to skip
Not a very diverse group of speakers (in terms of gender & ethnicity), with quite a lot from sponsoring organisations
Whoever was doing the AV was very impatient with the speakers and spoke condescendingly and briskly to them – some people are giving a talk for the first time or inexperienced with mics, and it doesn’t take much effort to be patient and a bit kind
Poor mic’ing
The Saturday start almost clashed with the second pool game for the All Blacks in the men’s Rugby World Cup 2023 (I left my room less than a minute after the final whistle, walked through to the main room, and arrived 2 minutes before the opening). Maybe the expectation is that the percentage of people wanting to watch a rugby game is lower at a Python conference than in the wider population, and that this wasn’t likely to be an amazing match, given the history – but this is AoNZ, it’s the All Blacks, and it’s a World Cup game. There was a Slack ‘#rug-py-con’ channel, but there were only two of us there – some sort of communal viewing and starting 30 minutes later would have been nice (there was also a critical Warriors game on the Saturday, with kick-off 30 minutes after the end of the day’s sessions, so I suppose shifting to be a little later would have hit that)
Bad
Good CoC, social/employment stickers, etc. But the CoC was ignored!
The CoC is a bit messy (it’s confused about what colour t-shirts people will be wearing, and I don’t believe there was a phone number offered as it says) but generally ok. However, I reached out to the CoC team via the provided email, and never received any response at all. It was a minor issue (although from an influential person) but if they ignore reports that are minor, I have to assume they ignore everything. A “thanks for your report – we don’t agree this is a CoC breach, but please discuss more if you wish” type response would have been sufficient, but complete silence is unacceptable.
There are many possible reasons:
Deliberately ignoring messages – this seems very unlikely
Ignoring this specific message because it didn’t seem important – this seems possible, and if so it’s not ok
Ignoring this specific message because of who the subject was – this seems possible, and if so it’s not ok
Getting distracted with the conference and forgetting about the safety channels – this seems most likely to me, and if so it’s maybe the least bad, but still not ok
The best action I could take would be to volunteer to help in 2024 and ensure that it was handled better. If I was considering attending then I would do this. However, since I’m not, I have to recommend that unless you are willing to do this yourself, or unless the organisers publicly address the shortcomings and how they will fix them, you don’t attend.
Value
(For reference, attending Kiwi Pycon 2013 cost me around $400 ($509 in 2023 dollars), although I had no accommodation costs since I lived locally.)
My travel was a bit indirect – I took the opportunity to spend 3 days in Christchurch with my new colleague & manager Ben on my way, so flew to Christchurch, then flew to Invercargill the day before the conference (those 3 days were really great). Rather than immediately flying home, I took up a kind offer from Bernie to drive me to Queenstown (checking off another one of the scenic AoNZ drives I had on my list) and stayed a night there before flying home from Queenstown. Canonical kindly paid for my flight from Christchurch to Invercargill (approximately the delta of going directly) as well as some of my accommodation cost in Christchurch.
Flights: $650 (I actually have a lot of AirNZ credit to consume, so in some ways this was ‘free’, and also meant I didn’t take much care in selecting cheaper flights)
Accommodation: $510 (I paid for a nice room; not really necessary given how much it was used, but since I was paying rather than a company I figured I might as well)
Food: $31.50 (dinner Thursday), $24.50 (breakfast Friday), $31 (dinner Friday), $24.50 (breakfast Saturday), $33.20 (dinner Saturday), $24.50 (breakfast Sunday). Morning/afternoon tea and lunch were included with the conference ticket, although I generally skipped morning & afternoon tea (if I was being cost-conscious, I would have skipped buying breakfast and eaten the included food instead). I ate at the restaurant at the accommodation (Thursday it was too late in the evening to be trying to find something else, and Friday I was too tired to bother and the weather wasn’t great) – I’m sure I could have found much better value meals elsewhere
Airport travel: none in Invercargill (courtesy shuttle when arriving, a lift from a friend when departing); I drove to/from the Auckland Airport, roughly 80km/75min (it would take 30 minutes to drive to the closest PT, and I would have multiple transfers, so it’s not particularly viable, unfortunately) – at IRD rates this would be roughly $150; parking at Auckland Airport $107 (this is cheaper than having someone drop me off and pick me up, as well as being considerably more convenient); I was given a lift to Queenstown Airport
Ticket: $296 (early pricing, including a t-shirt – late was $330 plus ~$70 for a shirt)
In total, approximately: $1,890 (note that this was entirely a personal expense, not paid by Canonical or anyone else)
Conference swag: t-shirt (included in the early ticket purchase price); socks & a mini octopus plush, and a large octopus plush (from Octopus Energy); a “gentle steam eye mask” (Japan has such weird cool things that we don’t have!), foldable hand fan, ‘erasable’ pen, and screen & keyboard cleaning tool from Hennge; a small booklet about Te Reo & one promoting Waihōpai Invercargill & Murihiku Southland; post-its and note paper from Hype & Dexter; (cheap) pens from Google; assorted stickers; and a pack of COVID RATs & a mask (not really swag – intended to replace the ones used getting to the conference, although I ended up 5 RATs ahead and 2 masks down).
Venue
As a hotel, the venue was nice – perhaps a little dated, and not amazing, but certainly comfortable. My room had two nicely sized desks, a comfortable bed, and plenty of space. The conference room was large and looked nice, and was generally well set up. The food (both conference and food service) was average at best.
The mic’ing setup was very problematic (surprisingly poor for 2023). There were considerable issues with the signal dropping out (to a receiver that was only a few metres away, so that definitely should not have happened), almost always a light echo, and several talks where it was even worse. I’m not sure how this will turn out in the videos.
Session Summaries
Opening 0907-0920
Started 7 minutes late 😦
Paired experienced (Tom) and inexperienced (Jordan?) MCs (and also throughout the conference) – good sustainability/training approach
Conference Director (Carlos) got sick two days before the event, so the role was taken over at that point by a new volunteer (who was volunteering at a conference for the first time)!
spaceops is one of the conference sponsors (presumably that came with this speaking opportunity)
Nice tying of space to the (Earth) environment (and environmentalism, climate, etc)
Bit of general space history, NZ space history (William Pickering onwards through to Rocket Lab etc)
Covered what spaceops do (owned by local govt, indirectly), and so forth, tangentially related to dev (they have devs, but only offhand mentions)
Good speaker
Several oblique references to Apple 🙂
Some talk about the Warkworth satellite station (very close to where I live)
Very good introduction to space (particularly radio & space) and particularly where AoNZ and especially Southland fit into that picture
Nice ‘soft’ introduction to the conference with a talk generally interesting for people who are into science, without delving deep into technical details
Essentially an introduction to an opinionated way to handle (project-local) module/package imports
Very introductory (for example, I would expect everyone @ Canonical would already know this and/or have their own opinions on it)
Argues against using relative imports (this seems counter to my experience, where absolute imports run into path collisions)
Core is an argument that everything should be “import core.module.submodule”, where ‘core’ is the top-level code folder (a toy illustration follows below)
Grant feels that the issue is that existing documentation on this is generally outdated, unclear, or delves deeply into internals instead of giving a “how to”
Good speaker
Unfair comparison with his talk in 2013: I don’t remember the 2013 one in great detail, and my notes on it are minimal, but I believe I liked this one more
Interesting note: prior to his Python work, worked in social sciences on interesting topics like alcohol and gambling issues. All the papers seem locked away (as academic papers often are, sadly).
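As a toy illustration of the style advocated (hypothetical layout and names, not the talk’s example):

# Hypothetical project layout:
#   core/
#       utils/
#           parsing.py
#       app.py

# In core/app.py, always the absolute form:
import core.utils.parsing

# ...rather than the relative form argued against:
# from .utils import parsing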
Moving from unittest to pytest, Tim Penhey (Stacklet) 1021-1200
Had the same realisation as I did (in the shuttle ride to the accomodation) that some of the younger people may have been born after we started using Python!
Talked about initial dislike of pytest because of the implicit magic (conftest.py, and fixture naming matching) (agreed!)
Not entirely convincing arguments about why he changed his mind – basically this came down to “the good outweighed the bad”, which I suppose is also where I have landed, but it seems like you could have most of the good without having to have all the magic. Maybe there’s some other testing framework that does that, although it seems like everyone is standardising on pytest (for these use cases, rather than others like Hypothesis, Playwright or Robot)
Nice examples from real-world (slightly simplified) code of unittest->pytest conversions (see the sketch below for the flavour)
Several examples of fixtures from real-world code that they find useful (lots of composition)
Medium level, expects understanding of non-trivial code, but would all be known to anyone that knows pytest moderately well (aimed more at people familiar with unittest and not familiar with pytest)
Very good speaker
Interesting note: previously worked for Canonical (on Launchpad)
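For flavour, the kind of conversion shown looks something like this (my own toy example, not Tim’s code):

# unittest style: state set up in setUp, assert* methods.
import unittest

class TestAdd(unittest.TestCase):
    def setUp(self):
        self.values = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

# pytest style: a fixture replaces setUp, and a bare assert (with pytest's
# assertion rewriting) replaces the assert* methods.
import pytest

@pytest.fixture
def values():
    return [1, 2, 3]

def test_sum_pytest(values):
    assert sum(values) == 6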
Automate business processes for all your companies with Odoo, Danny Adair & Graeme Gellatly (Roofing Industries) 1333-1417
Automating business process (from a roofing material company, not a software company)
Graeme is from the business side, Danny is the tech side (he was core to the early Kiwi PyCons, started NZPUG, etc)
Considerable amount of LoB automation (it seems like a huge amount for a roofing supplies company, but perhaps that’s just ignorance on my side). For example, an email comes in from a supplier with an invoice attached; they automatically extract that, match it to a purchase order, raise issues if there are differences, and otherwise automatically approve and pay, with no human involved
A “this is a tool we use and really love and this is why” talk
General introduction to Odoo and how it’s used
Deep dive into the way that you customise Odoo, with a system that builds classes (and figures out an MRO) on the fly – the general idea is sketched below. Interesting, although complex (in implementation; it’s simple to actually use)
Interesting talk, although not sure that it was to the right audience?
Good speakers, handled the dual speaking well
Some mic issues 😦
Interesting note (Danny): Event Director for Kiwi Pycon 2013, when I previously attended
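The on-the-fly class building is roughly this idea – a generic Python illustration of composing classes and letting Python compute the MRO; this is not Odoo’s actual API:

class Base:
    def total(self):
        return 100

class DiscountExtension(Base):
    def total(self):
        return super().total() * 0.9

class TaxExtension(Base):
    def total(self):
        return super().total() * 1.15

# Compose the final class dynamically, as a registry-driven system might:
Final = type("Final", (TaxExtension, DiscountExtension, Base), {})
print([c.__name__ for c in Final.__mro__])
print(Final().total())  # 100 * 0.9 * 1.15, approximately 103.5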
A practical guide to using OpenTelemetry in Python, Tom Eastman (Kraken) 1423-1455
Very good introduction to OpenTelemetry (probably too basic if you know OpenTelemetry, but as someone with some observability experience and not much OpenTelemetry experience I found it good, particularly since OpenTelemetry has improved a lot recently and is a fairly young project) – a minimal example of the core API is below
A “this is something I use and really like and this is why” talk
Very good speaker
Inferior mic’ing
Unfair comparison with his talk in 2013: both good quality, but I liked the fun aspect of the 2013 one more than the practical aspect of the 2023 one
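For flavour, the core of instrumenting with OpenTelemetry looks something like this (my own minimal example using the documented opentelemetry-sdk API, not Tom’s code):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("do-work") as span:
    span.set_attribute("example.attribute", 42)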
Using React as a template language in Flask
Replacing server-side HTML with React, but page-by-page rather than as a SPA. In particular, keeping the routing done with Flask. At the same time, wanting strong types that are shared between the Python and TypeScript code. Created tooling to automate generation of TypeScript type definitions based on the type definitions in Python (a toy sketch of the idea follows below).
Then expanded this to also generate TypeScript that provides types (and, generally, code-completion in an IDE) for API use, but without using (e.g.) OpenAPI, since everything is in one local space.
It seems like using something like Vue would have been a simpler way to solve the “TypeScript/JS Framework but page by page” problem, although the coupled typing in the API would still be an improvement over that.
Interesting, assumed an ability to keep up with examples without much context, which generally worked (started out with some overly basic explanations, like “this is how decorators work”, but jumped quickly past that)
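A toy sketch of the Python-to-TypeScript generation idea (my own illustration, not the speaker’s tooling):

import dataclasses

# Very small Python-type to TypeScript-type mapping (illustrative only):
PY_TO_TS = {int: "number", float: "number", str: "string", bool: "boolean"}

@dataclasses.dataclass
class User:
    id: int
    name: str
    active: bool

def to_ts_interface(cls) -> str:
    fields = "\n".join(
        f"    {f.name}: {PY_TO_TS[f.type]};" for f in dataclasses.fields(cls)
    )
    return f"interface {cls.__name__} {{\n{fields}\n}}"

print(to_ts_interface(User))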
DuckDB: Supercharging Your Data Crunching, Richard Wesley (DuckDB) 1550-1624
Significant mic issues 😦
General introduction to DuckDB, where it fits in the DB space, and what advantages it offers (a quick taste below)
Good level (would be known to people that use DuckDB, but e.g. if you use something like Clickhouse and don’t know DuckDB this is a good intro)
Interesting note: worked for Tableau for a long time
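The appeal in a couple of lines (my own example; ‘sales.csv’ is a hypothetical file):

import duckdb

# Query a CSV file in place – no server to run, no import step:
duckdb.sql(
    "SELECT category, SUM(amount) AS total FROM 'sales.csv' GROUP BY category"
).show()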
Sometimes this is a pipe, or close enough, David Hood 1628-1657
Non-technical talk
Aimed at where he (a data journalist) sees developers needing help when moving into working with data
Important to always consider and document the context and the bias, how valuable the data is depends on what you are wanting to do with it
Can do much with very simple analysis – complicated models are really needed more when figuring out ranges
Very interesting, even though it seems like it maybe would have been better at a different conference
Talked about examples I have seen previously on Mastodon (e.g. time of day/week driving injuries/fatalities, changes in COVID incidence after changes in legal requirements)
Important to understand what you are doing with the data and the context
MIMO scheduling (multiple ground stations, multiple spacecraft, need to maximise the communication, choosing which station to talk to which craft)
He mentioned that he put together the talk the day before, and it was a little apparent. He spoke well and knew his material, but it didn’t have a lot of structure – some of it was praising GNU Radio, some was about what spaceops does, some was (overly trivial) “why Python is good/bad”, and some was about how they’re using Python – it was like two good and one less-good lightning talks merged into one
Good speaker
Interesting note: interned in Whangārei – maybe from there?
Saturday Conference Opening 0906-0915
No notes
Keynote 2: You’re Doing Great! The underappreciated art of appreciation. Loren Crary, Director of Resource Development of the Python Software Foundation 0917-1007
Very good speaker
Brief intro to the PSF
Brief speaker bio
Why expressing gratitude (written, spoken) matters, tips on how to get better and more regular at doing this
Really good (non-technical) talk
This is something I am also super passionate about (particularly in work, where I think it’s forgotten or overlooked more than elsewhere, and where ‘forced’ appreciation is common and not useful). Hopefully this changes how people behave
Interesting note: was asked to speak at Kiwi PyCon 2023 in late 2022, just days after starting work at the PSF!
Panel: A conversation for inclusion in tech, Samantha Nickson, Loren Crary, Kelsey Vavasour, Daisy Brenecki, Tom Eastman, and Christopher Neugebauer (replacing Carlos Cordero) 1220-1302
(I’m cis, straight, white, middle-aged, male, and work in a well-paid industry (the privilege jackpot), so that and my lack of experience on the minority side may bias this)
Held during lunch, quite lightly attended
No intro of the panelists 😦
Weird having the moderator stand while everyone else sat
Moderator too quiet
Very low energy
Loren was great, although didn’t say a lot; Kelsey was the strongest panelist; Daisy had notes, which seemed odd for a panel
No opportunity for questions from audience, which seems odd for a panel
Speaker diversity at the conference was noticeably lacking in PoC and light on women (every single session on Friday was given by someone that presented as male and ~white (I realise that there are issues with assuming gender & ethnicity) – Saturday was better, but not hugely). It would have been interesting to have some introspection on that
Delivering Customer Projects in a Rapidly Evolving Python Landscape, Romi Dexter, Benoit Chabord 1331-1412
Talk from Integration Glue and Hype & Dexter (sponsors), these are the founder/co-founders of the two businesses (which are connected in some way I didn’t fully get)
Practical suggestions particularly around improving devex
Quite a practical talk for a sponsor talk
Talked about the specific tools they use (Ruff, Pyright, Black, pytest, coverage.py, Github Actions, Sentry, GCP, Clickup, Google Docs, Cookiecutter)
An introductory level talk
Good speakers
A little on the long side
Interesting note (Romi): was previously CIO for GrabOne
Interesting note (Benoit): first AoNZ role was at Yellow (Pages)
Building Kubernetes Operators with Python and Kopf, Sam Bishop (Runaway) 1422-1450
Much more interesting level (it felt intermediate, although if you know k8s then probably more novice) and nice speed and consistency working through it
Nice slides
Exactly what the topic says (not much else to say: exactly what was on the label) – a minimal sketch of the shape is below
Good speaker
Easily the best talk
Recommended packages: pendulum for datetimes (I believe I’ve seen this recommended before), deltaman for parsing human-readable time intervals, dynaconf for configuration management
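The shape of a Kopf operator (a minimal sketch based on Kopf’s documented decorator API; the resource names are hypothetical, and this is not the speaker’s code):

import kopf

@kopf.on.create('example.com', 'v1', 'myresources')
def on_create(spec, name, logger, **kwargs):
    # React to a newly created custom resource; Kopf stores the return
    # value in the resource's status.
    logger.info(f"MyResource {name!r} created with spec {spec}")
    return {"message": f"handled {name}"}

# Run against a cluster with: kopf run operator.py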
Building an OAuth2 Authorisation Server with Flask and Authlib, Sam Watson (Runaway Play) 1548-1611
Walkthrough as per topic (not much else to say: exactly what was on the label)
Intro level, technical, assumes can keep up with code
Interesting note: the OAuth2 auth server described went live a ~week before the conference (and is working well so far)
Automating Victory: Beating browser games with accessible Python, Jon Gaul (Heenge) 1620-1651
Very good speaker
Project management, useful I/O libraries
Reminded me of Tom’s talk from 2013. Fun but practical
Technical issues with the live demo (screenshotting/GUI automation with the projector screen as a secondary display), but had backup videos
Close second for best talk of the conference
Basically: scripting an app by taking screenshots, using (basic) computer vision to recognise the board, some (simple) rules to decide on moves, and then GUI automation to make the move – the loop is sketched below. The aim was to beat the speed-run times (the game, Momono Sweeper, is like Minesweeper but with levels of dangerous squares, so much more maths)
Interesting note: wrote a kid’s book (my guess is that it’s this one)
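The loop described boils down to something like this (my own sketch with pyautogui; the coordinates and the decision step are placeholders, not the speaker’s code):

import pyautogui

def choose_square(board):
    # Placeholder for the computer-vision + rules step: a real bot would
    # recognise the board from the screenshot and pick a safe square.
    return (50, 50)

def play_one_move():
    # Screenshot the (hypothetical) board region, decide, then click.
    board = pyautogui.screenshot(region=(100, 100, 400, 400))
    dx, dy = choose_square(board)
    pyautogui.click(100 + dx, 100 + dy)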
Basically delved into “simple is better than complex”, pointing out that often “simple” is achieved by moving complexity elsewhere, and that it’s best when complexity is managed by those who are most familiar with the domain
I skipped this. I needed a break by this point, and I didn’t believe the extra COVID risk was worth it for taking part in a quiz. I also lacked the energy to find 7 other people looking for an extra team member (it seems like the conference could have facilitated something here, even if it was just a whiteboard where you could post teams looking for people, or a Slack channel – except that most of the attendees seemed missing from Slack)
(Interestingly, this was the only organised social activity – e.g. there was no conference dinner).
Sunday Opening and Awards 0906-0927
Typical conference thanks etc – done early to accommodate people needing to get a limited number of flights out of Waihōpai Invercargill
General AoNZ Python history & update
Dropping having the conference in the location of the organiser (rotating North & South island), having it in Wellington every year instead
Changing where the conference is located going forward is an interesting choice. I’m sure there must have been much debate about this (although I’m a NZPUG member and don’t recall seeing it). I can see both advantages and disadvantages:
Advantages:
I assume some logistics are easier, particularly if you tend to stick with a single venue and so forth
The ‘average travel’ probably decreases, given that Wellington is located in the (rough) centre of the country, although this sort of calculation is tricky, because in reality it depends on how the attendees’ origins are distributed
Travel is likely much simpler for international travellers, since you can potentially arrive in AoNZ in Wellington, or at worst would have one connecting flight from Auckland (and there must be many flights Auckland/Wellington every day), rather than travelling to more off-the-path locations like Waitangi or Invercargill
Disadvantages:
A drawcard for me (and I assume for others as well) is seeing other places. That was certainly the case this year. I’ve been to Wellington many times (likely most people have) so there’s no appeal there, and even if you haven’t been, you will have after attending once.
Either the organiser needs to be located in Wellington, or have members of their team in Wellington, or you have to do a lot of organising remotely – this is probably not too difficult this year, and if you’re re-using pieces (e.g. a venue) then it matters less
There are no cheap travel/accomodation years for anyone outside of Wellington (although every year is a cheap travel year for those in Wellington). Not having to pay for accomodation or flights was definitely a drawcard the first time I attended, and I imagine that it still would be if I was early in my career – maybe a lot of this is handled by companies, but it still hits students for sure
Development/boosting the local Python communities (outside of Wellington) is lost
It’ll be interesting to see how this goes.
Lightning Talks, Christopher Neugebauer 0933-1055
I enjoyed many of these, although a lot were good as lightning talks rather than something where I’d want to see a 30-45 minute version.
Jonas, Promo for PyCon APAC 2023 and other PyCons in the APAC region
Grant, Opinion that Python (and Python-like languages, e.g. Mojo) will become the norm for development, with all other languages exceptions (by 2045)
Sarah Greeff, Issue of low uptake of programming by females in high school education
Sam, “Loss & Grief” (the Tamagochi Effect, need to be careful about downstream consequences of development)
Simon “DevOps for Highly Regulated Environments” – comes up against arguments that devops practices conflict with security regulations, but argues that it’s entirely possible
Kristina Photos of the conference – Kiwi PyCon is about people, and not only developers, basically thanking everyone through photos (started with te reo & a mihi :tada:)
Steve Baker Wavetable Synthesis, live demo playing an instrument (electric ukulele) (music and live demos are always very popular)
Kelsey Vavasour How to bring more people into programming (started with a te reo greeting :tada:)
Kesara PyPI Supply Chain Attacks (brief history of improvements that PyPI has made recently, and where there is still work to be done)
Warwick ?Walter? Basic introduction to binary/logic gates for addition (half/full adders), live demo of a home-built 1-bit adder with transistors (again, live demos with hardware are always popular)
Chelsea Finnie Learn A Sea Shanty (audience participation, learning a sea shanty), mostly just an excuse to get the room to sing a song together (audience participation and music are popular with a subset)
Hugh Davenport Showing a toy OS he has written (in x86 assembly), how he debugs issues with it using Python, live demo (one-handed), includes extremely minimal Python in the OS
Vicki McKay What to do when you don’t have a plan (decide to do something, follow instructions, listen and learn, follow the most passionate person, plan things); unclear on the message of the talk (maybe it was that people should participate in groups?)
Joelle Maslak How Networks Surprise Developers – works for Netflix, examples of how physical networks aren’t always as expected, how distance matters to speed
Daisy Talked far too fast (not the point of lightning talks), history of supermarket development in Australia and AoNZ (!)
Unconference
During the first two days of the conference, attendees could suggest topics for ‘unconference’ sessions and vote on suggested topics. My ride to Queenstown was leaving just before this, and none of the topics were ones I particularly wanted to take part in, so I skipped this.
I think the rise of social media over the last few years has significantly decreased my interest in unconferences. It (possibly unfairly) feels like these would just be 30 minutes of ‘arguing with someone on the internet’, but in person. If there was a practical point, that would be different; if it’s just a discussion, then I don’t feel it would give me value, and it would just get me annoyed. Again, this could definitely be unfair, particularly since I didn’t try these.
The topics were:
Can the Python Community Support Python Education in NZ? (answer: yes – and for the details this really relies on having people from education here, and I don’t think there were enough)
How Do We Improve the Python Packaging Experience, Especially for Beginners? (this seems like a talk for a bigger PyCon or elsewhere)
What are the Most Fun Things You’ve Done with Python (seems like this should just be lightning talks)
What Should Change in the Python Language Over the Next 10 Years / What Are Python’s Biggest Regrets or Mistakes? (this seems like it would drive me crazy)
Southern Tech: Is There Interest in Setting Up A Tech Advocacy Group Across Invercargill/Dunedin/Lakes District (probably good, but entirely out of my scope)
Is ChatGPT/Copilot etc Ruining Programming (this would also drive me crazy)
What Does Best Practice Python Data Science Look Like in 2023? / Do Notebooks Rot the Brain? (just not interesting for me personally)
What Can Python Learn from Other Languages? (another drive me crazy one)
Linters: Who Should Use Them? Which one? (Everyone, whichever one suits you/your team)
GNU Radio Live Demo (probably better during lunchtime)
Conference Closing
I obviously skipped this as well.
Closing Thoughts
I had three reasons for attending Kiwi Pycon this year:
To re-evaluate my now 10-year-old conclusion that attending Kiwi Pycon was not high value for me
To continue exploring areas of AoNZ that I haven’t travelled to
Originally, I was potentially not going to have a full-time development job at this time, and thought it could possibly have some job-search value; instead, the value came from combining the trip with working in Christchurch for a few days
I did get to see more of Christchurch, visited Invercargill for the first time, and did the Invercargill->Queenstown road trip that I was keen on, so from that perspective it was a win. The tacked-on induction visit in advance was really great (and also very valuable), and I doubt I would have done that without having this additional reason (Christchurch isn’t that far from Auckland, but it’s far enough, compared to Christchurch from Invercargill when already travelling), particularly since Ben & I would be meeting in Riga in a few more weeks anyway. It’s interesting that the biggest positives from my trip are in this area, which vanishes when the conference becomes anchored in Wellington.
I did get a chance to re-evaluate attending Kiwi PyCon, and that was valuable. My conclusion ends up the same: I don’t get enough value out of it to attend. This is even more the case than in 2013, since I could watch any of the talks when the videos are available (fairly soon, I expect), so it’s really the experience and the networking that’s providing value. There were plenty of interesting and smart people, and I had some interesting conversations, but nothing that was so amazing (or so unable to be replicated online) that I would pay >$1000 and consume 3 days for it.
If the conference had more of the really great talks, and especially more that were aimed at people that have been using Python for 10+ years, then that might change my opinion (this is something I thought would have changed since 2013), although there’s still the “videos are all online” counter. I would like a mix something like 10% beginner, 80% intermediate, 10% experts, and 20% ‘fun and interesting’ rather than entirely practical, and roughly 30% soft, 70% technical talks. Obviously, other people will have other tastes.
If you’re someone that really enjoys the social & networking aspects (I don’t) or looking specifically to be hired by one of the sponsors or are very early in your Python journey or want somewhere to practice a talk you’re going to give elsewhere, then it could be worth it for you. However I have to recommend everyone against attending Kiwi PyCon because of the Code of Conduct issue. That’s a complete blocker to attending, and should be for anyone in 2023 and beyond (unless it does get publicly addressed).
More Filmspotting catch-up. Hard to resist a Top 5 list that involves books, obviously!
I’m using roughly the same criteria as Josh & Adam (and Kristen) – in particular: anything that already has a film (even if it’s terrible) is excluded, and it has to be a full-length book. The latter is particularly challenging: in my opinion, short stories & novellas make the best page-to-film adaptations. Full-length books that would translate well to the screen are almost always going to be better as TV series. I’ve also avoided cheating this by suggesting a trilogy or series of films, although some of these could obviously have sequels.
There’s quite a bit of recency bias here. I’ve read a lot of books, but find it difficult to think back over which ones would work particularly well as films. I skimmed over my Goodreads list, but only recent reads have (my) ratings, and it’s tricky relying on average ratings – plus until recently I didn’t add books that I read but didn’t own (library, friends, family, etc).
5. Pastwatch: The Redemption of Christopher Columbus by Orson Scott Card. I’m generally wary of time travel stories (but am a sucker for time loops), although there are some great film examples. Ender’s Game was a terrible adaptation, so there’s a lot of risk here. (I’m aware of Card’s politics and religion, and my disagreement with most of it doesn’t change my enjoyment of his writing). There are some elements to the story that would play very well today: particularly, the environmental aspect, and the reality of Columbus’s impact on the people already living in the Americas. However, the titular redemption might be an issue. Gina Prince-Bythewood to direct (or maybe Roseanne Liang?), introducing new actors for the main cast, with either Justin Baldoni or Chris Evans to star as Columbus.
4. The Lies of Locke Lamora by Scott Lynch. This would need a careful screenwriter, to emphasise the heist/con story and ensure the fantasy element is almost incidental (until it’s critical to the story). It would have to be a long film, but you could probably sacrifice some parts of the story that lead towards the sequels without losing what makes it a great story. Apparently Warner Bros. were working on an adaptation, and after that fell through someone else was, but as far as I know there is nothing announced, so it’s up for grabs in this list. Ajay Friese as Locke, Ian Samuels to direct.
3. The 7½ Deaths of Evelyn Hardcastle by Stuart Turton (first published as The 7 Deaths of Evelyn Hardcastle). Apparently Netflix were going to make a series based on this, but gave up, so that frees it up for the list. I mentioned my love of time loops already, and this would be a great addition, mixed in with the Christie-esque aspects. I didn’t love the ending of the book, so I feel this could be one of the rare examples of a film that’s better than the source material. Maybe Mark Ruffalo as the protagonist, but I don’t have a director in mind.
2. The Loneliest Girl in the Universe by Lauren James. I love the genre twist in the middle of this story, and it would translate well to the screen, although you’d need to ensure that marketing (especially trailers, which have become awful in recent years) didn’t ruin it. There’s a lot of written communication in the story, and not seeing one of the characters is important, but a good screenwriter could solve that. Lauren James fancast this herself, so it’s hard to argue for anyone else to star. To direct: Gary Ross (bringing elements from Pleasantville, The Hunger Games, and Ocean’s 8).
1. Beak of the Moon by Philip Temple. It’s one of my favourite stories and definitely my favourite NZ novel. It would make an amazing animated film, made entirely in Aotearoa with local talent. There are two huge challenges: firstly, it’s a long book – but I think a good screenwriter (keep Walsh and Jackson as far from this as possible!) could solve that, and the imagery would do a lot of the work. More challenging is that all the main characters are male, and this is central to the story (and, in my limited understanding, to kea behaviour). I think you’d want relatively unknown actors to lead (Māori, in their 20s), although I could see cameos by Rhys Darby, Bret McKenzie and Jemaine Clement as the kākā, Taika Waititi as the kākāpō, Sam Neill as the albatross, and maybe Cliff Curtis as Glintamber. I have no idea who could direct – it should be someone local, and someone who can do animation well, which means someone new.
Honourable mentions (in no particular order):
The Crystal Cave by Mary Stewart. I’m not sure the world really needs another Arthurian film, even if it is based on one of the best adaptations. It’s also probably too long to make a good film, and The Hollow Hills would really need to be made as well. I’d still love to see it done well, and I think we’re at a point where there are enough people that can do historical fantasy on screen very well. This did have a TV series adaptation, which probably excludes it (I haven’t seen that).
Daughter of the Empire by Janny Wurts and Raymond E. Feist. Easily the best trilogy in the Riftwar Cycle, and Mara’s story, particularly in this first book, would easily be as compelling as Katniss, Tris, and Teresa were in the 2010s – and we’re due for a resurgence in great YA films (with less dystopia this time). However, I’m not sure you could do justice in a film, and it might have to be a TV series.
Gideon the Ninth, by Tamsyn Muir. I think the fantasy/sci-fi mash-up, the LGBTQ+ aspects, the humour, and the action would all combine to make a great film. I wonder if it would be too confusing to tell in under 2 hours and that you’d need to make this a TV series instead. It also feels like it would really require also making the other books in the series (once they’re all finished) and some of those would be even more difficult, and that also disqualifies it from this list.
Faerie Tale by Raymond E. Feist. Problem #1: I remember loving this story and re-reading it several times, but I haven’t read it in over 10 years, so it’s hard to recollect enough details to evaluate whether it would translate to the screen well. Problem #2: I watch hardly any supernatural thriller films, so don’t have any idea what it would be competing with. My gut feeling is that it would be great, though.
Memory Man, by David Baldacci. This would clearly make a great film – it was only bumped off my top 5 because there are already a lot of great crime films and I’m not sure that this would add anything truly new.
Hunting Party by Elizabeth Moon. This would be a great action/sci-fi story (much of the best sci-fi on film is space opera). I don’t have any reason for not including this, except that I couldn’t figure out which of my 5 top I would bump to make this. It’s a clear #6.
Last Friday (30th June, 2023) was my last day working for N-able after 18 years (a bit over 12 years of that for SpamExperts, then after the acquisition a bit under 4 years for SolarWinds MSP, then after the split almost two years for N-able).
I’m more able to write about things now, and have a lot of ideas of topics I’d like to cover, and hope to get to a bunch of those before starting my new job in September. For now, mostly for my own record, a brief summary of how it all ended.
Scaled Agile & the introduction of a Product Owner role
In 2020, SolarWinds decided to adopt the Scaled Agile framework (SAFe), under the name “SolarAgile”. The plan was to introduce this within the MSP side of the business first, and then extend it to “Core” (the traditional, large, part of SolarWinds) and “Cloud” (Pingdom, AppOptics, Loggly, Papertrail). I was part of the pilot group helping to shape this in the initial adoption, which was actually pretty great.
One aspect of this project was the introduction of a Product Owner role. There were a few POs across different products, but no real consistency about what they did, and most teams didn’t have a PO. For Mail Assure & SpamExperts, I handled the PO role in the initial pilot (one of the requirements was someone that was both extremely familiar with the product and also very experienced with Agile), but the intention was that I would move to PM after the pilot period.
By this time, the Mail Assure & SpamExperts engineering team had shrunk in size quite a bit (down to two teams), and wasn’t expected to grow again. Similarly, from a product perspective, the goal was high retention rather than a lot of growth (especially new logo growth). I argued that we did not need both a PO and a PM for a product of Mail’s size (a “practicality beats purity” type argument). However, TPTB went ahead and hired someone for the PO role anyway.
In some ways, the acquisition (August 2017) was the beginning of the end – particularly since SolarWinds and then N-able were both very focused on people being in offices (COVID disrupted this as it did everywhere, but even now there is strong encouragement to be back in an office at least part of the time). However, I feel like adding the PO role to the Mail team was the real beginning of the end, because it was always clear to me that we were ‘overprovisioned’ in product people for the nature of the product.
Everything went well enough for almost a year – a lot of that time was our new PO learning the ropes while I learnt more about the parts of PM that I hadn’t done before, and tried to extend out our strategic planning.
Reorganisation and Product Owners transformed to Product Managers
In late 2021, after the split from SolarWinds, N-able had another one of its frequent leadership changes, with a new CTO followed not long after by a new VP of engineering and a new VP of product. This (mostly) spelt the end of SolarAgile, and the decision was made to completely remove the PO position, with the actual PO work being brought under the responsibility of the PMs.
Essentially overnight, all the POs were now either PMs or moved elsewhere within the company (e.g. into an engineering role) – many have since left N-able. This transition was handled very poorly, with the news of the change arriving to at least some of the POs at the same time it arrived to the company as a whole.
Most relevant to my journey, this meant that Mail Assure & SpamExperts now had two PMs, the former PO and me. I already felt like both a PO and a PM was too much ‘product’ for the team, and this obviously made it that much worse.
Again, everything went ok for some time – while we were both PMs, we did still tend to split up the work in similar ways as before, with me focusing on the higher level strategy and prioritisation and my fellow PM working on more of the operational aspects.
Interlude: Promotion and Pay
During 2022, I was promoted to Senior Product Manager (although as a contractor I technically didn’t have any title at all). This had a reasonable pay bump, which was particularly welcome in a year where inflation was suddenly something that was relevant again.
This was the third significant pay increase that I received in my SolarWinds & N-able journey. The first was after SpamExperts was acquired – this was basically adjusting to be closer to ‘market’ rates (SpamExperts, being fairly small, was generally at the lower end, although it had certainly improved, and in the last 5 or so years there I had no complaints about how much I was paid), and also essentially for retention purposes (ensuring as many key people as possible stayed after the acquisition). The second was a couple of years later, after the former SpamExperts CEO & CTO had left, and I was very unhappy in my architecture role and with the direction of the product. This was again basically a retention play (which worked – I also got other changes, which helped, but for the most part I was willing to stick around for a while because of the comparatively high remuneration).
It was never made super explicit, although it came up in some conversations, but I feel that these increases actually ended up contributing to the end (of course, in the meantime, they contributed plenty to my financial security). If you looked at the cost of PMs for the Mail team, then I was the bulk of that, and my salary was somewhere in the ballpark of 2-3% of the product’s revenue. When I moved product (more on that below) this would have been even more noticeable, because the strong retention motivation was no longer there (or at least no longer visible to the people making the decisions).
This isn’t a complaint about being well paid for a few years, by any means. But I do think that it was one factor in the eventual end of my tenure.
Moving Products
Around August 2022, it was clear that people had looked at the budgets for Mail and decided that having two PMs was not justified. I don’t disagree with this (and indeed had argued it all along), although I think things would have played out pretty differently if we’d never had a PO at all (which is certainly not her fault, and is not to denigrate any of the excellent work she did).
Either I would need to move or the other PM would need to move. It was made clear to me that the preference was for me to move – as a senior PM who was well regarded by many in N-able across several products (although certainly not universally; you can’t please everyone), the expectation was that it would be simpler to put me in a new product and have the other Mail PM continue with Mail Assure and SpamExperts.
I didn’t like this plan. I did have, in my performance planning documentation, some statements around either joining a new product or building something new within the same product & team. However, those were in the 3-5 year range, and I was pretty clear about having some work that I really wanted to finish with Mail first.
(As an aside: I never really got the chance to properly implement a strategy as Mail PM. The first 12-24 months were taken up with (a) work on security as a result of the SolarWinds Hack, (b) work to adjust as a result of the SolarWinds/N-able split, and (c) finishing off and cleaning up after the previous PM, who really didn’t understand the product at all. After that, we were just starting to get underway with some new work, and then I was moved away).
However, it was clear to me that my moving was going to be best – it would hopefully turn out well for me (wrong, as I came to find out), for the PM who would stay (somewhat correct), and for everyone else as well (unclear). So I accepted that this was going to happen, and was moved to Cloud User Hub, a product born of another acquisition (spinpanel) that had launched just a month or two earlier. More on how that was sold to me and what the reality was another time.
Another Reorganisation
The first couple of months on Cloud User Hub were rough (mostly because of the state of the product and the disastrous launch), but by the middle of October were improving and things were looking up.
At this point, N-able joined many other tech companies and laid off a large chunk (maybe 15%?) of employees, and did a bunch of restructuring as a result. I was kept on, but my immediate manager was gone, and the product was moved to a separate group as well, under the leadership of someone completely new to N-able.
Looking back, it feels like a foregone conclusion from this point that this would be it. The product was in a huge mess, and although I worked on improving that and some progress was made, and although I joined after the mess was made, you couldn’t look at my work on Cloud User Hub and see anything like success. In addition, I was now reporting to someone (my manager’s manager in particular) who had no history with me at all, so there was no existing goodwill or understanding of the quality work I had done in the past.
Final 4 Months
On February 28th, I was told that I was no longer required and would have 4 months of notice, finishing up at the end of June.
The official reason was the challenges around location and time zone. To be fair, this was much harder with Cloud User Hub than it had been with the Mail team. The majority of the engineering team were in India (good overlap with NZ), some in the Netherlands (good overlap with NZ, and a long history of working with people in that time zone), and some in the UK (reasonable overlap with NZ, and several years of working with people in that time zone). However, N-able has almost all of its senior management in the US, and combining the US (or Canada, where there are other teams) time zones with the Europe/Asia ones leaves no good times for working with NZ.
For all of the 18 years I was with SpamExperts, then SolarWinds, then N-able, I was extremely flexible around working times (anyone who worked with me would attest to this). Until Cloud User Hub, this was the occasional meeting during the night, and working around 4-6 hours in my evening (which suited me well for many years anyway). After moving to Cloud User Hub, I would regularly have meetings at 3 a.m., 4 a.m., and so on – at least weekly, generally multiple times a week. I made this work, but it wasn’t good for me (or for N-able, really).
Ironically, this was much worse in the December-February period (excepting the week I vanished in February thanks to Cyclone Gabrielle) than later, when there was less need to be meeting with senior leadership and more work with the actual teams, where time zones aligned well enough. Travel to do in-person work (to Edinburgh, for example) was expensive for someone in NZ, though (none of the engineers in India would be doing that travel either).
More to say about my adventures over the last 18 years, but that’s essentially how it all came to an end!
I love PEP 20, the Zen of Python (I used to have a set of t-shirts I had made that had one koan on each), and I think it’s actually applicable more widely than just code (Python or otherwise). I certainly agree that following its guidelines is a great start.
Assuming that there’s some sort of CI pipeline, that should include enforcement of a style (ideally auto-correcting to one). Black is the obvious choice here, and it’s (for the most part) following PEP 8, but the most important thing is to have a consistent style where a tool does all the work.
Shift work ‘left’ of the CI pipeline, and make it easy for contributors, by having your style format of choice part of pre-commit or some similar workflow, and have an appropriate configuration file for that in the source repository.
Follow PEP 257 for your docstrings (and obviously have good docstrings). IDEs sometimes shove in a bunch of text around return values/types and enforce conventions – I’m less bothered about those, and I think generally they (and the related recommendations in PEP 257) have been supplanted by type annotations in many cases. When other people are using the library, they’ll see these docstrings, and they’re probably also part of your automated reference documentation.
While on the topic of docstrings, put an example or two in them anywhere it makes sense, and use doctest to make sure that they stay correct.
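For example (a minimal sketch): the example lives in the docstring, and doctest checks that it keeps working.

def mean(values):
    """Return the arithmetic mean of values.

    >>> mean([1, 2, 3])
    2.0
    """
    return sum(values) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()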
Have a style for imports (and use isort or something similar) to automate/enforce that as well. I personally prefer sorting by (length, normalised case alphabetical) with groupings for, from top to bottom, the standard library, third-party libraries (with a blank line between each, ordered approximately by how well established they are), internal libraries, and then finally imports from within the same package. But again, it’s the consistency that matters most. (This one isn’t really about API design).
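Concretely, that grouping looks something like this (library names purely illustrative; mycompany.auth is hypothetical):

# Standard library, sorted by (length, case-normalised alphabetical):
import os
import json
import asyncio

# Third-party, most established first:
import numpy
import requests

# Internal libraries:
import mycompany.auth

# Same package:
from . import utils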
“Pythonic”
In addition to the items that Ben mentions, I think it’s important to design the API so that it works well with Python idioms. Exactly what this entails depends a lot on the specifics of the API, but for example:
Functions & methods should be designed so that they can easily be used with the functools module (Ben has an example of this).
Provide generators rather than returning a tuple or list when possible. These should work well with the itertools module, with yield from, etc.
Work well with the standard library logging module (but don’t be noisy when someone isn’t using it). The logging module is an example of a standard library API that is poorly designed (or perhaps just not particularly Pythonic), in my opinion, but it’s the default choice for logging and is utilised by tools like Sentry.
Context managers. For example, if your object connects to something (a file, over a network, etc.) then have the cleanup done in a method called close() so that callers can use contextlib.closing (but ideally also provide your own __exit__ to handle this; see the sketch after this list).
Where appropriate, make it easy to serialise data. This might include supporting pickling objects, but might also be to other formats (JSON, YAML, etc).
Async
The Python releases that I really like are the ones that focus on improving performance (sometimes this is CPython specific) and usability (like the improved tracebacks in 3.11), and the standard library. In my opinion, for the most part, the Python language itself does not need regular changes, and sometimes these can be at the detriment of some of the aspects of Python that make it great (like readability and ease of learning).
I’m not (yet?) a fan of the walrus operator or pattern matching, for example. I have mixed opinions about type annotations. However, one change to the language over the last decade that I feel is definitely worthwhile is the addition of async & await. It was possible to use coroutines in Python previously, and understanding how async works does add complexity to learning the language, but I feel it expands what can be easily done with the language, in the same way that you can use Python in a very object-orientated way, or a very functional way, and so on.
One catch with async & await is that they have a tendency to spread throughout your code. You can ‘collapse’ async code into a synchronous action by adding an event loop and waiting until everything is done, but for the most part if you’re using an async library then you’re probably building your entire app in an async fashion. It’s definitely much simpler to make use of a synchronous call inside of an async method than vice-versa.
There are libraries that have added async functionality after originally being synchronous (e.g. Django) but from what I’ve heard that has been complicated to get right and the API is less natural than otherwise (e.g. compare with FastAPI).
Whether a library should have predominantly async methods, or both async and sync versions, or avoid async entirely depends a lot on what it’s doing and how it’s expected to be used. However, it’s definitely something to think a lot about in advance, rather than trying to adjust mid-flight.
Exception Chaining
Ben has a good section on errors and exceptions. The only thing I would add is that you can and should explicitly chain exceptions so that it’s clearer to the caller what’s happening. I think Ben is referring to that here, but doesn’t call it out explicitly:
For example, if your library can raise ssl.SSLError when it’s calling an HTTP API, it’s probably best to catch that and re-raise as fishnchips.NetworkError.
You should do this to show that the SSLError was the “direct cause” of the NetworkError:
try:
    ...
except ssl.SSLError as e:
    raise fishnchips.NetworkError() from e
Related to errors & exceptions, you should make use of the warnings module and the various Warning exception classes.
In this section, Ben also says:
APIs should be designed so that it’s hard to make mistakes.
He doesn’t have this as one of the takeaways, but I would ‘promote’ it to one.
The standard library unittest package
Ben has unittest as an example of a poor standard library API. I agree with this, and I think it provides a good example of where API design can be challenging. The main problems with the unittest API (such as assertEqual(a, b) not being assert_equal(a, b) or a plain assert a == b) come from the API being an implementation of the xUnit API (originally SUnit and popularised by Java’s JUnit but extremely widely used).
The question here is how closely the Python xUnit implementation should match the implementations in other languages (or, put another way, whether the Python standard library should have an xUnit implementation versus a library that supports unit tests that isn’t necessarily xUnit). If you’re coming to Python’s unittest module from Java (and JUnit) then these are comfortingly familiar:
import unittest

# Probably a relative import in practice.
import example.util


class MyFirstUnittestTests(unittest.TestCase):
    def setUp(self):
        self.calculator = example.util.Calculator()

    def test_addition(self):
        # In Python 2, this could have been assertEquals()
        self.assertEqual(2, self.calculator.add(1, 1))
There are lots of other examples of APIs like this, where some underlying functionality is being exposed in many different languages, or where similar functionality is being implemented in many different languages. Sometimes, you get both, like with the python-mysql library:
import contextlib
import MySQLdb
# Fairly Pythonic, and aligns with the DB API (PEP 249).
# The Connection object should have an __exit__ that closes.
with contextlib.closing(MySQLdb.connect(**[connection args])) as db:
    with db.cursor() as c:
        c.execute("SELECT col1, col2 FROM tbl")
        for col1, col2 in c.fetchall():
            pass
# Low level API that essentially exposes the MySQL C API.
import _mysql as mysql

conn = mysql.connect(**[connection args])
conn.query("SELECT col1, col2 FROM tbl")
result = conn.store_result()
for col1, col2 in result.fetch_row(maxrows=0):
    pass
conn.close()
In general, I believe it’s better to design your API to match the language, and copy the intentions and outcomes from the source, rather than try to completely match the API. This is one reason why pytest is superior to the standard library unittest.
The standard library csv package
Ben has the csv module as an example of a good API, and I generally agree, and I think it’s particularly so given how poorly defined the CSV format is, which makes working with CSV files much more challenging. The one nit I have is:
import csv
with open("some.csv", newline="") as f:
    reader = csv.reader(f)
    for row in reader:
        pass
I don’t love that you have to know/remember to open the file with newline="" (in Python 2, you had to remember to open the file in binary mode). Most of the time it won’t make any difference, but when you have a CSV with a newline in a quoted field, it’ll break if you don’t do this (I’d argue that if you have that sort of CSV you’re perhaps using the wrong serialisation format, but often that’s out of your control).
It’s more Pythonic to create objects from files than filenames (more technically: pass something file-like that supports the file protocol rather than something that supports the string protocol). It does feel like passing a string to csv.reader is generally wrong (you’ll get each character of the string as a one-column row), and the first argument to csv.reader can already be a list (or similar) or file (or similar), so perhaps a string could be taken to mean a filename. Neither csv.reader_from_filename nor csv.DictReader.from_filename seems Pythonic, though. Having csv.reader call reconfigure() on the passed object is probably a bit too magic (explicit is better than implicit!).
In summary, this is a (very small) part of the csv API that I don’t like, but I don’t have a good suggestion for solving it, either.
from library import something
Ben says:
Takeaway: Design your library to be used as import lib ... lib.Thing() rather than from lib import LibThing ... LibThing().
I agree with the majority of his post, but this one gets a huge 100% from me specifically. Namespaces are one honking great idea!
Global configuration and state
In this example code of Ben’s, he argues against having a module-level DEFAULT_TIMEOUT:
DEFAULT_TIMEOUT = 10

def order(..., timeout=None):
    if timeout is None:
        timeout = DEFAULT_TIMEOUT
    ...
This is an interesting argument, and goes against the common refrain (probably originating from languages like C) that you shouldn’t have ‘magic’ numbers in your code and should define them at the top level instead.
If your intention is that people should be able to change the default, then you should definitely do this differently (e.g. as Ben describes). If your intention is that this default is never to be changed, then you can make this a little better in modern Python (3.8+):
import typing
_DEFAULT_TIMEOUT: typing.Final[int] = 10
def order(..., timeout=_DEFAULT_TIMEOUT):
    ...
However, you’ll need some sort of type checker to validate that it’s actually not changed (which does go along with Python’s “consenting adult” type approach to things like private variables).
Type Annotations
Ben has a good section on type annotations, which pretty much exactly matches my feelings. I don’t love them, or feel that they are themselves really Pythonic (although their use in libraries like Pydantic does, somehow). I completely agree with Ben’s takeaway:
On balance, I definitely think it’s the right thing to do in 2023 to ship your library with type annotations.
And of course, don’t just use them, but run Pyright or MyPy over your library’s code on every commit.
Ben’s last example of a positive of type annotations is:
They help your IDE provide better navigation and auto-completion.
I agree that this is the case, and one of the more compelling reasons to use type annotations, and also why there’s – to a certain extent – an obligation on library/package developers to provide them. However, I find it generally disappointing. I strongly feel that this functionality should be provided by the IDE without the user needing to put in all the manual work of explicitly typing everything. I wish we had solved this need with better tools rather than by putting a heap of additional work on developers – and especially without adding a heap of boilerplate to Python code. I understand that the dynamic nature of Python makes this hard, but hard problems are good ones to solve.
Overriding Operators
Ben gives a rule of thumb:
Only override math operators like a+b if you’re creating a number type.
I agree with this for almost all math operators, except perhaps + (for example, using + to concatenate strings is more natural than str.join, and I think it was right to make that implementation faster rather than focus all energy on getting people to call join).
I think the standard library has a perfect example of a mistake here, which is pathlib and the division operator. I believe this is a cute hack that is maybe ok in a third-party library, but definitely does not belong in the standard library:
>>> import pathlib
>>> p = pathlib.Path("~")
>>> conf = p / ".mypackage" / "config.ini"
>>> str(conf)
'~/.mypackage/config.ini'
There was a lot of controversy about this back in 2012 (the PEP has a reference to some of it), and maybe the right decision was made, but it’s unPythonic and distasteful in my view.
Keyword arguments
Ben makes an argument for keyword arguments helping with backwards-compatibility, which I agree with. He has this example:
def order(chips=None, fish=None):
    """Place an order.

    Args:
        chips: number of scoops of chips
        fish: number of fish
    """
I would argue that these should be keyword-only arguments. If I’m writing a call to order, I’ll never remember whether fish or chips comes first (I would even argue that they are backwards here, because – at least where I am – people say “fish and chips”, not “chips and fish” – an irreversible binomial if you’re a linguist). An IDE might help out when writing, but when reading the code you’re not necessarily going to have that context available. A two-character change, but it prevents easy mistakes:
def order(*, chips=None, fish=None):
    """Place an order.

    Args:
        chips: number of scoops of chips
        fish: number of fish
    """
I also worry a bit about how this would scale. A fish’n’chip shop probably has at least a dozen items on their menu, and that’s a lot of arguments. It could be generalised, something like:
def order(**kwargs):
    """Place an order.

    The arguments should be the name of the item, with the value either
    the number of the item, a tuple of (quantity, type), or a list of
    such tuples.
    """
If you want to have arguments that are not items in the order (like the timeout one Ben has as an example), it feels messy for that to be mixed in with the items. Using **kwargs helps, because you’d make timeout an explicit keyword-only argument and that would distinguish it, but it still feels untidy to mix order items and order configuration into what is essentially one dictionary.
The item names are limited to what’s supported by Python names. That means it can’t start with a number, can’t have spaces, can’t have punctuation, and so on. For the most part this is probably fine – you can have onion_rings instead of ‘onion rings’ and the like. It feels like it might get challenging to remember the rules for converting from ‘real name’ to ‘argument name’, though. I also suspect that eventually the method will need the ‘real name’ for logging or display or similar.
Related to the issue of remembering the rules is remembering what can be ordered. With Ben’s version, the signature tells you that you can have fish or chips. With my kwargs one, it does not – presumably there is some validation of the arguments in the function, but that doesn’t help the caller prior to runtime.
I’m not sure how well this would continue to scale. Say that after adding crumbed fish, we decide to sell both Snapper and Tarakihi. Do we now support an argument like fish=(3, 'crumbed', 'snapper')? How do I remember that it’s not fish=(3, 'snapper', 'crumbed')? How are we going to handle burgers, which have lots of ingredients that people might want to customise?
I realise Ben was using this as an example of how keyword args and dynamic typing help backwards compatibility (which they do), and not trying to design the ultimate fish&chips library API. However, I feel like you’d need to move to a single collection of objects (probably dataclasses) before long.
LLM & generative AI buzz is everywhere, and assistance writing code (either via something like Copilot or by just querying a model) is supposedly a huge productivity boost, so I was curious enough to try this out. Simon Willison has written a lot of interesting things about LLMs, including how he’s used them for development, and made it seem like something that could be useful.
As a first experiment, I wanted to use the “ask the LLM to draft something for me” approach (rather than an IDE-integrated tool like Copilot), and I wanted to avoid having to go through a ‘conversation’ of prompts (particularly one where I had to guide the model to a solution I already knew or guessed). It’s usually pretty easy to find what I need by looking at the docs (or code), so if this is going to be useful for this sort of work, then it needs to be simple and fast.
Details of the two attempts I’ve made so far are below, but in summary: I’m underwhelmed so far. Maybe I’m not trying the right sort of problems (one was real, one was an exercise), or maybe I need to be better at prompt crafting or more willing to spend time ‘talking’ with the model. I have some investigation/experimentation to do with production use of LLMs, so I may try out some more personal use then.
Redirecting users to a custom landing page in Django
I have a Django site that’s about 13 years old, so has gone through a lot of Django versions. It has the ability to specify a landing page (ie. where you end up after logging in) per-user, if one isn’t specified in the params (ie. in the typical “next” arg). I noticed that at some point this had broken, and wanted to fix/re-implement it for modern Django.
I asked
How do I get Django to redirect a user after logging in to a page specified in a user model field?
The answer I was given is:
You can get Django to redirect a user after logging in to a page specified in a user model field by defining a custom redirect function in your `views.py` file and using it in your `urls.py` file.
In your `views.py` file, define a function that retrieves the desired redirect URL from the user model field and redirects the user to that URL after login:
```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect

@login_required
def custom_redirect(request):
    # Replace 'redirect_url' with the name of your user model field
    redirect_url = request.user.redirect_url
    return redirect(redirect_url)
```
In your `urls.py` file, map the custom redirect function to the Django login view:
```python
from django.contrib.auth.views import LoginView
from django.urls import path

from .views import custom_redirect

urlpatterns = [
    path(
        "login/",
        LoginView.as_view(template_name="login.html", redirect_authenticated_user=True),
        name="login",
    ),
    path("redirect/", custom_redirect, name="custom_redirect"),
]
```
Note that in the above code, we're using Django's built-in `LoginView` and setting the `redirect_authenticated_user` parameter to `True` to automatically redirect authenticated users to the `custom_redirect` URL. We're also specifying the path to the login template using the `template_name` parameter.
Finally, update your user model to include the `redirect_url` field:
```python
from django.db import models
from django.contrib.auth.models import AbstractUser

class CustomUser(AbstractUser):
    redirect_url = models.CharField(max_length=255, blank=True, null=True)
```
Now when a user logs in, they will be redirected to the URL specified in the `redirect_url` field of their user model.
The immediately obvious problem here is that this doesn’t work. Nothing gets the user to the /redirect/ path, so although that would work to redirect to the user’s chosen page, it isn’t any part of the login flow. The second obvious problem is that the redirect_authenticated_user doesn’t do what the model described at all – it’s basically about bypassing the login page when someone is already logged in.
(Side note: while looking up links for this post, I came across this SO question about this, which I suspect informs a lot of the LLM’s answer. It’s not that clear either, but I guess going to SO is what people typically did before LLMs? Personally, my go-to is always the docs.)
I could make a solution like this work – if you set the LOGIN_REDIRECT_URL to be the redirect path and a few other adjustments. However, it’s messy and handling a ‘next’ style param won’t work properly without duplicating a bunch of code.
For reference, what I did (which I believe is the ‘correct’ approach, at least as-of the version of Django I’m using) is:
# Add this line to my URLs.
path("accounts/login/", [...].login.LoginUserRedirectView.as_view()),
# Add this class to a login module in my app.
class LoginUserRedirectView(LoginView):
    """Like the parent, but with a custom per-user redirect system."""

    def get_redirect_url(self):
        # It would probably be cleaner to just check for the Anonymous User.
        if not self.request.user or not hasattr(self.request.user, "last_login"):
            return super().get_redirect_url()
        # Having "next" in the request parameters is already handled by the
        # normal login view and will bypass this view completely.
        redirect_to = self.request.POST.get(
            self.redirect_field_name,
            self.request.GET.get(self.redirect_field_name),
        ) or self.request.user.landing_page or ""
        url_is_safe = url_has_allowed_host_and_scheme(
            url=redirect_to,
            allowed_hosts=self.get_success_url_allowed_hosts(),
            require_https=self.request.is_secure(),
        )
        return redirect_to if url_is_safe else ""
The documentation on this (unusually for Django) isn’t great. However, the code is very clear, and makes it pretty obvious how to do the appropriate subclassing (as well as some safety checks).
Evaluating a basic maths expression in Python
The second one I tried was from an interview exercise. When I was talking to interviewers after completing the test, one of them mentioned that there were a lot of submissions that were clearly from LLMs, which made me curious how one would have done. Using an LLM during the exercise didn’t occur to me, but I doubt it would have appealed anyway, since it would consume a bunch of quite limited time and I wouldn’t have expected it to do particularly well.
The task was to take a string like “3 + 7 / 5 * 2” and give 5.8. No brackets, only integers, only addition, subtraction, multiplication, and division, respecting the precedence of operators.
I asked
Write a Python script that can add, subtract, divide, and multiply given an input string, and respects the order of operators, so multiplication and division are done before addition and subtraction.
The answer I was given is:
Here's an example Python script that can perform basic arithmetic operations while respecting the order of operations:
```
def calculate(expression):
    # Split the expression into a list of tokens
    tokens = expression.split()

    # Initialize the result to the first number in the expression
    result = float(tokens[0])

    # Loop through the remaining tokens and perform the appropriate operation
    for i in range(1, len(tokens), 2):
        operator = tokens[i]
        operand = float(tokens[i+1])

        if operator == '+':
            result += operand
        elif operator == '-':
            result -= operand
        elif operator == '*':
            result *= operand
        elif operator == '/':
            result /= operand

    return result
```
To use this script, simply call the `calculate()` function with a string containing the arithmetic expression you want to evaluate. For example:
This is also very clearly wrong. For one thing, the example at the end has the wrong answer – if you ignore precedence then it would give 6 and if you do it correctly it would give 9 (I’m not sure how to get 10). Looking at the code, it also clearly does nothing at all to respect operator precedence.
More importantly, there’s no code that even vaguely hints at dealing with operator precedence. Also, converting to a float is risky with maths unless you really know the context and understand the implications.
For the record, I considered three approaches to this.
eval()
The first was trivially simple – use eval(). I spent quite a bit of my allowed time internally debating whether the point of the exercise was to check whether I knew eval() existed and how to appropriately use it and that I wouldn’t pointlessly re-implement built-ins – or whether it was not meant to be used, even though that wasn’t in the (otherwise quite detailed) instructions. I put in a bunch of code to handle errors and additional safety restrictions, but at heart, this is just
return eval(input_string, {}, {})
Double-pass Item Replacement
The second was to do two passes of the input string (first for multiplication and division, and second for addition and subtraction), replacing chunks of the expression by the results. This is fairly straightforward, and my main concerns at the time were that this seems like it would get very messy as soon as you try to extend it to do anything else, and that tracking your location in the expression gets messy when you’re changing the length as you iterate through it.
Roughly, again ignoring error handling and so forth, and ignoring that this has a bunch of unneeded conversion to decimals, this is:
tokens = s.split()
i = 1
while i < len(tokens):
    if tokens[i] == "*" or tokens[i] == "/":
        left = decimal.Decimal(tokens[i - 1])
        op = tokens[i]
        right = decimal.Decimal(tokens[i + 1])
        if op == "*":
            tokens[i - 1:i + 2] = [left * right]
        else:  # op == "/"
            tokens[i - 1:i + 2] = [left / right]
    else:
        i += 2
i = 1
while len(tokens) > 1:
    if tokens[i] == "+" or tokens[i] == "-":
        left = decimal.Decimal(tokens[i - 1])
        op = tokens[i]
        right = decimal.Decimal(tokens[i + 1])
        if op == "+":
            tokens[i - 1:i + 2] = [left + right]
        else:  # op == "-"
            tokens[i - 1:i + 2] = [left - right]
    else:
        i += 2
return tokens[0]
In terms of an exercise, this shows an understanding of replacing a slice of a list in Python, and the performance isn’t terrible (memory is fine, looping twice is ok). There’s a bunch of tidying up that could be done, but it would probably have sufficed. I don’t like that it would get complicated quite quickly if you expanded it – adding parentheses and brackets, for example. It is better than eval, unless the point is to show that you shouldn’t reimplement something unnecessarily.
Convert to Post-fix Notation, Evaluate
The third approach, which is the one I like most, was to convert the expression to RPN and then evaluate the RPN expression. I could remember that RPN evaluation was trivial (from way, way, back in my student days) and also that it was fairly simple to convert from in-fix to post-fix (also something I vaguely remember doing, probably in a Data Structures & Algorithms course ~25 years ago, probably in C++). I remembered that Dijkstra had an algorithm for this and the name had something to do with railways, but not the exact details (I looked it up for this: the Shunting Yard algorithm), and that it was basically just using two lists (I would have had to look up the exact details or play around with this, but for the simple case of just addition, multiplication, subtraction, and division, I could remember enough).
Roughly (without extensive testing or any of the error checking, etc), this is:
# Convert from in-fix to post-fix.
precedence = {"+": 2, "-": 2, "*": 3, "/": 3}
tokens = s.split()
op_stack = []
output_queue = []
for token in tokens:
    if token in "+-*/":
        while op_stack and precedence[op_stack[-1]] >= precedence[token]:
            output_queue.append(op_stack.pop())
        op_stack.append(token)
    else:
        # Assume a number for now, ie. we're ignoring parentheses and other things.
        output_queue.append(decimal.Decimal(token))
while op_stack:
    output_queue.append(op_stack.pop())

# Evaluate the post-fix expression.
funcs = {"+": "__add__", "-": "__sub__", "*": "__mul__", "/": "__truediv__"}
i = 0
# A for loop would work here, but I hate changing a list while iterating through it.
while len(output_queue) > 1:
    if output_queue[i] not in funcs:
        # Skip this, we'll grab it when we get an operator.
        i += 1
        continue
    # Go backwards to get the two operands.
    i -= 2
    left = output_queue.pop(i)
    right = output_queue.pop(i)
    op = output_queue.pop(i)
    # This probably tries to be too clever and a simple 4-clause if statement would be fine.
    output_queue.insert(i, getattr(left, funcs[op])(right))
return output_queue[0]
In terms of the exercise, this beats eval (again assuming the point isn’t avoiding pointless work) and I feel it beats the previous version, since it’s more readily extendable. Using the magic methods rather than just an if statement is probably unnecessary, but shows an understanding that they exist, that you should call them against the object, and shows an understanding of using functions as first-class objects (having them in the dictionary). It’s more expensive in terms of memory – it could be improved a bit, but generally it’s creating new containers not adjusting the existing one, so will always be worse. The code could do with a bunch of cleanup, especially the RPN evaluation, but it suffices for something done in a quick exercise.
Performance-wise, with a few trivial test statements (not trying to do this seriously at all):
$ python -m timeit -s "import eval_math as m" "m.test_eval()"
10000 loops, best of 5: 21.8 usec per loop
$ python -m timeit -s "import eval_math as m" "m.test_double_loop()"
20000 loops, best of 5: 10.6 usec per loop
$ python -m timeit -s "import eval_math as m" "m.test_rpn()"
10000 loops, best of 5: 24.5 usec per loop
I’m sure that LLMs can assist with coding, and make me more efficient. I don’t feel I have figured out the way to make that happen, yet. More experimenting to do in the future, I suppose.
Josh and Adam did this as a draft, and I can’t really do that since I don’t know how my choices would have impacted their choices – although I would have got first pick, guessing Killmonger had 1000 kills (low but much closer than they guessed). So, just a regular top 5.
Similar criteria as they used: this isn’t the most powerful villain – it’s the ones that are the most interesting, and that make their film(s) more enjoyable and compelling.
5. Quentin Beck (from Spider-Man: Far From Home) – it’s done regularly in other films, but generally it feels like the MCU doesn’t do films where the villain starts out (appearing) as a hero and then that turns around (although they did it very poorly with Wanda). Good backstory and no real super-powers (although crazy good tech).
4. Ultron – not my favourite Avengers film, but Ultron as a character is really interesting. More humour than a lot of the villains, reasonable backstory.
3. Loki – a tricky choice, since he’s not really a villain any more, but certainly was originally. So much fun, compelling motivation, great dynamics with everyone else. It’s obvious why he kept coming back.
2. Killmonger – for all the reasons that Adam and Josh outlined. Pretty compelling backstory, really flawed.
1. Helmut Zemo (from Captain America Civil War) – compelling motivation, and achieves so much without having any kind of super powers or vast wealth.
Honourable mentions: like Filmspotting, I only included films, but if the TV series were included, then Kilgrave (from Jessica Jones) would definitely be in my top 2 (Salinger and Trish are also great). Fisk (from Daredevil) would probably make my list as well. Ward (from Agents of S.H.I.E.L.D.) would be in the running as well, especially from the earlier seasons. Arthur Harrow (from Moon Knight) would be one I’d have to consider too.
5. Chris Pratt – I do like the Guardians movies (less so the second one), but I’m not a big fan of the Jurassic World trilogy. The LEGO Movie and Passengers were good, but nothing else stands out to me.
4. Chris O’Dowd – using the same “5th Chris” as Josh & Adam did. Good in the MCU, but the other things I’ve loved him in are TV, which seems a cheat here (The Big Door Prize, The IT Crowd). I did like Juliet, Naked.
3. Chris Hemsworth – really good as Thor (except the latest one). Mixed on Extraction, Men in Black, Huntsman. Disliked Ghostbusters. Wrong era in Home and Away for me to know him from that. A few blind spots here.
2. Chris Pine – the Star Treks were so-so. I liked the first Wonder Woman, but did not like the second. I’ve heard good things about the new D&D movie, but haven’t managed to see it yet. Spider-Verse is one of my favourite movies, but he’s not a big part of that. A lot of other films that are blind spots for me.
1. Chris Evans – easily the best MCU Chris (not just the character, but in the biggest group of the better films). Ghosted was fun, The Grey Man was ok, Lightyear was an odd movie, but I like him in it, hated Don’t Look Up, enjoyed the small cameo in Free Guy a lot, liked Knives Out, liked Gifted, Fantastic 4 was not great.
(A while back The Spinoff asked for suggestions for the best book-to-film adaptations, and this was my suggestion. Post theme song: If You Think This Is Real Life by Blossoms.)
The Map of Tiny Perfect Things (2021, streaming on Prime Video) is based on a short story (of the same name) in the anthology book Summer Days & Summer Nights (edited by Stephanie Perkins). In general, I find short stories produce better film adaptations than novels do.
It’s a great story – a modern (and much better) version of Groundhog Day. It’s romantic, deeper than you first think, and has fun playing with time-loop tropes. (Mild spoiler, sorry.) It also has a truly wonderful moment when both the narrator and the reader realise that the story isn’t actually about him at all.
The film takes all of this and is true to all the important parts, but also improves on some of the weaker parts of the story. Kathryn Newton (Big Little Lies, Supernatural) and Kyle Allen (West Side Story, The Path) bring their characters to life perfectly. Director Ian Samuels’s style is clearly there, but it’s not as odd as Myrna the Monster, and this is a stronger story than Sierra Burgess.
The author, Lev Grossman, has had other work adapted (e.g. The Magicians), but both the source material and the adaptation aren’t as good as The Map of Tiny Perfect Things.
Anyway, if you know the film I assume you also love it. If you know the story, I strongly recommend the film. If you haven’t read the book, it’s worth it just for this story, but some of the others in the anthology are ok as well, particularly if you’re looking for a light, romantic, read. If you don’t know the film or the book, definitely add it to your watch list.
Parents of students at Mahurangi College received an email from the principal on the 9th of November stating:
We would like to assure you that your child will not be discriminated against in any way by the school and we do not intend excluding students from any activities or trips that we have control over. However, in these ever changing times we have no control over the requirements that an outside organisation might place on us if we were to visit somewhere, eg. the museum. We also have no control over requirements that the government might place on us in future.
T. Giles, Mahurangi College, email, 2021-11-09
I am extremely disappointed that the school has chosen to take this stance. The language used here is very clearly that used by the vocal minority of people who are anti-vaccination. It is absolutely inappropriate for communication from the school to use this language.
Specifically: the emphasis on safety requirements being imposed on the school by “outside organisation[s]” or the government mirrors the anti-health protests of tiny numbers of people in Aotearoa. The school should welcome every additional safety requirement, whether adopted through internal development, through third-party organisations, or from the government. These are not being “place[d] on us”; they are being adopted to maximise the health & wellbeing of our students.
The school should absolutely be considering treating vaccinated and unvaccinated students differently, including potentially excluding unvaccinated students from activities and trips where their unvaccinated status would put their own health, or that of their fellow students or the general public, at risk. The evidence that this should be done is extremely clear through all of the measures that are being adopted in the COVID-19 Protection Framework.
Tellingly, the email did not encourage vaccination at all. The sooner the 12+ population of the school is fully vaccinated, the safer the school environment is (as well as the rest of the country, through interactions with members of the school community), and the faster pre-pandemic activities can resume. The school has a moral duty to encourage public health measures, particularly ones that impact the local community. Since the school caters to primary students, there is an additional obligation to ensure that those that are currently too young to be vaccinated are as well protected as possible.
The board, and therefore the school, has a duty to provide a safe and healthy environment for its students’ education. The Ministry of Education and the Ministry of Health – as well as all reputable medical professionals – are clear that receiving the Pfizer-BioNTech COVID-19 vaccination is the strongest element of our multi-layer approach to providing a safe environment. The board also has obligations to its Māori students through Te Tiriti o Waitangi, and as those students are (in general) at higher risk from COVID-19, you have a responsibility to ensure that you do everything possible to keep them safe.
My assumption is that the language used in this communication does not reflect the position of the board, and that the board does, sensibly, support and encourage vaccination, and discourages all anti-science, dangerous, anti-vaccination language – and that you have been let down by inappropriate communication from a staff member (if it was actually Mr Giles’s own words, then regrettably also by one of the board).
Please send a corrective message to all recipients of the original email. The message should clearly state that vaccination is scientifically proven to be extremely safe, is the best possible protection against the deadly COVID-19 pandemic, and helps safeguard our children against not only short-term health issues (including death) but also the effects of long-COVID, which are still being determined. You should provide links to covid19.govt.nz and other reliable material so that parents are helped to be well informed (I suggest that these are links rather than attachments, as the school seems to have great difficulty in attaching files to emails). Ideally, you would also explicitly apologise for the earlier language, and explain that it was a mistake and is not the school’s position.
I suggest that for critical communication like this, you may also wish to have someone (perhaps the board chair) vet communication before it goes out to parents, so that mistakes like this are less likely to occur in the future.