Making Hay

Reverse Power Hierarchies make for Safer Streets

The more power and size you have, the more you should defer and be respectful to those smaller and less powerful than yourself.

The 21st century – and perhaps most of human history – sadly disagrees with me.

As I was out walking recently, a motorbike rider – an L-plater no less – decided to rev their engine at me for having the temerity to keep walking on the footpath instead of stopping for them as they drove across it to access a shopping strip.

Ironically this same rider is highly likely to suffer from a similar expression of power – often with fatal consequences – as they battle with cars and trucks for their safety on a busy road. Yet when given the opportunity to be patient and respectful to those in a less powerful position they too chose to be the bully.

I once saw a great cartoon illustrating how little public space is given over to people, portraying streets as great canyons and highlighting the risk we all take every time we choose to walk instead of getting behind the wheel of a car.

That cartoon is no longer accurate.

Not because road designers and governments have widened footpaths and made more and safer crossings. It's no longer accurate because, in addition to wider streets, narrower footpaths and cars larger than literal tanks driving across or even parking on footpaths with impunity, we're seeing ballooning and often illegal use of those footpaths by grown adults (read: delivery drivers) riding motorised vehicles along them at speed, with little thought for the people these spaces are intended for.

I live in hope that the L-Plate rider I had my encounter with learns one of the most important rules of the road and life: give way to others, especially those with less size and power than you have. Hopefully before they fall victim to the kind of ignorant, entitled pissant who'd happily turn right across their bike as they go straight through an intersection.

Postscript: While I was having issues that prevented me from publishing this post, I sadly read yet another story about a child being killed by a driver who couldn't control their tank. And here is the rub in Australia: the killer was simply fined $2,000, meaning the post-hoc registration charge for SUV drivers who kill is not much more than the actual registration cost of their vehicle in the first place. Over to you, politicians. 'Something Must Be Done!', as that English King once said.

A tangled mess that threatened to summon mighty Cthulhu!

I used to know how to fold fitted sheets; now I have the goods to help me remember!

How great is the internet‽

Security or Simplicity, pick one

As Daniel Huang has bloggedª, many services, in a desire to simplify the experience for their users, have gone the route of sending a login code to an email address you enter.

While this may seem convenient to you and your customers, it is probably even more insecure than sending so-called MFA (Multi-Factor Authentication) codes via SMS – a method which, bizarrely, many financial institutions in Australia only began introducing in 2025, despite years of evidence that it's insecure and despite CISA guidance not to use it:

Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them.

To be clear, neither of these solutions to your problem are secure, no matter how compliant you think they make you with your internal or external security and privacy requirements.
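To see why transmission is the weak link, it helps to look at the app-based alternative: a time-based one-time password (TOTP, RFC 6238) is computed locally by both the server and the user's authenticator app from a shared secret, so no code ever travels over SMS or email to be intercepted. This is not from the post – it's an illustrative sketch using only the Python standard library, and the secret shown is the RFC's published test value, not anything real.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password.

    Both sides derive the same code from a shared secret and the
    current time, so nothing sensitive needs to be transmitted at
    login time.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole periods since the epoch.
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


# RFC 6238's published test secret ("12345678901234567890" in Base32):
TEST_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

The RFC's own test vectors confirm the derivation: with this secret and 8 digits, the code at time 59 is 94287082.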

One wonders how ideas like this pass muster with your security team. Oh.

ª: Via Justin Warren's excellent The Crux newsletter.º
º: Note that despite supporting Markdown, write.as doesn't appear to do Markdown footnotes.

Returning to WordPress

It seems I missed WordPress' decision to Return to Core (whatever that means).

Interestingly published without a byline, their description of Mullenweg's brain explosions last year as 'pausing our contributions to regroup, rethink, and plan strategically' is quite the understatement.

Perhaps there's been an internal intervention, but it will take quite a bit of change for me to return to my WordPress Blogs any time soon.

Every Genius Needs a Nerd and a Patrician

In Kurt Vonnegut’s Bluebeard, the character Paul Slazinger describes the 'mind-opening team' required to make new ideas successful.

The rarest member of this team is what is known as the Authentic Genius. They almost always have great ideas, but without the requisite support their ideas would be doomed to be ignored by the majority of the population.

As Vonnegut puts it, the Genius needs support from two others playing distinct roles in the team:

* First, they need a '...highly intelligent citizen in good standing in (their) community'. This person must be someone who understands and admires the Genius' ideas, one who can vouch that they are '...far from mad'.
* Secondly, they require someone with the unique ability to '...explain everything, no matter how complicated, to the satisfaction of most people'.

Or, as I shall refer to them: the Patrician and the Nerd.

Without the ideas of the Genius to refer to, the Nerd would often '...be regarded as being as full of shit as a Christmas turkey', and the Patrician would be known as the sort of character who would '...yearn loud for changes, but fail to say what their shapes should be.'

Or to put it another way, Nerds need something interesting to get other people excited and Patricians need something exciting so that others will stay interested in them.

While Slazinger was referring to Revolutions in the book – both Societal and Artistic – isn't it possible that the same idea could apply to ideas of every type? The next time you're watching a terrible movie or using frustrating software, consider how likely it is that the Patrician or the Nerd (or both) have bought into the ravings of one of their own rather than an Authentic Genius.

Our task should be to find the missing angles in our triangle, or else be doomed to a life of frustration – either promoting and supporting terrible ideas or, worse, seeing our ideas ruined by people who don't really understand, or can't explain, the fantastic products of our Genius minds.

For myself, I may need to go find a Genius and a Patrician. In the meantime, I’m going to reread my copy of Bluebeard.

Via Kottke

Who do you Signal?

There's a lot to unpack in Micah Lee and 404 Media's reporting on the US Government's use of a hacked version of Signal to comply with its message-retention policies.

My main concern is that Signal doesn't appear to notify users if the person they're talking to is using one of these hacked versions of Signal.

To be clear, while I don't personally use Signal to hide my communications, there are plenty of people who might need to for varying reasons, and it's probably critical for them to know if any of their messages are being passed to clearly insecure backup servers.

Learn like a Human, not like a GPT

Be conscious of what an AI system is capable of and how it learns before believing some of the claims you may read about it.

I see a lot of AI Cheerleading on LinkedIn. It's pretty frustrating to read, especially when the posts are made by accounts with senior executives' names attached – and especially when there is a dearth of critical comments on them.

This article in The Conversation includes a good summation of what many people miss about what’s known as AI in this day and age.

It provides some straightforward, enlightening information about how dumb your so-called Artificial Intelligence actually is:

...AI systems do not learn from any specific experiences, which would allow them to understand things the way we humans do. Rather they “learn” by encoding patterns from vast amounts of data – using mathematics alone. This happens during the training process...

Consider this a reminder to check your understanding of what AI actually means in the context of the post you're reading or the discussion you're having – in person, online, or when you read some PR puff piece on LinkedIn. And unlike

...Most AI systems...such as ChatGPT...do not learn once they are built...– training is just how they’re built, it’s not how they work…

apply some critical thinking and learn while you’re reading.

Act Upon Instructions

Transparency in systems – both human and AI – is crucial for building trust and acceptance: when users understand how decisions are made and can provide feedback, they're more likely to accept imperfect processes.

While studying Responsible AI, we're introduced to some guardrails that those who design Responsible AI systems are encouraged to follow. For example, the introduction to my course describes transparency in AI systems as follows (paraphrased):

The useful explanation of the behaviour of AI systems and their components, i.e. improving intelligibility, means that those affected by AI’s decisions will understand how and why they are made and be able to provide feedback. With this feedback stakeholders can identify performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes – leading to improvements for everybody.

We could remove the initialism from the paragraph above – when the systems we use, human or machine, lack transparency, this often leads to unintended consequences for users.

Think about the number of human systems you've interacted with recently and count the times you've left the encounter dissatisfied. Now imagine how your satisfaction might have improved had the person or process you were dealing with taken the time to explain how they reached a decision.

I tend to agree with Peter Hosey when he suggests that 'The algorithmic-only model admits only one remedy: Improve the algorithm' and that we may be playing a game of whac-a-mole forever. But I have a caveat: when people, processes, and AI systems operate transparently, users are more likely to accept their imperfections, knowing their feedback helps make things better over time.

Establish your own Guilt

Having experienced quite the display at a taxi rank here in Sydney the other night, I found this post, Offend Yourself Sometimes by Marcel Wichmann, quite instructive.

While I try to absorb the sentiment in the following line for myself, I've adjusted the wording to help me make sense of the behaviour of Sydney Taxi Driver #8168:

Our egos make us defensive. Whether it’s defending from others or from ourselves, if it’s something we should confront, this defensiveness blocks us from improving.

Note this defensiveness in your everyday encounters and improve by figuring out what you would do differently, if it was you.

Enhanced Visual Value

If a product team has done a good job, users will clearly understand a new feature's value and they can turn it on whenever they choose.

By enabling Enhanced Visual Search automatically on every device running the most recent versions of the macOS and iOS operating systems, Apple appear to have determined that their product team were unable to describe the feature's value in a way that users would understand.

Perhaps the chosen value proposition for the feature was in the style of the recent Apple Intelligence ads, and that's why there was neither a public announcement nor a reference in release notes for those recent OS releases.

It's clear that describing a feature where Apple is analysing data in our Photos while retaining the privacy of those photos isn't going to be easy, but they still needed to tell users about it before turning it on. Even cryptographic experts who understand much of what Apple are actually doing with the feature appear to be mostly concerned with the secretive release.

Personally I have looked at the feature and have no issue with it, as it seems to simply be an extension of the existing Visual Look Up feature. But as I wrote when I first heard about it, while the feature itself may be completely benign, choosing to automatically enable it without consent was a poor choice.

Back in the day, when new features were released in software you rarely had the choice of disabling individual features, so installing the software meant the feature was instantly available for you to use. Skipping the release was often your only option if you didn't want a feature to be active on your computer.

These days, with cloud services buttressing almost every piece of software we use, and your personal data (and photos) being exchanged continuously across the internet, it's doubly important that users be told in advance about anything that might impact their private and personal stuff.

No matter how long the list of available settings, if you can't clearly explain how a new feature benefits users, the setting for that feature should be turned off by default.

This way your customer retains their existing functionality while having the freedom to enable this new feature whenever they've been convinced of its value.