Making Hay

Learn like a Human, not like a GPT

Be conscious of what an AI system is capable of and how it learns before believing some of the claims you may read about it.

I see a lot of AI cheerleading on LinkedIn, which is pretty frustrating to read, especially when the posts are made by accounts with senior executive names attached to them and there is a dearth of critical comments on the posts.

This article in The Conversation includes a good summation of what many people miss about what's known as AI in this day and age.

It provides some straightforward, enlightening information about how dumb your so-called Artificial Intelligence actually is:

...AI systems do not learn from any specific experiences, which would allow them to understand things the way we humans do. Rather they “learn” by encoding patterns from vast amounts of data – using mathematics alone. This happens during the training process...

Consider this a reminder to check your understanding of what AI actually means in the context of the post you're reading or the discussion you're having, whether in person, online, or in some PR puff piece on LinkedIn, and unlike

...Most AI systems...such as ChatGPT...do not learn once they are built...– training is just how they’re built, it’s not how they work…

apply some critical thinking and learn while you’re reading.
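
To make that distinction concrete, here's a deliberately toy sketch in plain Python (my own illustration, nothing to do with how any real LLM is implemented): the "model" is built once from some data, and generating text afterwards never changes it.

```python
# Toy illustration of "training is how they're built, not how they work":
# patterns are encoded once from the data, then generation only reads them.
import random
from collections import defaultdict

def train(corpus):
    """'Training': encode word-pair patterns from the data as counts."""
    counts = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            counts[current_word].append(next_word)
    return dict(counts)  # after this point the model never changes

def generate(model, start, length=5):
    """'Inference': use the frozen patterns; no learning happens here."""
    word, output = start, [start]
    for _ in range(length):
        options = model.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
model = train(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug"
```

Nothing the toy generates is ever fed back into its counts, which is the point the quote above is making about most deployed AI systems.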

Act Upon Instructions

Transparency in systems, both human and AI, is crucial for building trust and acceptance: when users understand how decisions are made and can provide feedback, they're more likely to accept imperfect processes.

While studying Responsible AI we're introduced to some guardrails that those who design Responsible AI systems are encouraged to follow. For example, the introduction in my course describes transparency in AI systems as follows (paraphrased):

Usefully explaining the behaviour of AI systems and their components, i.e. improving their intelligibility, means that those affected by an AI's decisions will understand how and why they are made and be able to provide feedback. With this feedback, stakeholders can identify performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes – leading to improvements for everybody.

We could remove the initialism from the paragraph above – when the systems we use, human or machine, lack transparency, this often leads to unintended consequences for users.

Think about the number of human systems you've interacted with recently and write down the number of times you've left the encounter dissatisfied. Now imagine how much your satisfaction might have improved had the person or process you were dealing with taken the time to explain their process or how they reached a decision.

I tend to agree with Peter Hosey when he suggests that "The algorithmic-only model admits only one remedy: Improve the algorithm" and that we may be playing a game of whac-a-mole forever. But I have a caveat: when people, processes, and AI systems operate transparently, users are more likely to accept their imperfections, knowing their feedback helps make things better over time.

Establish your own Guilt

Having experienced quite the display at a taxi rank here in Sydney the other night, I found this post, Offend Yourself Sometimes by Marcel Wichmann, quite instructive.

While I try to absorb the sentiment in the following line for myself, I've adjusted the wording to help me make sense of the behaviour of Sydney Taxi Driver #8168:

Our egos make us defensive. Whether it’s defending from others or from ourselves, if it’s something we should confront, this defensiveness blocks us from improving.

Note this defensiveness in your everyday encounters and improve by figuring out what you would do differently, if it were you.

Enhanced Visual Value

If a product team has done a good job, users will clearly understand a new feature's value and can turn it on whenever they choose.

By enabling Enhanced Visual Search automatically on every device running the most recent versions of macOS and iOS, Apple appear to have determined that their product team were unable to describe the feature's value in a way that users would understand.

Perhaps the chosen value proposition for the feature was in the style of the recent Apple Intelligence ads, and that's why there was neither a public announcement nor a mention in the release notes for those recent OS releases.

It's clear that describing a feature in which Apple analyses data in our Photos libraries while preserving the privacy of those photos isn't going to be easy, but they still needed to tell users about it before turning it on. Even cryptography experts who understand much of what Apple are actually doing with the feature appear to be mostly concerned with the secretive release.

Personally, I have looked at the feature and have no issue with it, as it seems to simply be an extension of the existing Visual Look Up feature. But as I wrote when I first heard about it, while the feature itself may be completely benign, enabling it automatically without consent was a poor choice.

Back in the day, when new features were released in software, you rarely had the choice of disabling individual features, so installing the software meant the feature was instantly available for you to use. Skipping the release was often your only option if you didn't want a feature to be active on your computer.

These days, with cloud services buttressing almost every piece of software we use, and our personal data (and photos) being exchanged continuously across the internet, it's doubly important that users be told in advance about anything that might impact their private and personal stuff.

No matter how long the list of available settings, if you can't clearly explain how a new feature benefits users, the setting for that feature should be turned off by default.

This way your customer retains their existing functionality while having the freedom to enable this new feature whenever they've been convinced of its value.