
Where I live, humidity and heat are something you come to expect. It’s sticky, uncomfortable and can get in the way of the best-laid plans.
Our house isn’t fully air-conditioned, so when an ad popped up for a portable cooling unit you could move from room to room, I took a closer look. A little relief sounded appealing.
Before clicking anything, I did what most of us do now. I went to the comments.
Something felt off almost immediately. Commenters were discussing the same product but referring to it as three different companies, all replying under the same ad, and none of those names matched the name on the ad itself.
That was enough to make me pause.
I clicked through to the website. My internet protection stopped me before the page even loaded. Unsafe site. Do not proceed.

At that point, it felt irresponsible not to report it. A few days later, I received a response.
They’d reviewed the ad using “a combination of technology and human reviewers.” They decided not to remove it. If I was unhappy, I could influence the ads I saw by changing my preferences.
What struck me wasn’t that scams exist. It was that, somewhere in the process, thinking wasn’t required.
Why this matters
We talk a lot about keeping “humans in the loop.”
But a human in the loop doesn’t necessarily mean thinking is in the loop.
A person can be present, active, even diligent, and still be operating inside a system that rewards ticking a box over discernment.
What bothered me most about the response was not the decision itself, but the system design behind it.
According to the World Economic Forum’s Future of Jobs Report 2025, analytical thinking tops the list of core skills employers say they need, with seven in ten organisations rating it as essential. This isn’t a new requirement. It has held steady across multiple editions.
What has changed is the context. The report links the growing importance of analytical and systems thinking directly to the rise of AI, big data and cyber-related technologies.
As these tools spread, the value of critical problem-solving increases rather than diminishes.
At the same time, the report flags a widening gap. Around six in ten workers are expected to need upskilling by 2027, yet only about half are seen as having adequate access. Most systems fail to detect that strategic risk early.
That gap matters because systems often say thinking is expected, without making it required.
I think of it like the difference between a system that asks someone to notice when something doesn’t quite make sense, and one that only reacts when something is clearly wrong or has malfunctioned.
In the latter, thinking only shows up once the system decides something has failed.
That feels like a poor place to first invite thinking. It also reflects a simple systems principle:
“Every system is perfectly designed to get the results it gets.”
— Systems-thinking maxim, often associated with W. Edwards Deming
Some questions worth asking of any system you work in:
- Where in your system are people included, but not held accountable for exercising judgement?
- What happens when someone says “this doesn’t quite make sense” and the technology says everything is fine?
- And when a flaw is found later, what gets examined more closely: the process that was followed, or the judgement that wasn’t exercised?
