I Love AI. I Just Hate What We’re Not Doing With It.
- Marie Horodecki Aymes

I love AI.
I really do.
I use it. I test it. I break it. I integrate it into my work. I genuinely believe it can make us faster, sharper, and occasionally less tired.
Which is why watching how we talk about AI right now is… deeply confusing.
Because apparently, according to LinkedIn, the main way to use AI is to scroll endlessly through lists.
“30 AI tools you MUST know.”
“10 tools that will change your life.”
“An AI for slides. An AI for writing. An AI for coding. An AI for thinking. Presumably an AI to cope with the emotional burden of having too many AIs.”
At some point, a very basic question kicks in: who is actually doing the work?
Because properly evaluating an AI tool is not a vibe. It’s not a carousel. It’s not a post that starts with “Game changer 🚀” (or whatever emoji ChatGPT loves to put everywhere).
It’s time. It’s testing. It’s figuring out what the tool does well, what it does badly, and what it quietly erodes while looking extremely professional.
Six months ago, I did exactly that with AI tools designed to create “professional” presentations.
I tested them seriously. Over time. With real constraints.
And the result was surprisingly consistent.
They were fantastic at form.
Beautiful layouts.
Clean slides.
Very confident-looking decks.
And the content?
Generic. Diluted. Empty.
It was like watching a perfectly dressed person confidently explain absolutely nothing. Impressive, in its own way, but not what you want when the substance of a strategic document actually matters.
So I made a decision.
I kept doing my decks myself.
Not because I dislike AI.
But because, in this case, speed came at the cost of meaning.
This is what gets lost in the current AI frenzy: judgment.
We announce “new” tools at a pace that leaves no room for evaluation. On LinkedIn, many posts enthusiastically reference tools that have existed for months. In AI time, that’s basically ancient history. And yet, there’s rarely analysis. Rarely testing. Rarely an opinion.
Just excitement. And volume.
But leaders are not paid to be excited.
They’re paid to decide.
AI is not magic. It’s not neutral. And it’s definitely not plug-and-play if you care about quality. It’s a system component, and it only works if you understand where it fits and where it doesn’t.
That’s why content grounded in real experimentation, like what Shubham Sharma produces, actually matters. Not because it promises miracles, but because it accepts an uncomfortable truth: implementation is slow, selective, and sometimes disappointing.
Relaying untested lists doesn’t help anyone.
It just creates the illusion of progress.
I love AI.
I just refuse to outsource my judgment to it.
