Using AI Professionally: At Work, Artificial Intelligence Requires a New Discipline of Judgment
- Marie Horodecki-Aymes

Using AI professionally at work demands more than technical access. It requires judgment: the capacity to read what the tool cannot, protect what it should not touch, and take responsibility for what it delivers.
Some texts reveal themselves before they even manage to say anything.
They arrive neatly aligned, perfectly polished, with words that seem to have been selected for their excellent professional manners. Everything is “critical”, “essential”, “strategic”, “high priority”. The sentences march in formation. The ideas, however, are sometimes still waiting outside the room.
On LinkedIn, the pattern has become familiar. You can spot the AI-generated post within two lines. The emoji used as punctuation. The sentence without a subject pretending to create rhythm. The list that imitates clarity. Words that sit comfortably next to one another without ever producing a solid idea. The feed grows louder. Not necessarily smarter.
For a while, this remained tolerable. You scroll. You sigh. You let the post continue its small algorithmic life without you.
When AI-Generated Content Enters Professional Documents
The discomfort changes when the same pattern appears in a professional document.
Recently, I received a response from a potential supplier following a request for proposal. The brief I had shared was detailed. The objectives were clear. The intended users, expected outcomes, constraints and context were all documented.
And yet, the response included a series of questions meant to clarify the mandate. Several of the answers were already in the brief. Some of the wording carried that now familiar tone of AI output left untouched: “priority question”, “crucial question”, “ultimate question”. And then there was that word, “really”, slipped into a question along the lines of: who will really use this deliverable?
That “really” stopped me.
It added no precision. It introduced a needless suspicion. It gave the impression of depth, while mostly signalling that the brief had not been read closely. What possible interest would I have in misrepresenting the use of a deliverable in an RFP I had taken the time to frame myself?
The situation would almost have been funny, had it not been so revealing.
A large language model can generate questions. It can even rank them with great confidence. It can produce a clean structure, a polite tone, an appearance of method. But it cannot know, by itself, that the answers are already in the document. For that, someone has to read. Someone has to understand. Someone has to decide that one question is useful, another is redundant, and another risks damaging the relationship before it has even begun.
What AI Cannot Replace: Reading, Context, and Professional Judgment
That moment said something larger than the quality of one supplier response.
It showed what happens when artificial intelligence is used as a shortcut to production, instead of as support for thinking already underway. It creates an appearance of diligence. It gives the document all the outward signs of work: sections, questions, priorities, careful vocabulary. It can also expose very quickly what is missing: attention to the need, understanding of the context, professional judgment.
Credibility is not always lost in major failures. It can disappear in one unnecessary question, asked with confidence, when the client had already provided the answer.
How to Use AI Professionally
Using AI professionally means giving the tool a precise frame before it produces anything, verifying every output against the original need, protecting information that should not be shared, and retaining full responsibility for the final result. The model executes. The professional decides.
I do not say this from a position of suspicion toward AI. Quite the opposite. I use artificial intelligence every day. It has helped me build, on my own and with limited resources, a practice that is much stronger than what I could have built otherwise in the same timeframe.
A large share of my deliverables passes through AI at some stage. But nothing is abandoned to it.
I frame. I test. I specialize my tools. I work with agents that I have gradually trained and professionalized. I choose models according to their strengths at a given moment. I verify. I challenge. I redirect. I reject outputs that are elegant but weak. I revise what sounds good but does not hold. I invest time upstream to save time later, and above all, to deliver better work.
AI Shifts the Demand for Mastery
AI accelerates certain tasks. It does not remove the requirement for mastery. It moves it.
Before, a significant part of the work was absorbed by production: writing, formatting, structuring, compiling, reformulating. Today, part of that work shifts elsewhere: clarifying intent, giving precise instructions, protecting data, selecting the right tool, checking reasoning, identifying weak assumptions, taking responsibility for the final version.
That is where the difference lies between casual use and professional use.
An unsupervised tool produces quickly. A well-directed tool can produce better work. Between the two, there is a skill. It does not appear out of nowhere. It is learned. It is tested. It requires understanding the strengths and limits of the models, knowing what can be entrusted to them, recognizing what information should never be shared, and developing systematic verification habits.
AI also creates a new form of autonomy.
It has taught me to challenge the limits of “that is not possible” or “that is not how things are done” in a different way. Not with naïve enthusiasm. With experimentation. A hypothesis can become a prototype. An intuition can be translated into a tool. An idea I would not have known how to code can be tested, corrected, improved, then integrated into my work.
Two years ago, some of the tools I have built would have felt out of reach. I would not have had the technical means to challenge someone telling me it was too complex, too long or too expensive. Today, I can test. I can compare. I can iterate. I can decide based on proof of function rather than an impression.
AI does not remove every constraint. It helps test some of them faster. It helps distinguish what is truly impossible from what was simply inaccessible without a technical team, a budget or enough time.
And because I care deeply about my work, I will say this too: I enjoy it.
Not because the tool does the work for me. Because it allows me to explore further what I already know how to look for. It helps me move faster from idea to test, from test to tool, from tool to use. For someone who always thinks with implementation in mind, that is a significant shift.
But this power calls for discipline.
AI at Work: Usage Is Already Ahead of Governance
In organizations, AI is already here. Statistics Québec estimated that in 2024, about 59 percent of Quebec’s workforce held jobs highly exposed to artificial intelligence. KPMG Canada reported in its 2025 index that 51 percent of Canadian adults now use generative AI at work. At the business level, Statistics Canada reported that 12.2 percent of Canadian businesses used AI to produce goods or deliver services in the second quarter of 2025. These figures do not all measure the same thing, but they point in the same direction: usage is moving faster than governance.
Leaders who prefer not to look at the issue do not stop it from existing. They simply make the uses invisible.
Employees already use AI to write, summarize, analyze, translate, prepare presentations, answer emails, produce meeting notes and structure ideas. Often with good intentions. Often to save time. Sometimes without knowing which data can be shared, which outputs must be verified, or how to recognize a well-written but fragile answer.
That is where the risk becomes organizational.
A mediocre LinkedIn post mostly damages the quality of the feed. A poorly generated client document can damage a business relationship. Confidential data entered into a poorly configured tool can create a governance risk. An unchecked analysis can shape a bad decision. An RFP response that reveals the absence of reading can disqualify a supplier before the conversation has even started.
Training Teams on AI: A Measure of Professionalism
Training teams on AI is therefore not a gesture of modernity. It is a measure of professionalism.
Organizations need to open the conversation. Which tools can be used? For which purposes? With what types of data? Which deliverables require stronger human validation? How should sources be cited? How should a summary be checked? How do we identify a hallucination, a weak assumption, an empty emphasis? When does AI genuinely help, and when does it merely create the appearance of work accomplished?
The rules must be simple, but usable. Training must be concrete. Examples must come from everyday work. Confidentiality policies must be understood by teams. People need spaces where they can discuss actual uses, not only official positions.
Managing AI Like a Brilliant but Junior Employee
The image I often use is that of a brilliant junior employee.
A good AI model resembles an extremely fast, highly knowledgeable young collaborator, capable of producing a lot in very little time. But it remains junior. It does not know your client as you do. It does not spontaneously understand your professional responsibility. It does not know what can be said, what should remain unsaid, what is sensitive, what is strategic, what is simply irrelevant. It can deliver an answer with confidence, even when it is wrong.
No one would assign a sensitive mandate to a junior without a framework, instructions and review. We should not do it with AI either.
The new professional skill is not merely “prompting”. The word is convenient, but too narrow. The real work is using AI professionally and managing it: setting the frame, defining the level of quality expected, controlling the steps, correcting blind spots, securing information, deciding what deserves to be delivered.
Artificial intelligence has a formidable quality: it makes our own working methods visible.
If we know how to read, it helps us read more broadly. If we know how to structure, it helps us structure faster. If we know how to ask good questions, it helps us explore further. If we do not read, it can produce questions that have already been answered. If we do not verify, it can give impeccable form to an error. If we do not think clearly, it can give professional polish to confusion.
That is why AI requires a new discipline of judgment.
It forces us to make explicit requirements that already existed: read carefully, understand context, protect information, verify sources, recognize the limits of a line of reasoning, and take responsibility for what we publish, send or deliver.
The machine can accelerate production. Judgment remains with the person who signs.
FAQ: Using AI Professionally at Work
What does using AI professionally mean?
Using AI professionally means directing the tool with intent, verifying each output against the original need, protecting sensitive information, and retaining full responsibility for what is delivered. The model accelerates production. Professional judgment governs selection, correction, and final approval. Those two things operate in sequence, not in parallel.
How is professional AI use different from casual AI use?
Casual use treats AI as a shortcut to production. Professional use treats it as support for thinking already underway. The difference shows in what happens before the tool is opened (clarity of intent, quality of instructions) and after it produces something (verification, correction, responsibility for the result).
What skills do leaders need to manage AI at work?
Leaders need to frame tasks precisely before delegating to AI, identify weak assumptions in outputs, recognize what information should never be shared with external tools, and define quality standards clearly enough to verify results. These are judgment skills. They were already required before AI arrived.
Why does AI still require human judgment?
A language model produces answers with confidence regardless of whether they are correct. It does not know your client, your constraints, or your professional responsibility. It cannot assess what should remain unsaid, what is sensitive, or what a particular context makes irrelevant. That capacity for discrimination belongs to the person who signs.
How should organizations approach AI training for teams?
Training should be grounded in real use cases, not abstract principles. Teams need to understand which tools are permitted, what types of data can be shared, which outputs require validation, and how to recognize a well-written answer that is nonetheless wrong. Governance policies matter. So does the space to discuss actual practices.
Marie Horodecki-Aymes is the founder and CEO of MHA Insights Inc., a Montreal-based consultancy, and the creator of MHA Studio AI. She advises executives, boards, and organizations on the strategic adoption of artificial intelligence. She has integrated AI as a core professional tool across her practice and writes regularly on governance, judgment, and responsible use.