Foundations
How LLMs actually work
- Duration: Half-day (3-4 hours), with optional follow-up consult
- Mode: In-person preferred · fully remote works well · hybrid possible
- Group size: 8 to 30 — smaller groups go deeper
- For: Mixed teams · leadership orientation
Most teams using AI today have a fuzzy sense of what's happening underneath. They use the tool, they get output, sometimes it's brilliant and sometimes it's strange — and they don't quite know why.
This workshop fixes that. Not by making everyone a machine learning engineer, but by giving them the one mental model that explains everything else: the model has no ground truth, no metacognition, and no genuine understanding of your situation. Everything it produces is a sophisticated approximation shaped entirely by its inputs. From this single idea, every failure mode — and every best practice — flows.
The arc of the workshop. Adapted to the room and the time available.
- 01 · The prediction machine
What's actually happening inside the model when you send a prompt — explained without jargon and without dumbing it down.
- 02 · The single underlying idea
Sophisticated approximation: the one frame that makes hallucination, sycophancy, frame capture, and every failure mode coherent.
- 03 · The four failure modes
Hallucination, sycophancy, frame capture, drift. What they look like in real use, and why the subtle versions are the ones to watch.
- 04 · Calibrated scepticism
The disciplined habit of checking even when things feel fine — especially when things feel fine. Where it comes from and how to build it.
- 05 · Practical countermeasures
What to do differently on Monday morning. Concrete moves for each failure mode, sized to your team's actual workflow.
- 06 · Live diagnosis
We work with attendees' real AI conversations. Where things went sideways, where they went well, and what the difference was.
What your team leaves with:
- A clear, transferable mental model of what LLMs actually do
- The ability to predict why a given prompt will fail or succeed
- A practiced habit of calibrated scepticism applied to AI output
- Specific countermeasures for each failure mode
- Confidence to introduce LLMs into more of your work — and to know when not to
Teams across functions, including non-technical members. Especially valuable for groups where AI use is uneven — where some people are 'power users' and others are tentative, and there's no shared understanding of what's actually happening. Works equally well as an organisation-wide orientation or as a focused deep-dive for a single team.
Not sure if it’s the right fit? Book a 30-minute call and we’ll talk about what your team needs.
Let’s see if Foundations is the right fit.
A 30-minute discovery call. No pitch. We’ll talk about what you’re trying to make happen and see if this workshop fits.