Most AI training for finance teams fails because it starts with the technology and hopes the use cases will follow. It should be the other way around. Start with work the team already does, show how AI accelerates it, and let people experience the “aha” moment with their own domain knowledge.
I've run hands-on AI training sessions for finance teams — accounting, FP&A, tax, billing operations — and refined the approach through iteration. This article is the playbook: the session structure, the demo sequence, the prompt patterns, and the design decisions that make the training stick.
The 30-Second Pitch
Before any demo, you need a framing that resonates with finance professionals. Here's what works:
An AI coding assistant is a workspace where you describe what you need in plain English and get working SQL, policy memos, reconciliations, journal entries, data models, and automation scripts back. You don't need to be a developer. You need to know your domain — and you already do.
This framing works because it positions domain expertise as the prerequisite, not technical skill. Finance professionals already have the hard part — years of accounting knowledge, understanding of their company's data, and judgment about what matters. The AI handles the implementation.
Session Structure
A 60-90 minute session, structured in four parts:
| Part | Duration | Purpose |
|---|---|---|
| 1. What Is This? | 10 min | Framing, key concepts, what finance teams actually use it for |
| 2. Live Demos | 40-50 min | 3-5 demos showing real accounting workflows, chosen by audience interest |
| 3. Patterns & Tips | 10 min | Reusable prompt patterns, validation techniques, what still needs judgment |
| 4. Hands-On Practice | 20 min | Participants try it themselves with sample data and guided exercises |
The Demo Sequence
The demo order matters. You want to build from “impressive but accessible” to “genuinely complex.” Each demo should make the audience think “I do that manually today.”
Demo 1: From English to SQL (10 min) — Start Here
The setup: Open three data files — contracts, invoices, and monthly usage data. Describe a deferred revenue waterfall in plain English, including the ASC 606 logic for seat vs. usage recognition.
The moment: The AI generates working SQL with proper joins, window functions, and accounting rationale — in about 90 seconds. Then iterate: “Add a per-customer breakdown.” “Transpose so months are columns.” “Which customers will exhaust their precommit?”
Why it works: Everyone on a finance team has spent hours building waterfall schedules in Excel. Seeing it done in two minutes with correct accounting logic is the “aha” moment. It's the demo that converts skeptics.
Demo 1 Prompt Template
“I have three CSV files open: contracts (enterprise SaaS contracts with seat fees and prepaid usage), invoices (all invoices issued against those contracts), and monthly usage (token consumption and precommit drawdown). Write me a SQL query that produces a monthly deferred revenue waterfall with opening balance, additions, seat recognition (straight-line), usage recognition (consumption-based, capped at precommit), on-demand usage, closing balance, and a validation column. Explain the ASC 606 basis.”
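To set expectations for what comes back, here is a heavily simplified sketch of the kind of waterfall SQL this prompt produces — not the AI's actual output, and the table and column names (`invoices`, `recognition`, `billed`, `recognized`) are hypothetical stand-ins for your real schema:

```python
# Minimal, runnable sketch of a deferred revenue waterfall query.
# Schema and figures are illustrative assumptions, not a real data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (contract_id TEXT, month TEXT, billed REAL);
CREATE TABLE recognition (contract_id TEXT, month TEXT, recognized REAL);
INSERT INTO invoices VALUES ('C1','2024-01',1200),('C1','2024-02',0);
INSERT INTO recognition VALUES ('C1','2024-01',100),('C1','2024-02',100);
""")

waterfall_sql = """
WITH monthly AS (
  SELECT i.month,
         SUM(i.billed)     AS additions,
         SUM(r.recognized) AS recognized
  FROM invoices i
  JOIN recognition r ON r.contract_id = i.contract_id AND r.month = i.month
  GROUP BY i.month
)
SELECT month,
       -- opening balance: cumulative net deferral through the prior month
       COALESCE(SUM(additions - recognized) OVER (
         ORDER BY month ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
       ), 0)                                          AS opening_balance,
       additions,
       recognized,
       -- closing balance: cumulative net deferral through this month
       SUM(additions - recognized) OVER (ORDER BY month) AS closing_balance
FROM monthly
ORDER BY month;
"""
rows = list(conn.execute(waterfall_sql))
for row in rows:
    print(row)
```

The real output handles seat vs. usage recognition separately and carries a validation column; the point of the sketch is the shape — a monthly CTE plus window functions for the roll-forward.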
Demo 2: The Detective Story (10 min)
The setup: Show subscription data that includes anomalies — subscriptions with no payments, subscriptions that should have been cancelled, subscriptions with suspicious patterns.
The moment: Walk through the investigation like a forensic analyst. Start with “identify subscriptions that have never received a payment.” Then quantify the financial impact. Then draft the remediation plan and the bug report for the engineering team.
Why it works: Every finance team has a “zombie” problem they haven't had time to investigate — stale subscriptions, phantom invoices, billing anomalies sitting on the backlog. This demo shows the AI as a tool for the things you never have time for.
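The opening query of the investigation is usually a simple anti-join. A hedged sketch, with a hypothetical schema (`subscriptions`, `payments`) and one planted zombie:

```python
# Find active subscriptions that have never received a payment.
# Table and column names are illustrative, not a real billing schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subscriptions (sub_id TEXT, customer TEXT, monthly_fee REAL, status TEXT);
CREATE TABLE payments (sub_id TEXT, paid_at TEXT, amount REAL);
INSERT INTO subscriptions VALUES
  ('S1','Acme',   99,'active'),
  ('S2','Globex', 49,'active'),    -- the zombie: active, never paid
  ('S3','Initech',29,'cancelled');
INSERT INTO payments VALUES ('S1','2024-01-05',99),('S3','2023-11-02',29);
""")

zombies = conn.execute("""
SELECT s.sub_id, s.customer, s.monthly_fee
FROM subscriptions s
LEFT JOIN payments p ON p.sub_id = s.sub_id
WHERE p.sub_id IS NULL AND s.status = 'active'
""").fetchall()
print(zombies)  # each row is unbilled exposure at monthly_fee per month
```

From there the conversation quantifies the exposure and drafts the remediation plan — the detection itself is one LEFT JOIN.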
Demo 3: Audit-Ready Memos (8 min)
The setup: Open contract data and ask the AI to generate a five-step ASC 606 revenue recognition analysis.
The moment: The output is a proper technical memo with citations — not a summary. It addresses performance obligations, material rights, contract liabilities, and recognition patterns. It's the kind of memo that takes a senior accountant a full day to write.
Why it works: Writing policy memos is slow, high-skill work. Seeing a solid first draft in 90 seconds reframes what's possible. The accountant's job shifts from drafting to reviewing and refining.
Demo 4: CECL in a Chat Window (8 min)
The setup: Open AR aging data and dispute data. Ask the AI to build a three-population CECL framework.
The moment: The AI separates ASC 606 variable consideration (disputes) from CECL (credit risk) from fraud reserves, builds the aging-based reserve rates from historical data, and writes the journal entries.
Why it works: Reserve calculations are judgment-heavy and audit-sensitive. The AI doesn't replace the judgment but does the mechanical work and explains the “why” — which makes the judgment easier and better documented.
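The mechanical core of the aging-based reserve is small enough to show on one slide. A minimal sketch — the buckets and loss rates below are illustrative assumptions, not guidance, and in the demo the AI derives the rates from the historical data:

```python
# Aging-based CECL reserve: balance per bucket times historical loss rate.
# All figures are fictional training values.
AGING = {"current": 500_000, "1-30": 120_000, "31-60": 60_000,
         "61-90": 25_000, "90+": 15_000}
# Loss rates per bucket (in practice, derived from your own loss history)
LOSS_RATES = {"current": 0.002, "1-30": 0.01, "31-60": 0.05,
              "61-90": 0.15, "90+": 0.50}

reserve = sum(bal * LOSS_RATES[bucket] for bucket, bal in AGING.items())
print(f"CECL reserve: {reserve:,.2f}")
# Journal entry: Dr bad debt expense / Cr allowance for credit losses
```

The judgment lives in the rates and the population split (disputes vs. credit risk vs. fraud); the arithmetic is the part the AI does instantly and documents.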
Demo 5: Close the Books Faster (8 min)
The setup: Run through multiple close tasks in a single conversation: unbilled accruals, DR roll-forward, revenue cutoff, and reconciliation prep.
The moment: For each task, the AI generates SQL, journal entries, and audit documentation. The close checklist that takes days becomes a conversation that takes minutes.
Why it works: Month-end close is the universal finance pain point. Shaving days off close is real money and real quality of life.
The Prompt Patterns
After the demos, teach four reusable patterns that work across any finance workflow:
Pattern 1: The Context Stack
The more context you give the AI, the better the output. Before asking a question:
- Open your data files (CSVs, SQL scripts, exports)
- Open your reference docs (policy memos, SOPs, chart of accounts)
- Open your GL mapping or trial balance
- Then ask your question — the AI sees everything and connects the dots
Most underwhelming AI outputs happen because the AI didn't have enough context. Loading the workspace with relevant files before you prompt is the single highest-leverage habit.
Pattern 2: Explain Then Build
Ask the AI to explain the accounting treatment first, then build the analysis:
- “How should we treat prepaid usage under ASC 606?”
- “Now write the SQL that implements that treatment against our data”
- “Generate the journal entries”
- “Write the memo documenting this for auditors”
This pattern builds understanding and catches errors early. If the AI's explanation of the accounting is wrong, you correct it before any code is written.
Pattern 3: Iterate, Don't Restart
The first prompt gets you 80% of the way. Then:
- “Add a column for variance to prior month”
- “Break this out by business unit”
- “Now write the journal entry for this accrual”
- “Format this as a table I can paste into a memo”
Each follow-up builds on the full conversation context. You're refining, not starting over. This is where the productivity multiplier comes from.
Pattern 4: The Validation Loop
Always ask the AI to validate its own work:
- “Add a check column that proves debits equal credits”
- “Write a reconciliation query that ties this back to the GL”
- “What edge cases might break this logic?”
This is where professional skepticism meets AI. The AI is a fast, tireless analyst — but you're the auditor. Train the habit of validating every output.
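The "debits equal credits" check from the first bullet can be as small as a few lines. A sketch, with a hypothetical journal-entry shape:

```python
# Balance check for a drafted journal entry: total debits must equal
# total credits within a rounding tolerance. Entry shape is illustrative.
entries = [
    {"account": "Cash",             "debit": 0.0,   "credit": 0.0},
    {"account": "Deferred revenue", "debit": 100.0, "credit": 0.0},
    {"account": "Revenue",          "debit": 0.0,   "credit": 100.0},
]

def is_balanced(lines, tol=0.01):
    """True when total debits equal total credits within tolerance."""
    net = sum(line["debit"] - line["credit"] for line in lines)
    return abs(net) < tol

assert is_balanced(entries)  # fail loudly before posting, not after close
```

Asking the AI to emit a check like this alongside every entry turns the validation loop from a habit into an artifact you can keep.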
Designing the Sample Data
The sample data makes or breaks the training. Here's what I've learned about designing it:
Use Realistic Structure, Fictional Values
The data should look like real SaaS financial data — proper column names, realistic relationships, plausible amounts — but with fictional customers and numbers. People disengage if the data feels fake, and you can't use production data in a training setting.
Embed Anomalies
Include data quality issues that mirror real-world problems:
- A subscription with no associated payment (the zombie)
- An invoice amount that doesn't match the contract rate (a billing error)
- A customer with usage above their precommit but no on-demand charges (a cap issue)
- Duplicate invoice lines (a common data quality problem)
When participants discover these anomalies during the hands-on exercise, it reinforces the value — “the AI found something I would have missed.”
Keep It Small but Complete
I've found the sweet spot to be 10-20 contracts, 30-40 invoices, and 60-70 usage records — enough to feel realistic, small enough to verify by hand if someone wants to spot-check the AI's work.
Include Multiple Data Types
A good sample data set for finance training includes:
| File | Purpose | Records |
|---|---|---|
| Contracts | Enterprise contracts with terms, rates, seat counts | ~12 |
| Invoices | Invoice line items (seat, usage, true-up) with payment status | ~30 |
| Monthly Usage | Token consumption and precommit drawdown per contract | ~65 |
| Self-Serve Subscriptions | Billing subscriptions including potential zombies | ~20 |
| AR Aging | Receivables by aging bucket with dispute flags | ~17 |
| Trial Balance | GL accounts (revenue, COGS, AR, DR, reserves) | ~19 |
| Disputes | Payment disputes with reasons and outcomes | ~20 |
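Generating a file like this takes a few lines of script. A sketch of the self-serve subscriptions file with one planted zombie — file name, fields, and fee tiers are all made-up training values:

```python
# Generate a fictional subscriptions CSV with one embedded anomaly
# (a subscription that never receives a payment) for participants to find.
import csv
import random

random.seed(7)  # reproducible training data across sessions
rows = [{"sub_id": f"S{i:03d}",
         "customer": f"Customer {i}",
         "monthly_fee": random.choice([29, 49, 99]),
         "has_payment": True}
        for i in range(1, 20)]
rows[4]["has_payment"] = False  # the planted zombie

with open("subscriptions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

Seeding the random generator matters: every cohort gets the same data, so you can reuse your demo prompts and know exactly what the exercises will surface.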
The Hands-On Exercises
After the demos and patterns, give participants 20 minutes to try it themselves. Design exercises for different roles:
For Revenue Accountants
Open the contracts and usage data. Ask: “Which customers will exhaust their prepaid usage before their contract ends? Show me the projected exhaustion month and the on-demand cost if they maintain current usage levels.”
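The core of the answer the AI builds is back-of-envelope arithmetic: remaining precommit divided by run-rate usage. A sketch with hypothetical figures:

```python
# Project the month in which a prepaid usage precommit runs out,
# assuming usage holds at the current monthly run rate. Figures are fictional.
import math

def exhaustion_month(precommit_remaining, monthly_usage):
    """Months until prepaid usage is exhausted; None if usage is zero."""
    if monthly_usage <= 0:
        return None
    return math.ceil(precommit_remaining / monthly_usage)

months = exhaustion_month(precommit_remaining=120_000, monthly_usage=45_000)
print(months)  # 3 -> on-demand charges begin in month 3 at current run rate
```

The AI's version joins this across every contract and prices the on-demand overage; the exercise is seeing that it gets the per-contract arithmetic right.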
For Controllers
Open the trial balance. Ask: “Review this trial balance for a SaaS company. What's the gross margin? What's net revenue after contra items? Does anything look unusual compared to typical SaaS benchmarks?”
For FP&A Analysts
Open the usage data. Ask: “Create a cohort analysis of usage growth by contract vintage month. Show how usage ramps in months 1, 2, 3... after contract start. Is there a pattern?”
For Tax Analysts
Ask: “Our SaaS company has customers in California, Texas, New York, and the UK. For each jurisdiction, what are our tax obligations for SaaS revenue? Include registration thresholds, rates, and filing frequencies.”
What Still Needs Human Judgment
Close the session by being explicit about the boundaries. This builds trust and prevents the “AI will replace us” anxiety:
AI Is Great At (Finance Edition)
- SQL from business requirements (any dialect)
- Technical accounting memos and analyses
- Data reconciliation and gap analysis
- Building automation (scripts, APIs, dashboards)
- Explaining complex accounting to non-accountants
- Iterating on analyses without starting over
Still Needs Human Judgment
- Accounting policy decisions (the AI provides analysis; you make the call)
- Materiality thresholds
- Auditor relationship management
- Business context that isn't in the data
- Anything requiring professional skepticism
Lessons from Running Sessions
Start with the DR Waterfall Demo
I've tried different opening demos. The deferred revenue waterfall consistently gets the strongest reaction. It's universally understood by finance professionals, it's painful to build manually, and the live iteration (adding per-customer views, transposing) shows the power of conversational refinement.
Let People Choose Their Demos
After the first demo, ask the room which topics interest them most. A room full of revenue accountants will want different demos than a room of FP&A analysts. Having 8-10 demos prepared and running 3-5 based on audience interest keeps engagement high.
The Skeptics Convert When They Try It
The hands-on portion is where skeptics convert. Someone who's been quiet through the demos will type their own question, see a result that reflects their domain knowledge, and suddenly get it. Allocate at least 20 minutes for hands-on time — don't cut it short for more demos.
Follow Up with Use Case Office Hours
The training session plants the seed. The real adoption happens when people try it with their actual work. Offering drop-in office hours in the two weeks after training — “bring your real problems, we'll work through them together” — dramatically increases adoption rates.
Share the Sample Data and Prompts
Give participants the full data set and all demo prompts after the session. They'll re-run the demos on their own, modify the prompts for their use cases, and share them with colleagues who weren't in the room.
Building Your Own Training Kit
If you want to run this at your company:
- Create realistic sample data that mirrors your company's financial structure (but with fictional values)
- Write 8-10 demo prompts covering your team's main workflows — close tasks, reconciliations, reserves, investigations, memo writing
- Design role-specific exercises so everyone has something relevant to try
- Prepare the four prompt patterns (Context Stack, Explain Then Build, Iterate Don't Restart, Validation Loop) as a one-page reference
- Schedule 90 minutes — 60 if you must, but the hands-on time is where conversion happens
The investment is maybe a day to build the training kit. The return is a finance team that can build analyses, memos, and tools at 10x the speed they could before.