
The Accountant-Builder

March 2026 · 15 min read

Framework · AI · Finance

Note: Examples throughout this series are drawn from real work but generalized — specific company details, system internals, customer names, and proprietary data have been removed. The methodology and thought process are what matter.

When I joined a high-growth SaaS company to build the revenue accounting function from scratch, I had no engineering background. Fifteen years in audit, revenue accounting, and billing operations. I knew debits and credits, ASC 606, and how to close the books. I did not know how to build software.

Within four months, I had built more than fifteen internal tools the team now uses daily, and I was using AI to complete, in minutes, investigations and analyses that previously took hours or days. This article explains the two distinct modes of value I've found, and a framework for deciding which one fits which problem.

The False Start Problem

Before getting into the framework, I want to address something I hear constantly: “I tried AI and it didn't work.” There's a growing backlash where people feel they've been oversold — they open an AI tool, ask it something vague, get a mediocre answer, and conclude the technology isn't ready. This is a false start, and it's almost always a framing problem, not a technology problem.

The mental model that works: think of AI as an eager intern who has read everything but has never worked a day in your industry. They have encyclopedic knowledge — every accounting standard, every SQL pattern, every framework. But they have zero context about your business, your systems, your data, or what “good” looks like in your specific environment. If you hand them a vague assignment (“help me with revenue recognition”), you'll get a vague, textbook answer back. If you hand them a specific assignment with context (“here's our contract structure, here's our billing system, here's the specific question I need answered, and here's the data to work with”), you'll get something genuinely useful.

The people who dismiss AI after a bad first experience almost always made the same mistake: they tested it without giving it the context it needed. They asked it a question the way they'd ask Google, not the way they'd brief a colleague. The gap between “AI is useless” and “AI just saved me a full day of work” is almost entirely about how you frame the problem and what context you provide. The domain expertise — the specificity, the judgment, the “here's what actually matters” — that's the part only you can bring.

Two Modes, Not One

Most conversations about AI in finance jump straight to building things: dashboards, apps, automation. But that's only half the picture. There are actually two distinct ways these tools create value, and confusing them leads to over-engineering simple problems or under-investing in ones that deserve a real system.

Before diving in, an important framing: none of this is about building enterprise software. We're not trying to ship products or compete with engineering teams. The point is unlocking velocity and throughput without being bottlenecked by upstream or partner teams. When accounting needs a dashboard, a queue, or an investigation, we shouldn't have to wait in a sprint backlog for six months. Some of what you build will live on and become critical infrastructure the team relies on daily. Some of it will serve its purpose for a quarter and get replaced by something better — or by a proper engineering solution once the requirements are proven out. Both outcomes are fine. The value isn't in the permanence of the artifact; it's in the speed at which your team can move and the problems you can solve without waiting for someone else to prioritize your needs.

Mode 1: Using AI for Daily Work

This is the immediate, zero-infrastructure value. You open your AI coding assistant, describe what you need, and get a deliverable back. No deployment, no database, no ongoing maintenance. You use it once and move on.

Examples from my own work:

  • A billing anomaly investigation — surfaced the root cause, quantified exposure across all affected accounts, wrote the SQL, produced the remediation plan, and drafted the bug report for engineering. Fifteen minutes. The old way: a full day of manual work.
  • Draft preparation for technical accounting memos — AI generates a structured first draft of an ASC 606 analysis or reserve methodology that I then review, refine, and apply professional judgment to. It accelerates the drafting phase; the conclusions are mine.
  • Month-end close analyses — unbilled accruals, revenue cutoff tests, DR roll-forwards. Each one takes minutes instead of hours, with manual validation against source systems before anything is booked. (A sketch of a typical cutoff check follows this list.)
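
To make the Mode 1 pattern concrete, here is a minimal sketch of the kind of throwaway script an AI assistant might draft for one of those cutoff tests. The CSV export, the column names, and the period are hypothetical, and as noted above, the result gets validated against source systems before anything is booked:

    import pandas as pd

    # Hypothetical export from the billing system: one row per invoice line,
    # with the invoice date and the period the revenue was recognized in.
    lines = pd.read_csv("invoice_lines.csv", parse_dates=["invoice_date"])

    PERIOD_END = pd.Timestamp("2026-02-28")

    # Cutoff exceptions: revenue recognized in the closing period on
    # invoices dated after period end.
    recognized_in_period = lines["recognition_period"] == "2026-02"
    dated_after_close = lines["invoice_date"] > PERIOD_END

    exceptions = lines[recognized_in_period & dated_after_close]
    print(f"{len(exceptions)} cutoff exceptions, "
          f"${exceptions['amount'].sum():,.2f} total")
    exceptions.to_csv("cutoff_exceptions.csv", index=False)

The script itself is disposable; the review of the exceptions is the work.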

The defining characteristic: no permanent artifact. The output is a memo, a SQL query, a spreadsheet, a brief. You use it, file it, move on. The AI was a force multiplier for your existing skills, not a system you maintain.

Mode 2: Building Persistent Tools

This is where you build something that lives on, that your team interacts with, that runs on a schedule. A deferred revenue waterfall dashboard. An automated contract provisioning queue. A fraud monitoring system. An AP inbox that triages vendor invoices.

These take more time to build, require infrastructure decisions, and create ongoing maintenance obligations. But they also compound: once the tool exists, it saves time every day, not just once.

The Decision Framework: Build vs. Use

This is the most important section of this article. Getting the build/use decision right determines whether you spend an afternoon building something transformative or a week building something nobody needs.

Use AI conversationally (Mode 1) when:

  • The task is one-time or infrequent — an investigation, a policy memo, a board-level analysis
  • The output is a document or decision, not a process
  • The data inputs change every time — different customer, different quarter, different question
  • You're the only consumer of the output
  • Speed of the first result matters more than repeatability

Build a persistent tool (Mode 2) when:

  • The task repeats on a schedule — monthly close, daily monitoring, quarterly review
  • Multiple people need to interact with it, not just you
  • The logic is stable but the data refreshes — same calculation, new month
  • You need an audit trail — who reviewed what, when, with what result
  • The process has workflow states — pending, reviewed, approved, provisioned (see the sketch after this list)
  • You're replacing a manual handoff between people or systems

The mistake I see people make: building a tool for something that should have been a conversation. If you need a one-time reconciliation between two data sources, just describe it to the AI and get the answer. Don't build a reconciliation dashboard. Save the building for things that will run next month, and the month after that.

What I Built (and What I Didn't)

Applying this framework, here's roughly how the work split:

Things I Built (Mode 2)

These are the things that repeat, that the team uses, that run on a schedule:

  • Deferred revenue waterfall — runs monthly, used by the whole accounting team, reconciled to three systems, auditor-facing
  • Contract provisioning queue — every new enterprise contract flows through this: CRM ingestion, AI extraction of PDF terms, validation, human review, provisioning handoff
  • Dispute reserve calculator — monthly ASC 606 variable consideration calculation, automated data pull, trend analysis
  • Fraud monitoring dashboard — daily refresh, pattern detection, alert thresholds
  • AR dashboard with collections intelligence — aging, cash forecast, support thread analysis
  • AP inbox — syncs vendor emails, extracts invoice details, triages for approval
  • Tax compliance tracker — integration with tax automation platform, filing status, exemption management

Things I Used AI For Directly (Mode 1)

These are the investigations, analyses, and one-time deliverables where building a tool would have been overkill:

  • Billing anomaly investigations — usage stacking issues, subscription misconfigurations, payment method problems. Each one unique, each one fast.
  • Draft preparation for policy memos — ASC 606 assessments, CECL methodology docs, reserve analyses. AI accelerates the drafting; the accounting conclusions and judgment are applied by the accountant.
  • Auditor correspondence drafts — contract inception analysis, treatment confirmations. Drafted quickly, then refined with professional judgment and relationship context.
  • Data investigations — zombie subscription forensics, revenue leakage analysis, cross-system reconciliation one-offs. Always validated against source systems before acting on findings.
  • Board and executive materials — financial summaries, trend analyses, risk assessments.

Why Domain Expertise Is the Prerequisite

The common assumption is that AI tools require technical skill. The reality is the opposite: they require domain expertise. The AI can write SQL, build dashboards, draft memos, and structure analyses. What it can't do is know that prepaid usage commitments create contract liabilities under ASC 606, or that a chargeback reserve requires different accounting treatment than a bad debt allowance, or that a zombie subscription generates phantom invoices that inflate gross revenue.

The specificity of your requirements determines the quality of the output. “Build me a revenue dashboard” produces something generic. “Build me a monthly deferred revenue waterfall with opening balance, additions from seat and prepaid usage invoices, straight-line seat recognition over the contract term, consumption-based usage recognition capped at precommit, and a closing balance with a validation column that proves the math ties. Pull contract data from the CRM via API, invoice and payment data from the billing platform, and usage consumption from the data warehouse. The waterfall should reconcile to the GL trial balance and the billing system's deferred revenue balance, with a variance column that flags anything over $100” produces something that actually works.

That level of specificity comes from years of accounting experience, not from learning to code.
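
To show how directly that kind of spec translates into working code, here is a minimal sketch of the core waterfall arithmetic: one hypothetical contract, made-up numbers, and none of the system integrations, but the same opening, additions, recognition, and closing structure, with a validation column that proves the math ties:

    import pandas as pd

    # Hypothetical monthly inputs for a single three-month contract:
    # additions are new invoices into deferred revenue; recognized would be
    # straight-line seat revenue plus usage consumption capped at precommit.
    df = pd.DataFrame({
        "month":      ["2026-01", "2026-02", "2026-03"],
        "additions":  [12_000.0, 0.0, 0.0],
        "recognized": [4_000.0, 4_000.0, 4_000.0],
    })

    df["opening"] = 0.0
    df["closing"] = 0.0
    balance = 0.0
    for i, row in df.iterrows():
        closing = balance + row["additions"] - row["recognized"]
        df.loc[i, ["opening", "closing"]] = [balance, closing]
        balance = closing

    # Validation column: opening + additions - recognized must equal closing.
    df["ties"] = (df["opening"] + df["additions"]
                  - df["recognized"] - df["closing"]).abs() < 0.01
    assert df["ties"].all()
    print(df)

The real version adds the API pulls, the multi-system reconciliation, and the variance flag, but every one of those pieces was named in the prompt before a line of code existed.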

The Cost Equation

This is worth being explicit about because the numbers are striking. An AI coding assistant costs roughly $20-40 per month. A single investigation that would have taken a senior accountant a full day — loaded cost of $800-1,200 depending on your market — takes fifteen minutes and costs less than a dollar in AI inference. The return on that subscription is measured in orders of magnitude, not percentages.
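
Taking midpoints from those figures: a $1,000 analyst-day replaced by fifteen minutes and roughly a dollar of inference is on the order of a 1,000x return on the marginal cost, and a single investigation covers the subscription for the year.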

Building a persistent tool is a bigger investment: maybe an afternoon to a few days for something substantial, plus the time to set up proper infrastructure, security review, and deployment governance (more on that in Article 03). But compare that to the alternative — putting a request on the engineering backlog, waiting weeks for sprint prioritization, going through the requirements-build-review cycle. The tools I built in four months would have taken a year or more through traditional channels, if they got built at all. Most of them wouldn't have. They'd still be on the backlog.

What Still Requires Human Judgment

AI is an accelerant, not a replacement. The things that still require an accountant:

  • Accounting policy decisions. The AI can lay out the five-step ASC 606 analysis. You decide how to apply it.
  • Materiality. What's worth building a tool for? What's a rounding difference vs. a real break?
  • Business context. Why did that customer's usage spike? Is that a billing error or legitimate growth? The AI sees data; you see the business.
  • Professional skepticism. When the reconciliation ties too perfectly, when the reserve feels low — that instinct comes from experience.
  • Auditor communication. AI can help structure and draft memos, but managing the relationship, understanding what satisfies your auditors, and applying the right level of rigor — that's human.
  • Validation. Every AI-generated analysis gets validated against source systems before it's acted on. The AI gives you speed; you provide the assurance that the output is correct.

The Series

The rest of this series goes deeper on each mode. Article 02 covers the force multiplier mode — concrete examples of AI-assisted investigations and analyses with quantified time and cost savings. Article 03 covers the infrastructure you need when you do decide to build. Article 04 is a deep dive on building one specific tool end-to-end. And Article 05 covers how to train your team.