
From Localhost to Internal Tool

March 2026 · 12 min read
Infrastructure · Deployment · Security

Note: Examples throughout this series are drawn from real work but generalized — specific company details, system internals, and proprietary configurations have been removed.

The hardest part of building internal tools isn't building them. It's the gap between “this works on my laptop” and “my team can use this.” That gap is where most non-engineer builders get stuck, and it's the part that's least discussed. This article covers the practical infrastructure progression — how I went from local prototypes to deployed tools the team uses daily.

The Progression

Every tool I built went through roughly the same three stages. Understanding these stages helps you plan what you're building toward, even when you're just starting.

Stage 1: Local Prototype (Day 1)

This is where you start. You describe what you want to the AI, it builds a web app, and you run it on your laptop. For me, the AI suggested Next.js — a React framework that runs a local development server on localhost:3000. I didn't choose it deliberately; the AI recommended it and it worked.

At this stage:

  • The app runs when you type npm run dev in your terminal
  • It's accessible at localhost:3000 in your browser
  • Data might be hardcoded, pulled from local CSV files, or hitting APIs directly
  • Only you can see it — nobody else on your team
  • If you close your laptop, the app stops

This stage is valuable on its own. A local prototype that does the right accounting logic is already better than the spreadsheet it replaces. But it's a single-player tool.

Stage 2: Connected Prototype (Week 1-2)

The first upgrade is connecting to real data sources instead of hardcoded values. This is where the tool starts to be genuinely useful because it reflects live data.

The connections I needed:

  • Data warehouse — for analytical queries (revenue waterfalls, aging analysis, usage calculations). I connected to Databricks via OAuth, so queries run against production data models.
  • Billing platform APIs — for real-time subscription data, invoice status, payment methods. API key authentication, wrapped in server-side routes so credentials aren't exposed to the browser.
  • CRM — for contract details, opportunity data, customer information. OAuth flow so individual users authenticate with their own permissions.
  • Support platform — for customer thread context on AR accounts. API integration for pulling thread summaries and sentiment.

Each integration follows the same pattern: the AI writes the API route (a server-side function that handles authentication and data fetching), and the frontend calls that route. You don't need to understand the networking — you describe what data you need and the AI handles the plumbing.

The Pattern for Every Data Connection

1. Store credentials in environment variables (never in code)
2. Create a server-side API route that authenticates and fetches data
3. Call that route from the frontend
4. Handle loading states and errors in the UI

The AI generates all four pieces from a single description of what you need.
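To make that concrete, here's a minimal sketch of steps 1 through 3 for one of the billing connections. Everything specific in it is illustrative: the route path, environment variable name, and billing endpoint are placeholders, not the real integration.

```ts
// app/api/invoices/route.ts — hypothetical route; all names here are illustrative
import { NextResponse } from 'next/server';

export async function GET() {
  // Step 1: the credential lives in an environment variable, never in code
  const apiKey = process.env.BILLING_API_KEY;
  if (!apiKey) {
    return NextResponse.json({ error: 'Billing API key not configured' }, { status: 500 });
  }

  // Step 2: the server-side route authenticates and fetches the data,
  // so the browser never sees the credential
  const res = await fetch('https://billing.example.com/v1/invoices?status=open', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    return NextResponse.json({ error: 'Billing API request failed' }, { status: 502 });
  }

  // Step 3: the frontend calls /api/invoices and only ever receives this JSON
  const invoices = await res.json();
  return NextResponse.json(invoices);
}
```

The frontend then calls fetch('/api/invoices') and handles the loading and error states from step 4; there's a sketch of that side under Error Handling below.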

Stage 3: Deployed Tool (Week 2-4)

This is the leap from single-player to multiplayer. The tool runs on a server that's always on, accessible to your team via a URL, with proper authentication so only authorized people can access it.

What deployment involves:

  • Hosting. A platform that runs your app 24/7. Vercel, Railway, Render, or your company's internal infrastructure. The AI can set up the deployment configuration — it's usually a few lines of config and a git push.
  • Environment variables. Your API keys and secrets need to be configured on the hosting platform, not just your laptop. Each platform has a UI for this.
  • Authentication. You need to control who can access the tool. This might be as simple as a shared password for a small team, or as proper as SSO integration with your company's identity provider. Start simple (a minimal sketch follows this list).
  • A database. If your tool needs to persist state — like a queue of contracts to review or an audit log of actions taken — you need a database. PostgreSQL is the standard choice. Managed database services make this straightforward.
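For the "start simple" end of that authentication spectrum, one option is a small middleware gate in front of every page. This is a sketch only, assuming Next.js middleware; the cookie and environment variable names are made up, and a real deployment should move to SSO, as described in the security section below.

```ts
// middleware.ts — sketch of a shared-password gate (cookie and variable names are hypothetical)
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Assume a login page sets this cookie after checking the shared password
  const token = request.cookies.get('team_access')?.value;
  const authed = token && token === process.env.TEAM_ACCESS_TOKEN;

  if (!authed && !request.nextUrl.pathname.startsWith('/login')) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return NextResponse.next();
}

// Skip Next.js internals and static assets
export const config = { matcher: ['/((?!_next|favicon.ico).*)'] };
```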

The Stack (and Why It Doesn't Matter Much)

For full transparency, here's what I ended up using:

  • Framework: Next.js (React). The AI suggested it. Works well for internal tools: handles both the frontend and API routes in one project.
  • Database: PostgreSQL + Drizzle ORM. Needed persistent state for queues and audit logs. Drizzle gives you type-safe queries.
  • Styling: Tailwind CSS. The AI generates Tailwind classes fluently. Quick to style without design skills.
  • Hosting: Internal infrastructure. Deployed alongside other internal tools. Vercel or Railway would work fine for most teams.
  • Cron jobs: Built-in scheduler. For automated tasks: data refresh, ingestion, monitoring. Runs on a timer (e.g., every 5 minutes).

But here's the thing: the specific technologies matter far less than you think. I didn't evaluate frameworks. The AI picked Next.js, it works, and I've never had a reason to change it. If the AI had suggested something else, I'd be using that instead. The accounting logic is the hard part. The infrastructure is commodity.

Cron Jobs: The Automation Layer

Cron jobs are what turn a tool from “something you check manually” to “something that works for you in the background.” A cron job is just a task that runs on a schedule — every 5 minutes, every hour, daily at midnight.

Examples from my tools:

  • Contract ingestion: Every 5 minutes, check the CRM for new Closed Won opportunities and add them to the provisioning queue
  • Auto-processing: Every 5 minutes, pick up queued contracts that have PDF attachments and run the AI extraction
  • AR refresh: Hourly, pull updated invoice and payment data from the billing platform
  • Fraud monitoring: Daily, scan for anomalous subscription patterns and flag for review

The cron pattern is simple: it's an API route that gets called on a schedule, protected by a secret token so only the scheduler can trigger it. The AI sets this up in minutes.
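Here's roughly what that looks like for the contract ingestion job. The header name, secret variable, and ingestion helper are placeholders for whatever your scheduler and code actually use; the scheduler is configured separately to call this URL every 5 minutes with the matching secret.

```ts
// app/api/cron/ingest-contracts/route.ts — sketch; route path and helper are illustrative
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // Only the scheduler knows this secret, so manual or accidental calls are rejected
  const token = request.headers.get('x-cron-secret');
  if (!token || token !== process.env.CRON_SECRET) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // The actual work: check the CRM for new Closed Won opportunities
  // and add them to the provisioning queue (implementation not shown)
  const added = await ingestNewContracts();
  return NextResponse.json({ added });
}

// Placeholder for the real ingestion logic
async function ingestNewContracts(): Promise<number> {
  return 0;
}
```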

This is where the tools start to feel magical to the rest of the team. “New contracts just appear in the queue.” “The AR dashboard already has today's data.” “I got an alert about a suspicious subscription.” That's not because you're doing anything — it's the cron jobs running quietly in the background.

Database Design for Non-Engineers

Not every tool needs a database. If your tool just reads data from APIs and displays it, you don't need one. But if you need to track state — this contract has been reviewed, this dispute was flagged, this invoice was marked for follow-up — you need somewhere to store that state.

The mental model is simple: a database table is a spreadsheet with strict column types. A row is a record. You insert rows, update rows, and query rows. If you can think in spreadsheets, you can think in database tables.

What I needed databases for:

  • Queue state: Contract items with status (new → extracting → pending review → approved → provisioned)
  • Audit logs: Who did what, when, for compliance and debugging
  • Extracted data: AI extraction results saved for comparison and review
  • Configuration: Versioned prompts for AI extraction, threshold settings

The AI designs the schema from your requirements. Describe what you need to track — items, statuses, extracted data, who reviewed what and when — and it writes the schema, the migrations, the types, and the query functions.
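As an illustration, the queue table above might come back as a Drizzle schema along these lines; the table and column names here are made up for the sketch.

```ts
// db/schema.ts — sketch of a contract queue table in Drizzle (names are illustrative)
import { pgTable, serial, text, timestamp, jsonb } from 'drizzle-orm/pg-core';

export const contractQueue = pgTable('contract_queue', {
  id: serial('id').primaryKey(),
  opportunityId: text('opportunity_id').notNull(),    // link back to the CRM record
  status: text('status').notNull().default('new'),    // new → extracting → pending review → approved → provisioned
  extractedTerms: jsonb('extracted_terms'),            // AI extraction results, saved for review
  reviewedBy: text('reviewed_by'),                     // audit trail: who approved it
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});
```

Each row is one contract moving through the queue: the same spreadsheet-with-strict-column-types mental model, with the AI writing the column definitions.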

Security, Governance, and Doing This Properly

I want to be direct about this because it's the part that gets glossed over in most “accountant builds tools” narratives: we didn't just spin up a server and hope for the best. These tools handle financial data, connect to production systems, and are used for business decisions. They need to be built and deployed with the same rigor you'd expect of any internal application.

Here's how we structured it:

Authentication and Access Control

Every tool sits behind the company's SSO provider (Okta in our case) with group-based permissions restricted to accounting. If you're not in the accounting group, you don't get in. This wasn't optional or something we added later — it was part of the initial deployment setup, done in partnership with the security engineering team.

Code Review and PR Governance

Every change goes through a pull request. We established a risk-rating system for PRs:

  • High-risk PRs (changes to data-writing logic, API integrations, authentication flows, anything touching billing system interactions) require review and approval from an engineer on the security or platform team.
  • Low-risk PRs (UI changes, dashboard layout, copy updates, new read-only views) go through automated code review. We use an AI code review tool that scans for security issues, dependency vulnerabilities, and common anti-patterns. Issues flagged by the automated review must be resolved before the PR can merge.

This gives us a sensible balance: engineering oversight where it matters (data integrity, security boundaries), automated guardrails for routine changes, and no bottleneck on the engineering team for low-risk UI work.

Infrastructure Setup

The deployment environment was structured with guidance from our security engineering team to mirror how other internal tools were set up:

  • Secrets managed through the company's standard secrets infrastructure — never in code, never in environment files checked into version control
  • Database access restricted to the application's service account with least-privilege permissions
  • API credentials scoped to the minimum permissions needed for each integration
  • Deployment pipeline follows the same CI/CD patterns as other internal services

Working With Engineering, Not Around Them

An important framing: building tools with AI doesn't mean bypassing engineering. It means changing the collaboration model. Instead of “here are my requirements, please build this and come back in three weeks,” the conversation becomes “I've built a working prototype that does X — can you review the infrastructure setup, the security boundaries, and the data access patterns?”

Engineers review the architecture and security-sensitive code. The accounting team owns the business logic and the domain-specific features. Everyone works in their zone of expertise. The AI handles the implementation mechanics that would otherwise require either team to context-switch into the other's domain.

The Principle

Move fast on the accounting logic and UI — that's where your domain expertise lives. Move carefully on security, data access, and production infrastructure — that's where you partner with engineering. AI helps you build; your security team helps you deploy safely.

Common Gotchas

Secrets Management

Never put API keys in your code files. Use your company's standard secrets management. Every hosting platform has a mechanism for this. This should be established in your initial setup with the security team, not added as an afterthought.

OAuth Flows

Some data sources (CRM, data warehouse) use OAuth — the flow where you click “Authorize” and get redirected back. These require callback URLs, client IDs, and scopes to be registered properly. The AI handles the code, but you'll need to register your app in the source system's admin settings and have the security team review the OAuth scopes. Budget time for this on your first integration.
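For orientation, here's a rough sketch of the first leg of that flow, the redirect that sends a user off to authorize. The provider URL, scopes, and callback path are placeholders for whatever you register in the source system's admin settings.

```ts
// app/api/auth/crm/login/route.ts — sketch of the redirect step of an OAuth flow
// Provider URL, scopes, and callback path are placeholders for your registered app.
import { NextResponse } from 'next/server';

export async function GET() {
  const params = new URLSearchParams({
    client_id: process.env.CRM_CLIENT_ID ?? '',
    redirect_uri: 'https://your-tool.example.com/api/auth/crm/callback',
    response_type: 'code',
    scope: 'read_opportunities read_accounts', // keep scopes minimal; security reviews these
  });
  // The callback route (not shown) exchanges the returned code for a token
  return NextResponse.redirect(`https://crm.example.com/oauth/authorize?${params}`);
}
```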

Data Refresh Timing

Your tools are only as good as the freshness of the data behind them. If the data warehouse refreshes at 6am and your team checks the dashboard at 7am, you're fine. If they check at 5am, they're seeing yesterday's data. Understand the upstream refresh schedules and set your cron jobs accordingly.

Error Handling

APIs go down. Queries time out. Tokens expire. Your tools need to fail gracefully — show a useful error message instead of a blank screen. Ask the AI to add error handling and loading states to every data connection. This is the difference between “the tool is broken” and “the data warehouse is temporarily unavailable, showing cached data.”
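A sketch of what that looks like in a React component, reusing the hypothetical /api/invoices route from the pattern section above:

```tsx
// components/InvoiceList.tsx — sketch of loading and error states around a data fetch
'use client';
import { useEffect, useState } from 'react';

type Invoice = { id: string; customer: string; amountDue: number };

export function InvoiceList() {
  const [invoices, setInvoices] = useState<Invoice[] | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    fetch('/api/invoices')
      .then((res) => {
        if (!res.ok) throw new Error('Billing data is temporarily unavailable');
        return res.json();
      })
      .then(setInvoices)
      .catch((err) => setError(err.message));
  }, []);

  // Step 4 of the pattern: a useful message instead of a blank screen
  if (error) return <p>{error}. Try refreshing in a few minutes.</p>;
  if (!invoices) return <p>Loading invoices…</p>;

  return (
    <ul>
      {invoices.map((inv) => (
        <li key={inv.id}>
          {inv.customer}: ${inv.amountDue.toFixed(2)}
        </li>
      ))}
    </ul>
  );
}
```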

The Path for Your Team

If you're an accountant or finance professional considering this path:

  1. Start local. Build something that runs on your laptop and solves a real problem. Don't worry about deployment yet.
  2. Connect to real data. Replace hardcoded values with API calls to your actual systems. This is when the tool becomes genuinely useful.
  3. Deploy when there's demand. When you find yourself screen-sharing the tool in every meeting, or teammates ask “can I get access to that thing?”, it's time to deploy.
  4. Add automation after deployment. Once it's deployed, add cron jobs for data refresh, ingestion, and monitoring. This is when the tool starts working for you instead of the other way around.
  5. Iterate based on usage. The team will tell you what's missing. Add it. The AI makes iteration fast.

The entire progression from “nothing” to “deployed tool the team uses daily” can happen in a week or two. Not months. Not quarters. The infrastructure is no longer the bottleneck — your accounting expertise is the hard part, and you already have that.