During my marketing days, I built small, scrappy tools to make my work easier.
Eventually, I got fed up with duct-taping StackOverflow answers together and returned to school. Not just to learn how to make my tools work, but to make them intentional, maintainable, and genuinely useful.
Now that loop is closing: I’m revisiting those old ideas and rebuilding them with more thought, clarity, and respect for how marketers work.
The original problem
I remember opening my reporting spreadsheet one afternoon and thinking, this thing is collapsing under its own weight.
It started innocently enough. I was running email campaigns through platforms like Mailchimp and MailerLite. Each campaign had its own little dashboard with standard metrics like opens, clicks, and unsubscribes.
Problem was, those charts lived in isolation, and I didn’t just want to know how one email performed.
I wanted to know about trends and movement. How did engagement shift across three waves of the same campaign? Were abandon cart emails quietly degrading over time while newsletters held steady?
The platform reports gave me depth, but I needed breadth. So I downloaded the CSVs and launched Excel.
Copy. Paste. Normalize column names. Fix a few date formats. Create pivot tables. Draw some charts. Rinse then repeat, week over week.
It became this whole thing, so I migrated the logic into a small Python script using pandas, with Matplotlib to chart the results. The script did the job, but it was still a patch of duct tape over the real problem.
Prioritizing honest comparisons
The issue was never about missing data. Email platforms already track everything. The issue was comparison.
I wanted to answer questions like:
- Are open rates drifting down across this campaign series?
- Did wave three underperform because of content, or because we expanded the list?
- Are unsubscribes creeping up as frequency increases?
- What send time actually drives opens for this audience?
Those questions don’t live inside any one campaign; they live across them.
And they get harder as audience sizes vary. One send goes to 10,000 subscribers, another to 120,000.
Raw numbers can become misleading quickly, so I added buttons to remove outliers and low-volume sends and keep the comparisons honest.
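Under the hood, that kind of filtering can be as simple as the classic 1.5×IQR rule plus a recipient-count floor. A minimal sketch (the function names and the low-volume threshold are mine, not the tool’s actual code):

```python
from statistics import quantiles

def remove_outliers_iqr(values, k=1.5):
    """Keep only values inside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def drop_low_volume(campaigns, min_recipients=1000):
    """Exclude sends too small to compare fairly (threshold is illustrative)."""
    return [c for c in campaigns if c["recipients"] >= min_recipients]
```

One freakishly good open rate from a 40-person test send would otherwise drag every trendline around; clipping it keeps the comparison about the campaigns, not the noise.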
Designing around thinking, not storage
When I rebuilt this tool as Simple Dash, the first constraint I set was conceptual.
It wouldn’t be a full-fledged email analytics platform with account syncing or campaign history.
It would simply take exported reports and turn them into something easier to reason about.
That led to a few decisions.
First, focus on the metrics that actually inform decisions, like:
- The basic engagement funnel of deliveries, opens, clicks
- Click-to-open rate, which says more about content than open rate alone
- Unsubscribes, hard bounces, and soft bounces to see what’s not working
- A heat map of the most effective send day/times to see when things do work
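The derived rates above are straightforward arithmetic over raw counts. A sketch of the idea (the parameter names are my shorthand, not any platform’s export schema):

```python
def engagement_metrics(delivered, opens, clicks, unsubscribes, bounces):
    """Derive comparison-friendly rates from raw campaign counts."""
    return {
        "open_rate": opens / delivered,
        "click_rate": clicks / delivered,
        # Click-to-open: of the people who opened, how many clicked?
        # This isolates content performance from subject-line performance.
        "click_to_open_rate": clicks / opens if opens else 0.0,
        "unsubscribe_rate": unsubscribes / delivered,
        "bounce_rate": bounces / delivered,
    }
```

Normalizing everything to rates is what makes a 10,000-subscriber send and a 120,000-subscriber send comparable at all.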
Second, treat multiple uploads as a dataset, not separate artifacts. If a report contains several emails, or if multiple reports are uploaded, they become comparable by default. Trends are graphed over time, with linear trendlines to make directional movement obvious.
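A linear trendline is just an ordinary least-squares fit over the series. A dependency-free sketch of the math (Simple Dash runs its equivalent in the browser; this Python version is mine, for illustration):

```python
def linear_trend(ys):
    """Fit y = slope * x + intercept over evenly spaced points (x = 0..n-1)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Least-squares slope: covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

The slope’s sign is the whole point: it turns “this chart looks kind of down?” into “open rate is falling about half a point per send.”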
Third, allow filtering. Sometimes you want to compare everything. Sometimes you want just the three waves of one campaign. The ability to include or exclude specific emails turns “all campaigns” into “this experiment.”
None of this adds new data; it just reshapes what’s already there.
Privacy by design
One thing I learned from the earlier version is that persistence isn’t always a virtue.
In the world of data-driven everything, it’s tempting to store everything, just in case. But Simple Dash isn’t meant to become a system of record.
When you upload a report, it’s only sent to the backend for parsing. Nothing is stored, and everything else happens in your browser.
This makes analysis lightweight. Your charts and graphs exist only for as long as you need them. Once you close the tab or navigate away, they’re gone.
The Tech
Every technical decision in Simple Dash followed the same principle as the tool itself: keep things simple and easy to understand.
The backend is written in Python, primarily because of its strengths in parsing and normalizing CSV exports. Email reports across platforms aren’t consistent, so the server acts as a translator. It needed to accept a file, standardize the structure, and return clean data, and Python excels at this.
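The translation step amounts to mapping each platform’s column names onto one canonical schema. A hedged sketch of that idea (the alias strings here are illustrative, not the platforms’ real export headers):

```python
import csv
import io

# Illustrative aliases; real exports vary by platform and version.
COLUMN_ALIASES = {
    "emails sent": "delivered",
    "successful deliveries": "delivered",
    "unique opens": "opens",
    "total opens": "opens",
    "unique clicks": "clicks",
    "recipient clicks": "clicks",
}

def normalize_report(csv_text):
    """Parse a CSV export and rename known headers to a standard schema."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return [
        {COLUMN_ALIASES.get(key.strip().lower(), key.strip().lower()): value
         for key, value in row.items()}
        for row in rows
    ]
```

Once every report speaks the same schema, the frontend never has to know which platform a file came from.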
From there, the frontend takes over. The dashboard is an integrated Vue application, and most of the processing happens directly in the browser. Everything from filtering campaigns, calculating trendlines, mapping send-time performance, and applying IQR-based outlier detection runs browser-side.
And that’s it. There are no analytics pipelines, background workers, or infrastructure designed for scale.
The stack is small because the problem is small: turn exported spreadsheets into usable insights.
Try it out for yourself with built-in sample data.
Why rewrite?
I’ve thought about this a lot.
After all, rewriting old tools isn’t the most exciting thing.
But they existed because I actually needed them. They were grounded in real workflows.
In a way, these projects are more about finishing an old mission than starting something new.
I’m taking tools that were once just “good enough for me” and turning them into things that might be useful for other marketers and founders running the same kinds of scrappy experiments.
Simple Dash first, then Simple Pixel. More to come.
Have an idea for a tool?
I built Simple Dash to help marketers analyze their email campaigns more effectively. If you have a pain point that could be solved with a tool like this, I’d love to hear about it!