
Your Ad Data Is Safer With Us Than With ChatGPT

Pasting Google Ads exports into ChatGPT ships every row — clients, budgets, conversions — to a third-party LLM. ROAS Radar's architecture refuses to ask for more than it needs.

[Illustration: an encrypted ad-data server protected by locks and authentication keys]

On almost every demo call, about ten minutes in, someone asks a version of the same question: "But what happens to our data?" It usually comes with a slight wince, the kind you make when you already suspect the answer is going to involve a Terms of Service document longer than a Russian novel.

It's a fair question. Marketers have spent the last two years watching their colleagues paste proprietary campaign exports into ChatGPT. The hesitation is healthy — pairing a consumer AI with live ad data is exactly the kind of thing that deserves a second thought. Because when you actually look at the data flow, pasting a Google Ads export into a chatbot is, roughly, the riskiest thing you can do with that spreadsheet short of emailing it to a competitor.

Here's what happens. You run a report, you hit export, you drop the CSV into a consumer AI. The entire file — client identifiers, campaign names, budgets, spend, conversions, every row — goes to a third-party LLM. Depending on whose terms you agreed to and which checkbox you didn't uncheck, that data may be used to improve future models, retained in conversation history, or reviewed by humans for quality purposes. You, the person who was just trying to find out why a single campaign's CPA spiked, have now exported your client's P&L to a different company's training set.
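To make that concrete, here's a minimal sketch of the risky pattern in Python. The export contents and `send_to_chatbot` are hypothetical stand-ins; the point is that the prompt contains every row of the file, verbatim.

```python
import csv
import io

# Hypothetical export contents; a real file has many more columns and rows.
export = io.StringIO(
    "client,campaign,budget,spend,conversions\n"
    "Acme Corp,Brand - US,500,412.30,18\n"
    "Acme Corp,NonBrand - US,900,887.15,9\n"
)
rows = list(csv.DictReader(export))

# Every field of every row is flattened straight into the prompt:
# client identifiers, campaign names, budgets, spend, conversions.
prompt = "Why did CPA spike on one of these campaigns?\n\n" + "\n".join(
    ", ".join(f"{k}: {v}" for k, v in row.items()) for row in rows
)

# send_to_chatbot(prompt)  # hypothetical stand-in for the paste; the whole
#                          # spreadsheet is now a third party's input
```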

ROAS Radar is built to do the opposite of that. Your Google Ads and Microsoft Advertising data comes in through the proper OAuth front door, gets stored in our encrypted database, and stays there. When you type a question into chat, the AI isn't reading your spreadsheet — it's asking our system questions about your spreadsheet. It never sees your client names. It never sees your campaign names. It never sees your budgets, your spend, or your conversions. It sees aggregated results our system has already computed, so it has just enough context to write you a coherent answer and not one drop more.
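Here's a simplified sketch of what that boundary looks like in code. To be clear, this is illustrative rather than ROAS Radar's actual implementation, and the function and field names are ours; the shape is the point. Raw rows never leave the function, and the model's context is built only from aggregates computed server-side.

```python
def build_model_context(rows: list[dict]) -> dict:
    """Reduce raw campaign rows to the only facts the model may see.

    `rows` (client names, campaign names, budgets, per-row spend and
    conversions) stays inside this function; nothing row-level or
    identifying appears in the returned summary.
    """
    total_spend = sum(r["spend"] for r in rows)
    total_conversions = sum(r["conversions"] for r in rows)
    cpas = [r["spend"] / r["conversions"] for r in rows if r["conversions"]]
    return {
        "campaign_count": len(rows),
        "account_cpa": round(total_spend / total_conversions, 2)
        if total_conversions else None,
        "highest_campaign_cpa": round(max(cpas), 2) if cpas else None,
        "lowest_campaign_cpa": round(min(cpas), 2) if cpas else None,
    }

# Hypothetical rows; in the real system these live in the encrypted database.
rows = [
    {"client": "Acme Corp", "campaign": "Brand - US", "spend": 1200.0, "conversions": 40},
    {"client": "Acme Corp", "campaign": "NonBrand - US", "spend": 800.0, "conversions": 10},
]
print(build_model_context(rows))
# {'campaign_count': 2, 'account_cpa': 40.0,
#  'highest_campaign_cpa': 80.0, 'lowest_campaign_cpa': 30.0}
```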

The model we use is Anthropic's Claude, via their commercial API. That matters because Anthropic's default commercial terms don't allow training on inputs or outputs: not as a paid upgrade, not as a checkbox, just as the baseline. So the small slice of aggregated data that does touch the model goes nowhere afterward.
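The last hop looks something like this, sketched with Anthropic's Python SDK. The model name and the prompt wording are our illustrative assumptions, not ROAS Radar's production values; the part that matters is that the only account data in the request is the aggregated summary from the sketch above.

```python
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The aggregated summary from the previous sketch; no names, budgets,
# or row-level figures, just numbers our system already computed.
summary = {
    "campaign_count": 2,
    "account_cpa": 40.0,
    "highest_campaign_cpa": 80.0,
    "lowest_campaign_cpa": 30.0,
}

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Here is an aggregated ad-account summary with no client, "
            "campaign, or budget identifiers:\n"
            + json.dumps(summary)
            + "\nExplain the CPA spread in plain language."
        ),
    }],
)
print(message.content[0].text)
```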

Is all of this a little boring to explain? Yes. Is "we designed the system to never need your raw data in the first place" a thrilling marketing line? Not really. But that's the point. The most private AI tool in paid search isn't the one with the slickest demo — it's the one whose architecture quietly refuses to ask for more than it needs.

So the next time the privacy question comes up on a call, the short answer is this: the data stays yours, the model never meets it, and the whole thing is built that way on purpose.