Your ROAS Radar Agent Only Needs To Be Told Once (IYKYK)

Most AI tools forget every Monday. ROAS Radar remembers account-specific context — brand caveats, offline economics, client red lines — permanently.

[Illustration: a PPC strategist reviewing an AI ad optimizer dashboard with cumulative ROAS charts and a campaign framework whiteboard]

The standard operating experience with AI tools is repetition. You correct it Monday, you correct it again Tuesday, you correct it a third time on Wednesday, and you start to wonder whether the "learning" in "machine learning" was supposed to be ironic.

ROAS Radar doesn't work that way. When you correct the agent, it remembers — for that account, going forward, permanently.

Here's a real example. A few weeks ago, I was reviewing a client account and the agent surfaced the brand campaign as the top performer, citing a 28x ROAS and suggesting a budget lift. This is the classic PPC trap: brand terms are cheap, they convert like crazy because the user already typed your name, and they make every dashboard look like you're a marketing genius. I told the agent, once, that for this account brand should always be evaluated separately — brand ROAS is a defensive metric, not a growth signal, and recommendations should be built off non-brand performance. I never had to mention it again. Brand sits in its own bucket now, treated like the hedge it is.
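To see why the blended number misleads, here's a quick sketch. Only the 28x brand ROAS comes from the story above; the spend and revenue figures are hypothetical, and the structure is illustrative rather than ROAS Radar's actual data model.

```python
# Hypothetical campaign figures; only the 28x brand ROAS is from the example above.
campaigns = [
    {"name": "Brand",       "spend": 1_000, "revenue": 28_000},  # 28x, cheap clicks on your own name
    {"name": "Non-brand A", "spend": 9_000, "revenue": 27_000},  # 3.0x
    {"name": "Non-brand B", "spend": 5_000, "revenue": 10_000},  # 2.0x
]

def roas(rows):
    spend = sum(r["spend"] for r in rows)
    return sum(r["revenue"] for r in rows) / spend if spend else 0.0

blended = roas(campaigns)                                         # brand inflates this
non_brand = roas([r for r in campaigns if r["name"] != "Brand"])  # the growth signal

print(f"Blended ROAS:   {blended:.1f}x")    # 4.3x -- looks like genius
print(f"Non-brand ROAS: {non_brand:.1f}x")  # 2.6x -- what extra budget actually buys
```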

That's one correction. Now stack a few more of them and you start to see the shape of the product.

Offline conversion economics. A lead-gen client runs campaigns that show a $140 cost-per-lead in the platform. The real business doesn't work that way — leads close at 22%, average deal size is $4,200, and the money arrives 45 days later via a CRM import. I told the agent once. Every recommendation since then is weighted by actual economics, not platform-reported leads. "High CPA" stops meaning what the platform thinks it means, and starts meaning what the CFO thinks it means.
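The arithmetic the agent internalizes from that one correction is simple enough to show. The numbers are the ones from this example; the variable names are just for illustration, and the 45-day payment lag is ignored here.

```python
# Figures from the example: 22% close rate, $4,200 average deal, $140 platform CPL.
CLOSE_RATE = 0.22
AVG_DEAL_SIZE = 4_200
PLATFORM_CPL = 140

# Each platform-reported lead is worth close_rate * deal_size in eventual revenue.
expected_value_per_lead = CLOSE_RATE * AVG_DEAL_SIZE       # $924

# The "real" return on each dollar of lead cost, which the platform can't see:
effective_return = expected_value_per_lead / PLATFORM_CPL  # ~6.6x

print(f"Expected revenue per lead: ${expected_value_per_lead:,.0f}")
print(f"Return per $1 of lead spend: {effective_return:.1f}x")
```

A $140 CPL against $924 of expected revenue per lead is healthy, which is exactly the read a platform-side "High CPA" label would have buried.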

Product-level margin. The agent can see that SKU-A does $40K in revenue and SKU-B does $25K. What it cannot see is that SKU-A is a 6% margin loss-leader you run to acquire subscription customers and SKU-B is an 80% margin product where the real profit lives. Tell it once. The agent now evaluates performance through a profitability lens for that account, forever.
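In code, the one-time correction is just a margin column the platform can't see. A minimal sketch with the revenue figures and margins from above:

```python
# Revenue is visible in-platform; the margin column is the context you teach once.
skus = {
    "SKU-A": {"revenue": 40_000, "margin": 0.06},  # loss-leader for subscription acquisition
    "SKU-B": {"revenue": 25_000, "margin": 0.80},  # where the real profit lives
}

for name, s in skus.items():
    profit = s["revenue"] * s["margin"]
    print(f"{name}: ${s['revenue']:,} revenue -> ${profit:,.0f} gross profit")

# SKU-A: $40,000 revenue -> $2,400 gross profit
# SKU-B: $25,000 revenue -> $20,000 gross profit
# The revenue ranking flips the moment the margin lens is applied.
```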

Campaign intent. A campaign with a $3K/week budget and no conversion cap looks like a runaway problem. Until you tell the agent it's an intentional prospecting test — you're buying learnings, not conversions, and you'll know if it worked in six weeks. Now the agent stops flagging it every Monday and starts giving you the kind of diagnostic read you actually want on a test.
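One way to picture what gets stored is an intent annotation that short-circuits the runaway-spend check until the test's review date. The field names, dates, and threshold below are all hypothetical:

```python
from datetime import date

# Hypothetical annotation, taught once when you explained the test.
campaign = {
    "name": "Prospecting test",
    "weekly_budget": 3_000,
    "conversion_cap": None,         # uncapped -- would normally trip an alert
    "intent": "prospecting_test",   # buying learnings, not conversions
    "review_after": date(2025, 8, 15),
}

def should_flag_runaway(c, today):
    # A known test isn't a runaway problem until its review date has passed.
    if c["intent"] == "prospecting_test" and today <= c["review_after"]:
        return False
    return c["conversion_cap"] is None and c["weekly_budget"] >= 1_000

print(should_flag_runaway(campaign, today=date(2025, 7, 7)))  # False: no Monday nag
```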

Attribution preferences. The platform is on data-driven attribution, but this particular client reports up to their board on last-click because that's what their finance team has used for three years and is not planning to change. The agent learns which view is "the real one" for decisions and which view is reference information.
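Conceptually it's a per-account preference for which attribution view drives decisions and which is kept for reference. A sketch with hypothetical conversion counts:

```python
# Taught once: this account decides on last-click, keeps data-driven as reference.
account_context = {
    "decision_view": "last_click",
    "reference_view": "data_driven",
}

conversions = {"last_click": 212, "data_driven": 247}  # hypothetical monthly totals

primary = conversions[account_context["decision_view"]]
reference = conversions[account_context["reference_view"]]
print(f"Conversions: {primary} (last-click for decisions; DDA reference: {reference})")
```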

Seasonality that breaks the pattern. B2B SaaS budgets freeze in late December. A DTC swimwear brand's "bad" February is not a problem to be solved. A legal services client's August dip is just August. Institutional knowledge like this has lived in senior practitioners' heads for a generation. Tell the agent once per account, and the analysis stops fighting the calendar.
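You can think of taught seasonality as a per-account baseline the anomaly check respects. A toy sketch, with illustrative index values and revenue figures:

```python
# Expected revenue index by month for one account (1.0 = trailing average).
seasonal_index = {"Feb": 0.45, "Jun": 1.80, "Dec": 0.70}  # illustrative values

def is_anomalous(month, revenue, trailing_avg, tolerance=0.25):
    expected = trailing_avg * seasonal_index.get(month, 1.0)
    return abs(revenue - expected) / expected > tolerance

# A "bad" February at 48% of the trailing average sits inside the expected dip.
print(is_anomalous("Feb", revenue=48_000, trailing_avg=100_000))  # False
```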

Client red lines. "This client has explicitly rejected Performance Max three times, don't recommend it again." "Legal has told us we can't bid on competitor brand terms." "The CMO won't green-light anything above a $60K monthly budget." These are the kinds of things that, on a traditional platform, get forgotten on week five and re-proposed on week six, annoying everyone involved.
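Stored red lines behave like a hard filter applied to recommendations before they ever reach you. The rule names, budgets, and recommendation shapes here are hypothetical:

```python
# Taught once per account; structure is illustrative.
red_lines = {
    "rejected_tactics": {"performance_max"},          # rejected three times already
    "forbidden_keywords": {"competitor_brand_terms"}, # legal's rule
    "max_monthly_budget": 60_000,                     # the CMO's ceiling
}

recommendations = [
    {"tactic": "performance_max",  "keywords": set(),                      "budget": 20_000},
    {"tactic": "non_brand_search", "keywords": {"competitor_brand_terms"}, "budget": 30_000},
    {"tactic": "non_brand_search", "keywords": set(),                      "budget": 75_000},
    {"tactic": "non_brand_search", "keywords": set(),                      "budget": 45_000},
]

viable = [
    r for r in recommendations
    if r["tactic"] not in red_lines["rejected_tactics"]
    and not (r["keywords"] & red_lines["forbidden_keywords"])
    and r["budget"] <= red_lines["max_monthly_budget"]
]
print(viable)  # only the $45K non-brand search idea survives
```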

In most analytics workflows, this knowledge lives in someone's head, or in a Slack thread, or in a Google Doc nobody opens. It doesn't inform the tool doing the analysis. Every report starts from a blank slate, and every insight has to be mentally re-filtered by the human reading it.

With ROAS Radar, you teach the agent the context once, and it becomes part of how that account is analyzed from then on. Think of it like the best student you've ever worked with: tell them something one time, they've got it, and they bring it up unprompted the next time it's relevant.
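Under the hood, the pattern is account-scoped memory: corrections keyed to an account, persisted, and replayed into every later analysis. This is a minimal illustration of the idea, not ROAS Radar's actual implementation:

```python
from collections import defaultdict

class AccountMemory:
    """Toy sketch: corrections are stored per account and never expire."""

    def __init__(self):
        self._notes = defaultdict(list)  # account_id -> taught corrections

    def teach(self, account_id, note):
        self._notes[account_id].append(note)  # told once, kept permanently

    def context_for(self, account_id):
        return list(self._notes[account_id])  # injected into every analysis run

memory = AccountMemory()
memory.teach("acct-123", "Evaluate brand separately; build recs off non-brand.")
memory.teach("acct-123", "Leads close at 22%, avg deal $4,200, 45-day lag.")

print(memory.context_for("acct-123"))  # both corrections, every run, unprompted
```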

The compounding effect is where things get interesting. An account that's been in ROAS Radar for three months, with a dozen of these one-time corrections accumulated over that time, is being analyzed with more real-world context than any out-of-the-box tool could replicate. It's your knowledge, layered into the agent, quietly doing the work in the background.

Which is probably the right direction for AI — the tool shouldn't be smart in general, it should be smart about your account.