#marketing-paid
Thread
Do folks have strong preferences on Meta creative testing structure these days? For example, if I have a new batch of ads weekly, is it less disruptive to the algo to drop those ads into one existing testing ad set (and only start a new one once you’ve hit the 50-ad limit), or to create a new ad set with that batch of creatives in a testing CBO campaign?
"is it less disruptive to the algo to drop those ads into one existing testing ad set" - that’s exactly what we do.
We have a few different product categories. Each category has its own single CBO campaign. In that CBO, we only have one ad set with all creatives. New creatives are launched in that same ad set as well.
Main reason is that we’d struggle to exit the learning phase outside Q4 ($200+ AOV), so we try to consolidate as much as possible.
Ever since switching from a multi-ad-set structure to a single-ad-set structure, our performance has been way better.
Another thing that I believe has contributed to better performance is launching fewer creatives. Fewer, but better quality. Bigger swings, etc. This helps us get more meaningful data & learnings
Watch the roast from @Charles Tichenor IV from 100 to the end
We were also following the same structure as Andres. However, even after adding exclusions, the campaign was spending on existing customers & engaged audiences.
After speaking with a Meta rep, I discovered that we can add strict exclusions by "Further limit the reach of your ads".
After we switched to this, the campaign is now spending on new audiences.
As a guy who has spent $1B on ads
Lemme lend my 2cents
Please don’t add exclusions
And I assure you, you are testing way too many ads
Why test weekly?
Strong performance doesn’t come from undermining how the machine works in ways that make actionable insight impossible, raise costs, and make incremental reach more difficult
Unless you’re spending $100k+ a day
New ads each month is likely all you need
It’s disingenuous to say you’ve spent $1b on ads
Awesome thanks all – appreciate the gut check. I’m working with a brand that’s trying to spend upwards of $50k/day in a really high CAC category, and hasn’t done enough creative testing to date. Realistically I think we’ll be able to test more like 2x/month than weekly, which might be better anyway algo-wise.
@Mike K how so?
I'm not saying that like an agency talking about the total managed spend across all employees and clients
I've been the guy hitting the buttons on over $1M a day, and I was doing that over 10 years ago
it's not my money
that's fair
but then the way most agencies and media buyers measure their work is invalid
@Amanda Berg - when you say they haven't done enough creative testing
that's totally possible
What is the single most valuable problem in the funnel of how the machine is using the ads... that you need to solve
can we do a simple test to improve that weak link
and move systemically through improvements
and push for scale when that happens
also, if the CAC is an issue
are we optimizing for a New Customers event and evaluating the ads against that target event
my point here is that more new ads is not a sustainable and predictive way to move the machine towards addressing the mix of return vs new customers (which is how we measure CAC)
and if we can't track this, or we aren't...
no amount of testing will solve the problem
because the problem is a lack of the proper data
Here is what I see working best for most (not all) brands at the moment:
Just use 7-day click tracking.
Skip all the view-through stuff - it's not reliable for measurement anyway.
Higher AOV brands need more time because people don't impulse buy $500+ products.
Get your basics right first.
Make sure your Conversions API is set up and your purchase event has an Event Match Quality score of 8-9.
Don't worry about fancy first-party data tools unless you're spending $500K+ monthly.
Two campaign types, that's it:
Testing Campaigns (ABO):
Use Ad Set Budget Optimization for testing new creative and partnership content.
Lowest cost bidding.
Throw boosted posts in here too. This is where you find your winners.
Scaling Campaigns (ASC):
Advantage+ Shopping/Sales Campaigns for proven winners only.
Use ROAS goal bidding.
Group by product categories or similar price points.
Each campaign should focus on one product type.
How to organize ad sets depends on your brand...
Utility brands (supplements, gear):
Organize by benefit angles.
Mental clarity ads, energy ads, recovery ads - each gets its own ad set.
Aesthetic brands (jewelry, fashion):
Organize by collections.
Spring collection, summer collection, limited edition - separate ad sets.
Keep targeting simple: Age, gender, country.
That's it.
Let ASC handle placements automatically.
Upload both 4x5 and 9x16 creative.
Use benefit-focused headlines with "Shop Now" CTAs.
When to move ads from testing to scaling:
Take your top 10-20% revenue drivers OR anything performing 1+ standard deviation above average.
Don't worry about overlap; Meta's smart enough to handle it.
You can run the same ad in both campaigns.
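That promotion rule is easy to sanity-check against an ad-level export. A minimal sketch, assuming you have ad names with revenue figures (all names and numbers below are made up for illustration):

```python
import statistics

# Hypothetical ad-level revenue export: (ad_name, revenue) pairs.
ads = [
    ("ugc_testimonial_v1", 12400),
    ("founder_story_v2", 9800),
    ("benefit_energy_v1", 3100),
    ("unboxing_v3", 2700),
    ("collection_spring_v1", 1900),
    ("static_offer_v2", 1100),
]

revenues = [rev for _, rev in ads]
mean = statistics.mean(revenues)
stdev = statistics.stdev(revenues)

# Rule from the thread: promote the top ~20% of revenue drivers,
# OR anything 1+ standard deviation above the average.
top_n = max(1, round(len(ads) * 0.20))
top_revenue = {name for name, _ in sorted(ads, key=lambda a: a[1], reverse=True)[:top_n]}

winners = sorted(
    name for name, rev in ads
    if name in top_revenue or rev >= mean + stdev
)
print(winners)  # → ['ugc_testimonial_v1']
```

With a small, skewed data set like this, the two criteria often pick the same ads; the standard-deviation branch mostly matters when revenue is spread more evenly across a larger batch.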
For very expensive products ($1,000+ AOV):
Add a lower funnel ABO campaign targeting 180-day add-to-carts and engagers.
Use catalog ads and your best offer based creative.
For established brands:
Run a reactivation campaign.
Upload your lapsed customer list to an ABO.
Use "new & improved" and "special offer" creative.
Name everything properly so you can actually analyze what's working.
Include product category, concept, offer type, creative format, and whether it's new or an iteration.
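If you encode those fields with a consistent separator, any report can split the name back into its parts. A minimal sketch of one such convention (the field order, separator, and example values here are assumptions, not a standard):

```python
# Hypothetical convention: category_concept_offer_format_iteration,
# joined with underscores so reporting tools can split them apart later.
FIELDS = ["category", "concept", "offer", "format", "iteration"]

def build_ad_name(category, concept, offer, fmt, iteration):
    """Join the analysis fields into a single ad name."""
    return "_".join([category, concept, offer, fmt, iteration])

def parse_ad_name(name):
    """Recover the fields from a name so spend/revenue can be grouped by any of them."""
    return dict(zip(FIELDS, name.split("_")))

name = build_ad_name("supplements", "energy", "bogo", "video-9x16", "new")
print(name)                            # → supplements_energy_bogo_video-9x16_new
print(parse_ad_name(name)["concept"])  # → energy
```

The only real rule is that the separator never appears inside a field value (note the hyphen in "video-9x16"); otherwise the parse is ambiguous.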
Why would we not add view through?
What’s the advantage of keeping that data away from the machine
Attribution isn’t about credit… it’s about teaching the machine
Why would you test in a different campaign than you spend meaningful amounts?
Wouldn’t you want to scale what works?
Why spend money to test what you actually want to test, while running a bunch of ads to the bottom of the funnel on small budgets to get fake winners?
ASC isn’t a thing anymore
It’s all one campaign
A+ is a state of the campaign (this changed a few months ago)
ROAS is based on last-click attribution, and doesn’t account for volume of profit, only whichever ad was lucky enough to get credit and show a good margin… margins aren’t money
Why are we promoting groups of products?
Why make a nickel when you can make a dime… just to keep the machine dumb and make 2nd transactions more difficult?
Why does organizing ad sets by concept have any impact?
Why would you move anything that works?
Having a system setup to undermine wins feels like a production line that gets in the way of progress
Campaigns don’t talk to each other
Overlap isn’t the issue, it’s the complete compromising of the data set and attribution confidence
Overlap is ad set to ad set in a CBO
Or ad to ad if you are STILL running ABO (god forbid)
AOV has nothing to do with targeting