
In the last two posts I shared the setup for a simple safelist experiment and then walked through the results.
The idea was straightforward.
Instead of promoting an offer or trying to build my list, I asked visitors to do something very simple.
Click a button.
No opt-in form.
No sales pitch.
No reward.
Just curiosity.
Over the course of about a week, the page received 4,047 visits and 357 button clicks across 40 safelists.
The numbers themselves were interesting, but after spending some time looking through the data I started thinking about something a little deeper.
What exactly was this experiment measuring?
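For anyone who likes to see the arithmetic, here's a quick sketch of the overall numbers. Only the totals come from the experiment; the per-safelist breakdown isn't shown here.

```python
# Totals from the experiment: 4,047 visits and 357 button clicks
# spread across 40 safelists.
visits = 4047
clicks = 357
safelists = 40

# Overall interaction rate: what fraction of visitors clicked the button.
rate = clicks / visits
print(f"Overall interaction rate: {rate:.1%}")

# Average visits per safelist, just for a sense of scale.
print(f"Average visits per safelist: {visits / safelists:.0f}")
```

That works out to roughly an 8.8% interaction rate overall, though as the rest of this post shows, the rate varied a lot from one safelist to the next.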
What This Experiment Was Really Measuring
This wasn’t a conversion test.
It wasn’t a landing page test.
And it definitely wasn’t measuring sales or signups.
What it really measured was voluntary interaction.
Someone had to:
– land on the page
– read what was written
– understand the experiment
– decide to click the button
That’s a higher bar than the normal safelist browsing process.
Which means the clicks in this experiment probably say more about attention than anything else.
Different Safelists Encourage Different Behavior
One thing became very clear while looking at the results.
Safelists don’t all behave the same way.
The same splash page ran on 40 different platforms during the same time period.
Yet the engagement levels were dramatically different.
There are probably two main reasons for that.
Community Activity
Some safelists simply have more active communities.
More members are opening mailings and browsing the ads.
That naturally leads to more interaction.
Platform Design
The structure of the mailer itself can also influence how people behave.
Some platforms make it very easy to earn credits quickly. When that happens, people tend to move through pages faster.
Other systems encourage members to spend a little more time looking at what’s on the screen.
Both factors affect how ads are experienced.
The Big Standout
One result from the experiment stood out immediately.
My Daily Mailer produced a participation rate that was dramatically higher than anything else in the test.
There are probably a few reasons for that.
The community is very active, and the platform includes Lucky Letters, which are messages that sometimes contain prizes. That naturally encourages members to actually look at the pages they land on.
It’s also possible that my ad simply stood out more there since members are already familiar with me on that platform.
Whatever the reason, the difference was noticeable.
A Couple of Interesting Outliers
While reviewing the data, a couple of smaller mailers also caught my attention.
Send Circle and Viral URL didn’t send a lot of traffic during the experiment.
But the visitors they did send interacted with the page at a surprisingly high rate.
That’s always interesting to see.
Sometimes smaller communities can produce more attentive traffic simply because there are fewer members and people spend a little more time browsing the ads.
With a larger dataset it would be interesting to see if that pattern continues.
Traffic Volume vs Engagement
Another thing that stood out in the data was the relationship between traffic volume and engagement.
Some safelists sent a lot of visitors but relatively few clicks.
Others sent fewer visitors but a much higher percentage of interaction.
That doesn’t necessarily mean one type of traffic is better than the other.
It just shows that different communities interact with ads in different ways.
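To make that volume-versus-engagement distinction concrete, here's a small illustration. The mailer names and numbers below are made up for the example; they are not the actual per-safelist results from the experiment.

```python
# Hypothetical per-safelist numbers, purely to illustrate the pattern:
# one platform sends lots of visitors with few clicks, another sends
# fewer visitors who click at a much higher rate.
results = {
    "Mailer A": {"visits": 600, "clicks": 20},   # high volume, low engagement
    "Mailer B": {"visits": 80,  "clicks": 15},   # low volume, high engagement
}

for name, r in results.items():
    rate = r["clicks"] / r["visits"]
    print(f"{name}: {r['visits']} visits, {r['clicks']} clicks, {rate:.1%} engagement")
```

In this example, Mailer A sends 7-8 times the traffic but Mailer B's visitors click at several times the rate, which is exactly the kind of split the real data showed.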
Why I Enjoy Running Experiments Like This
One of the reasons I enjoyed this experiment so much is that it reminded me of something I used to do regularly.
Years ago I used to publish monthly safelist statistics showing which platforms were producing the most list signups.
Those posts were always fun because they showed real data from actual campaigns.
This experiment felt a little like returning to that idea, but from a different angle.
Instead of measuring signups, I was simply measuring interaction.
And sometimes that tells us just as much.
What I Might Experiment With Next
This experiment answered some questions, but it also created a few new ones.
For example:
– Would the results change if the experiment ran longer?
– Would traffic exchanges show similar behavior?
– What would happen if the page offered an incentive instead of pure curiosity?
I may explore some of those ideas in the future.
For now, I’m glad I ran this experiment.
It turned out to be a fun way to look at safelist traffic from a slightly different perspective.
And as always, I appreciate everyone who took a moment to participate.