How We Trained the Algorithm to Ignore Us
There's a particular kind of irony that only becomes visible in hindsight.
For most of the last decade, mobile engagement teams have been rewarded for speed, volume, and "optimization." We learned that to keep up, we had to send more, test faster, find the ceiling and push toward it. More notifications meant more opens, more opens meant more sessions, and more sessions (typically) meant more revenue. The flywheel was spinning, so why slow it down?
What nobody quite anticipated was who else would end up paying attention.
Reading this before March 26? Reserve your seat for a live webinar tackling exactly this topic!
How the industry talked itself into sending more
The early days of push notifications truly felt like a gift, didn’t it? For the first time, brands had a direct line to users that didn't depend on an ad buy. Permission was the price of entry, and once you had it, the channel was yours to use.
So teams used it, often thoughtfully at first, then increasingly by default. Weekly sends became daily sends. Segmentation may have gotten more sophisticated, but frequency climbed alongside it. A/B testing made everything feel purposeful, even when volume was the real driver. The industry developed an entire vocabulary around optimization (inbox placement, engagement scoring, send cadence) that was, in practice, mostly about finding smarter ways to… send more.
It worked well enough for long enough that it became a default mental model too. And like most defaults, nobody questioned it until something external forced the issue.
...and so the mobile engagement playbook began eating itself
As a result of all that digital bombardment, we got a lot better at ignoring things that weren't truly meant for us. We started swiping without reading, leaving notifications stacked, and building habits around Do Not Disturb that effectively carved entire brands out of our day. Sometimes, rightfully so!
Average smartphone users in the US now receive around 46 push notifications daily.
So our platforms began to adapt. Android 13 moved to an explicit opt-in permission model, iOS introduced notification scheduling and priority modes, and more recently, Apple Intelligence began summarizing, grouping, and prioritizing notifications before users ever see them.
Operating systems are now making active decisions about which messages deserve a user's attention, and they're learning from behavior to do it. Which is where the irony starts to sting a little.
The filtering systems being built to protect attention didn't emerge from nowhere. Whether or not the causation is direct, the correlation is hard to ignore: the volume era generated the exact behavior data that teaches today's AI-filtered platforms what "unwanted" looks like.
The new layer between your message and your user
The shift we're talking about isn't theoretical either. Nearly half of mobile marketers are already concerned about AI-driven filtering deciding which messages surface, and nearly a quarter of them say it's actively affecting deliverability.
This is new territory for most teams, because the old variables still feel like the right levers. Sharper copy, better-timed sends, tighter audience definitions, and relentless A/B testing still matter. But, as senders, we must stay conscious of the larger context they operate in: whether the message was warranted by something the user actually did.
The teams that are figuring this out are winning by focusing on “messages of consequence,” those tied to a cart left sitting, a streak about to break, or a personal milestone. Behavior-triggered campaigns consistently outperform standard sends because they arrive when intent already exists rather than trying to manufacture it.
The performance gap is larger than most teams expect: behavior-triggered mobile pushes reach nearly 5x the click-through rate of standard targeted sends, and more than 9x untargeted broadcasts.
How to think about messaging when filters are in the room
The teams that built the current playbook were smart, data-driven, and right for their moment. Of course volume worked when attention was abundant and filtering was minimal. And it makes sense that optimization worked so well when the primary constraint was creative quality, not algorithmic gatekeeping.
But the teams pulling ahead in 2026 have fundamentally changed their starting point. Not “how do we get in front of users?” but “what are our users trying to do, and does our message help them get there faster?” Every decision after those questions (what you build, what you measure, what you deprioritize) tends to follow naturally.
Done well, it tends to show up in the same places: higher retention, revenue that actually holds, and a customer relationship that compounds over time.
Join us live on March 26 for a conversation with mobile practitioners from Zynga, Candivore, and more to talk through what behavior-driven engagement actually looks like in practice. The honest, in-the-weeds version of what we discussed here.
The conversation coincides with the release of our 2026 State of Customer Engagement Report, which will unpack everything in even more detail. The timing couldn't be better.
Save your spot