The next phase of AI will not be defined only by bigger models. It will be defined by where those
models run.


AI is moving out of centralized data centers to the edge: enterprise sites, regional clouds, cell
towers, and mobile networks. Now that mobile devices account for roughly 64% of web traffic [1], the mobile edge is taking center stage.


The Network is Not Ready
Edge networks are messy. Wireless scheduling, handoffs, shared spectrum, virtualized
infrastructure, and crowded backhaul all introduce jitter, packet loss, and unpredictable delay.
These are not rare failure conditions. They are normal operating conditions that AI and other
applications make worse with bursty, irregular, and often high-volume data transmissions.

Today’s congestion control algorithms, including CUBIC and BBR, were not designed for this
network environment. They misread small amounts of jitter and packet loss as signs of
congestion and throttle throughput even when the network still has usable capacity. The result
is slower transfers, unstable performance, wasted infrastructure, and an edge experience that
doesn’t meet the expectations set by the hardware vendors.
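To see why noise-triggered backoff caps throughput, here is a minimal sketch of a loss-reactive AIMD sender, not CUBIC or BBR themselves, with illustrative units and an assumed per-round random loss probability standing in for wireless noise:

```python
import random

def aimd_throughput(loss_rate, link_capacity=100.0, rounds=10_000, seed=1):
    """Simulate a loss-based AIMD sender: grow the window by 1 unit per
    round on success, halve it on any loss. Random (non-congestion) loss
    still triggers the multiplicative decrease."""
    rng = random.Random(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        total += min(cwnd, link_capacity)
        if rng.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)           # back off as if congested
        else:
            cwnd = min(cwnd + 1.0, link_capacity)
    return total / rounds

for p in (0.0, 0.01, 0.02, 0.05):
    print(f"loss {p:4.0%} -> avg throughput {aimd_throughput(p):6.1f} / 100 units")
```

The link never gets slower in this model; only the loss signal changes. Yet a few percent of random loss keeps the sender's average rate well below the capacity that is sitting idle.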


For Mobile Device Users, It’s Also a Battery Life Problem
Cellular radios stay in a high-power connected state during a transfer, including transfers
prolonged by transient jitter and packet loss that have nothing to do with congestion. Then, after
the transfer ends, the radio remains awake during a post-transfer inactivity period before
dropping back to a lower-power state. That lingering radio tail means inefficient transfers caused
by even minor network noise do more than waste time. They keep the device burning power
after the user thinks the work is done.
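A back-of-envelope energy model makes the radio-tail point concrete. The power draws and tail duration below are illustrative assumptions, not measurements from any specific device or network:

```python
# Back-of-envelope model of the radio energy cost of one transfer.
# All power and timing figures are illustrative assumptions, not
# measurements from any particular handset or carrier.

ACTIVE_POWER_W = 1.2   # assumed radio power while transferring
TAIL_POWER_W = 1.0     # assumed power during the post-transfer tail
TAIL_SECONDS = 10.0    # assumed inactivity timer before low-power state

def transfer_energy_joules(payload_mb, goodput_mbps):
    """Energy = active time at full power + the fixed radio tail."""
    transfer_s = payload_mb * 8 / goodput_mbps
    return ACTIVE_POWER_W * transfer_s + TAIL_POWER_W * TAIL_SECONDS

clean = transfer_energy_joules(50, goodput_mbps=40)   # healthy transport
noisy = transfer_energy_joules(50, goodput_mbps=10)   # throttled by misread loss
print(f"clean: {clean:.0f} J, throttled: {noisy:.0f} J "
      f"({noisy / clean:.1f}x the energy for the same bytes)")
```

Same bytes, same radio: the only difference is how long the transport layer keeps the radio in its high-power state, and under these assumptions the throttled transfer costs more than double the energy.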


This matters commercially. Battery life is one of the most visible parts of the mobile experience
and a core satisfaction metric. Recent surveys show roughly 80% of users prioritize battery life
when choosing a device [2]. When the network makes phones work harder, users do not blame
congestion control algorithms. They blame the device, the application, or the carrier.


The good news is that the fix does not have to start with every handset. A transport-layer
improvement deployed at the RAN edge could boost performance and battery life for all users
without requiring mobile device software updates or hardware changes. One edge-side
deployment of the right solution could deliver broad user impact regardless of the application.


Packet loss makes the case even stronger. At the cell edge, even a few percentage points of
packet loss can cause today’s congestion control to overreact. The network may still have
capacity, but the transport layer misreads the signal and collapses throughput.
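The sensitivity to loss can be put in rough numbers with the well-known Mathis et al. approximation for loss-limited TCP throughput, rate ≈ (MSS / RTT) · C / √p. The RTT, MSS, and constant below are illustrative choices:

```python
from math import sqrt

def mathis_throughput_mbps(loss_rate, rtt_s=0.05, mss_bytes=1460, c=1.22):
    """Mathis et al. approximation for loss-limited TCP throughput:
    rate ~ (MSS / RTT) * C / sqrt(p). Parameter values are illustrative:
    50 ms RTT, standard 1460-byte MSS."""
    return (mss_bytes * 8 / rtt_s) * c / sqrt(loss_rate) / 1e6

for p in (0.0001, 0.001, 0.01, 0.03):
    print(f"loss {p:7.2%}: ~{mathis_throughput_mbps(p):6.1f} Mbps ceiling")
```

Under these assumptions, moving from 0.01% to 1% loss cuts the achievable ceiling by a factor of ten, regardless of how much raw capacity the radio link actually has.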


Jitter makes it worse. As link speeds increase, even small timing variations can distort the
signals congestion control depends on. The faster the network gets, the more senseless it
becomes to let tiny packet timing variations dictate major rate decisions.
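A small sketch shows why jitter hurts more as links get faster. Rate-based senders in the BBR family take delivery-rate samples of the form delivered bytes over a measured interval; the figures below are illustrative, not BBR's actual implementation:

```python
def rate_estimate_mbps(bytes_delivered, true_interval_s, jitter_s):
    """One delivery-rate sample, as a rate-based estimator might take it:
    delivered bytes over the measured ack interval. Ack-timing jitter
    shifts the measured interval; all values are illustrative."""
    measured = true_interval_s + jitter_s
    return bytes_delivered * 8 / measured / 1e6

# The same +0.5 ms of ack jitter on a 64 KB delivery, two link speeds:
slow = rate_estimate_mbps(64_000, true_interval_s=0.050, jitter_s=0.0005)   # ~10 Mbps link
fast = rate_estimate_mbps(64_000, true_interval_s=0.0005, jitter_s=0.0005)  # ~1 Gbps link
print(f"slow link estimate: {slow:.1f} Mbps (true ~10.2)")
print(f"fast link estimate: {fast:.1f} Mbps (true ~1024.0)")
```

Half a millisecond of jitter is a rounding error on a slow link but halves the bandwidth estimate on a fast one: the faster the link, the shorter the true interval, and the larger the same absolute jitter looms in every rate decision.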


What’s Needed
The next generation of edge infrastructure needs a transport-layer solution that stops confusing
noise for congestion. It should pace traffic from a stable foundation, tolerate harmless jitter,
distinguish wireless loss from true congestion, and avoid the weaknesses that limit CUBIC,
BBR, and other congestion control algorithms in real mobile network conditions.


These challenges are not new, but Edge AI is taking them to the next level. It will not scale on
clean-lab assumptions. Edge AI has to run over the network users actually have: lossy, jittery,
mobile, and shared.


The edge is ready. The network still has to catch up. With the right transport-layer solution, it
can.


Watch this space!


Get early access to what’s next in Edge AI networking.


Notes

  1. https://explodingtopics.com/blog/mobile-internet-traffic
  2. https://wtop.com/consumer-news/2026/03/battery-life-a-top-priority-for-phone-buyers-tech-survey-find