Program
Day 1 (Mon 10 March, 2025; times in UTC)
- Keynote 1: David Clark
- The Razor's Edge: IPv6 Extension Headers Survivability - Justin Iurman (University of Liege), Benoit Donnet (University of Liege)
Abstract: While IPv6 was standardized in the 1990s, only the last decade has seen growth in its global adoption. In addition to dealing with IPv4 address exhaustion, IPv6 comes with a mechanism, called the IPv6 Extension Header (IPv6 EH), that allows the protocol to be more flexible and extensible. In this paper, we investigate how IPv6 EHs are processed in the network. In particular, we focus on the survivability of IPv6 EHs, i.e., the fact that an IPv6 EH traverses the Internet and arrives unmodified at the destination. We first design experiments in a controlled environment, testing different IPv6 EHs and sizes on routers from various vendors. Then, we confront our observations with several measurement campaigns between vantage points hosted by different Cloud Providers (CPs) around the world, and we compare them to the responses received from a survey of operators. Our results show that the survivability of IPv6 EHs is quite limited (around 50%) and is a consequence of operators' policies, with some Autonomous Systems being responsible for most of the IPv6 EH drops. The measurement tool and collected data will be provided to the research community upon paper acceptance.
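As a rough illustration of the kind of probe such a survivability study relies on, the following minimal Python/scapy sketch sends the same TCP SYN with and without a Destination Options extension header and checks which one is answered. It is not the authors' tool; the target address is a placeholder and raw-socket privileges are assumed.

```python
# Minimal sketch, assuming scapy and root privileges; TARGET is a placeholder.
from scapy.all import IPv6, IPv6ExtHdrDestOpt, TCP, sr1

TARGET = "2001:db8::80"   # hypothetical web server

def probes(dst):
    plain   = IPv6(dst=dst) / TCP(dport=80, flags="S")
    with_eh = IPv6(dst=dst) / IPv6ExtHdrDestOpt() / TCP(dport=80, flags="S")
    return plain, with_eh

plain, with_eh = probes(TARGET)
r1 = sr1(plain,   timeout=2, verbose=False)
r2 = sr1(with_eh, timeout=2, verbose=False)
# If the baseline elicits a reply but the EH probe does not, the EH likely
# did not survive the path.
print("baseline answered:", r1 is not None, "| EH probe answered:", r2 is not None)
```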
- A closer look at IPv6 IP-ID behavior in the wild - Fengyuan Huang (National University of Defense Technology), Yifan Yang (National University of Defense Technology), Zhenzhong Yang (National University of Defense Technology), Bingnan Hou (National University of Defense Technology), Yingwen Chen (National University of Defense Technology), Zhiping Cai (National University of Defense Technology)
Abstract: The IP Identification (IP-ID) field, which provides fragmentation and reassembly support for the network layer, is included in an extension header in IPv6, unlike in IPv4, where it is a fixed field. By sending packets such as ICMPv6 Packet Too Big messages, it is possible to induce fragmented responses and thereby retrieve IP-IDs from remote IPv6 hosts.
In this study, we propose a framework for active probing to obtain the IP-ID sequences of IPv6 targets. By probing over 20 million IPv6 addresses, we found that whether IPv6 hosts can be induced to fragment depends primarily on their device type and security policy. Furthermore, we built a classifier with an accuracy of 98.4% to distinguish different IP-ID behaviors. We discovered that 46.1% of addresses still use predictable IP-IDs, which leaves them susceptible to various network attacks, such as IP spoofing and session hijacking.
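The induction step described in the abstract can be approximated with scapy. The sketch below is illustrative only: the target address is a placeholder, the classification heuristic is a toy, and fragment reassembly is ignored; it is not the paper's framework.

```python
# Minimal sketch, assuming scapy, raw-socket privileges, and a responsive target.
from scapy.all import (IPv6, ICMPv6EchoRequest, ICMPv6PacketTooBig,
                       IPv6ExtHdrFragment, sr1, send)

TARGET = "2001:db8::1"   # hypothetical target address

def harvest_ipids(target, count=8):
    """Induce fragmentation and collect IP-IDs from the Fragment header."""
    # 1. Elicit a large echo reply so we have a packet to "quote".
    reply = sr1(IPv6(dst=target) / ICMPv6EchoRequest(data=b"A" * 1200),
                timeout=2, verbose=False)
    if reply is None:
        return []
    # 2. Claim the reply exceeded a small MTU so the host starts fragmenting.
    send(IPv6(dst=target) / ICMPv6PacketTooBig(mtu=1000) / reply.copy(),
         verbose=False)
    # 3. Re-probe; replies should now carry a Fragment extension header whose
    #    Identification field is the IP-ID we want.
    ipids = []
    for _ in range(count):
        r = sr1(IPv6(dst=target) / ICMPv6EchoRequest(data=b"A" * 1200),
                timeout=2, verbose=False)
        if r is not None and IPv6ExtHdrFragment in r:
            ipids.append(r[IPv6ExtHdrFragment].id)
    return ipids

def classify(ipids):
    """Toy heuristic: constant, incremental (predictable), or random-looking."""
    if len(ipids) < 3:
        return "unknown"
    deltas = [(b - a) & 0xFFFFFFFF for a, b in zip(ipids, ipids[1:])]
    if all(d == 0 for d in deltas):
        return "constant"
    if all(0 < d <= 16 for d in deltas):
        return "incremental (predictable)"
    return "random-looking"

if __name__ == "__main__":
    seq = harvest_ipids(TARGET)
    print(seq, classify(seq))
```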
- Understanding IPv6 Aliases and Detection Methods - Mert Erdemir (Georgia Institute of Technology), Frank Li (Georgia Institute of Technology), Paul Pearce (Georgia Institute of Technology)
Abstract: Recent advancements in IPv6 address discovery methods provide new capabilities for Internet measurements. However, these measurement techniques are encountering a significant challenge unique to IPv6: large IPv6 prefixes that appear responsive on all addresses. The sheer sizes of these so-called IPv6 aliases preclude treating each responsive address as a distinct device, and thus these prefixes can confound measurements of IPv6 hosts. Although prior work proposed initial methods for identifying aliased regions, there has been limited characterization of IPv6 aliases and limited investigation into the resulting impact on alias detection methods. In this work, we explore IPv6 aliasing in depth, characterizing the properties of IPv6 aliases and exploring improvements to alias detection. We first analyze the state-of-the-art public IPv6 alias dataset, evaluating the accuracy and consistency of the alias resolutions. We uncover substantial misclassifications, motivating our development of a distinct high-confidence dataset of IPv6 aliases that enables us to correctly identify the distribution of aliased prefix sizes, detect real-world inconsistencies, and characterize the effects of different alias detection parameters. In addition, we show how small differences in the alias detection methods significantly impact address discovery (i.e., target generation algorithms). Our findings lay the foundation for how alias detection can be performed more effectively and accurately in the future.
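A toy version of the pseudo-random responsiveness test commonly used for alias detection is sketched below; the probe callback, sample size, and decision rule are placeholders for illustration, not the paper's parameters.

```python
# Minimal sketch; is_responsive() must be supplied by the caller (e.g. a ping).
import ipaddress
import random

def sample_addresses(prefix, n=16, seed=0):
    """Draw n pseudo-random addresses inside an IPv6 prefix."""
    net = ipaddress.ip_network(prefix)
    rng = random.Random(seed)
    return [net[rng.randrange(net.num_addresses)] for _ in range(n)]

def looks_aliased(prefix, is_responsive, n=16):
    """Flag a prefix as aliased if improbably many random addresses answer."""
    addrs = sample_addresses(prefix, n)
    hits = sum(1 for a in addrs if is_responsive(str(a)))
    return hits == n   # all random addresses responsive => likely aliased

# Example with a stub probe that pretends everything answers:
print(looks_aliased("2001:db8:1234::/48", lambda a: True))
```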
- Marionette Measurement: Measurement Support under the PacketLab Model - Tzu-Bin Yan (University of Illinois Urbana-Champaign), Zesen Zhang (UC San Diego), Bradley Huffaker (CAIDA / UC San Diego), Ricky Mok (CAIDA / UC San Diego), kc claffy (CAIDA / UC San Diego), Kirill Levchenko (University of Illinois Urbana-Champaign)
Abstract: The PacketLab Internet measurement framework is designed to facilitate vantage point (VP) sharing for active Internet measurements. The core idea behind PacketLab is to have experimenters instruct remote VPs to perform a series of monitored low-level network operations to conduct measurements, which would reduce the costs and security concerns of VP sharing. Despite these benefits, PacketLab users have to update their existing tools to adapt to the new measurement model, where available VP capabilities and the method of access differ from traditional models such as shell access to VPs. This change in the measurement model introduces limitations in measurement feasibility that merit deeper analysis. We undertook this analysis, based on a survey of recent Internet measurement studies, followed by a result accuracy evaluation of PacketLab implementations of selected representative measurements. Our results showed that the PacketLab measurement model allows the implementation of a major portion (40 out of 54 studies, 74%) of the distributed active measurements in the studies we surveyed. Further evaluation also showed that the PacketLab model not only accurately supports a diverse set of measurements ranging from latency, throughput, and network path to other non-timing data categories, but also measurements requiring precise spatial and temporal coordination. To assist with porting non-timing data measurements to PacketLab, we also introduce a new porting tool, pktwrap, which allows existing measurement executables to communicate over PacketLab without modification.
- A Tree in a Tree: Measuring Biases of Partial DNS Tree Exploration - Florian Steurer (Max Planck Institute for Informatics), Anja Feldmann (Max Planck Institute for Informatics), Tobias Fiebig (Max Planck Institute for Informatics)
Abstract: The Domain Name System (DNS) is a cornerstone of the Internet. As such, it is often the subject or the means of network measurement studies. Over the past decades, the Internet measurement community has gathered many lessons learned and captured them in widely available measurement toolchains such as ZDNS and OpenINTEL, as well as in many papers. However, for feasibility, these tools often restrict DNS tree exploration, use caching, and employ other intricate methods for reducing query load. This potentially hides many corner cases and unforeseen problems. In this paper, we present a system capable of exploring the full DNS tree. We gather 87 TB of DNS data covering 812M domains with over 85B queries over 40 days. Using this, we reproduce four earlier studies that used feasibility- and time-optimized DNS datasets. Our results demonstrate the need for care in selecting which limitations on the DNS perspective can be accepted for a given research question and which may alter findings and conclusions.
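To make the contrast with shortcut-taking resolvers concrete, here is an illustrative sketch (not the paper's system) that follows the delegation chain for a single name step by step using dnspython; 198.41.0.4 is a.root-servers.net, and the next-hop selection is deliberately naive.

```python
# Minimal sketch, assuming dnspython; ignores TCP fallback, DNSSEC, and retries.
import dns.message, dns.query, dns.rdatatype, dns.resolver

ROOT = "198.41.0.4"   # a.root-servers.net

def walk(name, max_depth=10):
    """Follow the delegation chain for `name`, printing every referral."""
    server = ROOT
    for _ in range(max_depth):
        q = dns.message.make_query(name, dns.rdatatype.A)
        resp = dns.query.udp(q, server, timeout=3)
        if resp.answer:                       # authoritative answer reached
            return resp.answer
        # The authority section holds the NS set for the next zone cut.
        ns_names = [rr.target.to_text()
                    for rrset in resp.authority
                    if rrset.rdtype == dns.rdatatype.NS
                    for rr in rrset]
        if not ns_names:
            return None
        print("referral ->", sorted(ns_names))
        # Naive next hop: resolve the first NS name with the stub resolver.
        server = dns.resolver.resolve(ns_names[0], "A")[0].to_text()
    return None

walk("example.com.")
```

A full-tree system would instead fan out to every listed nameserver at every zone cut rather than following a single path.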
- An Integrated Active Measurement Programming Environment - Matthew Luckie (CAIDA / UC San Diego), Shivani Hariprasad (CAIDA / UC San Diego), Raffaele Sommese (University of Twente), Brendon Jones (CAIDA / UC San Diego), Ken Keys (CAIDA / UC San Diego), Ricky Mok (CAIDA / UC San Diego), K Claffy (CAIDA / UC San Diego)
Abstract: Active Internet measurement is not a zero-risk activity, and access to Internet measurement vantage points typically requires navigating trust relationships among actors involved in deploying, operating, and using the infrastructure. Operators of vantage points (VPs) must balance VP capability against who gets access: the more capable a vantage point, the riskier it is to allow access.
We propose an integrated active measurement programming environment that: (1) allows a platform operator to specify the measurements that a user can run, which allows the platform operator to communicate to the VP's host what their vantage point will do, and (2) provides users with reference implementations of measurement functions that act as building blocks to more complex measurements.
We first review active measurement infrastructures and how technical and usability goals have evolved over the years. We prototype and deploy an integrated active measurement programming environment on an existing measurement infrastructure, and illustrate its potential with several case studies.
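The idea of operator-specified, composable measurement primitives might look roughly like the sketch below. All names and the allowlist mechanism are hypothetical and purely illustrative; this is not the paper's environment or API.

```python
# Entirely hypothetical sketch: an operator allowlist plus two reference
# "building block" measurements composed into a larger one.
import subprocess

ALLOWED = {"ping", "traceroute"}          # operator-specified allowlist

def run_block(name, target):
    if name not in ALLOWED:
        raise PermissionError(f"measurement '{name}' not permitted on this VP")
    cmd = {"ping": ["ping", "-c", "3", target],
           "traceroute": ["traceroute", "-m", "15", target]}[name]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def latency_then_path(target):
    """Composite measurement built from two approved primitives."""
    return {"ping": run_block("ping", target),
            "path": run_block("traceroute", target)}

print(latency_then_path("example.com").keys())
```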
Day 2 (Tue 11 March, 2025; times in UTC)
- HTTP Conformance vs. Middleboxes: Identifying Where the Rules Actually Break Down - Ilies Benhabbour (King Abdullah University of Science and Technology (KAUST)), Mahmoud Attia (King Abdullah University of Science and Technology (KAUST)), Marc Dacier (King Abdullah University of Science and Technology (KAUST))
Abstract: HTTP is the foundational protocol of the World Wide Web, designed with a strict set of specifications that developers are expected to follow. However, real-world implementations often deviate from these standards. In this study, we not only confirm these inconsistencies but build on previous work to reveal a deeper issue: the impact of network middleboxes. Using a novel framework, we demonstrate that HTTP server conformance cannot be accurately assessed in isolation, as middleboxes can alter requests and responses in transit. We conducted 47 conformance tests on 12 popular proxy implementations. Our results show that none of them are fully compliant with the relevant RFCs, and there is significant variation in their behaviors. This inconsistency stems from ambiguities in the RFCs, which fail to provide clear guidelines for these middleboxes. In some cases, the implementation choices made can lead to vulnerabilities.
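The differential-testing idea can be illustrated with a small sketch that sends one deliberately borderline request (duplicate Content-Length headers) directly to an origin and through a forward proxy, then compares the status lines. The hostnames and proxy port are placeholders, and the test case is our own example, not one of the paper's 47 tests.

```python
# Minimal sketch; origin.example and proxy.example are placeholders.
import socket

RAW_REQUEST = (
    b"GET / HTTP/1.1\r\n"
    b"Host: origin.example\r\n"
    b"Content-Length: 0\r\n"
    b"Content-Length: 0\r\n"   # duplicate header: a classic conformance corner case
    b"Connection: close\r\n\r\n"
)

def send_raw(host, port, payload):
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

direct = send_raw("origin.example", 80, RAW_REQUEST)
# Through a proxy, the request line carries the absolute URI.
via_proxy = send_raw("proxy.example", 3128,
                     RAW_REQUEST.replace(b"GET / ", b"GET http://origin.example/ "))
print("status lines differ:",
      direct.split(b"\r\n")[0] != via_proxy.split(b"\r\n")[0])
```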
- A First Look at Cookies Having Independent Partitioned State - Maximilian Zöllner (Universität des Saarlandes), Anja Feldmann (Max-Planck-Institut für Informatik), Ha Dao (Max-Planck-Institut für Informatik)
Abstract: The introduction of Cookies Having Independent Partitioned State (CHIPS) marks a significant step toward balancing user privacy with essential web functionalities. CHIPS isolates cookie data within specific contexts, preventing cross-site tracking while maintaining the functionality of websites. However, the adoption of CHIPS in real-world web usage remains largely unexplored. In this paper, we investigate the state of CHIPS deployment, providing an overview of how CHIPS has been integrated into web ecosystems since its introduction. Leveraging the HTTP Archive dataset, we first find that the adoption of partitioned cookies remains slow, with most domains still relying on non-partitioned cookies, though a slight increase in both types is observed starting in early 2024, coinciding with Google's phase-out of third-party cookies for 1% of users. This sudden onset of the third-party cookie phase-out led some domains to adopt CHIPS in a haphazard way, causing them to overlook important configuration requirements and resulting in improper settings due to limited awareness of specific guidelines such as SameSite=None and Secure. In addition, we observe a positive signal for privacy as third-party trackers begin adopting partitioned cookies, with a noticeable increase starting in early 2024. However, as of September 2024, only a small number of trackers have fully transitioned to using partitioned cookies (up to 0.5% of tracking domains), while some continue to rely on both partitioned and non-partitioned cookies (up to 3.1% of tracking domains), highlighting that the shift is still in its early stages, especially for tracking domains. Finally, we observe a stark asymmetry among the early-adopter tracking domains: some have already added partitioned cookies to all sites where they are present, while others, notably Google's doubleclick.com, have deployed partitioned cookies on only around 5% of the pages where they are present.
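The configuration requirement the abstract alludes to can be shown with a small sketch: a Partitioned cookie should also carry Secure and, for cross-site use, SameSite=None. The checker below is a simplified illustration of that rule, not the paper's measurement code.

```python
# Minimal sketch: flag Set-Cookie headers that use Partitioned without the
# attributes CHIPS guidance expects. Header strings are illustrative.
def chips_misconfigured(set_cookie: str) -> bool:
    attrs = {a.strip().lower() for a in set_cookie.split(";")[1:]}
    if "partitioned" not in attrs:
        return False                      # not a CHIPS cookie at all
    return not ("secure" in attrs and "samesite=none" in attrs)

print(chips_misconfigured("id=abc; Path=/; Secure; SameSite=None; Partitioned"))  # False
print(chips_misconfigured("id=abc; Path=/; Partitioned"))                         # True
```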
- Web Crawl Refusals: Insights From Common Crawl - Mostafa Ansar (University of Twente), Anna Sperotto (University of Twente), Ralph Holz (University of Münster)
Abstract: Web crawlers are an indispensable tool for collecting research data. However, they may be blocked by servers for various reasons. This can reduce their coverage. In this early-stage work, we investigate server-side blocks encountered by Common Crawl. We analyze page contents to cover a broader range of refusals than previous work. We construct fine-grained regular expressions to identify refusal pages with precision, finding that at least 1.68% of sites in a Common Crawl snapshot exhibit a form of explicit refusal. Significant contributors include large hosters. Our analysis categorizes the forms of refusal messages, from straight blocks to challenges and rate-limiting responses. We are able to extract the reasons for nearly half of the refusals we identify. We find an inconsistent and even incorrect use of HTTP status codes to indicate refusals. Examining the temporal dynamics of refusals, we find that most blocks resolve within one hour, but also that 80% of refusing domains block every request by Common Crawl. Our results show that website blocks deserve more attention as they have a relevant impact on crawling projects. We also conclude that standardization to signal refusals would be beneficial for both site operators and web crawlers.
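The content-based detection step can be illustrated with a handful of regular expressions; the patterns below are simplified stand-ins for the paper's fine-grained rule set.

```python
# Minimal sketch: match common refusal wording in page bodies.
import re

REFUSAL_PATTERNS = [
    re.compile(r"access\s+denied", re.I),
    re.compile(r"you\s+have\s+been\s+blocked", re.I),
    re.compile(r"rate\s*limit(ed|\s+exceeded)", re.I),
    re.compile(r"verify\s+you\s+are\s+(a\s+)?human", re.I),   # challenge pages
]

def classify_refusal(body: str):
    """Return the first matching refusal pattern, or None if no match."""
    for pat in REFUSAL_PATTERNS:
        if pat.search(body):
            return pat.pattern
    return None

print(classify_refusal("<html><title>Access Denied</title>..."))
```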
- To adopt or not to adopt L4S-compatible congestion control? Understanding performance in a partial L4S deployment - Fatih Berkay Sarpkaya (New York University), Fraida Fund (New York University), Shivendra Panwar (New York University)
Abstract: With few exceptions, the path to deployment for any Internet technology requires that there be some benefit to unilateral adoption of the new technology. In an Internet where the technology is not fully deployed, is an individual better off sticking to the status quo, or adopting the new technology?
This question is especially relevant in the context of the Low Latency, Low Loss, Scalable Throughput (L4S) architecture, where the full benefit is realized only when compatible protocols (scalable congestion control, accurate ECN, and flow isolation at queues) are adopted at both endpoints of a connection and also at the bottleneck router.
In this paper, we consider the perspective of the sender of an L4S flow using scalable congestion control, without knowing whether the bottleneck router uses an L4S queue, or whether other flows sharing the bottleneck queue are also using scalable congestion control.
We show that whether the sender uses TCP Prague or BBRv2 as the scalable congestion control, it cannot be assured that it will not harm or be harmed by another flow sharing the bottleneck link. We further show that the harm is not necessarily mitigated when a scalable flow shares a bottleneck with multiple classic flows. Finally, we evaluate the approach of BBRv3, where scalable congestion control is used only when the path delay is small, with ECN feedback ignored otherwise, and show that it does not solve the coexistence problem.
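For context on the sender-side choice discussed above, the sketch below shows how a Linux sender opts a single connection into a scalable congestion control such as TCP Prague via the per-socket TCP_CONGESTION option. The availability of a "prague" kernel module is an assumption (it ships with the L4S patches, not mainline defaults), and the destination is a placeholder.

```python
# Minimal sketch, Linux-only (TCP_CONGESTION); falls back to cubic if prague
# is not available on the host.
import socket

def connect_with_cc(host, port, cc=b"prague", fallback=b"cubic"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc)
    except OSError:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, fallback)
    s.connect((host, port))
    print("using", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    return s

sock = connect_with_cc("example.com", 80)
```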
- A Deep Dive into LEO Satellite Topology Design Parameters - Wenyi Morty Zhang (University of California, San Diego), Zihan Xu (Carnegie Mellon University), Sangeetha Abdu Jyothi (UC Irvine, VMware Research)
Abstract: Low Earth Orbit (LEO) satellite networks are rapidly gaining traction today. Although several real-world deployments exist, our preliminary analysis of LEO topology performance with the soon-to-be operational Inter-Satellite Links (ISLs) reveals several interesting characteristics that are difficult to explain based on our current understanding of topologies. For example, a real-world satellite shell with a low density of satellites offers better latency performance than another shell with nearly double the number of satellites. In this work, we conduct an in-depth investigation of LEO satellite topology design parameters and their impact on network performance while using the ISLs. In particular, we focus on three design parameters: the number of orbits in a shell, the inclination of orbits, and the number of satellites per orbit. Through an extensive analysis of real-world and synthetic satellite configurations, we uncover several interesting properties of satellite topologies.
Notably, there exist thresholds for the number of satellites per orbit and the number of orbits below which the latency performance degrades significantly. Moreover, network delay between a pair of traffic endpoints depends on the alignment of the satellites' orbits (i.e., their inclination) with the geographic locations of the endpoints.
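To make the design parameters concrete, the sketch below wires up the standard "+Grid" ISL topology (each satellite links to its intra-orbit neighbors and to the same index in adjacent orbits) as a function of the number of orbits and satellites per orbit. The wiring convention and the example shell size are illustrative; the paper may use different configurations.

```python
# Minimal sketch of +Grid ISL wiring over two of the design knobs above.
def plus_grid_neighbors(num_orbits, sats_per_orbit):
    """Return the 4 ISL neighbours of every satellite, keyed by (orbit, index)."""
    links = {}
    for o in range(num_orbits):
        for s in range(sats_per_orbit):
            links[(o, s)] = [
                (o, (s + 1) % sats_per_orbit),        # intra-orbit, ahead
                (o, (s - 1) % sats_per_orbit),        # intra-orbit, behind
                ((o + 1) % num_orbits, s),            # adjacent orbit, one way
                ((o - 1) % num_orbits, s),            # adjacent orbit, the other
            ]
    return links

# e.g. a Starlink-like shell: 72 orbits x 22 satellites per orbit
grid = plus_grid_neighbors(72, 22)
print(len(grid), grid[(0, 0)])
```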
- Analyzing the Effect of an Extreme Weather Event on Telecommunications and Information Technology: Insights from 30 Days of Flooding - Leandro M. Bertholdo (UFRGS), Renan Barreto Paredes (FURG), Gabriela de Lima Marin (NIC.BR), César A. H. Loureiro (IFRS), Pedro de Botelho Marcos (FURG), Milton Kaoru Kashiwakura (NIC.BR)
Abstract: In May 2024, weeks of severe rainfall in Rio Grande do Sul, Brazil, caused widespread damage to infrastructure, impacting over 400 cities and 2.3 million people. This study presents the construction of comprehensive telecommunications datasets during this climatic event, encompassing Internet measurements, fiber cut reports, and Internet Exchange routing data.
By correlating network disruptions with hydrological and operational factors, the dataset offers insights into the resilience of fiber networks, data centers, and Internet traffic during critical events. For each scenario, we investigate failures related to the Information and Communication Technology infrastructure and highlight the challenges faced when its resilience is critically tested. Preliminary findings reveal trends in connectivity restoration, infrastructure vulnerabilities, and user behavior changes. These datasets and preliminary analyses aim to support future research on disaster recovery strategies and the development of robust telecommunications systems.
- Keynote 2: "Young digital natives: a forgotten cybersecurity category?", Anna Sperotto
Day 3 (Wed 12 March, 2025; times in UTC)
- Detecting Traffic Engineering from public BGP data - Omar Darwich (LAAS-CNRS), Kevin Vermeulen (LAAS-CNRS), Cristel Pelsser (UCLouvain)
Abstract: Routing is essential to the functioning of the Internet. However, more and more functions are being added to BGP, the inter-AS routing protocol. In addition to providing connectivity for best-effort service, it carries flow specification rules and blackholing signals to react to DDoS attacks, routes for virtual private networks, and IGP link-state database information, among other uses. One such addition is the tweaking of BGP advertisements to engineer traffic, i.e., to direct it onto preferred paths. In this paper we aim to estimate the impact of traffic engineering (TE) on the BGP ecosystem. We develop a method to detect the impact in space, that is, to find which traffic engineering technique impacts which prefix and which AS. We also design a methodology to pinpoint TE events in order to quantify the impact over time. We find that, on average, a BGP vantage point sees 35% of the announced prefixes impacted by TE. Quantifying the impact of TE on BGP stability, we find that TE events contribute 39% of BGP updates and 44% of BGP convergence time, and that prefixes belonging to hypergiants contribute the most to TE.
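One of the simplest TE techniques to spot in BGP data is AS-path prepending; the sketch below detects consecutive AS repetitions in an AS_PATH string. The parsing is illustrative (real MRT/BMP feeds would be read with a proper library such as pybgpstream), and it covers only this one technique, not the paper's full detection method.

```python
# Minimal sketch: find ASes that prepend themselves in an AS_PATH string.
def detect_prepending(as_path: str):
    """Return {asn: extra_copies} for ASes that appear consecutively repeated."""
    hops = as_path.split()
    prepends = {}
    i = 0
    while i < len(hops):
        j = i
        while j + 1 < len(hops) and hops[j + 1] == hops[i]:
            j += 1
        if j > i:
            prepends[hops[i]] = j - i
        i = j + 1
    return prepends

print(detect_prepending("64500 64500 64500 3356 1299 64496"))  # {'64500': 2}
```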
- Global BGP Attacks that Evade Route Monitoring - Henri Birge-Lee (Princeton University), Maria Apostolaki (Princeton University), Jennifer Rexford (Princeton University)
Abstract: As the deployment of comprehensive Border Gateway Protocol (BGP) security measures is still in progress, BGP monitoring continues to play a critical role in protecting the Internet from routing attacks. Fundamentally, monitoring involves observing BGP feeds to detect suspicious announcements and taking defensive action. However, BGP monitoring relies on seeing the malicious BGP announcement in the first place. In this paper, we develop a novel attack that can hide itself from all BGP monitoring systems we tested while potentially affecting the majority of the Internet. The attack involves launching a sub-prefix hijack with the RFC-specified NO_EXPORT community attached to prevent networks with the malicious route installed from sending the route to BGP monitoring systems. While properly configured and deployed RPKI can prevent this attack and /24 prefixes are not viable targets of this attack, we examine the current route table and find that 38% of prefixes in the route table could still be targeted. We also ran experiments in four tier-1 networks and found all networks we studied could have a route installed that was hidden from global BGP monitoring. Finally, we propose a mitigation that significantly improves the robustness of the BGP monitoring ecosystem. Our paper aims to raise awareness of this issue and offer guidance to providers to protect against such attacks.
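The two viability conditions mentioned in the abstract (the victim prefix must admit a globally routable more-specific, and RPKI must not invalidate that more-specific) can be sketched as a simple check. The ROA handling below is a stub over made-up data, not real RPKI state or the paper's methodology.

```python
# Minimal sketch, assuming IPv4 prefixes and a caller-supplied list of
# (roa_prefix, max_length) pairs.
import ipaddress

def subprefix_hijackable(prefix: str, roas: list[tuple[str, int]]) -> bool:
    net = ipaddress.ip_network(prefix)
    if net.version == 4 and net.prefixlen >= 24:
        return False                      # no more-specific would propagate
    for roa_prefix, max_length in roas:
        roa = ipaddress.ip_network(roa_prefix)
        # A covering ROA whose maxLength equals the prefix length makes any
        # more-specific announcement RPKI-invalid.
        if net.version == roa.version and net.subnet_of(roa) \
                and max_length <= net.prefixlen:
            return False
    return True

print(subprefix_hijackable("203.0.112.0/23", []))                        # True
print(subprefix_hijackable("203.0.112.0/23", [("203.0.112.0/23", 23)]))  # False
```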
- Characterizing Anycast Flipping: Prevalence and Impact - Xiao Zhang (Duke University, ThousandEyes/Cisco), Shihan Lin (Duke University), Tingshan Huang (Akamai Technologies), Bruce Maggs (Duke University), Kyle Schomp (ThousandEyes/Cisco), Xiaowei Yang (Duke University)
Abstract: A 2016 study by Wei and Heidemann showed that anycast routing of DNS queries to root name servers is fairly stable, with only 1% of RIPE Atlas vantage points “flipping” back and forth between different root name server sites. Continuing this study longitudinally, however, we observe that among the vantage points that collected data continuously from 2016 to 2024 the fraction that experience flipping has increased from 0.8% to 3.2%. Given this apparent increase, it is natural to ask how much anycast flipping impacts the performance of everyday tasks such as web browsing. To measure this impact, we established a mock web page incorporating many embedded objects on an anycast-based CDN and downloaded the page from geographically distributed BrightData vantage points. We observed that datagrams within individual TCP flows almost always reach the same site, but different flows may flip to different sites. We found that 2,015 (10.9%) of 18,530 vantage points suffer from very frequent flipping (i.e., more than 50% of flows are directed to a site other than the most common one for that vantage point) and that 1,170 of these (6.3% of the total) suffer a median increase in round-trip time larger than 50ms when directed to a site other than the most common. We then used Mahimahi to emulate downloads of popular web sites, randomly applying the above-mentioned flipping probability (50%) and flipping latency penalty (50ms) to CDN downloads. We found, for example, that there was a median increase in the First Contentful Paint metric ranging, across 3 vantage points and 20 web sites, from 20.7% to 52.6% for HTTP/1.1 browsers and from 18.3% to 46.6% for HTTP/2 browsers. These results suggest that for a small, but not negligible portion of clients, the impact of anycast flipping on web performance may be significant.
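The two per-vantage-point statistics used above (the fraction of flows that land on a site other than the most common one, and the extra median RTT those flipped flows pay) are straightforward to compute; the sketch below uses made-up input data purely to show the shape of the calculation.

```python
# Minimal sketch; flows is a list of (site_id, rtt_ms) tuples for one vantage point.
from collections import Counter
from statistics import median

def flipping_stats(flows):
    sites = Counter(site for site, _ in flows)
    home, _ = sites.most_common(1)[0]
    home_rtts    = [rtt for site, rtt in flows if site == home]
    flipped_rtts = [rtt for site, rtt in flows if site != home]
    flip_fraction = len(flipped_rtts) / len(flows)
    penalty = (median(flipped_rtts) - median(home_rtts)) if flipped_rtts else 0.0
    return flip_fraction, penalty

demo = [("AMS", 12), ("AMS", 13), ("FRA", 41), ("AMS", 12), ("FRA", 44)]
print(flipping_stats(demo))   # (0.4, 30.5): 40% flipped, ~30 ms extra
```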
- An Empirical Evaluation of Longitudinal Anycast Catchment Stability - Remi Hendriks (University of Twente), Bernhard Degen (University of Twente), Bas Palinckx (University of Twente), Raffaele Sommese (University of Twente), Roland van Rijswijk-Deij (University of Twente)
Abstract: Anycast is widely used to improve the availability and performance of, e.g., the DNS and content delivery. To use anycast one simply announces the same prefix from multiple points of presence (PoPs). BGP then takes care of routing clients to the "nearest" PoP. While seemingly simple, managing an anycast service is not without challenges. Factors outside operator control, such as remote peering and hidden MPLS tunnels, may route clients to suboptimal PoPs in terms of latency.
To successfully manage anycast, operators map catchments: the set of prefixes that reach each PoP of a service. Due to the dynamic nature of inter-domain routing, catchments change over time. While earlier work has looked at catchment stability in the short term and in a coarse-grained manner, we lack a detailed view on catchment stability over time. Understanding this is crucial for operators as it helps schedule catchment measurements and plan interventions using traffic engineering to redistribute traffic over PoPs.
In this work, we put long-term catchment stability on an empirical footing. Using an industry-grade anycast testbed with 32 globally distributed PoPs, we continuously map catchments for both IPv4 and IPv6 over a 6-month period and study catchment stability at different timescales, ranging from days to weeks and months. We show catchments are very stable in the short term, with 95% of prefixes routing to the same PoP on a one-week scale, and over 99% on a day-by-day basis. We also show, however, that sudden routing changes can have a major impact on catchments. Based on our longitudinal results, we make recommendations on the frequency with which to measure catchments.
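The stability metric described above reduces to comparing catchment snapshots; the sketch below computes the share of prefixes whose catchment PoP is unchanged between two snapshots, using made-up data.

```python
# Minimal sketch; each snapshot maps prefix -> PoP identifier.
def catchment_stability(snapshot_a: dict, snapshot_b: dict) -> float:
    common = snapshot_a.keys() & snapshot_b.keys()
    if not common:
        return float("nan")
    same = sum(1 for p in common if snapshot_a[p] == snapshot_b[p])
    return same / len(common)

day1 = {"192.0.2.0/24": "lhr", "198.51.100.0/24": "iad", "203.0.113.0/24": "syd"}
day2 = {"192.0.2.0/24": "lhr", "198.51.100.0/24": "fra", "203.0.113.0/24": "syd"}
print(catchment_stability(day1, day2))   # 0.666...: two of three prefixes stable
```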
- Partnërka in Crime: Characterizing Deceptive Affiliate Marketing Offers - Victor Le Pochat (DistriNet, KU Leuven), Cameron Ballard (New York University), Lieven Desmet (DistriNet, KU Leuven), Wouter Joosen (DistriNet, KU Leuven), Damon McCoy (New York University), Tobias Lauinger (New York University)
Abstract: The deceptive affiliate marketing ecosystem enables a variety of online scams causing consumers to lose money or personal data. In this model, affiliates promote deceptive products and services on behalf of merchants in exchange for a commission, mediated by affiliate networks. We monitor the ecosystem holistically by taking the vantage point of affiliates and collecting ground truth from 23 aggregators that list deceptive products and services available for promotion across scam types and affiliate networks. Using our novel longitudinal data set, we characterize the ecosystem by taxonomizing the 9 main categories of deceptive products and services composing the ecosystem, and describing the main tactics used to mislead consumers. We quantify the extent of the nearly 450,000 offers in the ecosystem and the differences in the value that is attached to different types of scams, monetization models, and countries. Finally, we identify core affiliate networks and analyze longitudinal trends to track the dynamics of the ecosystem over time. The more complete coverage provided by our novel data set enables not only a broader understanding of the ecosystem, but also adds insights and metadata for developing earlier, data-driven interventions to protect consumers.
- Characterizing the Networks Sending Enterprise Phishing Emails - Elisa Luo (UC San Diego and Barracuda Networks), Liane Young (Columbia University), Grant Ho (University of Chicago), M. H. Afifi (Barracuda Networks), Marco Schweighauser (Barracuda Networks), Ethan Katz-Bassett (Columbia University), Asaf Cidon (Columbia University and Barracuda Networks)
Abstract: Phishing attacks on enterprise employees present one of the most costly and potent threats to organizations. We explore an understudied facet of enterprise phishing attacks: the email relay infrastructure behind successfully delivered phishing emails. We draw on a dataset spanning one year across thousands of enterprises, billions of emails, and over 800,000 delivered phishing attacks. Our work sheds light on the network origins of phishing emails received by real-world enterprises, differences in email traffic we observe from networks sending phishing emails, and how these characteristics change over time.
Surprisingly, we find that over one-third of the phishing email in our dataset originates from highly reputable networks, including Amazon and Microsoft. Their total volume of phishing email is consistently high across multiple months in our dataset, even though the overwhelming majority of email sent by these networks is benign. In contrast, we observe that a large portion of phishing emails originate from networks where the vast majority of emails they send are phishing, but their email traffic is not consistent over time. Taken together, our results explain why no singular defense strategy, such as static blocklists (which are commonly used in email security filters deployed by organizations in our dataset), is effective at blocking enterprise phishing. Based on our offline analysis, we partnered with a large email security company to deploy a classifier that uses dynamically updated network-based features. In a production environment over a period of 4.5 months, our new detector was able to identify 3-5% more enterprise email attacks that were previously undetected by the company's existing classifiers.
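The contrast drawn above between reputable high-volume senders and dedicated phishing networks suggests two simple network-level signals, a sending network's phishing ratio and its absolute phishing volume; the sketch below computes both over toy counts and is not the company's classifier.

```python
# Minimal sketch; counts maps ASN -> (phishing_emails, benign_emails).
def network_features(counts):
    feats = {}
    for asn, (phish, benign) in counts.items():
        total = phish + benign
        feats[asn] = {"phish_ratio": phish / total, "phish_volume": phish}
    return feats

toy = {64500: (1200, 9_000_000),   # reputable network: tiny ratio, high volume
       64501: (800, 40)}           # dedicated network: huge ratio, bursty volume
for asn, f in network_features(toy).items():
    print(asn, f)
```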
- A Large-Scale Study of the Potential of Multi-Carrier Access in the 5G Era - Fukun Chen (Northeastern University), Moinak Ghoshal (Northeastern University), Enfu Nan (Northeastern University), Phuc Dinh (Northeastern University), Imran Khan (Northeastern University), Z. Jonny Kong (Purdue University), Y. Charlie Hu (Purdue University), Dimitrios Koutsonikolas (Northeastern University)
Abstract: Despite much promise, numerous recent studies have shown that 5G coverage remains sporadic and its performance is often suboptimal, leading to degraded QoE for high-throughput, low-latency applications. While two alternate approaches to multi-carrier access, link selection and link aggregation, have been proposed and shown to potentially enhance performance, their actual performance benefit in real-world deployments remains unclear. In this paper, we conduct an extensive measurement campaign involving multiple cross-country driving trips over a duration of 12 months, covering a total of more than 8,000 km, while simultaneously measuring the performance of the three major US mobile operators, and explore the potential of multi-carrier access to improve throughput and latency.
Our measurements show that there exists a substantial amount of diversity in cellular network performance across different operators at a given time and location in terms of both throughput and RTT. Our trace-driven analysis shows that multi-carrier access techniques have the potential to provide significant performance gains over the single-path throughput/RTT of the worst-performing operator.
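The trace-driven upper bound on link selection discussed above can be sketched as follows: at every time step, an ideal scheme picks the best operator, and the gain is expressed relative to the worst single operator. The traces here are synthetic Mbps samples, not the paper's data.

```python
# Minimal sketch; traces maps operator -> time-aligned throughput samples (Mbps).
def link_selection_gain(traces):
    steps = list(zip(*traces.values()))
    best_of_all = sum(max(step) for step in steps)            # ideal link selection
    worst_single = min(sum(samples) for samples in traces.values())
    return best_of_all / worst_single

demo = {"op_a": [120, 5, 300], "op_b": [80, 240, 60], "op_c": [10, 90, 150]}
print(round(link_selection_gain(demo), 2), "x gain over the worst single operator")
```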
- 5G Performance: A Multidimensional Variability Analysis - Varshika Srinivasavaradhan (University of California, Santa Barbara), Jiayi Liu (University of California, Santa Barbara), Elizabeth M. Belding (University of California, Santa Barbara)
Abstract: 5G networks have been broadly touted as a revolution in cellular performance. However, these networks have significant architectural, spectrum, and physical layer options, such that the delivered performance can be variable. The disparity in smartphone hardware and software platforms adds another layer of performance uncertainty. Our goal in this work is to characterize the impact of these features on 5G performance. To do so, we analyze a dataset of nearly 1.75 million crowdsourced Ookla® Speedtest Intelligence® cellular network measurements over three years and eight U.S. cities. We employ a novel approach by grouping Speedtest results based on both performance metrics and their deviation, while also accounting for spatial distribution and frequency band characteristics. By using statistical distance measures, we quantify the impact of multiple PHY layer and device-specific features across these multidimensional groups. We complement our in-the-wild analysis with a controlled study to validate our findings. We observe that PHY layer parameters, such as channel quality indicator and signal strength, are the primary drivers of performance variability within each frequency range. However, between frequency ranges, user equipment hardware emerges as the dominant factor, highlighting that the devices themselves play a critical role in determining whether users can fully utilize 5G capabilities. This underscores the importance of advancing device hardware to keep pace with the rapid evolution of network technologies.
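As a small illustration of the "statistical distance between groups" idea, the sketch below compares two synthetic throughput distributions with SciPy's 1-D Wasserstein distance; the grouping criteria and numbers are placeholders, not the paper's.

```python
# Minimal sketch, assuming SciPy is installed; samples are synthetic Mbps values.
from scipy.stats import wasserstein_distance

group_low_cqi  = [45, 60, 72, 80, 95, 110]
group_high_cqi = [180, 220, 240, 300, 310, 365]

print(wasserstein_distance(group_low_cqi, group_high_cqi))
```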