How Internet Connection Testing Works
Our Internet Connection Test uses a TCP connectivity check to determine whether your device can successfully reach external servers on the internet. Specifically, the tool attempts to open a TCP socket connection to Cloudflare's public DNS resolver at 1.1.1.1 on port 53. This target was chosen because Cloudflare's infrastructure is among the most reliable and globally distributed on the internet, making it an ideal connectivity indicator. If the connection succeeds, your device has a working path to the public internet through your router, ISP, and upstream backbone providers.
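The check described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the function name `check_internet` is hypothetical:

```python
import socket

def check_internet(host="1.1.1.1", port=53, timeout=3.0):
    """Attempt a TCP connection to Cloudflare's public DNS resolver
    on port 53. Returns True if the handshake completes, False if the
    connection is refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("online" if check_internet() else "offline")
```

Using TCP rather than ICMP has a practical advantage: plain socket connections need no elevated privileges and are rarely blocked by firewalls, whereas ICMP echo requests often are.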
The tool measures latency by recording the time elapsed between initiating the TCP handshake and receiving the server's SYN-ACK response. This round-trip time (RTT) represents the total time for a packet to travel from your device to the target server and back. It is important to distinguish between related networking terms: ping typically refers to ICMP echo requests used to test reachability, latency is the delay itself, usually measured in milliseconds, bandwidth is the maximum data throughput your connection can support (measured in Mbps), and throughput is the actual data transfer rate you experience under real-world conditions.
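The timing approach can be sketched as follows. This is an illustrative version under the same assumptions as above (the function name `measure_rtt` is not part of the tool); it times the TCP connect call, which returns once the handshake completes:

```python
import socket
import time

def measure_rtt(host="1.1.1.1", port=53, timeout=3.0):
    """Return the TCP handshake round-trip time in milliseconds:
    the elapsed time from sending the SYN to the connect() call
    returning after the server's SYN-ACK arrives."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; handshake is complete
    return (time.perf_counter() - start) * 1000
```

Note that this measures handshake time only, so it approximates network RTT without any application-layer processing on the server side.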
Several factors affect your measured connection speed and latency. Physical distance between your device and the target server introduces propagation delay. The number of network hops (routers your data passes through) adds processing time at each node. Network congestion during peak usage hours increases queueing delays at overloaded routers. ISP throttling may intentionally limit speeds for certain traffic types. Additionally, local factors such as WiFi interference, outdated router firmware, and the quality of your physical network connection all contribute to the final latency measurement.
Understanding Network Latency
Network latency is composed of four distinct delay types that combine to determine total round-trip time. Propagation delay is the time a signal takes to traverse the physical medium, whether light through fiber optic cable or electrical signals through copper; in fiber, signals travel at roughly two-thirds the speed of light, about 200,000 km/s. A transatlantic connection covering 6,000 km therefore introduces roughly 30ms of propagation delay in each direction. Serialization delay is the time required to push all bits of a packet onto the network link, which varies based on packet size and link bandwidth. Processing delay occurs at each router as it examines packet headers, performs lookup operations, and makes forwarding decisions. Queueing delay is the time a packet spends waiting in router buffers when traffic volume exceeds the outgoing link capacity.
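The propagation figure above follows directly from distance divided by signal speed. A quick check of the arithmetic, assuming the commonly cited ~200,000 km/s for light in fiber:

```python
# Signals in fiber travel at roughly 2/3 the speed of light in a vacuum.
SPEED_IN_FIBER_KM_PER_S = 200_000

def propagation_delay_ms(distance_km):
    """One-way propagation delay over a fiber link, in milliseconds."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# A 6,000 km transatlantic link: 6000 / 200000 s = 0.03 s = 30 ms one way,
# contributing about 60 ms to the round-trip time before any other delays.
print(propagation_delay_ms(6000))  # → 30.0
```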
Content Delivery Networks (CDNs) like Cloudflare, Akamai, and Fastly reduce latency by caching content at edge servers distributed across hundreds of global locations. When you request a webpage, the CDN serves it from the nearest edge node rather than the distant origin server, dramatically reducing propagation delay. Network administrators use traceroute to diagnose latency by mapping each hop between source and destination, revealing exactly where delays accumulate along the path.
Jitter measures the variation in latency over time, which is particularly important for real-time applications. A connection with 50ms average latency and low jitter (2-3ms variation) delivers smoother video calls and gaming than a connection with 30ms average latency but high jitter (50ms+ variation). VoIP, video conferencing, and online gaming all rely on consistent packet arrival times. High jitter causes audio dropouts, video artifacts, and rubber-banding in games. Jitter buffers can compensate for moderate variation, but excessive jitter fundamentally degrades the user experience regardless of average latency.
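One simple way to quantify jitter from a series of latency samples is the mean absolute difference between consecutive measurements (a simplified form of the interarrival-jitter idea in RFC 3550, which uses a smoothed estimator). A small sketch, with made-up sample data:

```python
def jitter_ms(samples):
    """Mean absolute difference between consecutive latency samples (ms).
    Steady arrival times yield a small value; erratic ones, a large value."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

steady = [50, 51, 49, 50, 52]   # ~50ms average latency, low variation
spiky  = [30, 80, 25, 90, 28]   # lower average latency, wild swings

print(jitter_ms(steady))  # → 1.5
print(jitter_ms(spiky))   # → 58.0
```

Note that the second series has the lower average latency but far higher jitter, which is exactly the case where video calls and games feel worse despite a "faster" connection.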