When a client sends a request to a server across the Internet, a complicated series of network transactions is involved. A typical request path has the client sending the request to a local gateway, which in turn routes it through a sequence of routers, firewalls, and load balancers before it finally reaches the server. Each step, or “hop,” involves receiving the packet, examining its headers, making a routing or forwarding decision, and queuing the packet for transmission on the next link.
All of this takes time, so each hop introduces a delay. Network latency is the total time, usually measured in milliseconds, required for a server and a client to complete a network data exchange. Even if there are no intermediate hops, which is never the case for communications across the Internet, latency is still involved because the request has to traverse layers of software and hardware at each end.
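To see those per-hop delays for yourself, a traceroute run makes them visible. The sketch below is only a minimal illustration, assuming a Unix-like host with the traceroute utility installed; the destination host name is a placeholder.

```python
# Minimal sketch: show per-hop delay using the system traceroute utility.
# Assumes a Unix-like host with traceroute installed; "example.com" is a
# placeholder destination.
import subprocess

def per_hop_latency(host: str, max_hops: int = 15) -> str:
    """Run traceroute and return its output; each line reports one hop and its round-trip time."""
    result = subprocess.run(
        ["traceroute", "-n", "-q", "1", "-m", str(max_hops), host],
        capture_output=True,
        text=True,
        check=False,  # traceroute may exit non-zero if some hops don't respond
    )
    return result.stdout

if __name__ == "__main__":
    print(per_hop_latency("example.com"))
```

Each line of the output corresponds to one hop, so the way delay accumulates along the path is easy to see.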
There are two common ways to measure network latency: Round Trip Time (RTT), the time it takes for a request to travel from the client to the server and for the response to return to the client; and Time to First Byte (TTFB), the time from the moment the client sends the request until the first byte of the server’s response arrives.
If we’re concerned about how network latency affects application performance, then Round Trip Time is what we care about. If, on the other hand, we’re trying to optimize Internet of Things (IoT) transactions, Time to First Byte is usually the more important measurement.
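As a rough illustration of the difference between the two measurements, the sketch below approximates RTT as the time to complete a TCP handshake and TTFB as the time until the first byte of an HTTP response arrives. The host name is a placeholder and the numbers are only approximate; dedicated measurement tools do this far more precisely.

```python
# Minimal sketch of the two latency measurements discussed above.
# "example.com" is a placeholder host, not taken from the article.
import socket
import time

def round_trip_time(host: str, port: int = 80, timeout: float = 5.0) -> float:
    """Approximate RTT as the time to complete a TCP three-way handshake, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

def time_to_first_byte(host: str, port: int = 80, timeout: float = 5.0) -> float:
    """Time from sending a minimal HTTP request until the first response byte arrives, in milliseconds."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # block until the first byte of the response arrives
        return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"RTT:  {round_trip_time('example.com'):.1f} ms")
    print(f"TTFB: {time_to_first_byte('example.com'):.1f} ms")
```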
When it comes to real-world applications such as high-frequency stock trading, shaving even a millisecond off communications latency can give a trader a huge advantage. This is why, for example, Hibernia Atlantic (now acquired by GTT Atlantic) spent $300 million laying a 6,021 km (3,741 mile) fiber-optic link from New York to London to deliver a Round Trip Time of 59 milliseconds, 6 milliseconds less than the next-best link. It has been estimated that the reduced network latency could be worth close to an additional $100 million per year in profit to a large hedge fund.
Minimizing network latency means optimizing every element of the networking infrastructure. Even when ultra-high-performance hardware is deployed, optimizing software and protocols is the key, and Application Delivery Controllers (ADCs) provide a range of features that deliver these optimizations, including SSL/TLS offload, content caching, compression, and TCP connection optimization.
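Two of these optimizations are easy to illustrate in isolation. The sketch below, using placeholder data, shows how much compression can shrink a typical text payload and how a client socket can be tuned for lower latency; a real ADC applies these transparently in the data path rather than in application code.

```python
# Minimal sketch of two latency-related optimizations in plain Python:
# payload compression and TCP socket tuning. Illustrative only.
import gzip
import socket

def compression_savings(payload: bytes) -> float:
    """Return the fraction of bytes saved by gzip-compressing a payload."""
    compressed = gzip.compress(payload)
    return 1.0 - len(compressed) / len(payload)

def tuned_client_socket() -> socket.socket:
    """Create a TCP socket with latency-oriented options set."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small writes are sent immediately.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Keep idle connections alive so later requests skip the handshake round trip.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return sock

if __name__ == "__main__":
    html = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"  # placeholder payload
    print(f"gzip saves {compression_savings(html):.0%} of the response bytes")
```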
Along with load balancing and infrastructure health checks, A10 Networks Thunder® Series Application Delivery Controllers (ADCs) deliver advanced traffic management and optimization features, including SSL offload with Perfect Forward Secrecy (PFS) ciphers to move CPU-intensive SSL/TLS processing off the servers, as well as content caching, compression, and TCP optimization.
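To make the SSL offload idea concrete, here is a minimal sketch of a TLS-terminating proxy: it decrypts client traffic and relays plaintext to a backend, which is the essence of taking cryptographic work off the servers. The listener address, backend address, and certificate file names are placeholders, and this only illustrates the concept; it is not how a Thunder ADC is implemented.

```python
# Minimal sketch of SSL/TLS offload: terminate TLS at a front-end proxy and
# forward plaintext to a backend server. Addresses, ports, and certificate
# file names are placeholders.
import socket
import ssl
import threading

LISTEN = ("0.0.0.0", 8443)     # TLS front end (placeholder)
BACKEND = ("127.0.0.1", 8080)  # plaintext backend (placeholder)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass  # the other side went away; stop relaying
    finally:
        dst.close()

def handle(tls_conn: socket.socket) -> None:
    """Decrypt one client connection and relay it to the backend in the clear."""
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pipe, args=(backend, tls_conn), daemon=True).start()
    pipe(tls_conn, backend)

def main() -> None:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")  # placeholder certificate files
    with socket.create_server(LISTEN) as listener, \
            ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            try:
                conn, _addr = tls_listener.accept()  # TLS handshake happens here
            except (ssl.SSLError, OSError):
                continue  # a failed handshake should not stop the proxy
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

With modern OpenSSL builds, the default server cipher selection used here prefers ECDHE key exchange, which is what provides the forward-secrecy property mentioned above.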