
TECHNICAL WHITE PAPER

ADX3000 Application Delivery Platform White Paper


1.Overview

Network application services built on multiple platforms are burgeoning, ushering in a new era in which traditional server load balancing concepts and routing-based traffic processing methods can no longer meet users' ever-higher demands on application delivery technology.

Bridging the gap between networks and applications, the DPtech ADX3000 Application Delivery Platform addresses the growing number of users and their rising requirements for applications. It provides users with fast, secure access and uninterrupted, around-the-clock stability, while delivering high performance and reducing operating costs. With strong processing power, comprehensive application delivery capabilities, and abundant interfaces, the ADX3000 is suited to data centers and network egresses across industries and operators. It also delivers notable service value, improving service reliability and responsiveness and supporting flexible business expansion.

2.Service-oriented application delivery technology

The server application delivery technologies consist of multiple technologies such as application load balancing, application health check, application content scheduling, application delivery optimization, customized application delivery, and resilient server maintenance. Rational adoption of these technologies may substantially improve the performance and quality of application delivery while reducing manual maintenance and troubleshooting costs.

2.1.Basic concepts

◆ Virtual service
Application services provided by the application delivery device are called virtual services. Each is configured on the device with a unique identifier consisting of an IP address, protocol, and port number. When a user's access request reaches the device through the network, the virtual service is matched against these identifiers, and the device then distributes the request to a real service according to the configured delivery policies.

◆ Real service
A real service has unique identifiers of IP address and port number; the port number may differ from that of the virtual service. Multiple real services can reside on one physical server or on several. When distributing a client request to a real service, the application delivery device rewrites the request's destination IP address and destination port to the real service's IP address and port number.

◆ Real service group
A real service group is a set of real services. Through the real service group it references, a virtual service determines both the range of real services that can be scheduled and the scheduling algorithm.

◆ Scheduling algorithm
When a client request hits a virtual service, the virtual service selects a real service from the referenced real service group based on the configured algorithm, such as polling or minimum connections, and distributes the request to it.

◆ Session persistence
Some requests are correlated with each other. For example, all new connection requests initiated by a client may be associated by an identifier; the service may fail if these requests are distributed to different real services. Session persistence is the process of correlating different new connection requests by an identifier and sending them to the same real service.

◆ Application content scheduling
An application delivery device can identify the content of client requests and match it against policies. Distributing a request that hits a policy to the specified real service group, rather than selecting one by scheduling algorithm, is called application content scheduling.

◆ Application health check
Application health check actively probes each real service with various detection methods to verify that it can provide services properly. If a real service fails, the application delivery device stops sending new requests to it.

2.2.Scheduling algorithm

Application delivery devices provide a wide array of highly effective scheduling algorithms. Users may choose a suitable scheduling algorithm according to the specific application service and the performance of the physical servers, achieving load sharing and improving server utilization. A short code sketch of two of these algorithms follows the list below.

◆ Polling
The application delivery device distributes requests in turn to each real service in the real service group, sharing connection requests among the real services equally. It is applicable to scenarios where each request places a similar load on the server (such as HTTP, Radius, and DNS) and the real services have roughly equal performance.

◆ Weighted polling
The application delivery device distributes requests to each real service in a real service group according to its weight value: the larger the weight, the more requests it receives. It is applicable to scenarios where each request places a similar load on the server but the performance of the real services varies greatly.

◆ Minimum connections
The application delivery device estimates the load of each real service by its number of connections and distributes each new connection request to the real service with the fewest connections. It is applicable to scenarios, such as FTP and Telnet, where requests differ in processing load and duration (prolonged connections increase processing load) but the real services have roughly equal performance.

◆ Weighted minimum connections
When scheduling new connections, the application delivery device keeps the number of established connections proportional to the configured weight values, which reflect server performance. It is applicable to scenarios, such as FTP and Telnet, where requests differ in processing load and duration and the performance of the real services varies greatly.

◆ Source address hashes
The application delivery device hashes the source IP address from which the client initiates the request and maps the result to a real service. It is applicable to scenarios where requests from the same source IP address must be scheduled to the same real service, especially where the application itself imposes requirements on the source IP address environment.

◆ Source address port hashes
The application delivery device hashes the source IP address and source port from which the client initiates the request and maps the result to a real service. It is applicable to scenarios where requests from the same source IP address and port must be scheduled to the same real service.

◆ Minimum traffic
The application delivery device estimates the load of each real service by its traffic and distributes each new connection request to the server with the least traffic. It is applicable to scenarios where server processing load is proportional to the traffic generated, processing time varies (as with video services), and the real services have roughly equal performance.

◆ Weighted best performance
The application delivery device collects performance data of each real service, such as CPU usage and free memory, through SNMP or a third-party plug-in. When scheduling new connections, it keeps the number of connections proportional to the performance weight of each real service. It is applicable to complex scenarios where processing load and connection time follow no pattern and the performance of the real services varies greatly.

◆ Minimum response time
The application delivery device estimates the load of each real service by its response time and schedules each new connection request to the real service with the shortest response time. It is applicable to complex scenarios where requests demand heavy processing or long connection times and the real services have roughly equal performance.
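
As an illustration, the Python sketch below shows how two of these algorithms can be realized: weighted polling and minimum connections. The class and function names are illustrative assumptions, not the device's actual implementation.

```python
# Minimal sketch of two scheduling algorithms: weighted polling
# (weighted round robin) and minimum connections.
import itertools

class RealService:
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight
        self.active_connections = 0

def weighted_polling(services):
    """Yield services in proportion to their configured weights."""
    # Expand each service by its weight, then cycle forever.
    expanded = [s for s in services for _ in range(s.weight)]
    return itertools.cycle(expanded)

def minimum_connections(services):
    """Pick the service with the fewest established connections."""
    return min(services, key=lambda s: s.active_connections)

# Usage: three servers, one with twice the capacity of the others.
pool = [RealService("srv-a", weight=2), RealService("srv-b"), RealService("srv-c")]
scheduler = weighted_polling(pool)
for _ in range(4):
    target = next(scheduler)
    target.active_connections += 1
print(minimum_connections(pool).name)  # least-loaded server
```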

2.3.Session persistence

Session persistence plays a critical role in actual network application scenarios. Because most connections initiated by network applications are correlated, processing this correlation accurately and efficiently is a pressing task for session persistence. A short sketch of the source-IP variant follows the examples below.

◆ Session persistence based on source IP address
Session persistence based on source IP address is often seen in server load balancing applications to ensure that multiple requests from the same client are scheduled to the same real server. Upon receiving the initial request from the client, the application delivery device establishes a session persistence entry based on the source IP address to record the real service assigned to it. During the lifetime of the entry, subsequent service packets with the same source IP address are sent to that real service for processing.

◆ Session persistence based on destination IP address
Session persistence based on destination IP address is often seen in server load balancing applications for cache servers, ensuring that multiple requests with the same destination IP address are scheduled to the same real server to improve the cache hit ratio. Upon receiving the initial request from the client, the application delivery device establishes a session persistence entry based on the destination IP address to record the real service assigned to it. During the lifetime of the entry, subsequent service packets with the same destination IP address are sent to that real service for processing.

◆ Session persistence based on Radius service attribute
Session persistence based on Radius service attribute is often seen in Radius server load balancing applications to ensure multiple requests with the same service attribute in a Radius packet can be scheduled to the same real server, enabling the correct and balanced execution of Radius services.

Take session persistence for a certain username for example. Upon receiving a login request from the Radius client, the application delivery device establishes a session persistence entry based on the Radius packet to record the real service assigned to the user. During the lifetime of the entry, subsequent Radius service packets with the same username are sent to that real service for processing.

◆ Session persistence based on HTTP header
Session persistence based on HTTP header is often seen in server load balancing applications for HTTP servers to ensure that multiple requests with the same header attribute in an HTTP packet are scheduled to the same real server, enabling the correct and balanced execution of HTTP services.

Take session persistence based on the HTTP header URI for example. Upon receiving an HTTP request from the client, the application delivery device establishes a session persistence entry based on the URI attribute to record the real service assigned to it. During the lifetime of the entry, subsequent requests with the same URI attribute are sent to that real service for processing.

◆ Session persistence based on HTTP cookie
Session persistence based on HTTP cookie is often seen in HTTP server load balancing applications to ensure that multiple requests from the same client are scheduled to the same real server, enabling the correct and balanced execution of HTTP services.

Take session persistence based on HTTP cookie for example. Upon receiving the HTTP request from the client, the application delivery device schedules it to a real service based on the scheduling algorithm. When the server responds, the device inserts its own cookie in the response packet, encoding the real service assigned to the client. When the client initiates another request, the packet carries this cookie, based on which the device assigns the new request to the corresponding real service.
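
The following sketch illustrates the source-IP variant described above: a persistence table maps each client IP to the real service chosen for its first request, and later requests reuse the entry until it expires. The entry structure and timeout value are assumptions for illustration.

```python
# Sketch of source-IP session persistence with an expiring entry table.
import itertools
import time

PERSISTENCE_TIMEOUT = 300  # seconds; assumed lifetime of an entry

persistence_table = {}  # source IP -> (real service name, expiry timestamp)

def select_real_service(source_ip, schedule):
    """Return the persisted real service for source_ip, or schedule a new one."""
    entry = persistence_table.get(source_ip)
    now = time.time()
    if entry and entry[1] > now:
        return entry[0]                      # persistence hit
    service = schedule()                     # fall back to the scheduling algorithm
    persistence_table[source_ip] = (service, now + PERSISTENCE_TIMEOUT)
    return service

# Usage with a trivial round robin standing in for the scheduling algorithm.
rr = itertools.cycle(["srv-a", "srv-b"]).__next__
print(select_real_service("203.0.113.7", rr))  # newly scheduled, e.g. srv-a
print(select_real_service("203.0.113.7", rr))  # same server: persistence hit
```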

2.4.Application content scheduling

The application delivery device can identify and match application content and distribute requests conforming to mapping policies to designated servers, thus meeting the challenges posed by increasingly complex server architectures and enhancing service scalability.

2.4.1.HTTP application content scheduling

The rapid development of HTTP applications gives rise to constantly evolving HTTP server architectures. More and more application delivery devices can, according to HTTP header attributes such as URI, cookie, and HOST, schedule requests conforming to matching policies to the corresponding servers, achieving multi-service collaborative processing and load balancing in complex architectures.

Take content scheduling based on HTTP header URI attributes for example. The application delivery device first establishes a TCP connection with the client via three-way handshake. When the client sends an HTTP request, the device performs policy matching on it. For instance, suppose a policy exists: “Schedule URIs containing SPORTS to Server A.”

As shown in the figure below, the client requests the /SPORTS/index.htm page via HTTP after establishing a TCP connection via three-way handshake. When the HTTP request hits the scheduling policy, the application delivery device first completes a three-way handshake with Server A according to the policy, then sends the client's HTTP request to the server, and finally forwards the server's response to the client. This roughly amounts to a selective HTTP proxy.

Fig. 1 Application content scheduling based on HTTP
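
A minimal sketch of the URI policy match described above, with hypothetical policy entries: requests whose URI contains a configured keyword go to the designated real service group, and everything else falls through to the default group and its scheduling algorithm.

```python
# Sketch of HTTP content scheduling by URI keyword; policies are illustrative.
CONTENT_POLICIES = [
    ("SPORTS", "server-group-a"),   # "Schedule URIs containing SPORTS to Server A"
    ("NEWS",   "server-group-b"),
]
DEFAULT_GROUP = "server-group-default"

def match_content_policy(request_uri: str) -> str:
    """Return the real service group for a request URI."""
    for keyword, group in CONTENT_POLICIES:
        if keyword in request_uri.upper():
            return group
    return DEFAULT_GROUP

print(match_content_policy("/SPORTS/index.htm"))  # server-group-a
print(match_content_policy("/mail/inbox"))        # server-group-default
```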

2.4.2.Application content scheduling based on UDP Protocol

Application content scheduling based on UDP is often seen in DNS, Radius, and other private application protocols to ensure that client requests containing one or more keywords are assigned to the same real service group, enabling the correct and balanced execution of these services.

Take UDP content scheduling for DNS requests for example. Upon receiving the client's DNS request, the application delivery device first performs keyword matching on the content. If a keyword matches, the device selects a real service from the service group specified for that keyword and forwards the DNS request to it.

Special attention should be paid to the fact that content scheduling for UDP is packet-by-packet.
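
Since matching is per-datagram, each DNS request is inspected independently. The sketch below extracts the query name from a raw DNS packet with a deliberately simplified parser (single question, no name compression) and matches it against hypothetical keyword policies.

```python
# Sketch of per-packet UDP content scheduling for DNS requests.
def dns_query_name(packet: bytes) -> str:
    """Decode the QNAME from a DNS query packet (simplified parser)."""
    labels, pos = [], 12            # the DNS header is 12 bytes
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode())
        pos += 1 + length
    return ".".join(labels)

UDP_POLICIES = {"example.com": "dns-group-a"}   # illustrative policy table

def schedule_dns_packet(packet: bytes) -> str:
    name = dns_query_name(packet)
    for keyword, group in UDP_POLICIES.items():
        if keyword in name:
            return group
    return "dns-group-default"

# A hand-built query for example.com (header + question section).
query = (b"\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x07example\x03com\x00\x00\x01\x00\x01")
print(schedule_dns_packet(query))  # dns-group-a
```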

2.5.Application health check and server maintenance

The application health check technology, with its rich and practical features, safeguards the quality of application delivery from multiple angles. The reliable server maintenance technology allows a server to be taken out of the application system for maintenance smoothly, without affecting end users.

2.5.1.Application health check

Application delivery devices support multiple application health check functions, including TCP, HTTP, ICMP, DNS, SNMP, UDP, SMTP, POP3, SSL, RADIUS, and customization.

Multiple health check technologies can be combined for one server: configuring ICMP, TCP, and HTTP together on a real service allows comprehensive inspection from the network layer through the operating system to the application layer. If a failure occurs at any layer, the application delivery system stops forwarding client requests to the faulty server and issues an alert. The health check also automatically discovers recovered servers and resumes sending new requests to them.
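
A sketch of such a layered check, assuming plain TCP and HTTP probes from Python's standard library; a real ICMP probe needs raw sockets and elevated privileges, so it is omitted here, and the URL and timeouts are assumptions.

```python
# Sketch of a combined TCP + HTTP health check on a real service.
import socket
import urllib.request

def tcp_check(host, port, timeout=2.0):
    """Layer 4: can we complete a TCP handshake with the real service?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(url, timeout=2.0):
    """Layer 7: does the application answer with a 2xx/3xx status?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:                  # URLError/HTTPError are OSError subclasses
        return False

def healthy(host, port):
    # A server counts as healthy only if every configured layer passes.
    return tcp_check(host, port) and http_check(f"http://{host}:{port}/")
```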

2.5.2.Soft shutdown and considerate online design

When soft shutdown is configured for a real service on the application delivery device, existing connections remain unchanged; only new connections that hit session persistence continue to be scheduled to the real service, and no other new connections are sent to it. The real service is thus drained only after its existing clients have completed their sessions, ensuring a smooth removal of the server.

For a real service with the considerate online design enabled on the application delivery device, once the real service is deemed to be working properly again, such as after rejoining the real service group, recovering from a fault, or having soft shutdown deactivated, the device controls the rate at which new requests are sent to it, letting the request rate grow gradually from zero to unrestricted. The server can thus rejoin the delivery system smoothly, avoiding the overwhelming load that a sudden surge of requests would cause.
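
A sketch of this ramp-up behavior, assuming a linear ramp; the duration, shape, and rate figures are illustrative, not the device's actual parameters.

```python
# Sketch of "considerate online": admission rate grows linearly after recovery.
import time

class SlowStart:
    def __init__(self, ramp_seconds=60, full_rate=1000):
        self.started = time.time()
        self.ramp_seconds = ramp_seconds
        self.full_rate = full_rate      # requests/second once fully online

    def allowed_rate(self):
        """Current admission rate, growing linearly during the ramp window."""
        elapsed = time.time() - self.started
        if elapsed >= self.ramp_seconds:
            return self.full_rate       # ramp finished: no restriction
        return self.full_rate * elapsed / self.ramp_seconds

ramp = SlowStart(ramp_seconds=60)
print(f"admitting up to {ramp.allowed_rate():.0f} req/s")
```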

2.6.HTTP service advanced delivery technology

The application delivery devices offer several advanced features for HTTP services.

2.6.1.HTTP redirect

Upon receiving client’s HTTP request, the application delivery device first establishes TCP connection by using three-way handshake with the client. When the client sends an HTTP GET request, the device will perform policy matching on the request. If there is a hit keyword, it directly responds to the client with the redirected address instead of forwarding the request to the real service.

2.6.2.HTTP rewriting

Upon receiving an HTTP request from the client or a response from a real service, the application delivery device performs policy matching on its content. If a keyword is hit, the device modifies the content directly according to the policy.

Take HTTP content modification for the virtual service port for example. In HTTP content returned by the server, some redirect information may contain a real service port (e.g., port 8080) rather than the virtual service port (e.g., port 80). If the client followed that address, it would miss the virtual service; the device therefore rewrites the port to the virtual service port (port 80).
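
A sketch of this port rewrite on a Location header; the header handling and regular expression are simplified for illustration.

```python
# Sketch of rewriting a leaked real service port back to the virtual port.
import re

REAL_PORT, VIRTUAL_PORT = 8080, 80

def rewrite_location(response: str) -> str:
    """Replace the real service port in a Location header with the virtual port."""
    return re.sub(
        rf"(Location: http://[^/\r\n]+):{REAL_PORT}",
        rf"\g<1>:{VIRTUAL_PORT}",
        response,
    )

resp = "HTTP/1.1 302 Found\r\nLocation: http://www.example.com:8080/login\r\n\r\n"
print(rewrite_location(resp))
# Location now reads http://www.example.com:80/login
```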

2.6.3.HTTP connection multiplexing

HTTP/1.1 allows multiple GET requests to be sent over the same connection, improving Web server performance. Leveraging this feature, when multiple client requests arrive, the application delivery device first establishes a TCP connection with each client via three-way handshake, then establishes connections with the server according to a configured ratio (fewer connections than client requests). When the clients send HTTP GET requests, the device forwards them to the real services over the established connections, reducing server load through connection multiplexing.

2.6.4.SSL offload & acceleration

The application delivery device supports SSL offload & acceleration, which dramatically reduces the computing pressure on the server. With this feature, the device, rather than the server, performs the SSL handshake and the encryption and decryption computation. Decrypted client requests are sent to the server, and content from the server is encrypted before being sent to the client.

SSL offload & acceleration is often seen in the following scenarios:

◆ SSL offload
The client uses the HTTPS protocol for secure interaction. First, the application delivery device establishes a TCP connection with the client via three-way handshake, performs the SSL key exchange and handshake, and receives the client's encrypted request data. The device then decrypts the request, performs load balancing scheduling on the decrypted plain text, assigns a real service, and connects to the server according to the scheduling result. Finally, after the TCP handshake with the server succeeds, the device sends the plain-text request to the server. When the server's response data comes back, the device encrypts it and sends it to the client.

Fig. 2 SSL offload feature

◆ SSL proxy
The client uses the HTTPS protocol for secure interaction. First, the application delivery device establishes a TCP connection with the client via three-way handshake, performs the SSL key exchange and handshake, and receives the client's encrypted request packet. The device decrypts the request, performs load balancing scheduling on the decrypted plain text, assigns a real service, and connects to the server according to the scheduling result. Once the TCP handshake succeeds, the device performs an SSL handshake with the server using a new cipher suite, then encrypts the plain-text data with that suite before sending it to the server. When the server's response data is received, the device decrypts it with the corresponding cipher suite to obtain the plain text, then encrypts it with the cipher suite negotiated with the client before sending it back.

Fig. 3 SSL proxy feature

In most cases the application delivery device interacts with the server on the intranet, which is relatively secure, so plain-text interaction is used. However, plain-text transmission is not suitable for scenarios that demand higher security or lower server load. In such cases, a cipher suite with a higher security level is adopted between the client and the device, while a cipher suite with a relatively lower security level is adopted between the device and the server.
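
The following sketch shows the essence of TLS termination using Python's standard ssl module: the proxy completes the TLS handshake with the client and relays plain text to a back-end address. The certificate paths and back-end address are placeholders, binding to port 443 needs privileges, and a real device adds scheduling, connection pooling, and error handling.

```python
# Sketch of SSL offload: terminate TLS at the proxy, forward plain text.
import socket
import ssl

BACKEND = ("10.0.0.10", 80)            # plain-text real service (assumed)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # virtual service certificate

listener = socket.create_server(("0.0.0.0", 443))
with ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()            # TLS handshake happens here
    request = conn.recv(65536)                    # already decrypted
    with socket.create_connection(BACKEND) as backend:
        backend.sendall(request)                  # plain text to the server
        response = backend.recv(65536)
    conn.sendall(response)                        # re-encrypted automatically
    conn.close()
```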

2.7.Floating long connection

For some applications, connections may be prolonged or see no interaction for a long time. When an existing server is taken down for maintenance or a new server is added, the client may receive no notification, or the newly added server may receive no requests.

With floating long connection, the application delivery device terminates the connection between the server and the client on the server's behalf, ensuring the client receives responses normally and prompting it to initiate new requests. When a new server is added, the device terminates connections with some clients based on each server's current load and the scheduling algorithm, so that new client requests can be distributed to the newly added server.

Fig. 4 Floating long connection

2.8.Association with a virtual machine management system

The application delivery device can be associated with mainstream virtual machine management systems. For example, when a virtual machine is powered off or powered on, the application delivery device can automatically delete or add the corresponding real service. When a health check reveals a real service that is no longer working properly, the device can notify the virtual machine management system to restart or shut down the corresponding virtual machine. When the device finds the overall service load too low or too high, it can perform intelligent calculation and notify the virtual machine management system to shut down or power on virtual machines, automatically keeping the overall service resources within a reasonable range. This close association helps minimize manual intervention and maximize service resource utilization.

2.9.AD_Rules application delivery script language

The application delivery service often involves in-depth, service-specific customization and complex processing, so static policy configuration cannot meet the demands of flexible application delivery. The application delivery device enables highly flexible service processing through AD_Rules, a cutting-edge application delivery scripting language (Application Delivery Language, ADL).

Fig. 5 AD_Rules features

2.10.Typical networking

Typical networking: Symmetric mode

Fig. 6 Typical server networking: Symmetric mode

In the symmetric mode, the client sends its request to the application delivery device at the front end of the server farm. The virtual service on the device receives the client request, selects a real service according to session persistence and the scheduling algorithm in turn, rewrites the destination IP address and port of the request packet to those of the real service through network address translation (NAT), and sends the request to the selected real service. When the real server's response packet passes through the load balancing device, the packet's source address and source port are restored to the virtual service's IP address and port before the packet is sent back to the client, completing the server application delivery. Both request and response traffic pass through the load balancing device.

Typical networking: One-arm mode

Fig. 7 Typical server networking: One-arm mode

In the one-arm mode, the device attaches to the network over a single link, and the packet processing flow is the same as in the symmetric mode: the virtual service receives the client request, selects a real service according to session persistence and the scheduling algorithm in turn, rewrites the destination address and port through NAT, and restores the virtual service address and port in the response before returning it to the client. Both request and response traffic pass through the load balancing device.

The one-arm mode differs from the symmetric mode in that it leaves the original topology unchanged, offering easier deployment and higher reliability.

Typical networking: Triangle mode

Fig. 8 Typical server networking: Triangle mode

In the triangle mode, the client sends its request to the application delivery device at the front end of the server farm. The virtual service on the device receives the client request, selects a real service according to session persistence and the scheduling algorithm in turn, and uses the real service's IP address for route lookup (the server must configure the virtual service address on its loopback interface and provide the service on the real service port), then sends the request to the selected real service. The real server's response is returned directly to the client without passing through the load balancing device. Thus the client's request traffic passes through the application delivery device, while the response traffic does not.

This mode is often used in scenarios with heavy response traffic from servers, such as video servers, to relieve the load on the application delivery device.

3.Application delivery technology for traffic management and optimization

Nowadays, the network has become an integral part of daily life and work. To mitigate the risks of network egress faults and to address access problems caused by insufficient bandwidth, network users tend to rent two or more operator egresses (such as China Telecom, China Mobile, or China Unicom). Effectively utilizing multiple operator egresses without wasting resources is thus a pressing issue. Traditional routing can solve the problem to some extent, but it adapts poorly to structural network changes because routing configuration is inconvenient and inflexible; moreover, routing cannot distribute packets based on link conditions, resulting in inefficient utilization of egress links. Link load balancing distributes load among multiple links using static and dynamic algorithms, is easy to configure, and adapts automatically, offering an ideal solution to these problems.

3.1.Basic concepts

◆ Links
Links in load balancing generally refer to the Internet access lines provided by operators. On the application delivery device, each link carries attributes such as bandwidth, operator, link status, and quality (packet loss and delay), which describe the Internet access capability and quality the link provides.

◆ Link scheduling policies
Link scheduling policies let users freely control the direction of traffic and maximize the efficient utilization of links. The application delivery device provides multiple traffic direction controls, including operator specification, link overload protection, specified source address, specified destination port, and specified application types, so that all links can be fully utilized to enhance overall traffic quality.

◆ Link health check
In the link health check process, the application delivery device probes a remote device or server through specified links. Based on various detection methods, such as TCP and ICMP, it determines whether the current link is available and switches traffic to other normal links when a link fails.

◆ Link session persistence
When the application delivery device connects through multiple links, source NAT is required on each link. When an intranet user uses an application, it initiates multiple requests; if the device distributed them to different link egresses, the differing source addresses could cause the application to fail. The link session persistence feature therefore keeps an application's multiple requests on the same link, keyed by the requests' source and destination addresses and ports, eliminating such faults in multi-link source NAT environments.

3.2.Scheduling based on link traffic threshold

When a link's actual traffic exceeds the product of its configured bandwidth and threshold (a percentage), the application delivery device stops scheduling new requests to the link, except those hitting an application session persistence policy. Traffic on the link can then gradually fall back within the threshold, at which point new traffic can be scheduled to it again.
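
A sketch of this admission test; the field names and the 90% threshold are illustrative.

```python
# Sketch of link overload protection: bandwidth * threshold caps new sessions.
class Link:
    def __init__(self, name, bandwidth_mbps, threshold=0.9):
        self.name = name
        self.bandwidth_mbps = bandwidth_mbps
        self.threshold = threshold       # configured percentage, e.g. 90%
        self.current_mbps = 0.0

    def accepts_new_sessions(self):
        return self.current_mbps < self.bandwidth_mbps * self.threshold

link = Link("telecom-1", bandwidth_mbps=1000, threshold=0.9)
link.current_mbps = 950
print(link.accepts_new_sessions())  # False: above 900 Mbps, only persistence hits
```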

3.3.Scheduling based on link quality/status

After the health check feature is activated on a link, the application delivery device calculates the link's average delay and packet loss rate over the configured number of detections. When either result exceeds its configured threshold, the device concludes that the link is faulty and schedules traffic to other normal links.

If several consecutive health checks fail, the device likewise concludes that the link is faulty and reschedules the traffic. The device also reschedules traffic when the physical interface cable is unplugged or the administrator shuts down the interface through configuration.

3.4.Scheduling based on static and dynamic proximity

Static proximity refers to the device's ability to schedule traffic by matching static network segments. The application delivery device has built-in, up-to-date network segment tables for China Telecom, China Unicom, China Mobile, and education networks, and the tables are updated automatically from a remote source. With this optimized scheduling, traffic destined for China Telecom addresses can be sent over China Telecom links, and traffic destined for China Unicom addresses over China Unicom links, improving user experience.

Dynamic proximity refers to the capability of the application delivery device to actively detect delay, packet loss rate and other parameters of a certain destination address, and select an optimal egress link for the destination address from multiple links based on the link quality assessment algorithm, making sure all traffic to the destination address will be directed to this link.
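
A sketch of the static segment table match, using example prefixes rather than the built-in operator tables.

```python
# Sketch of static proximity: match the destination against operator prefixes.
import ipaddress

SEGMENT_TABLES = {
    "china-telecom": [ipaddress.ip_network("202.96.0.0/12")],
    "china-unicom":  [ipaddress.ip_network("221.0.0.0/13")],
}

def operator_for(destination: str):
    addr = ipaddress.ip_address(destination)
    for operator, networks in SEGMENT_TABLES.items():
        if any(addr in net for net in networks):
            return operator
    return None  # no match: fall back to other scheduling policies

print(operator_for("202.101.1.1"))  # china-telecom
```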

3.5.Scheduling based on domain name

The application delivery device can direct all traffic to a certain domain name (e.g., www.qq.com) through a specified link. The device first inspects each DNS response packet that passes through it. If the response's domain name matches the domain to be directed, the corresponding IP address in the packet is extracted to create a destination IP entry. Any subsequent connection to a destination IP that hits the entry is forwarded through the specified link, enabling scheduling by domain name.
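
A sketch of this two-step mechanism with illustrative structures; real DNS response parsing is omitted.

```python
# Sketch of domain-based scheduling: learn addresses from DNS answers,
# then pin connections to those addresses onto the designated link.
TARGET_DOMAIN = "www.qq.com"
DESIGNATED_LINK = "link-unicom"        # assumed link name

destination_entries = {}               # resolved IP -> link name

def on_dns_response(domain: str, resolved_ips: list[str]):
    """Called for each DNS answer the device forwards."""
    if domain == TARGET_DOMAIN:
        for ip in resolved_ips:
            destination_entries[ip] = DESIGNATED_LINK

def link_for_connection(dest_ip: str) -> str:
    return destination_entries.get(dest_ip, "default-link")

on_dns_response("www.qq.com", ["203.0.113.25"])
print(link_for_connection("203.0.113.25"))  # link-unicom
```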

3.6.Scheduling based on intelligent DNS

The application delivery device provides rich DNS resolution policies, allowing users to schedule traffic via DNS and delivering better user experience than forced link scheduling. DNS resolution policies are available for A, CNAME, MX, TXT, and other record types.

3.6.1.DNS static proximity matching

When the application delivery device is connected to multiple links of different operators and an intranet server provides services through addresses of multiple operators, a DNS resolution policy can return to the client a server address belonging to the client's own operator, so the user accesses the service over that operator's link for a better experience.

To activate this policy, map the domain name resolution server registered with the domain name service provider to the application delivery device. When a DNS request reaches the device, the source IP address of the request is matched against the operator network segment table to determine which operator the request belongs to. The device then checks the status of its links of the same operator, selects the optimal one, obtains the server address by applying the DNS resolution policy, and finally returns the address to the user via DNS.

Fig. 10 DNS static proximity scheduling

3.6.2.DNS transparent proxy

For outbound traffic of intranet users, the application delivery device performs load balancing through DNS transparent proxy: it intelligently redirects intranet users' DNS resolution requests to the DNS server of the operator with a relatively smaller load. That server generally returns an address belonging to its own operator. The client then initiates an application traffic request to that operator's destination address, and upon receiving it the device schedules it to a link with a relatively smaller load according to the link scheduling policies.

Fig. 11 DNS transparent proxy

3.8.Link load balancing for WAN off-site egresses

In traditional link load balancing, traffic scheduling and optimization are performed at a single point (one office), usually by a single device, and saturated local access bandwidth can only be addressed by purchasing more bandwidth from the operator. As enterprises and institutions grow, they establish off-site branches, each with its own access bandwidth. Link load balancing for WAN off-site egresses makes full use of network access at multiple points (multiple offices), improving the overall bandwidth utilization of off-site links and reducing purchase costs.

Link load balancing for WAN off-site egresses uses a client/server architecture composed of multiple application delivery devices, each responsible for link load balancing at one branch. All devices run a client process, and one of them additionally runs the server.

Each client firstly connects to the server, from which it obtains link configuration and health check information. Then it applies the configuration information to corresponding devices, achieving unified management of multiple devices.

After connecting to the server, each client regularly reports device information, such as actual traffic, status, and quality. The server aggregates the data and presents it in an intuitive, unified view, displaying the real-time traffic of all connected devices.

The server performs overall traffic scheduling calculations at intervals. It first calculates the overall load of every connected device to check for overload. If an overloaded device is found, the server looks for other connected devices that are not overloaded and instructs the overloaded device to schedule traffic above the bandwidth threshold to them, making optimal use of all link resources.

Furthermore, the server can divide the clients by region. Traffic will firstly be scheduled among different devices in the same region. Cross-region scheduling will be performed if the overall traffic in one region fails to meet scheduling demands, maintaining a relatively stable traffic direction.

As shown in the following figure, when the total traffic of city A exceeds the set threshold, the excess traffic is automatically scheduled to other devices with surplus bandwidth (city B, city C) according to the calculation results. With the session forwarding technology, the outbound interface of the response traffic is identical to the inbound interface of the request traffic, independent of routing, so the scheduled traffic takes the same inbound and outbound paths (blue arrows represent traffic in non-overloaded conditions; yellow arrows represent traffic scheduled to other devices).

Fig. 12 WAN off-site egress features
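
A sketch of the server-side calculation under assumed data structures: overloaded sites shed their excess to sites with spare capacity, preferring the same region, as described above.

```python
# Sketch of periodic cross-site rebalancing for WAN off-site egresses.
def reschedule(sites):
    """sites: list of dicts with name, region, capacity, load (same unit)."""
    moves = []
    for site in sites:
        excess = site["load"] - site["capacity"]
        if excess <= 0:
            continue
        # Prefer non-overloaded sites in the same region, then any region.
        candidates = sorted(
            (s for s in sites if s["load"] < s["capacity"]),
            key=lambda s: s["region"] != site["region"],
        )
        for target in candidates:
            spare = target["capacity"] - target["load"]
            shift = min(excess, spare)
            if shift > 0:
                moves.append((site["name"], target["name"], shift))
                target["load"] += shift
                site["load"] -= shift
                excess -= shift
            if excess <= 0:
                break
    return moves

sites = [
    {"name": "city-a", "region": "north", "capacity": 100, "load": 130},
    {"name": "city-b", "region": "north", "capacity": 100, "load": 60},
    {"name": "city-c", "region": "south", "capacity": 100, "load": 50},
]
print(reschedule(sites))  # [('city-a', 'city-b', 30)]
```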

3.9.Typical networking

Fig. 13 Typical networking of links

In link load balancing, intranet users access the Internet via the application delivery device, with the traffic of request and response passing through the link load balancing device.

4.WAN-oriented global application delivery technology

To provide better user experience, large applications are evolving from centralized deployment to remote, off-site, multi-point deployment. Global application delivery enables rational and efficient scheduling of user requests among the multiple sites.

4.1.Basic principles

An application delivery device is installed in front of each application site. When the global application delivery function is enabled, the devices in front of the sites logically connect to each other through interactive protocols, forming a logical delivery network. These devices then regularly share application information, such as service status, link status, and device status, so that information about every device in the logical delivery network is available to all. Upon receiving a client request, any device in the network first determines, through multiple mechanisms, whether to process it locally. If so, the request is sent to a back-end server using the server load balancing technology; otherwise, the device selects an optimal site in the logical delivery network and directs the user to it using DNS, HTTP, IP-AnyCast, and other technologies.

4.2.Global scheduling algorithm

To select a remote site in the logical delivery network, a global scheduling algorithm is adopted to optimize the selection and improve user experience; a scoring sketch follows the list below.

◆ Static proximity algorithm
With the preset provincial and municipal network segment tables of each operator (customization is allowed), the application delivery device matches the source IP address of client requests, and selects a site in the logical delivery network that shares the same operator and geographic location as the client.

◆ Polling algorithm
Based on each client request, the application delivery device sequentially selects the sites that are capable of proper delivery from the logical delivery network.

◆ Intelligent dynamic observation algorithm
Upon receiving a client request, the application delivery device selects an optimal delivery site by factoring in its recent server load and health, link congestion degree, static proximity to the client, and other aspects.
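
As an illustration of the intelligent dynamic observation algorithm, the sketch below combines normalized load, congestion, and proximity figures into a single score with assumed weights and picks the lowest-scoring site.

```python
# Sketch of a composite site score; weights and scales are assumptions.
def site_score(server_load, link_congestion, proximity_penalty,
               w_load=0.5, w_link=0.3, w_prox=0.2):
    """Lower is better; each input is normalized to [0, 1]."""
    return (w_load * server_load
            + w_link * link_congestion
            + w_prox * proximity_penalty)

sites = {
    "beijing":  site_score(0.7, 0.2, 0.1),
    "shanghai": site_score(0.3, 0.4, 0.5),
}
print(min(sites, key=sites.get))  # site chosen for the next client request
```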

4.3.Scheduling technology based on DNS

First, the user delegates the DNS resolution right for the network application (by modifying the DNS record at the DNS service provider) to one or more devices in the logical delivery network. Upon receiving a DNS request, the application delivery device selects an optimal site based on the global scheduling algorithm and returns the site's IP address in the DNS response, directing the request to that site.

4.4.Scheduling technology based on HTTP

The HTTP protocol itself provides a Location (redirect) mechanism. When serving HTTP applications, the application delivery device can therefore select an optimal site based on the global scheduling algorithm and respond to the HTTP request directly, directing the user to the better-performing delivery site through the Location attribute.

4.5.Scheduling technology based on IP-anycast

The application delivery device can advertise the IP address of the virtual service through multiple dynamic routing protocols such as OSPF, BGP, and IS-IS. It can also influence dynamic route selection by configuring attributes such as different cost values.

For instance, the application delivery devices of city A and city B advertise the same virtual service IP address into the operator's OSPF network at the same time, but the device in city A advertises a lower cost value than the one in city B. Client requests then select the application delivery device in city A according to the route priority principle.

When the link or back-end server of the city A device can no longer provide services, the application delivery device dynamically withdraws the route for the virtual service IP address. The change propagates across the network through the dynamic routing protocols, which then direct application traffic to the normally working site in city B.

When the failed site in city A recovers, its application delivery device re-advertises the route and directs traffic back to city A.

The IP-anycast technology is often seen in multi-site disaster recovery deployments.

5.High reliability and security

5.1.HA deployments

Application delivery devices are typically deployed on critical network paths, so their stability and security directly affect network availability. To avoid single points of failure, the application delivery device supports dual-system hot standby: configuration and service state are backed up to the peer device over a heartbeat link so that the two devices stay consistent. When one device fails, its service traffic is switched to the other through VRRP or a silent dual-system mechanism. Because the surviving device holds the failed device's service information, existing service data streams continue to pass through it, minimizing service interruption.

The dual-system hot standby solution provides two working modes:

◆ Master/backup mode
One device serves as the master and the other as the backup. The master handles all services while sending all service information to the backup over the backup link; the backup is intended for backup only. When the master fails, the backup takes its place and processes all services, so new load balancing services are handled normally and ongoing ones are not interrupted.

◆ Dual-master mode
In this mode, both devices act as masters and process services, while each also serves as the backup for its peer and backs up the peer's service information. When one device fails, the other takes over all services, so new load balancing services are handled normally and ongoing ones are not interrupted.

5.2.Virtual Switching Matrix (VSM)

The application delivery device adopts the Virtual Switching Matrix (VSM) technology, which virtualizes multiple physical devices into a single logical device for simplified management and network operation. It also enables cascaded expansion of processing performance and improved reliability.

With VSM enabled on multiple application delivery devices, the devices communicate with each other, elect a MASTER device, and make the others SLAVE devices. Users can configure the MASTER device through any interface of any device, and the configuration is automatically synchronized across the entire VSM, simplifying device management.

Network configuration within a VSM is also simplified, since interface information, status, and service configurations are synchronized among all devices in the VSM; for example, an outbound interface of the MASTER device can be configured on an interface of a SLAVE device.

Because all devices in the VSM back each other up, they can process services simultaneously and directly learn each other's status and load. Dynamic distribution of traffic within the VSM therefore improves overall performance and reliability.

5.3.High reliability of hardware

Dual power supply redundancy helps avoid system breakdown caused by power failure and achieves high availability. Chassis-based models provide further hardware-level reliability, such as dual main control boards and board redundancy.
