Secure Management Through Firewalls

Jim Doble, CISSP
Tavve Software Co.
One Copley Parkway, Suite 480
Morrisville, NC 27560
+1 919-460-1789
http://www.tavve.com
Secure Management Through Firewalls

Executive Summary

Firewall-based network partitioning is a well-established and sometimes mandated security practice, but it creates a dilemma for management professionals seeking to provide a centralized, comprehensive, and unified infrastructure for managing network devices and servers across the enterprise. Centralized management applications typically rely on the ability to communicate with devices and servers across the enterprise using ubiquitous management protocols, such as ICMP and SNMP, but security professionals typically resist creating firewall rules to allow management protocols, and rightly so, because these protocols were defined years ago and typically lack the security mechanisms required in today's threat environment. So how can companies leverage their existing management infrastructure across the entirety of their firewall-partitioned network, without compromising security? This white paper will:

1. Explore security concerns associated with managing firewall-partitioned networks.
2. Evaluate solutions for managing firewall-partitioned networks.
3. Demonstrate how Tavve's ZoneRanger appliance can be used as a management proxy firewall to extend the reach of management applications, without compromising security.

Introduction

Ongoing evolution of enterprise security and management practice has resulted in a tension between the two:

- Security practitioners deploy firewalls in order to control and limit the flow of information between network zones of trust. Management protocols such as ICMP and SNMP are typically prevented from passing through the firewall, because they are old and relatively insecure.
- Management practitioners rely on the flow of management information between centrally deployed management applications and the network infrastructure devices and servers distributed throughout the network. Firewalls present an obstacle, blocking the flow of information necessary for management.

If firewalls are configured to allow the flow of management protocols, security is reduced. If management protocols are blocked, the ability to provide centralized, unified management is severely compromised. What is needed is a solution that resolves this tension, enabling centralized management without compromising security.

The use of firewalls to partition networks based on zones of trust is a well-established and sometimes mandated security practice. Notwithstanding the sensational headlines and startling statements associated with groups such as the Jericho Forum, firewalls do not appear to be going away any time soon. At the same time that some talk about de-perimeterizing the network, we have industry standards, such as the Payment Card Industry Data Security Standard (PCI DSS), mandating the use of firewalls (i.e. separating the portion of the enterprise network that deals with credit card information from the portion that does not).

Firewalls essentially serve to block undesirable, unexpected, or inherently high-risk traffic from passing between network zones that are trusted to different degrees. For example, many companies deploy their internet-facing servers and infrastructure within a demilitarized zone or
DMZ, with a firewall separating this zone from the rest of the corporate network. The rationale is that the internet-facing servers are subjected to less controlled and therefore more dangerous traffic, and have a higher risk of compromise. Deploying these servers in the DMZ reduces the likelihood that a compromise will spread further into the corporate network. Similar arguments have been used to suggest that groups of computers handling human resources, accounting, or other sensitive information be deployed within their own firewall-protected zones. As a result, the trend is toward highly partitioned enterprise networks, as illustrated in Figure 1.

Figure 1: Enterprise Network Partitioning

Internal firewall-based network partitioning creates a problem for management professionals seeking to provide a centralized, comprehensive, and unified infrastructure for configuring, monitoring, and controlling network devices and servers across the whole enterprise. Management practice tends to rely heavily on a suite of sophisticated management applications, which serve to automate the wide variety of tasks required to manage devices: monitoring status, collecting events and statistics, modifying configurations, and much more. These applications typically use ubiquitous management protocols, such as ICMP and SNMP, to monitor and control the devices they are managing. In order to manage the whole enterprise, a centralized management application deployed in one zone must be able to communicate with all managed devices, regardless of their network location. If the management application and the device to be managed are deployed in different zones, the internal firewall becomes an obstacle, blocking the flow of information between management applications and the devices they are managing, and defeating the goal of centralized management.

The reason security personnel prefer to block management protocols is that many of them are relatively old, having been defined in the days when the Internet was new and security was not a significant concern or consideration. As an example, the ICMP protocol, which is neither encrypted nor authenticated, defines an ICMP Redirect packet that can be used to instruct a host to send traffic destined for a given IP address range to a different gateway address. Who, at the time, would have imagined that this message could be used to facilitate malicious traffic sniffing and man-in-the-middle attacks? Initial versions of SNMP that are still in common use are not encrypted, and are only weakly authenticated using community strings. It is trivial for an attacker to sniff the SNMP traffic, obtain the community string, and then start sending their own SNMP Get and Set
requests to obtain detailed device information, or worse, change configuration settings. In the eyes of a security practitioner, today's management protocols simply pose too great a risk.

So here is where we are: departments responsible for management have invested heavily in centralized management applications and infrastructure and need to leverage that investment across the entire enterprise; departments responsible for security are equally attached to their firewalls, and do not want to open them up to any traffic that might be perceived as a threat. Something has got to give. When confronted with this dilemma, enterprises have historically employed a variety of less-than-ideal approaches to work around the issue. The prevalent approaches are discussed in the following section.

Workaround Approaches

The various approaches that have historically been used to work around the problem of managing through firewalls can be summarized as follows:

1. Ad-hoc Management

In some companies, where the security department has sufficient clout, all traditional management protocols will be blocked at the firewall, as shown in Figure 2.

Figure 2. Management Protocols Blocked at the Firewall

In such cases, the department responsible for management will frequently resort to ad-hoc management of devices located in network zones that cannot be reached from the centralized management applications:

- Unreachable network zones are treated as a special case with respect to management.
- Device status can be periodically monitored via SSH or Telnet.
- Device configuration changes are applied manually.

The fundamental characteristic of ad-hoc management is that devices in unreachable zones are managed differently, with different tools and/or procedures, which essentially defeats the goal of unified, centralized management. If the ad-hoc procedures used to manage these devices are not as strong or rigorous as those used in the reachable portion of the network, there is an increased risk that outages and/or performance degradation in the unreachable zones may go undetected for significant periods of time, potentially resulting in negative financial impact to the business.
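In practice, the ad-hoc monitoring described above often amounts to a small homegrown script that logs into each device over SSH and scrapes command output. The following minimal sketch illustrates the idea using the Paramiko SSH library; the host list, credentials, and device command are hypothetical placeholders, and a real script would also need credential handling, scheduling, and result tracking.

```python
# Minimal sketch of ad-hoc status polling over SSH, using hypothetical hosts,
# credentials, and a device-specific command.  This is the kind of one-off
# tooling that tends to replace centralized management when management
# protocols are blocked at the firewall.
import paramiko

DMZ_HOSTS = ["10.1.10.20", "10.1.10.21"]   # hypothetical devices in an unreachable zone

def poll_status(host, username="monitor", password="changeme"):
    """Log in over SSH and return the output of a status command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=username, password=password, timeout=10)
        _stdin, stdout, _stderr = client.exec_command("show interfaces brief")
        return stdout.read().decode(errors="replace")
    finally:
        client.close()

if __name__ == "__main__":
    for host in DMZ_HOSTS:
        try:
            output = poll_status(host)
            print(host, "reachable")
            print(output)
        except Exception as exc:           # unreachable host or failed login
            print(host, "FAILED:", exc)
```

Scripts like this work, but they sit outside the centralized management applications and must be maintained separately for every unreachable zone.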
2. Define Firewall Rules

If you ask a management application vendor how to extend the reach of their application beyond a firewall, they will typically recommend that you define firewall rules so that the required management protocols are able to pass through the firewall, between management application servers and managed devices.

Figure 3. Management Protocols Allowed by the Firewall

While this sounds simple enough in the context of a single application, the reality in the typical enterprise is far from simple. First of all, each management application may use multiple protocols, each of which defines its own ports. As a result, multiple rules are needed just to allow a single management application server to communicate with a single managed device, as shown in the following figure.

Figure 4. Multiple Ports

On top of that, considering that most enterprises will have hundreds of devices in unreachable zones that need to be managed, and will typically use multiple management applications to manage these devices, the number of firewall rules required can become quite large, as illustrated in the following figure.
Figure 5. Multiple Ports, Multiple Applications, Multiple Devices

There are a number of problems that arise when defining a large number of firewall rules. First of all, the initial effort to define the rules is large. The more rules you have, the more likely it is that someone will make a mistake. Remember that complexity is the enemy of security. The administrative overhead to maintain these rules over time will also be significant. Due to the associated security impact, some companies require approval by standing committees for any firewall configuration changes. As a result, the cost of making necessary ongoing changes will be high, and the time required to complete the approval process will lead to delayed projects and a perception that the IT department is unresponsive to business needs.

The other problem with allowing management protocols to pass through the firewall is that the management protocols themselves lack essential security mechanisms. Many of these protocols were defined in the early days of the Internet, when the kinds of security threats we face today simply had not been imagined, and as a result security was not viewed as a high priority. A high-level security analysis of several commonly used management protocols, as shown in Table 1, illustrates this point.

Protocol        Authentication  Encryption  Easy to Spoof
ICMP            None            None        Yes
SNMP v1 / v2c   Simplistic      None        Yes
SNMP v3         Good            Good        No
Syslog          None            None        Yes
NetFlow         None            None        Yes
sFlow           None            None        Yes
TFTP            None            None        Yes
FTP             In the Clear    None        No
HTTP            In the Clear    None        No
HTTPS           Good            Good        No
Telnet          In the Clear    None        No
SSH             Good            Good        No

Table 1. Security Analysis of Common Management Protocols
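To make the SNMP v1/v2c row in Table 1 concrete: the community string that serves as the protocol's only credential travels in the clear in every message, so anyone who can capture the traffic can read it. The minimal sketch below decodes the community string directly from the raw bytes of a captured SNMPv1 get-request; the sample packet bytes are a hypothetical request using the community "public", and the parser assumes simple short-form BER lengths.

```python
# Minimal sketch: recovering the community string from a captured SNMPv1/v2c
# datagram.  No decryption is needed, because the protocol provides none.
# The sample bytes below are a hypothetical get-request for sysDescr.0 with
# community "public".

SAMPLE_SNMP_PAYLOAD = bytes.fromhex(
    "302602010004067075626c6963"        # outer SEQUENCE, version 0, community "public"
    "a019020101020100020100300e300c"    # GetRequest PDU: request-id, error fields, varbind list
    "06082b060102010101000500"          # OID 1.3.6.1.2.1.1.1.0 (sysDescr.0) with NULL value
)

def community_string(snmp_payload: bytes) -> str:
    """Walk the BER encoding just far enough to pull out the community string."""
    assert snmp_payload[0] == 0x30          # outer SEQUENCE tag
    i = 2                                   # skip tag + short-form length byte
    assert snmp_payload[i] == 0x02          # INTEGER: SNMP version
    i += 2 + snmp_payload[i + 1]            # skip tag, length, and value
    assert snmp_payload[i] == 0x04          # OCTET STRING: community
    length = snmp_payload[i + 1]
    return snmp_payload[i + 2:i + 2 + length].decode()

print(community_string(SAMPLE_SNMP_PAYLOAD))   # prints: public
```

With the community string in hand, an attacker can issue their own Get and Set requests, which is exactly the risk described earlier.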
If management protocols had little power and were essentially harmless, their lack of security mechanisms would be less of a concern. Unfortunately, management protocols are in fact very powerful, and can be used by a hacker to obtain valuable information about a network, modify device configurations, manipulate the flow of traffic through the network, and open the door to a wide range of follow-on attacks. The fundamental problem here is that the security mechanisms provided by management protocols are not commensurate with their power. They are like power tools without the safety guards. As a result, even if the administrative effort required to configure the firewalls to allow management protocols can be justified, in doing so, the overall security of the enterprise network is diminished.

3. Management VPN

A management Virtual Private Network (VPN) can also be used to extend the reach of a management application into a firewall-partitioned network zone. The simplest form of this approach is to install a VPN client on each management application server, and a VPN server in the unreachable network zone, as illustrated in the following figure.

Figure 6. Management VPN

The VPN client within each management application server can be configured to intercept traffic destined for managed devices in the unreachable network zone and relay this traffic via an encrypted link to the VPN server, which relays the traffic to the managed devices. Similarly, the VPN server is configured to route traffic originated by the managed devices to the VPN clients associated with the intended management application servers. A management VPN can significantly reduce the number of firewall rules that need to be configured:

- There is no need to configure rules for multiple ports; only the port used for communications between the VPN client and the VPN server is required.
- There is no need to configure rules for each managed device; by providing a rule for a given VPN server, management applications are able to effectively reach all managed devices that are reachable by that VPN server.

While a management VPN can simplify firewall configuration, the same security concerns remain that were described for the Define Firewall Rules approach, because the VPN allows fundamentally insecure management protocols to pass between the network zones. The VPN server will typically be a general-purpose network device that is not aware of the nature and behavior of management protocols. In the absence of more specific additional configuration, a VPN server will typically pass any network traffic on any port, whether or not that traffic is related to management, which is considerably less secure than defining firewall rules. Some VPN solutions may allow you to configure access control rules limiting the ports that can be
accessed for given managed devices, but at that point you are effectively doing the equivalent work of defining firewall rules without providing any added security benefit.

4. Application-Specific Probes or Agents

Another strategy that management application vendors can use to extend the reach of their application through firewalls is to provide probes or agents that can be deployed in the unreachable network zone, and can communicate back to the centralized management application server using a secure encrypted link, as shown in the following figure.

Figure 7. Application-Specific Probes or Agents

Even though this approach still requires configuration of firewall rules to allow the probes and agents to communicate with the centralized management application servers, the number of rules that need to be configured is significantly reduced (one port per pair of communicating entities, as opposed to multiple ports), and by using a secure encrypted link, the associated security risk is minimized. The problem with this approach is that management application vendors typically develop their own probe or agent that only works with their own application. As a result, if you are using multiple management applications, as most companies do, you will end up having to deploy multiple probes or agents in each network zone, one for each management application that you use, as illustrated in the following figure.
Figure 8. Multiple Application-Specific Probes or Agents

As the number of probes and agents you need to deploy increases, your costs go up, and the complexity of your management environment increases as well. In addition, the probes and agents provided by different management application vendors will provide different levels of security, and you will need to evaluate each one independently in order to ensure that your needs are met. Some may provide a good degree of security, while others may be weak in this area, because it is not their primary area of expertise, or perhaps because their product is relatively new to the market. In a highly secure environment, you may need to perform penetration testing for each different probe or agent, and as the number of probes and agents increases, the cost and time to perform this testing will increase as well.

5. Management Application Replication

Another approach that can be used to manage multiple firewall-partitioned network zones is to install and operate a separate instance of each required management application within each network zone, as illustrated in the following figure.

Figure 9. Management Application Replication

The advantage of this approach is that all network zones are fully managed without needing to open up the firewalls to allow management traffic. The primary disadvantage is that it defeats the ultimate management goal, which is to provide a single unified view of the entire enterprise network.
This approach can also be very expensive, depending on the licensing arrangement that you have with your management application vendors. If you have paid a fixed price for a corporate license, the additional costs may be manageable, but if you are paying on a per-instance basis, this approach can rapidly become cost prohibitive, especially as the number of network zones increases. Even if the software costs are manageable, you will need to deploy and maintain additional servers in each network zone, on which to install and run your management application instances.

If you are deploying management application instances in a high-risk network zone, such as a DMZ, you will want those applications to be hardened for security, which is not a typical management application requirement, and often is not within the application vendor's area of expertise. Otherwise you run the risk that your management application servers may be compromised, either to deny service to or tamper with the management application (e.g. to provide cover for an ongoing attack), or to be used as a vantage point for subsequent attacks on other devices and systems (e.g. if the network infrastructure devices are locked down via access control lists to only allow commands from the management application servers, an attacker who compromises a management application server then has management access to the infrastructure devices).

6. Management Proxy Firewall

A management proxy firewall extends the reach of management applications into a firewall-partitioned network zone with a minimum of firewall rules and with significantly enhanced security. The typical approach for deploying a management proxy firewall is similar to that for a management VPN. A client is installed on each of the management application servers, and one or more proxy firewall appliances are deployed in each unreachable network zone, as illustrated in the following figure.

Figure 10. Management Proxy Firewall

The client within each management application server intercepts traffic destined for managed devices in the unreachable network zone and relays this traffic via an encrypted link to the proxy firewall, which relays the traffic to the managed devices. Similarly, the proxy firewall is configured to route traffic originated by the managed devices to the clients associated with the intended management application servers. The management proxy firewall approach is also comparable to a management VPN in terms of firewall rule reduction:

- There is no need to configure rules for multiple ports; only the port used for communications between the client and the proxy firewall appliance is required.
- There is no need to configure rules for each managed device; by providing a rule for a proxy firewall appliance, management applications are able to effectively reach all managed devices that are reachable by that appliance.

The distinguishing characteristic of a management proxy firewall[1] is that it operates at the application protocol layer and is aware of the ports, message formats, and transaction patterns associated with specific management protocols. A management proxy firewall will only relay traffic that involves the expected participants, that uses the expected ports, and that looks and behaves like valid management traffic, resulting in a significantly reduced attack surface. Conceptually, a management proxy firewall breaks down the traffic passed between management applications and managed devices by port and protocol, and includes specific validation modules for commonly used management protocols, as illustrated in the following figure.

Figure 11. Management Proxy Firewall Conceptual Architecture

In a more practical implementation, the client and the proxy firewall appliance may share the responsibility for protocol validation, so that invalid traffic can be discarded at the point where the traffic is received, as illustrated in Figure 12.

[1] A proxy firewall is sometimes referred to as an application layer firewall (see http://en.wikipedia.org/wiki/application_layer_firewall) because it deals with protocols at the application layer.
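The per-protocol validation modules in Figure 11 can be thought of as a dispatch table keyed by destination port, where each entry applies protocol-specific checks before anything is relayed. The following sketch is a deliberately simplified illustration of that idea, not ZoneRanger's implementation; the port table and the checks themselves are hypothetical and far shallower than a real proxy would apply.

```python
# Conceptual sketch of per-protocol validation in a management proxy firewall.
# Traffic is relayed only if it arrives on an expected port AND the payload
# passes a protocol-specific check; everything else is dropped.  The checks
# here are shallow placeholders for illustration only.

def looks_like_snmp(payload: bytes) -> bool:
    # SNMP messages are BER-encoded SEQUENCEs; a real validator would decode
    # the PDU, check the version, and track the request/response pattern.
    return len(payload) > 2 and payload[0] == 0x30

def looks_like_syslog(payload: bytes) -> bool:
    # Classic syslog messages begin with a "<PRI>" priority field.
    return payload.startswith(b"<") and b">" in payload[:6]

def looks_like_ntp(payload: bytes) -> bool:
    return len(payload) == 48           # standard NTP packet length

VALIDATORS = {
    161: looks_like_snmp,    # SNMP get/set requests
    162: looks_like_snmp,    # SNMP traps
    514: looks_like_syslog,  # syslog
    123: looks_like_ntp,     # NTP
}

def relay_decision(dst_port: int, payload: bytes) -> str:
    validator = VALIDATORS.get(dst_port)
    if validator is None:
        return "drop: unexpected port"
    if not validator(payload):
        return "drop: failed protocol validation"
    return "relay"

print(relay_decision(161, bytes.fromhex("3026020100")))  # relay
print(relay_decision(8080, b"GET / HTTP/1.1"))           # drop: unexpected port
```

The important property is the default: traffic that does not match an expected port and protocol is dropped rather than relayed.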
Figure 12. Management Proxy Firewall Practical Architecture

A management proxy firewall may also serve to isolate the lower layer transport protocols (e.g. IP, TCP, UDP) on the management application side from those on the managed device side, because each transport protocol can effectively be terminated at the local point of contact (i.e. the client or the proxy firewall). For example, when providing a proxy for a TCP-based protocol such as SSH, rather than simply relaying IP datagrams or TCP segments, the TCP/IP transport can be implemented as two TCP connections, one between the management application and the client, and one between the proxy firewall and the managed device. The header content of each datagram and segment is then created by the sending client or proxy firewall. This approach effectively protects against a wide variety of transport layer attacks, including fingerprinting attacks that would typically be used to identify management application server software versions in order to identify known vulnerabilities.

In summary, there are a variety of approaches that can be used to manage devices in firewall-partitioned networks. The Management Proxy Firewall approach is recommended because it provides a best-of-all-worlds solution, extending the reach of centralized management applications throughout the entire enterprise network, simplifying firewall configuration, and providing enhanced security. The remainder of this paper will describe Tavve's ZoneRanger, which has been developed to meet the market need for a security-hardened, commercial management proxy firewall.

ZoneRanger: A Commercial Management Proxy Firewall

Introduction to ZoneRanger

Tavve's ZoneRanger is a commercial management proxy firewall that can extend the reach of management applications throughout firewall-partitioned networks, while minimizing firewall rules
and mitigating associated security threats. ZoneRanger provides proxy and forwarding services for a wide variety of management protocols, including the following:

- ICMP Echo Request (a.k.a. "ping") Proxy
- SNMP Get/Set Request Proxy
- Telnet/SSH Proxy
- HTTP/HTTPS Proxy
- FTP/TFTP Proxy
- SNMP Trap Forwarding
- Syslog Forwarding
- NetFlow/sFlow Forwarding
- TACACS+ Proxy
- RADIUS Proxy
- NTP Proxy

A minimal ZoneRanger installation consists of two components:

1. The Ranger Gateway software, typically installed on the management application server.
2. The ZoneRanger appliance, typically deployed in an unreachable network zone, such as a DMZ.

Figure 13. Minimal ZoneRanger Installation

Note that the Ranger Gateway software acts as the client and the ZoneRanger acts as the proxy firewall, as previously described in the Management Proxy Firewall approach. The Ranger Gateway intercepts traffic destined for managed devices in the unreachable network zone and relays this traffic via an encrypted link to the ZoneRanger, which relays the traffic to the managed devices. Similarly, the ZoneRanger is configured to forward traffic originated by the managed devices to the Ranger Gateways associated with the intended management application servers.

In order to facilitate integration with a wide variety of management applications, the Ranger Gateway and ZoneRanger provide transparent proxy services. That is, management applications are able to use the same device addresses and protocols to communicate with managed devices beyond the firewall that they would use in the absence of the firewall, and no special management
application configuration is required. As a result, the management application can remain completely unaware that a proxy service is being used. As an example, consider the network illustrated in the following figure:

Figure 14. Network Example

Assume that a management application such as HP NNM or CA eHealth is installed on the management application server. In order to perform an SNMP Get Request, the management application will send a normal SNMP Get Request message to the target device address (e.g. 10.1.10.20). The Ranger Gateway will intercept this request, perform validation, and then forward the request in an internal format to a ZoneRanger that is able to communicate with the target device. The ZoneRanger will rebuild the request and send it to the target device. Note that the source address in the rebuilt request will be the IP address of the ZoneRanger (e.g. 10.1.10.6), so that when the device replies, the reply will be routed back to the ZoneRanger. When the ZoneRanger receives the reply, it is validated, matched with a known outstanding request, and relayed back to the requesting Ranger Gateway in an internal format. The Ranger Gateway rebuilds the reply and forwards it to the management application. Note that the source address in the reply will be the IP address of the target device (e.g. 10.1.10.20). The message flow for this example is illustrated in the following figure:

Figure 15. SNMP Proxy Message Flow

In the case of management protocol traffic originated by managed devices, the existence of the proxy is semi-transparent. That is, the managed device can use the same protocols that it would normally use, but must be configured to direct management traffic towards the ZoneRanger, as opposed to sending it to the management application. One advantage of this approach, in the case where the ZoneRanger is deployed in a network zone of low trust, is that the ZoneRanger serves to hide the IP addresses of the management applications from the managed devices. From the perspective of the managed devices, they are being managed by the ZoneRanger.
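The proxy flow in Figure 15 boils down to two pieces of bookkeeping: the request is rebuilt with the proxy's own source address, and the reply is matched to an outstanding request so it can be returned with the device's address as the apparent source. The sketch below illustrates that pattern in simplified form; it is a conceptual illustration only, not Tavve's code, and the management application address (192.168.5.10) and request-id are hypothetical.

```python
# Simplified sketch of transparently proxying an SNMP get-request, following
# the flow in Figure 15.  A real proxy works on live sockets with full
# BER-encoded messages; this only models the address bookkeeping.

OUTSTANDING = {}   # (device_ip, request_id) -> management application address

def gateway_intercept(app_addr, device_ip, request_id, pdu):
    """Ranger Gateway side: intercept the app's request and hand it to the proxy."""
    OUTSTANDING[(device_ip, request_id)] = app_addr
    # Relayed over the encrypted Gateway-to-proxy link in an internal format.
    return {"target": device_ip, "request_id": request_id, "pdu": pdu}

def proxy_send(internal_msg, proxy_ip="10.1.10.6"):
    """Proxy side: rebuild the request using the proxy's own source address."""
    return {"src": proxy_ip, "dst": internal_msg["target"],
            "request_id": internal_msg["request_id"], "pdu": internal_msg["pdu"]}

def gateway_return_reply(device_ip, request_id, reply_pdu):
    """Match the reply to the outstanding request, then restore the device's
    address as the apparent source before handing it to the application."""
    app_addr = OUTSTANDING.pop((device_ip, request_id))
    return {"src": device_ip, "dst": app_addr, "pdu": reply_pdu}

# Example flow: a management application at 192.168.5.10 queries device 10.1.10.20.
msg = gateway_intercept("192.168.5.10", "10.1.10.20", 42, "get sysDescr.0")
wire = proxy_send(msg)                       # on the wire, the source is the proxy
reply = gateway_return_reply("10.1.10.20", 42, "sysDescr.0 = ...")
print(wire["src"], "->", wire["dst"])        # 10.1.10.6 -> 10.1.10.20
print(reply["src"], "->", reply["dst"])      # 10.1.10.20 -> 192.168.5.10
```

The sketch only shows why the management application and the managed device each see the addresses they expect, which is what makes the proxy transparent to both sides.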
As an example, consider the case where a managed device sends a Syslog message to the ZoneRanger. The destination address, in this case, must be the ZoneRanger's IP address (e.g. 10.1.10.6). When the ZoneRanger receives the Syslog message, it will validate the message and then consult its configured forwarding rules to determine where the message should be forwarded. The ZoneRanger will forward the message to any indicated Ranger Gateways, along with a list of addresses (based on matching forwarding rules) to which the message should be forwarded. The Ranger Gateway will forward the message to the specified recipients, using the original managed device's IP address (e.g. 10.1.10.20) as the source address, so that it looks to the receiving management application as if the message had been sent directly from the managed device. The message flow for this example is illustrated in the following figure:

Figure 16. Syslog Forwarding Message Flow

Deploying ZoneRanger

In a typical large enterprise, there will be multiple management applications and multiple DMZs or other firewall-partitioned networks. In order to accommodate this requirement, each ZoneRanger is able to work with multiple Ranger Gateway instances, and each Ranger Gateway instance can work with multiple ZoneRangers, as illustrated in the following figure.

Figure 17. Large Enterprise Configuration
In this configuration, two ZoneRangers are deployed in each firewall-partitioned network for high availability, and a Ranger Gateway instance is installed on each management application server. In general, each Ranger Gateway instance will have an SSL connection to each of the ZoneRangers, enabling the corresponding management application to reach into each of the firewall-partitioned networks where the ZoneRanger pairs have been deployed.

ZoneRangers are frequently deployed in redundant pairs in order to provide high availability, so that if one of the ZoneRangers becomes unavailable, the other ZoneRanger in the pair will be able to proxy the necessary management traffic. The mechanisms provided by ZoneRanger in order to support high availability are illustrated in the following figure.

Figure 18. High Availability Mechanisms

As shown in the figure, the Ranger Gateway is configured with a table indicating which ZoneRangers are able to reach specific managed devices or subnets. The table shown indicates that either ZR-1 or ZR-2 can be used to proxy traffic to devices in the 10.1.1.0/24 subnet. The Ranger Gateway monitors the status of all associated ZoneRangers on an ongoing basis, so that it can divert traffic away from any ZoneRangers that may be unavailable. If, for example, the management application sends an SNMP Get request to 10.1.1.20, the Ranger Gateway will intercept the request, consult its configuration table to identify the list of ZoneRanger candidates (ZR-1 or ZR-2), eliminate any ZoneRangers that are currently unavailable from the list, and then select one of the remaining ZoneRangers to proxy the request.

In cases where the management protocol transaction is initiated by the managed device, such as SNMP trap forwarding, the managed device can be configured to send the trap to the virtual IP address associated with the pool (e.g. 10.1.1.60). Assuming that ZR-1 is currently active with respect to the virtual IP address, the trap will be received by ZR-1, which will forward the trap on to one or more Ranger Gateways based on configured rules. If ZR-1 were to become unavailable, ZR-2 would detect this condition and become the new owner of the virtual IP address. Note that each ZoneRanger also has its own individual IP address. As an alternative to the virtual IP mechanism, a managed device can also be configured to send each trap to two or more ZoneRangers. Each ZoneRanger will forward the trap to one or more Ranger Gateways based on
configured rules, and each Ranger Gateway will detect and remove the duplicates before forwarding traps on to the management application.

A pool of ZoneRangers (e.g. three or more) can be deployed in a firewall-partitioned network zone in order to provide a combination of high capacity and high availability, where the size of the pool is based on the management traffic load. When the load increases, additional ZoneRangers can be added to the pool in order to handle the additional traffic. Each Ranger Gateway instance is responsible for distributing management traffic transactions evenly across each pool of ZoneRangers. When a management application initiates a management protocol transaction, the Ranger Gateway will intercept the initial request and select an available ZoneRanger to proxy the request. The Ranger Gateway keeps track of ZoneRanger status and transaction history, and will attempt to balance the transaction load across the set of available ZoneRangers in the pool. The load balancing function provided by the Ranger Gateway is illustrated in the following figure.

Figure 19. Load Balancing

Note that the load balancing algorithm is based on the same configuration table that is used for high availability. In fact, the ZoneRanger selection algorithm in the Ranger Gateway essentially addresses both high availability and load balancing requirements by seeking to balance the load across the set of available ZoneRangers. The rule of thumb in sizing a ZoneRanger pool is to determine the number of ZoneRangers necessary to handle the anticipated load, assuming that all of them are available, then add one to handle the case where one of the ZoneRangers becomes unavailable. For example, if four ZoneRangers are needed to handle the management protocol transaction load, a pool of five is recommended, so that if any one of the ZoneRangers becomes unavailable, the Ranger Gateway will detect this condition and spread the load across the remaining four.

ZoneRanger Interfaces

Each ZoneRanger appliance has two network interface cards (NICs), and can be configured for single-NIC or dual-NIC operation. In a single-NIC configuration, traffic between the ZoneRanger and the Ranger Gateway and traffic between the ZoneRanger and the managed devices share a single interface, as shown in the following figure.
Figure 20. Single-NIC Configuration

The single-NIC configuration is preferred in situations where the traffic flowing through the firewall is a combination of both management and non-management traffic. The single-NIC configuration allows non-management traffic to be routed directly to the intended devices, bypassing the ZoneRanger, so that the ZoneRanger does not become a bottleneck. In the dual-NIC configuration, traffic between the Ranger Gateway and the ZoneRanger is carried over the first interface, and traffic between the ZoneRanger and the managed devices is carried over the second interface, as illustrated in the following figure.

Figure 21. Dual-NIC Configuration

The dual-NIC configuration works well in cases where there is a separate management network dedicated to management traffic. Note that the dual-NIC configuration also serves to enhance security, by providing an additional layer of isolation between the corporate network and the managed devices.

Deploying Ranger Gateway

The simplest approach for deploying the Ranger Gateway is to install an instance of the Ranger Gateway software on each management application server. Within each server, traffic originated by the management application and destined for managed devices that are located in firewall-partitioned networks will be intercepted by the Ranger Gateway, and relayed to available ZoneRangers that are able to communicate with the intended managed device.
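Selecting an "available ZoneRanger that is able to communicate with the intended managed device" combines the reachability table, availability monitoring, and load balancing described in the previous section. The sketch below illustrates one plausible form of that selection logic; the subnets, ZoneRanger names, in-flight counts, and the least-busy policy are illustrative assumptions, not Tavve's actual algorithm.

```python
# Simplified sketch of ZoneRanger selection in the Ranger Gateway, combining
# the reachability table (Figure 18) with availability checks and load
# balancing (Figure 19).  All data and the least-busy policy are illustrative.
import ipaddress

# Which ZoneRangers can reach which subnets (cf. the table in Figure 18).
REACHABILITY = {
    ipaddress.ip_network("10.1.1.0/24"): ["ZR-1", "ZR-2"],
    ipaddress.ip_network("10.2.0.0/16"): ["ZR-3", "ZR-4", "ZR-5"],
}
AVAILABLE = {"ZR-1": True, "ZR-2": True, "ZR-3": False, "ZR-4": True, "ZR-5": True}
IN_FLIGHT = {"ZR-1": 12, "ZR-2": 7, "ZR-3": 0, "ZR-4": 3, "ZR-5": 9}

def select_zoneranger(device_ip: str) -> str:
    addr = ipaddress.ip_address(device_ip)
    for subnet, candidates in REACHABILITY.items():
        if addr in subnet:
            usable = [zr for zr in candidates if AVAILABLE[zr]]
            if not usable:
                raise RuntimeError("no available ZoneRanger for %s" % device_ip)
            # Balance load: pick the candidate with the fewest in-flight transactions.
            return min(usable, key=lambda zr: IN_FLIGHT[zr])
    raise LookupError("no ZoneRanger configured for %s" % device_ip)

print(select_zoneranger("10.1.1.20"))   # -> ZR-2 (fewer in-flight transactions than ZR-1)
print(select_zoneranger("10.2.5.9"))    # -> ZR-4 (ZR-3 is currently unavailable)
```

The same selection step applies whether the Ranger Gateway software runs on the management application server itself or on a shared server, as described next.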
There may be cases where it is preferable to deploy shared stand-alone Ranger Gateway servers, making use of the Ranger Gateway Virtual Interface (RGVI) mechanism, as illustrated in the following figure.

Figure 22. Shared Ranger Gateway Servers

The RGVI mechanism requires a relatively thin RGVI Client to be installed on each management application server. The RGVI Client intercepts traffic originated by the management application destined for managed devices that reside in a firewall-partitioned network, and forwards intercepted traffic to a Ranger Gateway server, which relays the traffic to available ZoneRangers that are able to communicate with the intended managed device. Responses received by the ZoneRanger are passed back to the Ranger Gateway, which forwards them to the original RGVI Client, which forwards them to the requesting management application. Note that Ranger Gateway servers are typically deployed in pairs in order to provide high availability.

Even though the RGVI mechanism still requires software to be installed on each management application server, it should be noted that the RGVI Client has a much smaller memory and processing footprint than the Ranger Gateway software. As a result, the potential for impact to the management application is minimized. Another advantage of sharing Ranger Gateway servers is the potential to reduce the number of firewall rules that must be defined to allow communication between Ranger Gateways and ZoneRangers, because the overall number of Ranger Gateway instances is reduced. In addition, by consolidating Ranger Gateway configuration and control into a small number of dedicated servers, overall configuration effort is reduced, and access to the Ranger Gateway can easily be restricted to authorized users. The primary disadvantage of sharing dedicated Ranger Gateway servers is the additional cost to purchase and operate the dedicated servers. In some organizations these costs will outweigh the advantages; in others, the reverse may be true.

Conclusions

Firewall-based network partitioning creates a dilemma for centralized management. The preferred strategy to resolve this dilemma is to deploy a management proxy firewall, in order to extend the reach of centralized management applications throughout the entire enterprise network while simplifying firewall configuration and providing enhanced security. Tavve's ZoneRanger appliance provides a mature, commercial-grade, feature-rich management proxy firewall solution, complete with high availability features and a scalable architecture.