Implementing, Deploying and Managing a High Availability Distributed Solution on AXIGEN Mail Server. Copyright 2007 GECAD Technologies S.A.
Implementing, Deploying and Managing a High Availability Distributed Solution on AXIGEN Mail Server

Last Updated on: September 6, 2007

GECAD Technologies, 10A Dimitrie Pompei Blvd., BUCHAREST 2, ROMANIA. Zip code: Tel.: Fax:
Copyright & trademark notices

This article applies to version 3.0 or higher of AXIGEN Mail Server.

Notices

References in this publication to GECAD TECHNOLOGIES S.A. products, programs, or services do not imply that GECAD TECHNOLOGIES S.A. intends to make these available in all countries in which GECAD TECHNOLOGIES S.A. operates. Evaluation and verification of operation in conjunction with other products, except those expressly designated by GECAD TECHNOLOGIES S.A., are the user's responsibility. GECAD TECHNOLOGIES S.A. may have patents or pending patent applications covering subject matter in this document. Supplying this document does not give you any license to these patents. You can send license inquiries, in writing, to the GECAD TECHNOLOGIES S.A. marketing department.

Copyright Acknowledgement

(c) GECAD TECHNOLOGIES S.A. All rights reserved. This document is copyrighted and all rights are reserved by GECAD TECHNOLOGIES S.A. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without prior permission in writing from GECAD TECHNOLOGIES S.A. The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. GECAD TECHNOLOGIES S.A. will not be responsible for any loss, costs or damages incurred due to the use of this documentation. AXIGEN(tm) Mail Server is a SOFTWARE PRODUCT of GECAD TECHNOLOGIES S.A.
GECAD TECHNOLOGIES and AXIGEN(tm) are trademarks of GECAD TECHNOLOGIES S.A. Other company, product or service names may be trademarks or service marks of others.

GECAD TECHNOLOGIES S.A., 10A Dimitrie Pompei Blvd., Connect Business Center, 2nd fl., Bucharest 2, ROMANIA; Phone: ; fax: ; Sales: [email protected] Technical support: [email protected] Website:

(c) Copyright GECAD TECHNOLOGIES S.A. All rights reserved.
Summary:

1 Introduction
  1.1 Overview
  1.2 Intended Audience
  1.3 Definitions, Terms and Abbreviations
2 Benefits
  2.1 Scalability
  2.2 High Availability and Fault Tolerance
3 Solution Architecture
  3.1 Multi-tier
  3.2 Service-level High Availability
  3.3 I/O High Availability
4 Requirements
  4.1 Software
  4.2 Hardware
  4.3 Licenses
5 Setup and Configuration
  5.1 Network Planning
  5.2 DNS Configuration
  5.3 Load Balancer
  5.4 Front-end Tier
  5.5 Backend Tier
6 Provisioning
  6.1 Account distribution policy
  6.2 Creating a New Account
  6.3 Modifying Account Settings
  6.4 Modifying Account Password
  6.5 Deleting an Account
1 Introduction

1.1 Overview

This document describes an implementation of a large-scale messaging solution relying on the AXIGEN Mail Server software. The global architecture of the solution is described, along with implementation details and operation and maintenance procedures.

1.2 Intended Audience

The information in this document is intended for users who are evaluating the benefits of a distributed, high availability solution, as well as for integrators and operational personnel. The components of such a solution, both software and hardware, are also listed in this document, making it possible to assess the overall associated costs.

1.3 Definitions, Terms and Abbreviations

- Vertical scalability: the potential increase in processing capacity of a machine attainable by hardware upgrades;
- Horizontal scalability: the potential increase in processing capacity of a cluster attainable by increasing the number of nodes (machines);
- Stateful services: services that provide access to persistent information (i.e. account configuration and mailbox) over multiple sessions. Typically refers to a service in the backend tier. Ex: IMAP services for an account;
- Stateless services: services that do not store persistent information over multiple sessions. Typically refers to the services in the front-end tier. Ex: IMAP Proxy;
- Front-end tier: subnet, medium security level, provides proxy services;
- Backend tier: subnet, high security level, provides data storage and directory services;
- Front-end node: machine residing in the front-end network tier, providing proxy functionality;
- Backend node: machine residing in the backend network tier, participating in the high-availability cluster.

2 Benefits

2.1 Scalability

Stateful services. Non-distributed solutions, where account information (configuration and messages) is stored on a single machine, allow vertical scalability through hardware upgrades (CPU, RAM, disk). However, due to the limitations of a typical machine (e.g. max 2 CPUs, max 4 GB RAM), an upper limit is eventually reached beyond which one can no longer upgrade a single machine; we shall refer to this as the vertical scalability limit. When the vertical scalability limit is reached, the only solution available is to distribute account information (configuration and mailbox) across more than one machine; we shall refer to this as horizontal scalability. Since the information for one account is atomic and cannot be spread across multiple machines, the solution is to distribute accounts across more than one machine. This way, for a single account, there will be one machine responding to requests (IMAP, POP, SMTP) for that specific account. Thus, when the overall capacity (in terms of active accounts) of the messaging solution is reached, adding one more machine to the solution and making sure new accounts are created on it provides a capacity upgrade, therefore allowing virtually unlimited horizontal scalability. It must be noted that, since each account of the system is serviced by a specific node, a centralized location directory must be available to provide location services. In our case, an LDAP system will store information about which node is able to service requests for a specific account.

Stateless services. Since stateless services do not store information over multiple sessions, two different machines are able to service requests for the same account. This way, horizontal scalability can be achieved by simply adding more machines providing the same service in the exact same configuration. The only remaining requirement is to ensure that requests to a specific service are distributed evenly across the machines providing that service (i.e. if the system contains two machines providing IMAP proxy services, half of the incoming IMAP connections must reach one machine and the rest must reach the other). This functionality is provided by a load balancer, be it hardware (dedicated) or software (a Linux machine running LVS).

2.2 High Availability and Fault Tolerance

Stateful services. For stateful services, requests for one specific account are made to a specific machine. If that machine experiences a fault and can no longer respond to requests, none of the other machines are able to service the account in question. A mechanism is required to ensure that, in the event of a catastrophic failure on one machine, another node takes over the task of servicing requests for that account, thus providing high availability. The RedHat Cluster Suite provides this exact functionality: it ensures that, if one node running a stateful service fails, another node will automatically detect the fault and start the required service in place of the failed node, keeping downtime for that service to a minimum.

Stateless services. In the case of stateless services, since any of the nodes providing the same service is able to respond to requests for any account, the only requirement is that the request distribution mechanism (load balancer) detects when one of the nodes no longer responds to requests and ceases to direct service requests to that node. The total request processing capacity is decreased (the system will respond more slowly, since one node no longer processes requests), but all service requests can still be processed.
3 Solution Architecture

A global description of the architecture of the messaging solution is provided below.
3.1 Multi-tier

The solution uses three tiers to provide the required functionality. The load balancer tier operates at network layer 4 (transport), handling TCP connections, and is completely unaware of account information; it only distributes connections to the nodes in the front-end tier. The front-end tier comprises nodes running proxy services and SMTP routing services. Its task is to ensure that messages and connections are routed to the appropriate node in the backend tier (depending on the account for which a request is being performed). Finally, the backend tier provides access to persistent data (such as account configuration and mailbox data); each node in the backend tier is capable of responding to requests for a set of accounts. No node in the backend tier is capable of servicing requests for an account that is serviced by a different node.

3.2 Service-level High Availability

Depending on the service type (stateful or stateless), high availability is achieved differently.

High Availability for Stateful Services

We shall define, for the backend tier, the following terms: service instance and cluster node. A service instance comprises a running service and storage for a specific set of accounts. A cluster node is a machine; each machine is capable of running one (typically) or more service instances. High availability in this tier is achieved by making sure that, in the event of failure of a physical node, the service instance is automatically started on a different cluster node. Currently, a number of software packages provide this functionality; AXIGEN was tested with the RedHat Cluster Suite. A disadvantage of this high-availability mechanism is the delay induced between the time a service on one node fails (or the node fails completely) and the time the service is restarted on a different node. This delay is caused by the time required for the cluster software (RHCS, in our case) to detect the failure, plus the time required to remount the data partitions on a different node, activate the virtual network interfaces and start the service. During this period (which may vary from 10 seconds to a couple of minutes) the service is not available to users.

High Availability for Stateless Services

In the case of stateless services (services in the front-end tier), the load distribution system also provides high availability. The load balancer automatically distributes requests (at TCP level, network layer 4) to all the nodes in the front-end tier, based on the configured algorithm. If one node in the front-end tier fails, the load balancer will automatically detect the failed node and will no longer distribute connections to it. A major advantage over the stateful high-availability mechanism is that, due to its active-active nature, a node failure causes no service downtime.
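The behavior described above can be sketched in a few lines. The following is an illustrative model only (the node names and connection counts are hypothetical, and no real load balancer works on Python dictionaries); it shows least-connection selection combined with fault detection, as performed by the front-end load balancer:

```python
# Sketch: least-connection scheduling with fault detection.
# Not AXIGEN or LVS code; node names are hypothetical.

class LoadBalancer:
    def __init__(self, nodes):
        # active connection count per healthy front-end node
        self.connections = {node: 0 for node in nodes}

    def mark_failed(self, node):
        # fault detection: stop directing requests to a failed node
        self.connections.pop(node, None)

    def pick_node(self):
        # least-connection: choose the healthy node with the fewest
        # active connections; surviving nodes absorb the full load
        if not self.connections:
            raise RuntimeError("no healthy front-end nodes")
        node = min(self.connections, key=self.connections.get)
        self.connections[node] += 1
        return node

lb = LoadBalancer(["proxy1", "proxy2", "proxy3"])
for _ in range(3):
    lb.pick_node()          # connections spread evenly: 1/1/1
lb.mark_failed("proxy2")    # proxy2 stops answering health checks
survivor = lb.pick_node()   # only proxy1/proxy3 are now eligible
```

Note how a failure reduces capacity (two nodes instead of three) without refusing any request, which is the active-active advantage described above.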
3.3 I/O High Availability

The high availability used at the service level already ensures there is no single point of failure. However, any faulty hardware component in a node renders that node unusable, thus diminishing the total processing power (hence the total transaction capacity) the solution provides. There is a mechanism which ensures that, even in the case of an I/O controller failure, the node can continue to provide the service; it relies on having duplicate I/O controllers on each node and a software failover method (rerouting I/O traffic from the faulty controller to the healthy one). I/O high availability can be used for disk I/O and network I/O fault tolerance, provided that duplicate controllers are available on the nodes. This reduces the occurrence of service downtime in the case of stateful services: without it, an I/O controller failure would require the service to be restarted on a different node.

4 Requirements

4.1 Software

- OS: RedHat Enterprise Linux (ES or AS) version 4, or the corresponding CentOS version. Note: RHEL ES only supports a maximum of 2 CPUs / 16 GB RAM and it is not available for the POWER architecture.
- AXIGEN: AXIGEN 4.x ISP/HSP Edition, or AXIGEN 3.x ISP/HSP Edition.
- Directory: OpenLDAP 2.2.x. Typically, the OpenLDAP package in the Linux distribution should be used.
- Cluster software: RedHat Cluster Suite 4. If a hardware load balancer is not available, the Linux Virtual Server component of the RedHat Cluster Suite can be used for the same purpose.
4.2 Hardware

Load balancer. Any Layer 3-7 compatible hardware load balancer can be used to provide request balancing. Alternatively, the Virtual Server component of the RedHat Cluster Suite can be used for balancing.

Servers:
- Balancer: if a hardware load balancer is not employed, one machine must be available to run Virtual Server (a component of RHCS).
- Front-end: one server must be available for each node in the front-end tier; in order to achieve high availability, at least two nodes must be used. The hardware configuration of the machines and the number of nodes depend on the solution's performance requirements. It is recommended that RAID1 controllers be used in the front-end nodes to ensure fault tolerance for disk I/O.
- Backend: one server must be used for each of the AXIGEN service nodes in the backend and one for the Directory. It is recommended that a standby node also be available; it will be used by the clustering software in the event of failure of one of the active nodes. Each backend node must have one (or two, if disk I/O fault tolerance is required) external SCSI or FibreChannel port(s) (depending on the interfaces of the shared storage) in order to connect to the shared storage. Operating system files (the root partition) must reside on a local disk, using a RAID1 controller and two disks to provide I/O fault tolerance.

Shared storage. The clustering software (RHCS) requires all nodes in the cluster to access the same storage system. A wide variety of storage systems (SAN) accessible via SCSI or FibreChannel is available, and the selection will probably be influenced by a number of factors such as scalability and performance requirements, price and/or preferred vendor. An important issue to consider when making this choice is the limitation of SCSI-attached storage: typically, such solutions provide only two or four SCSI ports, thus limiting the number of nodes that can be used. FibreChannel solutions provide numerous ports, allowing the solution to scale better, at the expense of the overall solution cost.

Fence devices. Fence devices allow a failed node to be isolated from the storage so that, at no time, two nodes may write to the same partition on the shared storage. There are two types of fence devices:
- remote power switches (allow the cluster software to remotely power down/reboot a node that failed);
- I/O barriers (allow the cluster software to block access to the shared storage for a node that failed).

These components are required for the backend tier of the solution; one fence device port is required for each node in the backend tier. Example: if using fence devices with 4 ports in a solution which has 8 nodes in the backend tier, 2 fence devices are required.

4.3 Licenses

- AXIGEN: the AXIGEN ISP/HSP Edition is licensed on a per-mailbox model. Thus, no matter how many nodes running AXIGEN are contained in the solution (in both the backend and front-end tiers), only the total number of mailboxes hosted in the solution affects the license price.
- OS: RedHat Enterprise Linux is licensed on a per-host model, so a license is required for each node in the cluster (backend, front-end and, if a software load balancer is used, the load balancer tier).
- Cluster: the RedHat Cluster Suite is licensed on a per-cluster-node model. Each node in the backend tier requires a separate RHCS license.
5 Setup and Configuration

5.1 Network Planning

The front-end and backend tiers are separated into different subnets and constitute different security zones. The load balancer resides in the same subnet as the front-end tier. In our example, the backend layer uses subnet /24 and the front-end layer uses a valid, internet-routable subnet /24 (the IP class is an example and should not be used in a real scenario). A router must exist to connect the outside network, the front-end network and the backend network, also providing firewall and address translation services. The image below depicts the network topology for this solution.
5.2 DNS Configuration

In our scenario, a DNS service will be configured on the Router/Firewall machine to allow visibility from both the front-end tier and the backend tier. One forward and one reverse lookup zone are required for each network tier (backend and front-end). The Router/Firewall has one network interface in each zone. The zones for the scenario in this document are described below:

Front-end
- DNS zone name: front-end.cluster
- Zone subnet: /24
- Nodes:
  - Router: router.front-end.cluster =
  - Load Balancer Node: loadbalancer.front-end.cluster =
  - Front-end Node 1: proxy1.front-end.cluster =
  - Front-end Node 2: proxy2.front-end.cluster =
  - Front-end Node 3: proxy3.front-end.cluster =
- Virtual (balanced) service addresses (on the load balancer):
  - SMTP: smtp.front-end.cluster =
  - POP3: pop3.front-end.cluster =
  - IMAP: imap.front-end.cluster =

Backend
- DNS zone name: backend.cluster
- Zone subnet: /24
- Nodes:
  - Router: router.backend.cluster =
  - LDAP: ldapnode.backend.cluster =
  - Axigen node 1: axigenode1.backend.cluster =
  - Axigen node 2: axigenode2.backend.cluster =
- Clustered service addresses:
  - LDAP: ldap.backend.cluster =
  - AXIGEN1: axigen1.backend.cluster =
  - AXIGEN2: axigen2.backend.cluster =

All the machines in the solution must use the same DNS server, to avoid confusion.

5.3 Load Balancer

The load balancer resides in the front-end tier subnet and provides the following functionality: it accepts TCP connections on service ports (SMTP 25, IMAP 143, POP3 110, etc.) and redirects them, based on a scheduling algorithm, to a front-end node. The following service routing policy should be implemented:

Service IMAP
- Virtual server:
- Virtual port: 143
- Real servers: :143, :143, :143
- Scheduling algorithm: least-connection
- Fault detection: enabled
Service POP3
- Virtual server:
- Virtual port: 110
- Real servers: :110, :110, :110
- Scheduling algorithm: least-connection
- Fault detection: enabled

Service SMTP
- Virtual server:
- Virtual port: 25
- Real servers: :25, :25, :25
- Scheduling algorithm: least-connection
- Fault detection: enabled

If all the nodes in the front-end layer have identical hardware performance, the least-connection scheduling algorithm will suffice. If, however, the hardware differs, a weighted least-connection scheduling algorithm must be used to ensure a uniform load on the front-end nodes. The figure below depicts the functionality of the load balancer.

Depending on the load balancer that is used (either hardware or software), a dual active-active load balancer setup may be used to ensure no single point of failure at this tier. Please consult the specific load balancer documentation for details on how to implement such a setup.
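Where the Virtual Server (LVS) component of RHCS is used as the balancer, the policy above can be expressed with `ipvsadm` rules along the following lines. This is a sketch only: the addresses are placeholders from the 192.0.2.0/24 documentation range (the example addresses are not preserved in this document), `-s lc` selects the least-connection scheduler (`wlc` for the weighted variant), and NAT forwarding (`-m`) is assumed for illustration:

```shell
# Hypothetical addresses: VIP=192.0.2.10, real servers 192.0.2.11-13
VIP=192.0.2.10
for PORT in 25 110 143; do
    # one virtual service per protocol (SMTP, POP3, IMAP)
    ipvsadm -A -t $VIP:$PORT -s lc
    for RS in 192.0.2.11 192.0.2.12 192.0.2.13; do
        # forward connections to each front-end node
        ipvsadm -a -t $VIP:$PORT -r $RS:$PORT -m
    done
done
```

Fault detection (removing a dead real server from the table) is handled by a monitoring component on top of LVS; plain `ipvsadm` rules alone do not health-check the nodes.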
5.4 Front-end Tier

5.4.1 Generic Node Configuration

(1) Operating System

The operating system on the front-end tier nodes requires no special configuration other than the specific network settings, according to the planned network topology. In our example, each node has one network interface connected to the router in the front-end tier subnet, configured with its specific IP, network mask and gateway.

Additional configuration, kernel parameters:
- Disable routing: net.ipv4.ip_forward = 0
- Increase the maximum number of open descriptors: fs.file-max (this also covers open TCP sockets; consider that each service connection may consume up to 3 file descriptors: one for front-end node to client, one for front-end node to backend node and one for the LDAP query)
- Tune the virtual memory manager: vm.bdflush (configure depending on the specific scenario)
- Tune the kernel swapping algorithm: vm.kswapd (configure depending on the specific scenario)
- Increase the range of local TCP ports: net.ipv4.ip_local_port_range (required so that the front-end node is able to handle many simultaneous TCP connections)
- Configure the TCP connection backlog: net.ipv4.tcp_max_syn_backlog (according to the required setup: a larger backlog consumes more memory but allows a queue for new requests at peak times; a smaller backlog is more economical, but connections may be refused in peak periods)
- Reduce the TCP keep-alive timeout: net.ipv4.tcp_keepalive_time = 600

DNS Resolver:
- Configure the /etc/resolv.conf file according to your specific network settings
- In our scenario, the /etc/resolv.conf file contains:
  search backend.cluster
  nameserver

(2) AXIGEN Configuration

The AXIGEN package must be installed according to the instruction manual. After installation, the following configuration must be performed:

Enabled services: IMAP-Proxy, POP3-Proxy, SMTPIn, SMTPOut, DNR. Other services may also be enabled if required.
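The kernel parameters from step (1) above are typically collected in /etc/sysctl.conf and applied with `sysctl -p`. A sketch follows; the numeric values are illustrative assumptions only (the original figures are not preserved in this document) and must be sized for the actual deployment:

```
# /etc/sysctl.conf fragment (illustrative values, tune per deployment)
net.ipv4.ip_forward = 0
fs.file-max = 262144
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_time = 600
```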
Listeners: IMAP-Proxy, POP3-Proxy and SMTPIn must each be configured with one listener, on the IP address allocated to the front-end node.

LDAP Connector: one LDAP connector is required for both the proxies and SMTP routing. LDAP connector parameters:
- Name: LDAP_Master
- URL: the specific LDAP URL (in our example, ldap:// )
- BindDN: the LDAP administrator's DN (in our example: cn=admin,dc=example,dc=com)
- BindPassword: the LDAP administrator's password (as defined in the LDAP configuration file)
- Search Base: the LDAP base of the user entries (in our example: dc=example,dc=com)
- Search pattern: the LDAP search string used to locate, based on the request, the entry in LDAP (in our example: (&(uid=%e)(objecttype=inetlocalmailrecipient)))
- Password field: not used, since we shall rely on Bind authentication
- Hostname field: the LDAP attribute holding the hostname of the backend node where the account resides (in our case: mailhost)
- Use first returned field: set to yes to allow routing even if duplicate entries exist in LDAP (in our case: no, in order to be able to detect duplicates; duplicated accounts will not be able to log in)

The values for the LDAP connector settings must match the LDAP schema used in the specific scenario.

UserMap: one UserMap is required to allow the proxies and the SMTP router to locate the home backend for each account. UserMap parameters:
- Name: LDAP_Master_UserMap
- Type: LDAP
- LocalFile: not used, since this is an LDAP map, not a local one
- userdbconnectortype: LDAPBind
- userdbconnectorname: LDAP_Master

Domain Name Resolver: add a nameserver entry:
- Priority: 5
- Address: (if the DNS is located on the router, as in our example scenario)
- Timeout: 2
- No. of retries: 3
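The lookup that the LDAP connector and UserMap perform can be pictured as follows. This is an illustrative sketch, not AXIGEN code: a dictionary stands in for the LDAP directory, the entries are hypothetical, and the fallback mirrors the mappinghost/mappingport defaults described in the next section:

```python
# Sketch: how a front-end proxy resolves the home backend for an
# account. The directory search stands in for (&(uid=%e)(...)), and
# "mailhost" is the attribute configured as the Hostname field.
# All directory content below is hypothetical.

DIRECTORY = {
    "jdoe":   {"mailhost": "axigen1.backend.cluster"},
    "asmith": {"mailhost": "axigen2.backend.cluster"},
}

# mappinghost / mappingport defaults for users not found in LDAP
DEFAULT_BACKEND = ("axigen1.backend.cluster", 143)

def locate_backend(uid, port=143):
    entry = DIRECTORY.get(uid)
    if entry is None:
        # user not found in LDAP: use the default mapping
        return DEFAULT_BACKEND
    return (entry["mailhost"], port)
```

For example, `locate_backend("asmith")` routes the session to that account's home backend, while an unknown user falls back to the default backend.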
5.4.2 IMAP and POP Proxies

Configure the proxies to route connections via a usermap, both for IMAP and POP3. In the Mapping Data section of the proxies:
- usermap: the name of the usermap defined in the step above (in our case, LDAP_Master_UserMap)
- mappinghost: the default backend node hostname to use if the user is not found in LDAP
- mappingport: the default backend node port to use if the user is not found in LDAP

Configure authentication via LDAP. For both the IMAP and POP3 proxies, set:
- userdbconnectortype: ldapbind
- userdbconnectorname: LDAP_Master
- authenticateonproxy: yes

5.4.3 SMTP Routing

Enable routing:
- In the SMTPIn service general context, set enablesmtprouting to yes
- In the Mapping Data section of the SMTPIn service:
  - UserMap: LDAP_Master_UserMap
  - mappinghost: the default backend node hostname to use if the user is not found in LDAP
  - mappingport: the default backend node port to use if the user is not found in LDAP

Configure authentication via LDAP. In the SMTPIn service general context:
- userdbconnectortype: ldapbind
- userdbconnectorname: LDAP_Master

5.5 Backend Tier

5.5.1 Configuring the Storage

The external storage (SAN) must be connected to all the backend nodes via SCSI cables or FibreChannel. The next step is to configure the virtual disks (LUNs) on the storage that will be accessible from the backend nodes. Depending on the storage or the desired result, the following storage configurations can be used:

One virtual disk for each service instance (each AXIGEN service instance and the LDAP service). In this case, on each virtual disk, partitions will be created for each data folder required by each service. In our example, let's assume the disks are accessible from the backend nodes with the device names /dev/sda for LDAP, /dev/sdb for AXIGEN instance 1 and /dev/sdc for AXIGEN instance 2. The following partitions must be created (using fdisk on any of the backend nodes):
- LDAP:
  - /dev/sda1 LDAP data partition (will be mounted in /var/lib/ldap)
- AXIGEN instance 1:
  - /dev/sdb1 AXIGEN1 Storage (will be mounted in /var/opt/axigen1/domains)
  - /dev/sdb2 AXIGEN1 Queue (will be mounted in /var/opt/axigen1/queue)
  - /dev/sdb3 AXIGEN1 RunDir (will be mounted in /var/opt/axigen1/run)
- AXIGEN instance 2:
  - /dev/sdc1 AXIGEN2 Storage (will be mounted in /var/opt/axigen2/domains)
  - /dev/sdc2 AXIGEN2 Queue (will be mounted in /var/opt/axigen2/queue)
  - /dev/sdc3 AXIGEN2 RunDir (will be mounted in /var/opt/axigen2/run)

One virtual disk for all services. In this case, partitions are required for each service and for each data folder. In our example, let's assume that the single virtual disk on the storage is available on the backend nodes as device /dev/sda:
- LDAP:
  - /dev/sda1 LDAP data partition (will be mounted in /var/lib/ldap)
- AXIGEN instance 1:
  - /dev/sda2 AXIGEN1 Storage (will be mounted in /var/opt/axigen1/domains)
  - /dev/sda3 AXIGEN1 Queue (will be mounted in /var/opt/axigen1/queue)
  - /dev/sda4 AXIGEN1 RunDir (will be mounted in /var/opt/axigen1/run)
- AXIGEN instance 2:
  - /dev/sda5 AXIGEN2 Storage (will be mounted in /var/opt/axigen2/domains)
  - /dev/sda6 AXIGEN2 Queue (will be mounted in /var/opt/axigen2/queue)
  - /dev/sda7 AXIGEN2 RunDir (will be mounted in /var/opt/axigen2/run)

One virtual disk for each data folder required by the services. This is not a recommended scenario, since it adds a lot of overhead to subsequent administration.

The first scenario is typically the best solution (and the one used in this document), because it isolates each service on a storage virtual disk and, if the storage supports it, may allow selective availability of the disks on particular nodes; this feature can be used if some services will only be allowed to run on specific nodes (for instance, LDAP will run only on its home backend node and on the hot-standby node).
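For the first (recommended) layout, preparing the AXIGEN1 disk might look like the sketch below, run once from a single backend node. The partition sizes, the use of parted instead of interactive fdisk, and the ext3 file system are all assumptions for illustration, not taken from the document:

```shell
# Partition /dev/sdb for AXIGEN instance 1 (sizes are placeholders)
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 80%     # sdb1: Storage (domains)
parted -s /dev/sdb mkpart primary 80% 95%    # sdb2: Queue
parted -s /dev/sdb mkpart primary 95% 100%   # sdb3: RunDir
# Create file systems; the cluster software mounts them when it
# starts the service, so do not add them to /etc/fstab
mkfs.ext3 /dev/sdb1
mkfs.ext3 /dev/sdb2
mkfs.ext3 /dev/sdb3
```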
5.5.2 Operating System Configuration

(1) Kernel Configuration

The same tuning must be performed on the backend tier nodes as on the front-end tier nodes.

(2) DNS Resolver

Configure the system DNS resolver to use the DNS server that contains the appropriate zones; in our case, the DNS is on the router. In our scenario, the /etc/resolv.conf file contains:
search backend.cluster
nameserver

5.5.3 Configuring the Network

Each node in the backend tier must be configured according to the appropriate network setup. In our scenario:
- LDAP:
- AXIGEN node 1:
- AXIGEN node 2:
The gateway for all nodes is the backend tier interface of the router.

5.5.4 Setting-up the Cluster Software

(1) Install the RHCS packages

Install the RedHat Cluster packages according to the RHCS user's manual.

(2) Configure the cluster

Use the system-config-cluster tool to create a new cluster configuration with the following parameters (the information below is based on the example scenario defined in this document):

Cluster settings:
- Locking system: GULM

Nodes:
- In our setup, a physical node exists for each service instance, plus one for LDAP and another one as hot standby.

Shared resources. The AXIGEN service script must be defined as a shared resource:
- Type: Script
- Name: AXIGEN Script
- Path: /etc/rc.d/init.d/axigen

Services. One service must be created for each AXIGEN backend service instance and one for the LDAP service (in our example, AXIGEN1, AXIGEN2 and LDAP):
- AXIGEN service instances
  - Service parameters:
    - Service name: AXIGEN<instance_number>. The instance number may be 1, 2, and so on; in our case, AXIGEN1 and AXIGEN2.
  - Service resources:
    - Service IP address: the virtual IP of the service instance
    - Service file systems (associated with the partitions on the shared storage):
      - Storage partition: mounted in /var/opt/<service_name>/domains
      - Queue partition: mounted in /var/opt/<service_name>/queue
    - AXIGEN service script (shared resource)
- LDAP service
  - Service parameters:
    - Service name: LDAP
  - Service resources:
    - Service IP address: the virtual IP of the service instance
    - Service file systems (associated with the partitions on the shared storage):
      - LDAP data partition: mounted in /var/lib/ldap
    - LDAP service script: /etc/rc.d/init.d/ldap

After configuring the cluster, copy the /etc/cluster/cluster.conf file to all the backend nodes, preserving the permissions.

(3) Start the cluster services

On all the backend nodes, start the cluster services according to the RHCS user manual.

5.5.5 Setting-up the LDAP Directory

(1) Install the LDAP packages

The following packages must be installed on the backend nodes that will be used to run the LDAP service:
- openldap
- openldap-clients
- openldap-servers

(2) Configure the LDAP service

The objectclass that will be used to identify user accounts in LDAP is inetlocalmailrecipient. The default LDAP configuration does not include the schema for this objectclass; it has to be explicitly included. Add the following line to the slapd.conf LDAP configuration file:

include /etc/openldap/schema/misc.schema

Configure the LDAP base, the administrative DN and the administrative DN's password. In our example, the base is dc=example,dc=com and the administrative DN is cn=admin,dc=example,dc=com:
suffix dc=example,dc=com
rootdn cn=admin,dc=example,dc=com
rootpw secret

This example uses a simple plaintext password for the administrative DN; it is recommended to:
- use a more complex password;
- define it encrypted in the configuration file (use the slappasswd utility).

Copy the LDAP configuration file to all the backend nodes where the LDAP service will be allowed to run (in our case, all the backend nodes).

5.5.6 Setting-up AXIGEN

(1) Install the AXIGEN package

Install the appropriate AXIGEN package for the platform (the AXIGEN package for RPM-based distributions built with GCC 3) on all the backend nodes that will be used to run AXIGEN service instances. Depending on the version of the distribution, the compat-libstdc++-33 package may also be required prior to installing the AXIGEN package. After installation, disable the automatic AXIGEN startup by running:

chkconfig --level 35 axigen off

Do not run the AXIGEN install wizard: it performs unnecessary tasks and also attempts to start AXIGEN, neither of which is needed for this type of setup. Because each service instance may float between nodes (the cluster software will relocate the service to a different node in the event of a failure), instance-specific files (such as the storage, queue, run directory, configuration and pidfile) must reside on partitions of the shared storage.

(2) Configure AXIGEN

For each AXIGEN service instance, temporarily mount the RunDir service partition and create a copy of the default configuration file on it. Modify the following parameter:

queuepath = /var/opt/axigen<instance>/run

Replace <instance> with the actual service instance number (i.e. /var/opt/axigen1/run). Each AXIGEN service instance will now use its queue directory on the shared storage.
Change the ownership of the mounted RunDir directory to user axigen, group axigen:

chown axigen.axigen <rundir_partition_mountpoint>

Remember to unmount the RunDir partition; it will be automatically mounted by the cluster software when a service is started.

For each AXIGEN service instance, temporarily mount the Queue and Storage service partitions (for instance, on one backend node, in /var/opt/<service_name>/queue and /var/opt/<service_name>/domains respectively) and change their ownership to user axigen, group axigen:
chown axigen.axigen <queue_partition_mountpoint>
chown axigen.axigen <storage_partition_mountpoint>

Remember to unmount both partitions after the ownership change; they will be automatically mounted by the cluster software when a service is started.

For each backend node that will run AXIGEN, modify the /etc/sysconfig/axigen file, changing the following parameters:

PIDFILE = /var/opt/$OCF_RESKEY_service_name/run/axigen.pid
AXIOPT= -C /var/opt/$OCF_RESKEY_service_name/run/axigen.cfg -W /var/opt/$OCF_RESKEY_service_name/

The $OCF_RESKEY_service_name environment variable will be filled in by the cluster software with the actual name of the service (AXIGEN1, AXIGEN2, etc.).

Starting the Services

Use the clusvcadm utility to start the service instances on the specific backend nodes:

clusvcadm -e LDAP -m backend_ldap_node
clusvcadm -e AXIGEN1 -m backend_axigen1_node
clusvcadm -e AXIGEN2 -m backend_axigen2_node

Configuring the AXIGEN Service Instances

(1) Listeners

On each AXIGEN service instance, the SMTP, POP3 and IMAP services must be configured to bind to the specific service instance IP. In our case, the configuration for each service instance is the following:

AXIGEN1
  o SMTP: one listener, on port 25
  o POP3: one listener, on port 110
  o IMAP: one listener, on port 143
  o WebAdmin: one listener, on port 9000
  o CLI: one listener, on port 7000
  o FTP Backup: one listener, on port 21
AXIGEN2
  o SMTP: one listener, on port 25
  o POP3: one listener, on port 110
  o IMAP: one listener, on port 143
  o WebAdmin: one listener, on port 9000
  o CLI: one listener, on port 7000
  o FTP Backup: one listener, on port 21

(2) Domain Name Resolver

On all service instances, add the following nameserver entry (in the DNR configuration):
  Priority: 5
  Address: (if the DNS is located on the router, as in our example scenario)
  Timeout: 2
  No. of retries: 3

(3) LDAP authentication

Define the LDAP Connector
  o In the UserDB section, create an LDAP Connector with the following parameters:
    Name: LDAP_Master
    URL: the specific LDAP URL (in our example, ldap:// )
    BindDN: the LDAP Administrator's DN (in our example: cn=admin,dc=example,dc=com)
    BindPassword: the LDAP Administrator's password (as defined in the LDAP configuration file)
    Search Base: the LDAP base of the user entries (in our example: dc=example,dc=com)
    Search pattern: the LDAP search string used to locate, based on the request, the entry in LDAP (in our example: (&(uid=%e)(objectclass=inetlocalmailrecipient)))
    Password field: will not be used, since we rely on Bind authentication
    Hostname field: the LDAP attribute holding the hostname of the backend node where the account resides (in our case: mailhost)
    Use first returned field: set to yes to allow routing even if duplicate entries exist in LDAP (in our case: no, so that duplicates can be detected; duplicated accounts will not be able to log in)

Enable LDAP authentication for POP3 and IMAP
  o Configure, for both the POP3 and IMAP services:
    userdbconnectortype: ldapbind
    userdbconnectorname: LDAP_Master

(4) Logging

Ideally, a separate log server should be used, and all AXIGEN services should send log entries through the log service, via the network. In our example, we will log locally (on each backend node), making sure that the log file names are unique for each AXIGEN service instance.

Make sure that, for each instance, the log file names contain the AXIGEN service instance name (so that they will not mix). The default log files are:
  o everything.txt: rename it to everything_<service instance name>.txt (i.e. everything_axigen1.txt and everything_axigen2.txt)
  o default.txt: rename it to default_<service instance name>.txt (i.e. default_axigen1.txt and default_axigen2.txt)
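The ldapbind authentication configured in step (3) above works in two phases: the frontend first searches the directory using the configured pattern (the login address replacing %e), then attempts an LDAP Bind as the DN of the entry it found, and finally reads the routing attribute (mailhost in our case). The following is a minimal sketch of that logic against an in-memory stand-in for the directory; all data and function names are illustrative, and a real deployment would perform these operations over an LDAP connection:

```python
# Toy directory: DN -> attributes, standing in for the LDAP server.
DIRECTORY = {
    "uid=user1@example.com,dc=example,dc=com": {
        "uid": "user1@example.com",
        "mailhost": "axigen1.backend.cluster",
        "userpassword": "thepassword",
    },
}

def search_by_uid(email):
    """Phase 1: locate the entry matched by (&(uid=%e)(...)) after %e expansion."""
    for dn, attrs in DIRECTORY.items():
        if attrs.get("uid") == email:
            return dn, attrs
    return None, None

def ldap_bind_auth(email, password):
    """Phase 2: a Bind as the located DN succeeds only with the correct password.

    Returns the mailhost attribute used for routing, or None if either the
    search or the bind fails.
    """
    dn, attrs = search_by_uid(email)
    if dn is None or attrs["userpassword"] != password:
        return None
    return attrs["mailhost"]

print(ldap_bind_auth("user1@example.com", "thepassword"))  # axigen1.backend.cluster
print(ldap_bind_auth("user1@example.com", "wrong"))        # None
```

Note that with "Use first returned field" set to no, a duplicate uid in the directory would make phase 1 ambiguous and authentication would be refused, which is exactly the behavior described above.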
6 Provisioning

This section describes the method to create, update and delete accounts for the solution presented in this document.

Account information is located in two different places:
  LDAP
    o Authentication information (i.e. the password)
    o Routing information (on what backend machine the account is located)
  AXIGEN (backend nodes)
    o Account settings
    o Account mailbox

Provisioning is typically performed through a provisioning utility that is implemented based on specific requirements, and which may also act as an interface between the mail solution and an external application (such as a user database or a billing system).

6.1 Account distribution policy

When creating a new account, one backend AXIGEN service instance must be selected. The provisioning utility must be implemented to select the backend service instance based on one of the following algorithms:
  Random
    o Each new account is created on one of the backend service instances, picked randomly;
    o As an enhancement, a weighted-random distribution algorithm may be used to allow creating more accounts on some of the backend service instances than on others.
  Least used
    o The provisioning interface must be aware of the number of accounts that exist on each service instance, so that, each time a new account is created, the backend service instance with the fewest accounts is used.
  Domain based
    o Each domain is placed on one of the backend service instances; the provisioning interface must have a domain/backend service instance table configured in order to be able to select a specific backend service instance when creating a new account;
    o Each domain will have a home backend service instance.

The first and second distribution algorithms have the advantage of a better spread of the accounts across the backend service instances.
The disadvantage is that each domain must be created on all the backend service instances, and that the domain-wide settings of each domain must be kept in sync on all the backend service instances.

The third distribution algorithm simplifies the management of the accounts (the domain is only created on the specific backend service instance that will host it; changes to the domain configuration are performed only on the domain-home backend service instance); moreover, routing can be performed with a much simpler LDAP configuration (i.e. one entry per domain instead of one entry per account).
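The three selection policies discussed in this section can be sketched as small functions. The following is a hypothetical Python illustration; the instance names and account counts are invented for the example:

```python
import random

def pick_random(instances, weights=None):
    """Random (optionally weighted) distribution across service instances."""
    return random.choices(instances, weights=weights, k=1)[0]

def pick_least_used(account_counts):
    """Least used: choose the instance currently holding the fewest accounts."""
    return min(account_counts, key=account_counts.get)

def pick_by_domain(domain, domain_table):
    """Domain based: every domain has a fixed home instance."""
    return domain_table[domain]

# Illustrative data for the two instances of our example setup
counts = {"AXIGEN1": 1200, "AXIGEN2": 800}
homes = {"example.com": "AXIGEN1"}
print(pick_least_used(counts))               # AXIGEN2 (fewest accounts)
print(pick_by_domain("example.com", homes))  # AXIGEN1
```

The weighted-random enhancement mentioned above corresponds to passing weights, e.g. pick_random(["AXIGEN1", "AXIGEN2"], weights=[1, 3]) to favor the second instance.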
6.2 Creating a New Account

Creating the Account Mailbox and Settings

Each new account must be created, along with its settings, on its designated backend node (see the above section). The simplest way to implement the account creation on the AXIGEN backend nodes in the provisioning interface is by connecting to the AXIGEN CLI. The account must be created in the correct domain on the AXIGEN backend node.

Creating the LDAP Entry

An LDAP entry for an account must have the following attributes:
  objectclass
    o The main object class of each entry is account. This provides the structural object class required for each entry, as well as the uid attribute.
    o We shall use the inetlocalmailrecipient object class for accounts in the solution.
    o In order to allow authentication in LDAP, the userpassword attribute is required. To allow this attribute, the simplesecurityobject object class must also be used for each entry.
  maillocaladdress
    o This attribute must be used to uniquely identify the account. It must contain the fully qualified address of the account.
  mailhost
    o The hostname of the backend service instance that holds the account.
  userpassword
    o The password of the account, used for authentication both on the frontend proxies and on the backend AXIGEN service instances.
  uid
    o The account's unique identifier; it may be used by the provisioning interface to identify the account in the external account database. In our case, since no specific external account database exists, we shall use the address as the unique identifier.
    o This attribute will also be used as the DN, since it is unique for each account.

Example: user user1 in domain example.com is located on service instance axigen1. The LDAP entry is the following:

dn: uid=user1@example.com,dc=example,dc=com
objectclass: account
objectclass: inetlocalmailrecipient
objectclass: simplesecurityobject
uid: user1@example.com
maillocaladdress: user1@example.com
mailhost: axigen1.backend.cluster
userpassword: thepassword
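A provisioning utility would typically generate an LDIF record like the example above from the account's parameters. The following is a minimal Python sketch; the function name and the fixed dc=example,dc=com base are assumptions matching our example:

```python
def account_ldif(email, mailhost, password):
    """Build the LDIF entry for a new account, using uid=<address> as the DN.

    The dc=example,dc=com base and the object classes follow the example
    configuration described in this document.
    """
    return "\n".join([
        f"dn: uid={email},dc=example,dc=com",
        "objectclass: account",
        "objectclass: inetlocalmailrecipient",
        "objectclass: simplesecurityobject",
        f"uid: {email}",
        f"maillocaladdress: {email}",
        f"mailhost: {mailhost}",
        f"userpassword: {password}",
    ])

print(account_ldif("user1@example.com", "axigen1.backend.cluster", "thepassword"))
```

The resulting record can then be fed to the ldapadd client (or submitted through an LDAP API) as part of the account-creation workflow.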
6.3 Modifying Account Settings

It may be required for the provisioning interface to be able to change account settings, for instance the account quota. For this purpose, the provisioning interface should use the AXIGEN CLI to perform the required changes. No LDAP changes are required for this operation.

6.4 Modifying Account Password

If an account's password needs to be changed, the LDAP entry's userpassword attribute must be modified by the provisioning interface (or by other means).

6.5 Deleting an Account

If the provisioning interface also handles account deletion, it must delete the account both from the appropriate service instance and from the LDAP directory.
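For the password change in section 6.4, the provisioning interface could apply a standard LDAP modify operation. For illustration, an LDIF change record following our example account (the new password value is, of course, an assumption) might look like this:

```ldif
dn: uid=user1@example.com,dc=example,dc=com
changetype: modify
replace: userpassword
userpassword: thenewpassword
```

Such a record would be pushed with the ldapmodify client or an equivalent LDAP API call, authenticated as the administrative DN.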
Load Balancing Sophos Web Gateway Deployment Guide rev. 1.0.9 Copyright 2002 2015 Loadbalancer.org, Inc. 1 Table of Contents About this Guide...3 Loadbalancer.org Appliances Supported...3 Loadbalancer.org
9236245 Issue 2EN. Nokia and Nokia Connecting People are registered trademarks of Nokia Corporation
9236245 Issue 2EN Nokia and Nokia Connecting People are registered trademarks of Nokia Corporation Nokia 9300 Configuring connection settings Legal Notice Copyright Nokia 2005. All rights reserved. Reproduction,
Load Balancing Microsoft Terminal Services. Deployment Guide
Load Balancing Microsoft Terminal Services Deployment Guide rev. 1.5.7 Copyright 2002 2016 Loadbalancer.org, Inc. 1 Table of Contents About this Guide... 4 Loadbalancer.org Appliances Supported... 4 Loadbalancer.org
BorderWare Firewall Server 7.1. Release Notes
BorderWare Firewall Server 7.1 Release Notes BorderWare Technologies is pleased to announce the release of version 7.1 of the BorderWare Firewall Server. This release includes following new features and
Integrating the F5 BigIP with Blackboard
Integrating the F5 BigIP with Blackboard Nick McClure [email protected] Lead Systems Programmer University of Kentucky Created August 1, 2006 Last Updated June 17, 2008 Integrating the F5 BigIP with Blackboard
