The MONSOON Generic Pixel Server Software Design
The MONSOON Generic Pixel Server Software Design

Nick C. Buchholz*, Philip N. Daly
National Optical Astronomy Observatory, Major Instrumentation Group, 950 N. Cherry Ave., Tucson, AZ, USA

ABSTRACT

MONSOON is the next-generation OUV-IR controller development project being conducted at NOAO. MONSOON was designed from the start as an architecture that provides the flexibility to handle multiple detector types, rather than as a set of specific hardware to control a particular detector. The hardware design was done with maintainability and scalability as key factors. We have, wherever possible, chosen commercial off-the-shelf components rather than use in-house proprietary systems. From first principles, the software design had to be configurable in order to handle many detector types and focal plane configurations. The MONSOON software is multi-layered, with simulation of the hardware built in. By keeping the details of hardware interfaces confined to only two libraries, and by strict conformance to a set of interface control documents, the MONSOON software is usable with other hardware systems with minimal change. In addition, the design provides that focal-plane-specific details are confined to routines that are selected at load time. At the top level, the MONSOON Supervisor Level (MSL), we use the GPX dictionary, a defined interface to the software system that instruments and high-level software can use to control and query the system. Below this are PAN-DHE pairs that interface directly with portions of the focal plane. The number of PAN-DHE pairs can be scaled up to increase channel counts and processing speed to handle larger focal planes. The range of detector applications supported goes from single-detector lab systems, through four-detector IR systems like NEWFIRM, up to 500-detector focal planes like LSST. In this paper we discuss the design of the PAN software and its interaction with the detector head electronics.
1. INTRODUCTION

In the past 18 years the authors have written, modified or used more than 12 separate and distinct hardware/software systems designed to control focal plane detectors and capture the data generated by them. When the limitations of current controllers forced NOAO to start a new controller development project, the software design began along with the hardware design. The MONSOON controller was, from its inception, designed to allow the control of any astronomical focal plane, such as IR arrays, CCDs, orthogonal transfer arrays (OTAs), etc. It was designed to be extensible so that the same set of electronics and software could be replicated to control massive homogeneous focal planes. A key feature of the MONSOON system design was that a single software system would be used to handle the various focal plane arrangements and detector types that had been proposed.

The MONSOON software interface is based on the Generic Pixel Server (GPX) interface [1,2]. This interface was first discussed at the November ACCORD conference in Santa Cruz. The interface describes the interaction between an image acquisition system, or pixel server, and external systems. The interface will be used for all new instruments, focal planes and/or detector types under consideration for development at NOAO. A proposal for the Generic Pixel Server interface definition was presented at the February 2002 AURA Software Conference held at STScI in Baltimore, MD. Further development of that interface as NOAO ICD 4.0 was done in the spring of 2002, and a complete GPX interface, including a software library which implements the interface commands, has been developed in conjunction with the MONSOON project software effort. With very few exceptions, the original interface definition presented in 2002 has held up to the rigors of actually writing code to control detectors and acquire data from IR and OUV detectors.
The MONSOON software design was guided by a set of requirements [3,4] designed to minimize the impact of hardware changes and maximize the flexibility and reusability of the software system. First, the software system needed to be configurable at runtime. This would allow the system to handle the details of each of the current and projected detectors and focal plane configurations. Second, the details of the underlying hardware and operating system should be confined to a small set of routines so that moving to new hardware would be as painless as possible. This included a decision to use operating system constructs that were common to a wide range of systems. Third, the system needed to

* Further information: NCB: [email protected]; PND: [email protected]
be flexible enough to handle new observing modes and methods without a major redesign and rewrite of the basic system. These goals have, for the most part, been achieved, as will be discussed in later sections.

2. MONSOON Functionality

MONSOON is an image acquisition system. Like a digital camera, the user configures the MONSOON system for image acquisition in a particular mode. The user then initiates an exposure or series of exposures, and the captured images are archived for future reference. The user communicates with the system through the MSL, which handles the details of coordinating the PAN/DHE pairs to take the exposure. Because MONSOON is designed for massive focal planes, the MSL does not handle the data at any time. No provision is made for the control of telescope or instrument mechanisms. In addition, the image data handling system deals with the details of archiving, displaying and processing the data sent to it by the MONSOON GPX. MONSOON has no information about the nature of the image, nor does it know where the photons arriving on the focal plane originated. It provides meta-data to be stored with the image that gives its internal state, but information about telescope pointing, focal plane geometry, and the nature of the image (spectra, sky, dark, etc.) must be provided by the controlling Instrument or Observation Control System. The image and meta-data captured by the MONSOON software are passed to a data handling system (DHS) to be archived or stored. Normally, the DHS is provided as a shared library that is loaded at runtime into the MONSOON PAN software. The MONSOON program does provide a simple DHS library that writes a FITS image to the local disk.

3. MONSOON Hardware

The MONSOON system was proposed and designed to handle massive mosaic focal planes. It replaces the existing Arcon, Wildfire and SDSU-II array controllers, which cannot handle the requirements of these massive focal planes. The hardware design uses COTS technology wherever possible.
The major subsystems are also designed to be easily extensible. Figure 1 below diagrams the MONSOON subsystems.

[Figure 1. MONSOON Hardware Components: a supervisor node (Linux PC) connected by 100 Mb/s ethernet to a science client node and to N pixel acquisition nodes, with 1 Gb/s ethernet to the data handling system nodes. Each PAN (Linux PC with PCI fiber card) connects to its detector head electronics node over a 1 Gb/s fiber link (50 Mpixel/s) plus a 10 Mb/s ethernet link, and the DHE nodes share a SYNC line.]

The basic building block of the MONSOON hardware system is the PAN/DHE pair. The Pixel Acquisition Node (PAN) is a console-less, generic, commercial PC box running Linux. The Detector Head Electronics (DHE) is a commercial electronics enclosure containing the MONSOON compact PCI backplane carrying the circuit boards needed to interface to the section of the focal plane being controlled by the local PAN/DHE pair. Except for a synchronizing hardware clock, each PAN/DHE pair communicates only with the MONSOON Supervisor Layer (MSL) and does not share information with other PANs. Communication between the PAN and DHE is over an industry-standard bi-directional fiber interface capable of handling 100 to 240 Mbytes/s. In the MONSOON hardware implementation, data is sent for
archiving over a gigabit ethernet connection to a Data Handling System (DHS) that determines its final disposition and processing. The MONSOON system does not include a local hardware display capability.

The coordinating entity for the various PAN/DHE pairs is the MONSOON Supervisor Layer. This program also runs on a commercial PC running Linux. The MSL handles the coordination of the PAN/DHE pairs, communications with other systems and the user, and connection security. The MSL does not see or participate in the image data transfers, though it may send meta-data to the DHS for inclusion with the archived image data. (Note that the supervisor level can run on one of the PANs or in an Instrument Control System computer.)

4. MONSOON Data Flows and Interfaces

Figure 2 below gives a diagram of how the MONSOON system interacts with other observatory systems. In particular, it should be noted that the local status and DHS interfaces and the Science Client system are not part of the MONSOON program. These systems interact with MONSOON using the GPX interface (NOAO ICD 4.0) for the command and response stream, the DHS interface (NOAO ICD 1.0) for the image and meta-data stream, and the Local Status Interface for the system status data stream.

[Figure 2. MONSOON Software Context: a client system (engineering lab console) or science client system (Instrument Control System / Observation Control System) connects to the Supervisor Layer (1.0) via the ICD 4.0 GPX interface (command string, response string, and asynchronous status connections; ICD 4.1 MONSOON restrictions TBD). The Supervisor Layer commands the PAN system (2.0) over the ICD 5.0 PPX interface; the PAN commands the DHE system (3.0) over the ICD 6.0 generic DHE and ICD 6.1 MONSOON DHE interfaces. The pixel data stream flows to the local DHS interface or to a FITS image on disk, and status data flows to the Local Status Interface.]
4.1. MONSOON GPX Interface

The MONSOON GPX interface generally follows NOAO Interface Control Document (ICD) 4.0, which describes the interface between software systems and combines the functionality of the detector controller and data pre-processor. It presents a common interface to the upper-level observatory systems regardless of the underlying detector technologies or focal plane arrangement being used for observing. The GPX handles detector set-up, taking and controlling exposures and, if one is provided, control of a local shutter. The GPX also allows for the archiving of small amounts of image data on a local disk in extraordinary circumstances.

5. MONSOON Software Design Philosophy

In addition to the science requirements imposed on the MONSOON system by its astronomical uses, the MONSOON software design was guided by a number of requirements that were more constraints on the process and scope than they were requirements for doing the science. The software was to be Open Source and would make maximum use of facilities provided by the OS (Linux). It uses processes to perform identifiable sub-tasks and uses shared libraries to isolate and limit the scope of software functions.
A key requirement of the MONSOON design was flexibility and code reuse. Since we knew that the MONSOON system would be used in a wide variety of systems, we wanted to be sure that we could add functionality quickly. This key feature was tested almost immediately. Soon after the completion of the design we were presented with the requirements for orthogonal transfer array (OTA) device control. These requirements added a level of complexity to the device control that the original design did not include. A preliminary evaluation of the requirements and the MONSOON design has shown that, with only a modest additional effort, the design can handle the increased complexity [7].

Another key feature of the design was that the software be usable in a wide variety of applications at different institutions. It was our goal to provide a software system that could be easily adopted by any observatory. In order to avoid imposing any control philosophy on other institutions, the MONSOON software does not include certain facilities. First, while there is a MONSOON engineering interface provided, most user interface tasks are delegated to the observatory staff. The MSL and PAN interfaces are software interfaces, not GUIs; some support for GUI interfaces is provided, however. Second, a number of systems are unique to specific observatories (EPICS, DRAMA, the MPG/WIYN Router, etc.); the interface to these systems is also left to the observatory staff. The nature of the MONSOON system allows multiple connections to the system and can accommodate connections from multiple clients.

6. MONSOON System Layering

Both the MONSOON software and hardware are built in isolated layers. The interactions between the layers are defined by a series of ICDs that describe the interfaces.
In general the ICDs come in two types: the generic interfaces (ICD 1.0, 4.0, 5.0, 6.0) give a set of generic functions that are used to interact with a layer, while the hardware-system-specific ICDs (ICD 4.1, 6.1, 7.1) give details about the specific implementation used by the MONSOON hardware or software system. Easily modified shared libraries implement the various ICDs and conceal the details of the hardware system interface. This layering is part of the reason the MONSOON system remains flexible. Figure 3 is an example of how the ICDs and layering can be used to implement an SDSU-II or other version of the MONSOON system.

[Figure 3. MONSOON Software Layers: science and engineering clients talk to the Supervisor Layer software through ICD 4.0 (Generic Pixel Server communications, command/response and data stream interface); the Supervisor Layer talks to the Pixel Acquisition Node software through ICD 5.0 (PAN pixel node interface); the PAN talks to the detector head electronics through ICD 6.0 (generic DHE command and data stream interface). Below ICD 6.0, parallel stacks of DHE interface software, fiber drivers and fiber hardware (ICD 6.1 MONSOON/Systran, ICD 6.2 SDSU-II, ICD 6.99 other) lead to the simulated MONSOON, MONSOON, SDSU-II or other detector head electronics.]

In addition to allowing the flexibility to accommodate changing hardware, the layering also confines the effect of hardware changes to a small subset of the shared libraries used in MONSOON. The specifics of the fiber interface hardware are contained in the communications hardware library (libcomhdw). Likewise, the DHE hardware specifics are contained in the DHE hardware library (libdhehdw).
No information about the hardware specifics needs to be propagated above the generic DHE library (libdheutil).
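The isolation of hardware specifics below a generic library can be illustrated with a small function-pointer table. This is only a minimal sketch of the layering idea; the type and function names here are hypothetical and are not taken from the actual libdheutil/libdhehdw APIs:

```c
#include <stdio.h>

/* Hypothetical sketch of the layering idea: a generic DHE interface
 * (the role of libdheutil) calls through a table of routines supplied
 * by an interchangeable back end (the role of libdhehdw).  All names
 * and signatures here are illustrative, not MONSOON's. */
typedef struct {
    const char *name;
    int (*open)(void);
    int (*write_reg)(unsigned addr, unsigned value);
} dhe_ops;

/* Simulated back end: records the last register write instead of
 * touching hardware, mimicking MONSOON's built-in DHE simulator. */
static unsigned sim_last_addr, sim_last_value;
static int sim_open(void) { return 0; }
static int sim_write_reg(unsigned addr, unsigned value) {
    sim_last_addr = addr;
    sim_last_value = value;
    return 0;
}
const dhe_ops sim_dhe = { "simulated", sim_open, sim_write_reg };

/* The generic layer calls through the table and never needs to know
 * whether real hardware or the simulator sits below it. */
int dhe_configure(const dhe_ops *ops, unsigned addr, unsigned value) {
    if (ops->open() != 0)
        return -1;
    return ops->write_reg(addr, value);
}
```

Swapping in a real back end would then mean providing another `dhe_ops` table (in practice, another shared library) with no change to the code above it.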
7. MONSOON Supervisor Layer Structure

MONSOON systems that contain multiple PANs will require a MONSOON Supervisor Layer (MSL) [8]. This system will become the common interface to a multi-PAN MONSOON system deployed for science operations. The MSL is a command and control layer; no pixel data will flow through it, although the MSL may provide some pixel data flow control if required by the DHS library. It provides access to the MONSOON system through the GPX interface to client software and provides a single point of access to the focal plane. It provides multiple client connections and access security to the science clients to prevent unauthorized manipulation of the focal plane. The MSL is the system level that will handle sending commands to multiple PANs, gathering the responses from multiple PANs and summarizing them for the science client. It is planned that the MSL will provide for error monitoring and recovery for the PANs, and reporting of such to the science client. The MSL may be configured to run on a separate machine, on one of the PANs, or on the Instrument OCS computer. Communications to the PANs from the MSL will be over the general observatory ethernet link.

8. MONSOON PAN Process Structure

The largest and most complex component of the MONSOON system is the PAN software. In order to keep the flexibility needed to accommodate multiple detectors and focal plane arrangements, the PAN software is constructed as a set of cooperating independent processes that isolate the various functions of the PAN. Communications to the PAN software are through a modified GPX interface, called PPX (NOAO ICD 5.0), a set of commands and responses that allow an engineering interface or the MSL to control the PAN system. The PAN software consists of four permanent processes and two temporary processes. pandaemon, pancapture, panprocalg and pansaver are permanent processes started by a shell script and required for the PAN to take data.
exttrigger and fsaver are temporary helper processes started by pancapture and pansaver respectively. The processes are internally constrained to start up in the correct order. pandaemon starts first and reads the configuration records (files at the present time) for the system [6]. It creates the shared memory buffers needed by the processes and fills the attribute-value and command tables from the configuration records. The other processes wait for the shared memory spaces to be created and initialised and then start their own initialisation procedures. This includes mapping the shared memory into the local address space and setting up any local resources the process controls.

[Figure 4. PAN Process Structure: pandaemon provides the socket interface and CLI to the command process, high-level DHE control, PAN process start-up and shutdown, process control for the PAN processes, and error checking and recovery. Its child PAN processes (exttrigger, pancapture, panprocalg, pansaver and the fsaver FITS writer) interact through the shared memory interface, semaphores and the data flow path. Beneath them sit well-defined APIs: libppx (generic PAN interface, ICD 5.0), libdetcmnd (detector-specific routines), libdheutil (generic DHE interface, ICD 6.0), libmonsoon (MONSOON DHE hardware interface, ICD 6.1), libcomutil (generic communications link routines), libsystran (Systran version of the communications link interface) and libfxslapi (Systran SL240 driver, COTS).]
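The attribute-value table that pandaemon builds at start-up can be sketched as a small parser over "name=value" configuration records. This is an illustrative sketch only, with assumed record syntax and table sizes; the real configuration record format and table layout are described in reference [6]:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of pandaemon's start-up role: parse configuration
 * records into a fixed attribute-value table that the other processes
 * would see via shared memory.  Record syntax ("name=value" lines),
 * sizes and function names are assumptions, not MONSOON's. */
#define MAX_AV 32
typedef struct { char name[32]; char value[64]; } av_pair;

static av_pair av_table[MAX_AV];
static int av_count;

/* Load records, one per line; returns the number of pairs stored. */
int av_load(const char *records) {
    char buf[1024];
    strncpy(buf, records, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    av_count = 0;
    for (char *line = strtok(buf, "\n"); line != NULL && av_count < MAX_AV;
         line = strtok(NULL, "\n")) {
        char *eq = strchr(line, '=');
        if (eq == NULL)
            continue;                   /* skip malformed records */
        *eq = '\0';
        snprintf(av_table[av_count].name, sizeof av_table[av_count].name,
                 "%s", line);
        snprintf(av_table[av_count].value, sizeof av_table[av_count].value,
                 "%s", eq + 1);
        av_count++;
    }
    return av_count;
}

/* Query side: what the other PAN processes would use after attaching
 * to the shared memory segment. */
const char *av_get(const char *name) {
    for (int i = 0; i < av_count; i++)
        if (strcmp(av_table[i].name, name) == 0)
            return av_table[i].value;
    return NULL;
}
```

In the real system the table lives in a shared memory segment so that every PAN process sees one consistent copy.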
The PAN processes are set up as a chain of producer/consumer processes which interact through the shared memory spaces, through semaphores and through image buffer queues. Facilities for additional communications through peer-to-peer socket connections were considered in the original design but have not been implemented at this time, as we have not needed that level of communication thus far. Figure 4 diagrams the inter-process communications used by the PAN software and shows that three techniques are used to pass data between the processes. Shared memory segments are the most frequently used method of transferring information between the PAN processes; whenever several processes must use data simultaneously, the shared memory technique is used. Queues are used for passing data buffers between processes. These FIFO-style queues allow the order information inherent in the data capture process to be maintained across processes. Each queue is mirrored by a semaphore that allows the processes to avoid polling to determine if there is something to do. Processes wait on semaphores that are given by the producer and taken by the consumer. For example, at the start of an exposure the pancapture process waits for data to arrive from the DHE. When it does, pancapture puts the buffer on the fullrawbufferq queue and gives the rawbufready semaphore. panprocalg has been waiting on the rawbufready semaphore and is now able to take the buffer off the queue, process it, and pass it on in the same way to pansaver.

8.1. Shared Library Usage in MONSOON

An important design feature of the PAN processes, and of the MONSOON design in general, is the extensive use of shared libraries to implement functionality not provided by the OS. Each process loads and uses a set of shared libraries that implement shared memory, queue, semaphore and socket operations, attribute-value setting and query, hardware interaction operations, and the PPX and GPX interfaces.
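The queue-and-semaphore handshake between pancapture and panprocalg described above can be sketched as follows. This is a minimal single-process sketch using POSIX unnamed semaphores; the real MONSOON queues and semaphores live in shared memory and cross process boundaries, and the queue length here is an assumption:

```c
#include <semaphore.h>

/* Minimal sketch of the producer/consumer handshake: a FIFO of buffer
 * indices mirrored by a counting semaphore.  MONSOON's queue names
 * (fullrawbufferq, rawbufready) are reused here only as illustration. */
#define QLEN 8
static int fullrawbufferq[QLEN];
static int q_head, q_tail;
static sem_t rawbufready;

void q_init(void) {
    q_head = q_tail = 0;
    sem_init(&rawbufready, 0, 0);       /* no buffers ready yet */
}

/* Producer side (pancapture): enqueue a filled buffer, then "give"
 * the semaphore so the consumer wakes without polling. */
void q_put(int buf) {
    fullrawbufferq[q_tail] = buf;
    q_tail = (q_tail + 1) % QLEN;
    sem_post(&rawbufready);
}

/* Consumer side (panprocalg): block until a buffer is ready, then
 * dequeue it in arrival order. */
int q_take(void) {
    sem_wait(&rawbufready);
    int buf = fullrawbufferq[q_head];
    q_head = (q_head + 1) % QLEN;
    return buf;
}
```

Because the semaphore count mirrors the queue depth, FIFO order is preserved and the consumer never spins waiting for work.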
The use of shared libraries ensures that all processes on the PAN get the same version of the libraries and have a consistent view of their interactions and functionality. These libraries are common to all MONSOON systems and have been thoroughly tested before release. The queue and socket libraries have been in use at NOAO for over ten years.

Shared libraries are also used to isolate functionality unique to a specific focal plane or detector type. When the system is started it loads the library designed for the system being started. Four classes of system-specific libraries exist, and as many more as are needed can be added to the system. The system-specific libraries are needed when a system is built which does not fit into one of the standard pre-defined systems. We expect that all MONSOON systems using the NOAO DHE hardware and the Systran fibre communications boards will use the same libdhehdw and libcomhdw libraries. Likewise, it is possible that the generic detector library will be used for most systems and that the DHS FITS library will be sufficient for most testing. A carefully thought out makefile system for these system-specific libraries allows a new system to change only those files where differences from a base system occur. It is our intention to extend this system so that a system can be built using inheritance; this means a system could be built by starting with the generic routines, modifying those by using routines from system A, and modifying those in turn by using routines from system B.

The DHE Hardware Libraries

The DHE hardware libraries (called libdhehdw) handle hardware differences between DHE designs. This library is called through a set of generic routines contained in libdheutil and understands the details of the hardware protocol used by a particular DHE type. At this time two versions of the DHE hardware library exist. One supports a simulated DHE within the PAN processes. The second library supports the MONSOON DHE version currently in use.
We have already used this multiple-library facility to implement an expansion of the PAN/DHE communications protocol required to support the OTA devices being developed for QUOTA and ODI. A third version of the libraries, to support the SDSU-II DHE hardware, has been designed but not implemented.

The Communications Hardware Libraries

The communications hardware libraries (called libcomhdw) handle differences between different interconnection hardware. These routines are also called through a set of generic routines (libcomutil). The libcomhdw library understands the details of the communications protocol and the underlying communications hardware. We have implemented two versions of this library as well. A simulation version of the library essentially provides a loop-back version of the communications link. The production version uses the Systran hardware and software to communicate with the MONSOON DHE. The library libsystran makes calls to and understands the COTS device driver and
hardware interface library provided with the Systran SL100/SL240 Fiber Extreme communications link. This link uses an industry-standard fibre channel hardware interface. Two other communications hardware libraries are planned. The first will support the SDSU-II style of fiber interface. The second will use a private point-to-point ethernet connection to communicate with a DHE.

The Data Handling System Libraries

One of the classes of libraries included in the configurable library set was added to accommodate use of MONSOON at a wide variety of institutions. The Data Handling System (DHS) libraries (called libdhsutil) provide the data archiving facilities for the MONSOON system. Since there are a number of DHS systems already in place at the various observatories, we decided MONSOON should be able to interface with each of them. To accomplish this, a DHS API [5] was developed which treats the DHS system as just another device for data storage. The API provides for a set of routines, written by the local observatory staff, that implement open, close, configure and write functionality to the local DHS system. In this way it is possible to handle different data archiving methods without modifying the base pansaver process. Two DHS libraries have been written. libdhsnull implements a /dev/null style of data saving; that is, it immediately discards the data and returns success. This library has been useful in providing a method of testing that does not fill the limited PAN disk space and helps determine PAN data throughput rates. The second, libdhsfits, uses the data to write a standard multi-extension FITS file onto the local PAN disk. This library not only provides a template for future DHS libraries but also provides a method for continuing observations in the event of a failure of the local DHS hardware or ethernet.
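The open/configure/write/close shape of the DHS API can be sketched as a table of routines that pansaver calls through, with a null implementation standing in for libdhsnull. The struct layout, signatures and names below are assumptions for illustration; the actual API is defined in reference [5]:

```c
#include <stddef.h>

/* Sketch of the DHS API idea: the saver process archives data through
 * a small table of routines supplied by the local observatory's DHS
 * library.  This mirrors libdhsnull, which discards the data and
 * reports success; all names and signatures are illustrative. */
typedef struct {
    int  (*open)(const char *dest);
    int  (*configure)(const char *key, const char *value);
    long (*write)(const void *pixels, long npix);
    int  (*close)(void);
} dhs_ops;

/* /dev/null-style implementation: counts pixels, saves nothing. */
static long null_npix;
static int  null_open(const char *dest)  { (void)dest; null_npix = 0; return 0; }
static int  null_configure(const char *k, const char *v) { (void)k; (void)v; return 0; }
static long null_write(const void *p, long n) { (void)p; null_npix += n; return n; }
static int  null_close(void) { return 0; }

const dhs_ops dhs_null = { null_open, null_configure, null_write, null_close };

/* pansaver's view: archive a frame through whichever DHS library was
 * loaded at runtime, without knowing where the data ends up. */
long dhs_save_frame(const dhs_ops *dhs, const void *pixels, long npix) {
    if (dhs->open("frame") != 0)
        return -1;
    long n = dhs->write(pixels, npix);
    dhs->close();
    return n;
}
```

A FITS-writing implementation (the role of libdhsfits) would fill the same table with routines that build a multi-extension FITS file, leaving pansaver untouched.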
NOAO's Data Products Division is currently building a DHS library for NEWFIRM science operations.

The Detector Hardware Libraries

The fourth configuration library class is the detector hardware library (called libdetcmnds). This library is where most of the changes for a new MONSOON system will be made, since it handles the differences between different detector types and focal plane arrangements. Differences between IR and CCD data capture and pre-processing are handled here, and additional functionality for future detector types can be added within this library. Three detector libraries have been developed and are the basis of the hardware and software testing and the detector development and testing being carried out at NOAO. The first, the generic detector library, was developed to be suitable for hardware board testing and general software development. This library will eventually be expanded into a version that handles mosaics. The second detector library is an Aladdin III library. This library implements the control of Aladdin III detectors such as those used in NIRI and GNIRS and was used to take first light data. The third library is an Orion II detector library. This library is being used to take data in the evaluation of the Orion II InSb detectors destined for NEWFIRM. Detector libraries for a generic CCD, the OTA testing lab, and the NEWFIRM 2x2 Orion II focal plane are currently under development and testing.

8.2. Additional Configuration Flexibility

In addition to the flexibility in handling hardware differences provided by the detector hardware shared libraries, there is another level of configuration flexibility built into the MONSOON system. In the case that a focal plane or detector type is so different that the standard method of data capture, processing and archiving will not work, it is possible to extend or amend the system by adding or deleting processes to deal with the variation.
A wavefront sensor application for MONSOON might contain only a data capture and processing algorithm without an archive process. The ODI application may contain two data capture processes, one that handles the details of the guiding, centroid calculation and charge shifting, and one which handles the science data capture and processing. This ability to change the processing, capture and archiving methods is included in the base design and may be added in such a way as to allow on-the-fly reconfiguration of the process set during a run. (Note: this facility was discussed and included in the design but has not been, and may not be, implemented.)

8.3. System Start-up

The key to the MONSOON system is the automated system start-up. Included in the hardware design and provided for, but not implemented in the current software, is the ability for a PAN to read a serial identifier chip in the
MONSOON DHE hardware. This chip can be implemented in the detector-specific configuration and protection hardware in the DHE itself. The identifier is then used to determine the appropriate configuration records and start the PAN software with the correct configuration. The current start-up script uses a command line argument to determine the correct configuration. The start-up script determines from the configuration record the required configuration files. It calls the focal plane configuration script that customizes the configuration directory. It then reconfigures the shared library load path and library directory to the correct settings for the current focal plane, hardware and site. The various processes are then started and focal plane set-up can begin. Using the gpxsetmode and gpxsetmemcfg commands, the system is brought to readiness for data taking.

9. Building the Software

The MONSOON software development effort has made extensive use of automated software tools for building the system components. We have used CVS for version control and GNU gcc/gmake to build the systems. A set of near-identical makefiles is used to build the hardware libraries, the utility libraries and the application processes. In the early development we frequently saw several developers working on the same sections of code. By using these techniques we have been able to minimize the problems seen in multi-developer efforts. An additional help in the code development is the inclusion of an automated system for generating the API documentation associated with the MONSOON libraries. TeX and LaTeX are used to generate the API document directly from embedded comments in the library source code.
/***************************************************************************
 *
 * doc \section {The queutil <<VERSION>> Library}
 * doc \subsection {queutil.h}
 * doc \begin{description}
 * doc \item[\sc use:] \emph{\#include ``queutil.h''}
 * doc \item[\sc description:] this file contains all common code
 * doc     required by the functions needed to build the static
 * doc     and dynamic queutil libraries. These libraries
 * doc     abstract the queue, dequeue and stack interface to the system.
 * doc \item[\sc argument(s):] not applicable
 * doc \item[\sc return(s):] not applicable
 * doc \item[\sc last modified:] Monday, 4 November 2002
 * doc \end{description}
 *
 **************************************************************************/

Figure 5. API Documentation Fragment

An installation system for distributing source code using tar, gzip, etc., has been developed, and eventually we expect to have an Open Source version of the source code available under CVS. Additionally, a system for binary updates on multiple PANs has been implemented using rsync and ssh. We now keep eight PANs on two continents updated from the main source distribution whenever required.

10. Handling Legacy Systems

From the beginning it was hoped that the MONSOON system software would be used to upgrade existing systems that use older array controllers. Proposals ranging from complete hardware replacement to software upgrades have been studied. The authors believe that the MONSOON software could be retrofitted to control older hardware in as little as two man-weeks. The changes required for such a change-over would be concentrated in the libdhehdw and libcomhdw libraries. Additional work would be needed in the libdetcmnds library if the detector type did not match one of the existing MONSOON detectors. The creation of a detector-specific library also takes about two man-weeks.
However, given the current manpower availability and the number of MONSOON systems being developed at NOAO over the next year, it is unlikely that any older systems will be converted in the near future.
ACKNOWLEDGEMENTS

We would like to acknowledge the members of the MONSOON team for their efforts in refining the software requirements and, of course, in testing the MONSOON software. We especially commend the tolerance of the hardware engineers in testing the concepts used in the final software while trying to develop and debug the MONSOON hardware using a hastily written PAN/DHE software program, developed in less than a month at the start of the MONSOON development and then ignored as far as possible by the software developers. We also ask their understanding as we move into real operations with the MONSOON software and they struggle to learn another method of interacting with the detectors.

REFERENCES

1. Nick C. Buchholz, Phil N. Daly, Barry M. Starr, NOAO Interface Control Document: Generic Pixel Server - Communications, Command/Response and Data Stream Interface Description.
2. Nick C. Buchholz, Phil N. Daly, 2004, "The Generic Pixel Server Dictionary", Proc. SPIE Vol. 5496, Advanced Software, Control and Communications Systems for Astronomy, Hilton Lewis, Gianni Raffi, Eds. (this volume).
3. MONSOON Project Team, MONSOON Image Acquisition System (Pixel Server) - Functional and Performance Requirements Document (FPRD), Initial Draft.
4. N. C. Buchholz, P. N. Daly, 2003, MONSOON Software PDR - Requirements, Software Architecture & Design, PowerPoint presentation.
5. N. C. Buchholz, G. Chisholm, P. N. Daly, P. Ruckle, Interface Control Document: Data Handling System Interface - Status and Data Stream Transfers, Draft.
6. P. N. Daly, N. C. Buchholz and P. Moe, 2004, "Automated Software Configuration in the MONSOON System", Proc. SPIE Vol. 5496, Advanced Software, Control and Communications Systems for Astronomy, Hilton Lewis, Gianni Raffi, Eds. (this volume).
7. D. Sawyer, P. Moe, G. Rahmer, and N. C. Buchholz, 2004, "Orthogonal Transfer Array Control Solutions Using the Monsoon Image Acquisition System", Proc. SPIE Vol. 5499, Optical and Infrared Detectors for Astronomy, James Beletic, James D.
Garnett Eds. 8. P. N. Daly and N. C. Buchholz 2004 The Monsoon Implementation of the Generic Pixel Server - Proc. SPIE Vol. 5496, Advanced Software, Control and Communications Systems f astronomy, Hilton Lewis, Gianni Raffi, Eds. (this volume)
