High-Speed Information Utilization Technologies Adopted by Interstage Shunsaku Data Manager
Hiroya Hayashi
(Manuscript received November 28, 2003)

The current business environment is characterized by many difficulties, including ongoing economic stagnation, fierce competition, and rapid staff turnover. The key to business survival is to gather as much information as possible, to utilize it as effectively as possible, and to tie this information to business operations. However, at present the data integration and management technologies for utilizing a diverse range of business information are still incomplete. While searching for a solution to this problem, our attention was drawn to XML, and we developed Interstage Shunsaku Data Manager, an XML database engine that achieves high-speed searches of XML data. This paper explains the features of Interstage Shunsaku Data Manager and the high-speed information utilization technologies that it employs.

1. Introduction

The key to business survival in the current environment is to gather as much information as possible and utilize it as effectively as possible. To achieve these aims, particular attention is being paid to how information utilization can be directly linked to business activities in the field. However, data integration and management technologies suitable for a diverse range of business information have yet to be perfected. Until now, Relational Database Management Systems (RDBMSs) have generally been adopted as solutions for data integration and management. However, RDBMSs require various schemas and data structures for integrating business data (data normalization), which makes designing such systems difficult. Also, performance tuning, such as adding indexes to the search target data, is required to search the data quickly. Indexes can be effective for performing high-speed searches on particular data items, but they cannot be used on data items that do not have indexes.
Another problem is that if data items are changed, then the indexes also need to be redesigned. These issues make RDBMSs difficult to design and operate.

Our attention was drawn to XML because of its potential in terms of data integration. XML prescribes simple data structures whereby data items are enclosed by start and end tags. This eliminates the need to normalize data and design integrated schemas, even when integrating different types of data. All that is required is to add tags to the data and to group the data into a single document. XML can also use recursive structures to deal flexibly with variable-length and variable-item data, which are not handled well by RDBMSs. However, despite XML's flexibility, the XML data management systems developed up to now have had unresolved issues regarding search performance.

We developed Interstage Shunsaku Data Manager (hereafter referred to as Shunsaku) as an XML database engine that can search data quickly while still retaining the flexibility of XML. 1)-3) This was achieved using a combination of our original SIGMA data search algorithm and parallel search technology using blade servers. Shunsaku combines XML's flexibility with high-speed data search technology, both of which are important for data integration and management and for using diverse business information in the field. Shunsaku can also further expand the scope of XML utilization. The following sections outline Shunsaku's architecture and its operation features.

2. Shunsaku's architecture

This section describes the architecture of Shunsaku. Shunsaku systems are made up of director servers and search servers. Shunsaku employs an architecture whereby search servers, each equipped with a search engine, are arranged in parallel and can execute parallel searches. It is assumed that these parallel search servers will be blade servers. The role of each type of server is explained below. Figure 1 shows the architecture used by Shunsaku.

1) Director servers
Director servers coordinate multiple search servers operating in parallel and do the following:
- Manage the files that store the search target data
- Distribute the search target XML data evenly to each search server
- Receive search requests from applications
- Distribute search requests to the search servers
- Merge the search results from the search servers and return the merged result to the application that requested the search

2) Search servers
Search servers have search processing functions for the target XML data and do the following:
- Search XML data at high speed by using the SIGMA search algorithm
- Receive target XML data from the director server and store it in memory; as a result, there are no disk I/O operations during the search stage
- Receive search requests from the director server and return search results to the director server
- Enable advanced parallel searches; this is achieved by having each search server run independently
- Enable several hundreds of search requests to be processed in parallel; this is achieved by using low-cost blade search servers

[Figure 1: Architecture of Interstage Shunsaku Data Manager. The director server receives search requests from applications and returns search results; the search servers are blade servers (PRIMERGY BX300, etc.) running the SIGMA high-speed search engine. All CPUs process the same search condition in parallel, and the results are integrated before the response is returned.]

3. Shunsaku's operation features

Shunsaku can treat XML data as text. This allows the flexibility of XML to be exploited and, in terms of data integration, eliminates the need to design data normalization and integration schemas for various types of data. Also, because XML can easily handle variable-length and variable-item data (unlike RDBMSs), Shunsaku is not limited to data integration and can be expanded into other applications. In addition, Fujitsu's original SIGMA algorithm makes indexless searches possible, which means that every item (character, numeric, date, etc.) of XML data can be searched. Furthermore, by combining the parallel search architecture with blade servers, both the performance and size of Shunsaku systems can be scaled up easily. These features mean that the construction and operating costs of Shunsaku systems can be 30 to 70% lower than those of a conventional RDBMS alternative. The following sections give further details of Shunsaku's features.

3.1 Easy changes to data items

Shunsaku can perform indexless searches on XML data by using search engines equipped with the SIGMA algorithm. 4) This means that all XML data items can be searched. If search data is added or changed either during or after system construction, the system can respond flexibly to such additions and changes without the need to redesign the database.

Next, we describe the SIGMA algorithm, which is Shunsaku's core technology. The SIGMA algorithm is a high-speed string lookup algorithm that uses unidirectional sequential processing. This algorithm is installed on the search servers and used to search XML data (Figure 2).
The SIGMA algorithm performs the following steps:

1) Disassembles the search conditions into byte units and creates an automaton for the search conditions. In the example shown in Figure 2, condition (1), FUJI, is disassembled into the byte sequence 46 55 4A 49; condition (2), FIRST, into 46 49 52 53 54; and condition (3), Shunsaku, into 53 68 75 6E 73 61 6B 75. Then, an automaton that synthesizes the three conditions is created as shown in the figure.

2) Evaluates match conditions by successively comparing each byte of the data (from beginning to end) against the automaton created in Step 1). In the example, if the value of the byte being analyzed is 46, the automaton switches to the next state. If the next value is 55, the automaton again switches to the next state. This process continues until all of the data has been evaluated. If a value does not meet the automaton's state transition conditions, the automaton returns to its initial state. When a block of data has been completed, the result state of the automaton is evaluated and the algorithm determines whether the data is a hit (whether it matches the search conditions).

[Figure 2: SIGMA algorithm. A high-speed string lookup algorithm that performs unidirectional sequential processing using an automaton; search performance remains constant regardless of the number of conditions, which removes the need for an index. An automaton that synthesizes conditions (1) FUJI, (2) FIRST, and (3) Shunsaku is created, and the target data is successively checked against it to evaluate agreement with the conditions.]

With the SIGMA algorithm, the search is completed as soon as it has run through the search target data once. The search performance of this algorithm is not affected by the number of hits, because it always runs through all of the data once, regardless of the number of hits. Furthermore, consistent search performance can be maintained no matter how many search conditions there are, because only the width of the automaton increases with the number of search conditions and there is no change in the cost of evaluating each byte of data. The strengths of the SIGMA algorithm are that any data item can be a search target and that indexes are not required, because every search always runs through all of the data.

Shunsaku loads the search target data into the memory of each search server and then executes the SIGMA algorithm. This means that search performance depends purely on the amount of data loaded and the performance of the CPU that is processing it.
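The paper does not spell out the automaton construction, but the behavior described above (one unidirectional pass, per-byte cost independent of the number of conditions, fallback on a failed transition) matches a standard multi-pattern byte automaton of the Aho-Corasick family. The following Python sketch is an illustration under that assumption, not Shunsaku's actual implementation:

```python
from collections import deque

def build_automaton(patterns):
    """Build a byte-level multi-pattern automaton (Aho-Corasick style).

    Adding more search conditions only widens the automaton; the cost
    of evaluating each input byte stays constant, so no index is needed.
    """
    goto = [{}]        # per-state byte -> next-state transitions
    fail = [0]         # failure links (longest proper suffix state)
    out = [set()]      # conditions recognized on reaching each state
    for pat in patterns:
        state = 0
        for byte in pat.encode():
            if byte not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][byte] = len(goto) - 1
            state = goto[state][byte]
        out[state].add(pat)
    # Breadth-first pass to compute the failure links.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for byte, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and byte not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(byte, 0)
            out[nxt] |= out[fail[nxt]]    # inherit suffix matches
    return goto, fail, out

def scan(data, automaton):
    """One unidirectional pass over the data; returns the matched conditions."""
    goto, fail, out = automaton
    hits, state = set(), 0
    for byte in data.encode():
        while state and byte not in goto[state]:
            state = fail[state]            # no transition: fall back
        state = goto[state].get(byte, 0)
        hits |= out[state]
    return hits

auto = build_automaton(["FUJI", "FIRST", "Shunsaku"])
print(sorted(scan("<name>FUJITSU Shunsaku</name>", auto)))  # ['FUJI', 'Shunsaku']
```

Because every input byte triggers exactly one state transition (after at most a few failure-link hops that are amortized away), the scan cost depends only on the amount of data, which is the property the text attributes to SIGMA.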
That is, the search performance increases as the CPU performance increases: if the CPU performance is doubled, the search performance will also double.

3.2 Superior multiplexing performance

In the implementation of the SIGMA algorithm used by Shunsaku, all of the search target data is loaded into memory, so the CPU usage rate during search processing is always 100%. As a result, new search requests cannot be received while another search request is being processed. This means that if each search process takes an average of one second to complete and 100 search requests are issued simultaneously, then the 100th request will take 101 seconds to process (Figure 3).

In such situations, Shunsaku's original high-traffic technology delivers superior multiplexing performance. Figure 3 shows this technology, which is explained below. The search performance of the SIGMA algorithm does not depend on the number of search conditions. This characteristic can be exploited to chain together multiple simultaneous search requests using OR conditions. In this way, a single search condition can be formed so that all of the search requests are processed simultaneously. Thanks to the characteristics of the SIGMA algorithm, the time taken for this kind of batch processing does not differ from that for a single search request. When the search process is completed, the results are distributed to each request.

Requests that are issued when search processing is already underway are kept on hold until the processing for the preceding search request is completed. If multiple search requests are issued during this time, then all of these requests are grouped together and processed as a batch. The amount of time that a search can be expected to take using high-traffic technology is therefore the time the request is kept on hold plus the time taken to process the search itself. If, for example, it takes an average of one second to process a search, then the search can be expected to take two seconds (one second on hold plus another second to process the search itself).

Figure 4 compares the multiplexing performance obtained using this technology with that of a conventional RDBMS. For example, if there are 100 concurrent requests, the processing time (6.63 s) is only about 4.5 times the time (1.46 s) required for a single request, which is a remarkable improvement over conventional RDBMSs.

3.3 Simple scale-up operations

Shunsaku achieves parallel search processing by utilizing blade servers for the search servers that run the search engines. The search processing performance depends solely on the amount of search target data loaded into memory and the performance of the CPUs. For example, suppose that there is 50 MB of search target data, that this data is loaded into a single search server, and that it takes one second to process a search. If the amount of data then increases to 100 MB after system operations commence, the time taken for each search simply increases to two seconds. At this point, if another search server (blade server) is added, then each search server will hold 50 MB of data, the time taken to process each search will return to one second, and the search performance is maintained. In this way, Shunsaku systems can be scaled up simply by adding search servers (blades) as the amount of data increases.

[Figure 3: High-traffic technology. A simple FIFO system cannot deliver both high throughput and constant response times; with high-traffic technology, multiple search requests are integrated and executed as a single conditional search, the response time remains constant even when the level of concurrency increases, and the results are then distributed to each request.]
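As a rough numerical sketch of Section 3.2 (not from the paper; the one-second search time comes from the example above, and the assumption that one search is already in flight when the burst of requests arrives is made so that the numbers match the text's 101-second and two-second figures):

```python
def fifo_last_completion(n_requests, search_time=1.0, in_flight=1.0):
    """Simple FIFO while one search is already underway: the last of
    n queued requests waits for the in-flight search plus every
    earlier queued search before its own search completes."""
    return in_flight + search_time * n_requests

def batched_last_completion(n_requests, search_time=1.0, in_flight=1.0):
    """High-traffic model: all n requests queued during the in-flight
    search are OR-chained into one batch whose cost equals a single
    search, after which the per-request results are distributed."""
    return in_flight + search_time      # hold time + one batch search

print(fifo_last_completion(100))     # 101.0 s for the 100th request
print(batched_last_completion(100))  # 2.0 s, regardless of concurrency
```

The batched completion time is independent of the burst size, which is why the measured 100-way concurrency figure grows so slowly relative to the single-request figure.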
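Section 3.3's scale-up arithmetic follows from the fact that each blade scans only its own share of the in-memory data. The sketch below reproduces the 50 MB/100 MB example; the 50 MB/s scan rate is an assumed figure chosen so that 50 MB on one server takes one second:

```python
def search_time(total_data_mb, num_servers, scan_rate_mb_per_s=50.0):
    """Per-search time when the data is spread evenly over the search
    servers: every server scans its share in parallel, so the share
    size divided by the scan rate gives the response time."""
    per_server_mb = total_data_mb / num_servers
    return per_server_mb / scan_rate_mb_per_s

print(search_time(50, 1))       # 1.0 s: the initial configuration
print(search_time(100, 1))      # 2.0 s: data doubled, same single blade
print(search_time(100, 2))      # 1.0 s: one blade added, performance restored
# If one of the two blades fails and its share is redistributed to the
# survivor, searches continue, now taking 2.0 s.
print(search_time(100, 2 - 1))  # 2.0 s
```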
[Figure 4: Test results (benefits of using high-traffic technology). Search condition: extract exactly 100 records possessing a specific tag whose value is N or greater. Total number of records: 1 million; record length: 2280 bytes; records distributed over 20 search servers. Environment: application program and director server on a PRIMEPOWER 650 (SPARC64-IV 675 MHz x 8, 8 GB memory); search servers on 20 PRIMERGY BX300 blades (Pentium III 866 MHz x 1, 2 GB memory each). Response times for Shunsaku ranged from 1.46 s for a single request through 4.16 s to 6.63 s for 100 concurrent executions, while the existing technology (RDB) took on the order of hundreds of seconds (320 s, 640 s). Note: in the measurement environment, the number of records that matched the search condition was fixed.]

In addition, if a Shunsaku system is operating with multiple search servers and one of the search servers fails, the director server automatically detects the failure and redistributes the data held by that search server among the other search servers (automatic degradation). As a result, if a search server fails, processing can be continued without needing to stop the system.

4. Future development scenarios

Shunsaku provides basic functions that focus on search processing. Data is currently updated and added using command-based batch processing. In the future, users will become more concerned about whether search data is up to date. To ensure that search data is up to date, the next step is for Shunsaku to achieve real-time data updates. The basic requirement for achieving this goal is to ensure that the files Shunsaku uses to hold the search target data are robust. To achieve this, we will introduce the concept of transactions for data updates. Achieving real-time updates is likely to further expand the range of Shunsaku applications.

The current version of Shunsaku assumes that data is concentrated at the location where the Shunsaku system is installed.
However, the reality is that the cost of the operations needed to collect data from each site is often high. In response, we plan to implement a data grid concept in Shunsaku so that data distributed over multiple sites can be easily searched from any site.

5. Conclusion

We have developed an XML database engine, called Interstage Shunsaku Data Manager, to overcome the shortcomings of the data integration and management technologies used for diversified business information. Shunsaku enables high-speed data searches while making the most of XML's flexibility. It has a parallel search engine equipped with Fujitsu's original SIGMA algorithm. These features enable Shunsaku to handle variable-length and variable-item XML data without modification and to perform high-speed searches of all data items. Shunsaku is well suited to production systems, because it can easily be scaled up in response to increases in
the amount of data and provides functions such as automatic degradation that ensure availability. By using Shunsaku as the core engine, optimal data integration and management solutions can be created, while further expanding the use of XML. In the future, Shunsaku will be further developed to include support for real-time updates and will evolve to include the data grid concept, which will further expand the scope of Shunsaku applications.

References

1) Interstage XML (Product: Shunsaku Data Manager V6). InterstageSuite/XML/overview.html
2) Interstage Shunsaku. (in Japanese), Fujitsu Journal, 29, 5.
3) Interstage Shunsaku Data Manager. (in Japanese).
4) Full-text retrieval. Fujitsu Journal, 28, 1.

Hiroya Hayashi received the M.S. degree in Computer Science from the Graduate School of Engineering at Gunma University, Gunma, Japan. He joined Fujitsu Ltd., Numazu, Japan in 1987, where he worked on the development of an operating system for fault-tolerant systems. He then worked on the development of the relational database for open systems, Symfoware, and is currently working on the development of an XML database engine.

FUJITSU Sci. Tech. J., 40, 1 (June 2004)
Performance and scalability of a large OLTP workload
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
Binary search tree with SIMD bandwidth optimization using SSE
Binary search tree with SIMD bandwidth optimization using SSE Bowen Zhang, Xinwei Li 1.ABSTRACT In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous
Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for Multitenant Deployments
Microsoft Dynamics NAV 2013 R2 Sizing Guidelines for Multitenant Deployments February 2014 Contents Microsoft Dynamics NAV 2013 R2 3 Test deployment configurations 3 Test results 5 Microsoft Dynamics NAV
SQL Server 2012 Performance White Paper
Published: April 2012 Applies to: SQL Server 2012 Copyright The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication.
RevoScaleR Speed and Scalability
EXECUTIVE WHITE PAPER RevoScaleR Speed and Scalability By Lee Edlefsen Ph.D., Chief Scientist, Revolution Analytics Abstract RevoScaleR, the Big Data predictive analytics library included with Revolution
SPARC64 VIIIfx: CPU for the K computer
SPARC64 VIIIfx: CPU for the K computer Toshio Yoshida Mikio Hondo Ryuji Kan Go Sugizaki SPARC64 VIIIfx, which was developed as a processor for the K computer, uses Fujitsu Semiconductor Ltd. s 45-nm CMOS
Memory-Centric Database Acceleration
Memory-Centric Database Acceleration Achieving an Order of Magnitude Increase in Database Performance A FedCentric Technologies White Paper September 2007 Executive Summary Businesses are facing daunting
Performance Report Modular RAID for PRIMERGY
Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers
Running a Workflow on a PowerCenter Grid
Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)
RAID Overview 91.520
RAID Overview 91.520 1 The Motivation for RAID Computing speeds double every 3 years Disk speeds can t keep up Data needs higher MTBF than any component in system IO Performance and Availability Issues!
Inmagic Content Server v9 Standard Configuration Technical Guidelines
Inmagic Content Server v9.0 Standard Configuration Technical Guidelines 5/2006 Page 1 of 15 Inmagic Content Server v9 Standard Configuration Technical Guidelines Last Updated: May, 2006 Inmagic, Inc. All
Three Stages for SOA and Service Governance
Three Stages for SOA and Governance Masaki Takahashi Tomonori Ishikawa (Manuscript received March 19, 2009) A service oriented architecture (SOA), which realizes flexible and efficient construction of
Operating Systems OBJECTIVES 7.1 DEFINITION. Chapter 7. Note:
Chapter 7 OBJECTIVES Operating Systems Define the purpose and functions of an operating system. Understand the components of an operating system. Understand the concept of virtual memory. Understand the
PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design
PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions Slide 1 Outline Principles for performance oriented design Performance testing Performance tuning General
The Cubetree Storage Organization
The Cubetree Storage Organization Nick Roussopoulos & Yannis Kotidis Advanced Communication Technology, Inc. Silver Spring, MD 20905 Tel: 301-384-3759 Fax: 301-384-3679 {nick,kotidis}@act-us.com 1. Introduction
How To Build A Clustered Storage Area Network (Csan) From Power All Networks
Power-All Networks Clustered Storage Area Network: A scalable, fault-tolerant, high-performance storage system. Power-All Networks Ltd Abstract: Today's network-oriented computing environments require
Why Relative Share Does Not Work
Why Relative Share Does Not Work Introduction Velocity Software, Inc March 2010 Rob van der Heij rvdheij @ velocitysoftware.com Installations that run their production and development Linux servers on
External Sorting. Why Sort? 2-Way Sort: Requires 3 Buffers. Chapter 13
External Sorting Chapter 13 Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Why Sort? A classic problem in computer science! Data requested in sorted order e.g., find students in increasing
