1 In-memory Tables: Technology overview and solutions. "My mainframe is my business. My business relies on MIPS." Verna Bartlett, Head of Marketing; Gary Weinhold, Systems Analyst
2 Agenda
- Introduction to in-memory tables
- What data works best in in-memory tables
- What you can do with in-memory tables
- The details of how in-memory tables achieve their speed
- How in-memory tables solve critical business challenges:
  - Provide consolidated customer statements
  - Create a market-adaptive application for credit card offerings
  - Provide capacity and scale to service an ever-increasing volume of transactions
  - Increase capacity of the batch window and meet contractual SLAs
- Summary
3 Introduction to in-memory tables
4 What is an in-memory table? A data space in memory for table processing. In-memory tables are kept in a Table Share Region (TSR). Tables are made up of rows, keys, indexes, organization and search methods.
5 Options for in-memory tables
- Define, autoload and index tables
- Tables have a fixed row length; each row contains a key and structured data of variable format
- The key can be multiple fields; data can be values, instructions, locations, rules or decisions
- Group multiple rows from different tables with different formats into a single table
- Create alternate indexes on the fly; use alternate indexes as a virtual sort
- Optimize table search: options for table organization and table search
6 How in-memory tables reduce elapsed time
Access at I/O speed: each request for data can result in one or more I/Os. IMS or DB2 can buffer the data, but it still needs to be reformatted: IMS or DB2 brings back a block, and the data is extracted and reformatted.
Access at memory speed: the first call for data loads the table and creates the index; all other calls access the in-memory table, which returns the entire row. Create alternate indexes on the fly, perform virtual sorts using alternate indexes, and optimize the search method, with no DBMS or OS overhead.
With tableextenz the path to data is shorter and much, much faster.
7 Performance of in-memory tables: the shortest, fastest possible path to data. Use in-memory tables alongside a DBMS to off-load read-only I/Os and to replace the creation of temporary files. When the amount of time taken for each transaction is reduced, and the number of transactions per unit of time is increased, performance and capacity increase significantly.
8 What data works best for in-memory tables
9 The data just keeps growing! Your enterprise has many different kinds of data:
- Metadata
- Reference data
- Transaction structure data
- Enterprise structure data
- Master data
- Transaction activity data
- Transaction temporary data
- Transaction audit data
- Transaction data
Some is used for transactions, some of it is temporary; some of it changes often, and some changes infrequently. You have all these different types of data, though you may use different naming conventions for them.
10 Not all data is handled the same way
Reference data: is 5-15% of your total data; changes infrequently; is accessed often, and may represent as much as 80% of your accesses.
Temporary data: is created, processed and then deleted; generates a high volume of data accesses relative to the volume of data.
Remaining data: the largest volume of data; a read is often followed by a write; the lowest number of accesses.
11 A small proportion of your data generates the most I/Os: between 5 and 15% of your total data generates 80% of your I/Os, and takes time and CPU.
12 How to optimize your reference and temporary data: 1. Identify the highly accessed reference data. 2. Put it into tableBASE. 3. Reduce I/Os and elapsed time by 85%. 4. Reduce CPU by 65%.
13 What is reference data? Data that is used to validate transactions, data that is looked up as part of a transaction, or data that categorizes other data. Examples:
- As part of credit card transactions: name, address, credit card number, expiry date
- As part of settlement: vendor name, address, account information; purchaser name, address, account number, bank information
- Price tables
- Lists of cities, states and countries
- Rate tables, which may differ depending on geographic location
- Tax tables
- Product or part numbers
- SIC classifications
14 What is temporary data? Data that is collected, often from multiple sources, usually sorted, and then used as input for subsequent applications (which may be a printing program). Examples: consolidating financial information from the various accounts a customer may have in order to create a consolidated statement; calculating instantaneous net worth.
15 What you can do with in-memory tables
16 Barriers to performance and flexibility
What slows data down? Moving data from disk to memory and back again; repeated retrieval of identical information; looping serially through information to find the piece you want; waiting for one transaction to complete before starting another.
What slows down market adaptiveness? Relearning complex logic in order to enhance or repair; rules embedded in program logic; logic trees embedded in program logic; updates that need to be propagated to many programs; comprehensive testing cycles.
17 Optimize access to your data: copy your reference data into in-memory tables and access it from there to minimize I/O and CPU resource usage. Use in-memory tables to store your temporary data, avoiding unnecessary and wasteful I/O accesses (any I/O you can save is a good I/O). Access the rest of your data directly from the DBMS.
18 Efficient use of temporary tables: temporary tables can be defined and used in lieu of DASD temporary files. Temporary processing is done in memory, and the table is deleted at the end of the processing. Temporary tables can be shared between two or more programs, and between two or more transactions.
19 Build more powerful applications using table-driven design techniques with in-memory tables:
- Business rules contained in DBMS tables are easy enough to maintain, but have performance issues, as every transaction incurs many I/O accesses to the disk-based rules.
- Business rules contained in application code run very fast, but are difficult to maintain: change control involves recompiles and redeployment, and inefficient use and over-use of programming staff.
- Business rules contained in in-memory tables process very fast and are easily maintained: changes to tables, and new tables, can be managed by non-technical staff.
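The idea of rules-in-tables can be sketched in a few lines. This is an illustrative example only, not tableBASE code; the card types, regions and rates are invented for the sketch.

```python
# Sketch: business rules held as rows in an in-memory table instead of
# being hard-coded as a logic tree. All names and rates are hypothetical.

# Each rule row: (card_type, region) -> fee rate
DECISION_TABLE = {
    ("GOLD", "US"): 0.015,
    ("GOLD", "EU"): 0.018,
    ("CLASSIC", "US"): 0.022,
    ("CLASSIC", "EU"): 0.025,
}

def fee_rate(card_type, region, default=0.030):
    # One keyed lookup replaces a nested if/else tree; changing a rate
    # means editing a table row, not recompiling program logic.
    return DECISION_TABLE.get((card_type, region), default)

print(fee_rate("GOLD", "EU"))      # 0.018
print(fee_rate("PLATINUM", "US"))  # no matching rule: default applies
```

Because the rules live in data rather than code, a new card type or region is a new table row, which is exactly the maintenance advantage claimed above.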
20 Capabilities of in-memory tables: define, build, maintain and manage in-memory tables; assemble data from multiple sources into a temporary table for subsequent processing or rendering; optimize table search; place reference data and rules in read-only tables; replace sorts with virtual sorts using alternate indexes that can be defined on the fly; replace logic trees with decision tables; use tables as a write-through cache or as message queues; share tables between applications; switch applications simultaneously to access new data.
21 Results that in-memory tables can provide. With in-memory tables you can: place reference data in tables; replace temporary files with temporary tables; use tables for rules; use tables as a message queue; use decision tables to replace logic trees; use tables for process control; and use temporary tables for implementing complex algorithms. Depending on the technique, the benefits are decreased MSU, decreased elapsed time, increased flexibility and market adaptation, reduced maintenance, and the enabling of new paradigms.
22 Where can in-memory tables be used?
Challenge: performance issues. Results: improved efficiency and performance of existing applications; eliminated the cost, complexity and risk of hardware upgrade / migration plans.
Challenge: application complexity. Results: reduced application support and maintenance costs; redeployed people to revenue-generating tasks.
Challenge: adding new services is painful. Results: enhanced ability to add new services quickly without additional cost; turnaround time dramatically reduced.
Challenge: M&A activity. Results: integrated operations with success; reduced costs by maximizing existing investments; improved operational efficiency while reducing costs.
23 The details of how in-memory tables achieve their speed
24 Designed for performance
- Does no metadata translation on a field-by-field basis
- Avoids calling the OS for access (e.g., LINK commands or I/O)
- Avoids GETMAINs and locking
- Efficient search algorithms: chooses the search that is the most efficient for the data
- Returns entire rows or portions thereof
- Uses implicit commands to reduce changes to the calling application: the application asks for a row, and if the table is not open, tableBASE opens it, avoiding explicit changes to the application to first open the table and then find the row
- Uses shortcuts to reduce the path to data: searches the list of tables the first time, and on subsequent calls uses a shortcut so it doesn't have to search again
- Allows dynamic creation of indexes: a table can be populated and then indexed, and multiple indexes can be created after population
25 Other advantages of in-memory tables
- Efficient use of temporary tables: temporary tables can be shared by multiple programs
- Row addressing efficiency: get next 1, get next n, get last n, etc., all built in; no special structures or setup required
- Multiple search strategies: dynamically switch between binary, serial and hash search, by the programmer, with no help from a DBA needed
- Multiple dynamic indexing: dynamically add new indexes, with multiple indexes for one table; no help from a DBA needed
- Indirect opens: abstracts out the data, allowing simple indirect references to tables or groups of data; tables are designed by the programmer, with no help from a DBA needed
- Date-sensitive processing: data based on effective dates can be automatically selected using a built-in function rather than by program logic; no need for a WHERE clause, and it can be used by any tableBASE application
26 Multiple search strategies: you can dynamically switch between binary, serial and hash search. This can be done by the programmer on the fly, with no need for DBA involvement to analyze tables, build new indexes and deploy them.
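What "switching search strategies on the same table" means can be sketched as follows. This is a toy model, not tableBASE's actual interface; the class and method names are invented for illustration.

```python
import bisect

class Table:
    """Toy in-memory table supporting serial, binary and hash search.
    Illustrative only -- the real product chooses and switches strategies
    through its own commands."""
    def __init__(self, rows, key):
        self.rows = rows          # list of dict rows
        self.key = key
        self.strategy = "serial"
        self._sorted = None       # built lazily for binary search
        self._hash = None         # built lazily for hash search

    def set_strategy(self, strategy):
        self.strategy = strategy
        if strategy == "binary" and self._sorted is None:
            self._sorted = sorted(self.rows, key=lambda r: r[self.key])
        if strategy == "hash" and self._hash is None:
            self._hash = {r[self.key]: r for r in self.rows}

    def find(self, value):
        if self.strategy == "hash":            # O(1) average
            return self._hash.get(value)
        if self.strategy == "binary":          # O(log n)
            keys = [r[self.key] for r in self._sorted]
            i = bisect.bisect_left(keys, value)
            if i < len(keys) and keys[i] == value:
                return self._sorted[i]
            return None
        for r in self.rows:                    # serial scan, O(n)
            if r[self.key] == value:
                return r
        return None
```

The point of the sketch: the caller picks the strategy that fits the data (hash for random single-key hits, binary for ordered key ranges, serial for tiny tables) without any DBA-side index rebuild.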
27 Row addressing: a rich command set with positional addressing of rows. Insert, update and retrieval commands work by position in the table: insert by count, replace by count, fetch by count; get next, get next n; get previous, get previous n; last, last n; first, first n. Especially useful for processing that involves scrolling through the rows of a table, and it requires no programmatic setup.
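The positional commands above amount to a cursor over the table. A minimal sketch, with invented names (not the product's command set):

```python
class Cursor:
    """Positional row addressing over an in-memory table (sketch)."""
    def __init__(self, rows):
        self.rows = rows
        self.pos = -1                    # positioned before the first row

    def first(self, n=1):
        # position on the n-th row and return the first n rows
        self.pos = min(n, len(self.rows)) - 1
        return self.rows[:n]

    def last(self, n=1):
        self.pos = len(self.rows) - 1
        return self.rows[-n:]

    def get_next(self, n=1):
        start = self.pos + 1
        got = self.rows[start:start + n]
        self.pos = start + len(got) - 1  # unchanged if nothing returned
        return got

    def get_previous(self, n=1):
        end = self.pos                   # rows strictly before current
        start = max(end - n, 0)
        got = self.rows[start:end]
        if got:
            self.pos = start - 1
        return got
```

A scrolling application (paging through statement lines, say) just issues `get_next(n)` / `get_previous(n)` without tracking offsets itself.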
28 Multiple dynamic indexing: you can dynamically create new indexes for a table, and there can be multiple indexes on a table at one time, with no need for DBA involvement to analyze tables, build new indexes and deploy them.
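Dynamic alternate indexes also explain the "virtual sort" mentioned earlier: walking an index in key order makes the table appear sorted by that column without moving a single row. A sketch, with illustrative names:

```python
from collections import defaultdict

class IndexedTable:
    """Sketch: create alternate indexes on the fly over an in-memory table."""
    def __init__(self, rows):
        self.rows = rows
        self.indexes = {}

    def create_index(self, column):
        # build a column -> row-position index at any time, even after load
        idx = defaultdict(list)
        for i, row in enumerate(self.rows):
            idx[row[column]].append(i)
        self.indexes[column] = idx

    def lookup(self, column, value):
        return [self.rows[i] for i in self.indexes[column].get(value, [])]

    def scan_sorted(self, column):
        # "virtual sort": iterate the index in key order; the data itself
        # is never physically reordered
        idx = self.indexes[column]
        for key in sorted(idx):
            for i in idx[key]:
                yield self.rows[i]
```

Several such indexes can coexist on one table, so the same rows can be "virtually sorted" by account number for one report and by zip code for another.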
29 Simplifies and better manages generational / time-based data. Indirect open processing, a built-in feature of tableBASE. Example: two or more tax tables exist, and another table contains the names of those tax tables. A zip code is used as the key into that key table; tableBASE checks the key table and passes back the name of the correct tax table, which is then opened indirectly. You don't need program logic (or extra columns) to select the correct table. Tables can be designed by the programmer, with no need for a DBA to define tables and no need for complex SQL design.
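The indirect-open pattern can be sketched in miniature. Everything here (table names, zip prefixes, rates) is invented for illustration; the shape of the lookup is what matters.

```python
# Sketch of indirect open: a key table maps zip codes to the *name* of the
# tax table to use; the program never hard-codes which table it opens.

TABLES = {
    "TAXTAB_EAST": {"WIDGET": 0.07},
    "TAXTAB_WEST": {"WIDGET": 0.085},
}

# Key table: zip-code prefix -> tax table name
KEY_TABLE = {"10": "TAXTAB_EAST", "90": "TAXTAB_WEST"}

def open_indirect(zip_code):
    table_name = KEY_TABLE[zip_code[:2]]   # first look up the table *name*
    return TABLES[table_name]              # then open that table

rate = open_indirect("90210")["WIDGET"]
print(rate)  # 0.085
```

Adding a new region is then a data change (a new table plus a key-table row), not a program change.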
30 Simplifies and better manages generational / time-based data. Date-sensitive processing, a built-in feature of tableBASE. Example: normally a table is searched via a match on the key; here you can match on the key and a date, where the date matches a date range within the price table, all with a single command (e.g., request the price of a widget on a given date). Using the same technique, the price can be checked for last month, two years ago, etc. You don't need program logic (or extra columns) to obtain date-sensitive pricing data. Access to tables of this type is built in to tableBASE, with no need for a WHERE clause, and a program can access either a date-sensitive table or a non-date-sensitive table without change.
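The effective-date match described above can be sketched as a key-plus-date-range lookup. The row layout and prices are invented for the sketch; the real feature needs no user-written search loop at all.

```python
from datetime import date

# Price table rows: (key, effective_from, effective_to, price)
PRICE_TABLE = [
    ("WIDGET", date(2023, 1, 1), date(2023, 12, 31), 9.99),
    ("WIDGET", date(2024, 1, 1), date(2024, 12, 31), 10.49),
]

def price_on(key, on_date):
    """Match on key AND a date falling within the row's effective range --
    the single-command, date-sensitive lookup described above (sketch)."""
    for k, start, end, price in PRICE_TABLE:
        if k == key and start <= on_date <= end:
            return price
    return None

print(price_on("WIDGET", date(2024, 6, 15)))  # 10.49
print(price_on("WIDGET", date(2023, 3, 1)))   # 9.99 -- same call, last year's price
```

The same call with a different date retrieves historical or generational data, which is why no extra columns or WHERE-clause logic are needed in the application.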
31 Why in-memory tables are such a good match for optimizing batch applications
32 Key attributes of batch processing
- Processes large volumes, or batches, of instructions or files sequentially; a large volume of work requires repetitive actions or repetitive logic
- Jobs are often queued, with the outputs of one job providing the inputs for the next job
- Often has repeated reads of static data, and may create temporary files as part of the data record processing
- Batch jobs do a lot of reading, writing and sorting of sequential and VSAM files
- Manual intervention or inputs are not required: once a batch job begins, it runs automatically until it is done or until an error occurs
- Traditionally there has been a batch window, a period of time without any OLTP when batch jobs would run uninterrupted; data for OLTP would then always be current
33 Options to address batch window compression
- Scheduling solutions
- More hardware
- Grid workflows
- Application re-architecture or optimization, or DBMS optimization
- Running batch and OLTP concurrently: may reduce the performance of OLTP (as seen at Home Depot), and there may be challenges in accessing the data concurrently (lockouts, data not current)
- Moving data into memory (caching)
- The DataKinetics in-memory solution
Many of these solutions require hardware, and ongoing monitoring and optimization of the processes.
34 One of IBM's recommendations: "Normally batch programs process a huge amount of data. Thereby often, depending on the business logic, some data is static and does not change during the program run (for example company address information). It is recommended to read such data only once from the database and cache it somewhere for further processing. This will prevent your system from running unnecessary round trips to the database." (From the IBM Redbook on Batch Modernization on z/OS, December 2009.) In other words: eliminate repeated reads of the same data. BUT implementing data in memory (DIM) techniques is a complex task, whereas implementing in-memory tables is not.
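The "read once and cache" advice boils down to a memoized lookup. A minimal sketch; `fetch_from_db` stands in for a real DBMS round trip and is entirely hypothetical.

```python
# Sketch of caching static reference data: the first request performs the
# expensive database read; every later request is served from memory.

_cache = {}

def fetch_from_db(company_id):
    # placeholder for an expensive round trip to the database
    return {"company_id": company_id, "address": "123 Example St"}

def get_company(company_id):
    if company_id not in _cache:          # first call: one real read
        _cache[company_id] = fetch_from_db(company_id)
    return _cache[company_id]             # later calls: memory speed

a = get_company(42)
b = get_company(42)
print(a is b)  # True -- the second call never touched the "database"
```

Hand-rolled caches like this are exactly the DIM plumbing (invalidation, sharing across programs, storage management) that an in-memory table product packages up for you.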
35 The easiest solution to implement: solutions to batch window compression compared.

| Solution | Hardware | Software | Ongoing monitoring and optimization | Code changes | Complexity |
| Scheduling solutions | No* | Yes | Yes | No* | Medium |
| More MIPS | Yes | Licensing* | No change | None | Low |
| Grid workflows | Yes | Yes | Yes | Some | Medium to High |
| Application re-architecture, optimization | No* | No* | No* | Could be extensive | Medium to High |
| DBMS optimization | No* | Yes** | Yes | Yes | Medium |
| Run batch and OLTP concurrently | Maybe*** | Maybe*** | Increased | Maybe*** | Medium |
| IBM's data in memory (DIM) solution (caching) | Spare memory and spare CPU | Yes | Yes | Could be extensive | High |
| DataKinetics in-memory solution | No | Yes | No | Minor | Low |

* Typical. ** DBMS optimization tools are needed in many solutions. *** Depends on specific environment and specific solution.
36 Why in-memory tables are such a good solution for batch processing. Batch jobs process large volumes, or batches, of instructions or files sequentially, and a large volume of work requires repetitive actions or repetitive logic. Jobs are often queued, with the outputs of one job providing the inputs for the next job; they often have repeated reads of static data, and may create temporary files as part of the transaction processing. Batch jobs do a lot of reading, writing and sorting of sequential and VSAM files, and they run automatically, without manual intervention, until they are done or an error occurs. Traditionally there has been a batch window, a period of time without OLTP when batch jobs could run uninterrupted, so that data for OLTP would always be current. These are the areas where in-memory tables excel at improving performance.
37 Typical results of customer implementations Optimize data access in batch processing Provide consolidated customer statements Provide capacity and scale to service ever increasing volume of transactions
38 Pressures on the batch window: in today's web world, demands for 24/7 OLTP are constantly increasing (global business, e-commerce, customers demanding 24/7 access to services such as banking). Meanwhile, the batch processing workload keeps increasing as the business changes: the need to handle larger volumes of data, to incorporate additional functions, and to integrate corporate acquisitions.
39 Typical results from some of our customers 39
40 Typical customer results - CPU 40
41 More average results from some of our customers: I/Os
43 Consolidated customer statements - challenge. Challenge: produce a single customer statement covering all of the customer's financial products, provide a summary of their net worth, and promote those products the customer does not have. When each financial product that the customer uses resides in a different database and is accessed by a different application, consolidating that information takes considerable CPU and elapsed time. Creating a statement of net worth requires integrating the information after consolidating it, and identifying which products the customer does not have, and promoting them, requires still more processing.
44 Consolidated customer statements - solution. Solution: populate an account consolidation table in memory with account data; render its contents via virtual sorts, using formatting tables (rules tables); provide a summary of the customer's net worth at that moment, taken from the consolidation table, where it is summarized; and, based on the services the customer has and the customer profile taken from the consolidation table, provide customized offers for new services using decision tables. Do it all in tables. Results: reduced cost and elapsed time to generate statements, by replacing temporary files with in-memory tables, reading all data once into in-memory tables, and using virtual sorts.
45 Credit card settlement - challenge. Challenge: handle a higher volume of transactions, and be able to implement changes to the settlement process. As credit card use increases, the volume of transactions that must go through the settlement process increases. When a customer uses their credit card, the transaction goes through an approval process (and some customers use tableBASE for transaction approvals), and then settlement is done to ensure the vendor gets their money and the customer gets charged. There is both a capacity challenge and a challenge in adapting the processes to accommodate changes to rules, such as currency exchange.
46 Credit card settlement - solution. Solution: with DataKinetics tableBASE, each transaction runs through only the business rules that are applicable to it, a far more efficient process. New accounts, users, card types, regions, jurisdictions, vendors, etc. can be added quickly to the appropriate table. Results: the financial company gets the capacity and throughput required for this business-critical application, handling 45B accesses per hour (12.5M accesses per second) using tableBASE for settlement.
48 How in-memory tables can optimize data access
- Use the DBMS for read/write updating of data; use in-memory tables to off-load read-only I/Os
- Optimize application performance by reducing I/Os: check how many I/Os are read-only, check how many read accesses there are to different tables, identify any temporary files that are created, and use temporary tables for consolidation and continuous summation
- Optimize application performance by sharing data: check how many applications use the same data, and place one copy in in-memory tables for all applications to use; check whether data is passed from one application to another, and use tables to pass that data at memory speed
49 Results that in-memory tables can provide. With in-memory tables you can: place reference data in tables; replace temporary files with temporary tables; use tables for rules; use tables as a message queue; use decision tables to replace logic trees; use tables for process control; and use temporary tables for implementing complex algorithms. Depending on the technique, the benefits are decreased MSU, decreased elapsed time, increased flexibility and market adaptation, reduced maintenance, and the enabling of new paradigms.
50 In-memory tables are used by enterprises to: optimize computing efficiency and reduce costs; achieve superior performance, capacity and scale; choose where to put their data; make infrastructure changes transparent to customers, employees and partners; and adapt quickly to market changes to be more competitive.
51 Thank you for your time. Verna Bartlett Head of Marketing ext
New Security Options in DB2 for z/os Release 9 and 10 IBM has added several security improvements for DB2 (IBM s mainframe strategic database software) in these releases. Both Data Security Officers and
In-Memory Columnar Databases HyPer Arto Kärki University of Helsinki 30.11.2012 1 Introduction Columnar Databases Design Choices Data Clustering and Compression Conclusion 2 Introduction The relational
ENHANCEMENTS TO SQL SERVER COLUMN STORES Anuhya Mallempati #2610771 CONTENTS Abstract Introduction Column store indexes Batch mode processing Other Enhancements Conclusion ABSTRACT SQL server introduced
PERFORMANCE TUNING FOR PEOPLESOFT APPLICATIONS 1.Introduction: It is a widely known fact that 80% of performance problems are a direct result of the to poor performance, such as server configuration, resource
Cognos Performance Troubleshooting Presenters James Salmon Marketing Manager James.Salmon@budgetingsolutions.co.uk Andy Ellis Senior BI Consultant Andy.Ellis@budgetingsolutions.co.uk Want to ask a question?
WHITE PAPER Amadeus SAS Specialists Prove Fusion iomemory a Superior Analysis Accelerator 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com SAS 9 Preferred Implementation Partner tests a single Fusion
Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_SSD_Cache_WP_ 20140512 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges...
Revision Date: July 2009 Crystal Reports Server 2008 Sizing Guide Overview Crystal Reports Server system sizing involves the process of determining how many resources are required to support a given workload.
FAQ: HPA-SQL FOR DB2 MAY 2013 Table of Contents 1 WHAT IS HPA-SQL FOR DB2?... 3 2 WHAT ARE HPA-SQL FOR DB2 UNIQUE ADVANTAGES?... 4 3 BUSINESS BENEFITS... 4 4 WHY PURCHASING HPA-SQL FOR DB2?... 5 5 WHAT
WITH A FUSION POWERED SQL SERVER 2014 IN-MEMORY OLTP DATABASE 1 W W W. F U S I ON I O.COM Table of Contents Table of Contents... 2 Executive Summary... 3 Introduction: In-Memory Meets iomemory... 4 What
Proactive database performance management white paper 1. The Significance of IT in current business market 3 2. What is Proactive Database Performance Management? 3 Performance analysis through the Identification
Physical Database Design Process Physical Database Design Process The last stage of the database design process. A process of mapping the logical database structure developed in previous stages into internal
Information Systems Capacity Planning Monthly Report NOV 87 4Q87... Capacity Planning Monthly Report - November, 1987 This issue of the "Capacity Planning Monthly Report" contains information current through
Front-End Performance Testing and Optimization Abstract Today, web user turnaround starts from more than 3 seconds of response time. This demands performance optimization on all application levels. Client
Expert Oracle Exadata Kerry Osborne Randy Johnson Tanel Poder Apress Contents J m About the Authors About the Technical Reviewer a Acknowledgments Introduction xvi xvii xviii xix Chapter 1: What Is Exadata?
Accelerating Web-Based SQL Server Applications with SafePeak Plug and Play Dynamic Database Caching A SafePeak Whitepaper February 2014 www.safepeak.com Copyright. SafePeak Technologies 2014 Contents Objective...