q for Gods Whitepaper Series (Edition 3) kdb+ Data Management: Sample Customisation Techniques
December 2012 (With Amendments from Original)

Author: Simon Mescal. Based in New York, Simon is a Financial Engineer who has designed and developed kdb+ related data management solutions for top-tier investment banks and trading firms across multiple asset classes. Simon has extensive experience implementing a range of applications in both object-oriented and vector programming languages.

© 2013 First Derivatives plc. All rights reserved. No part of this material may be copied, photocopied or duplicated in any form by any means or redistributed without the prior written consent of First Derivatives. All third-party trademarks are owned by their respective owners and are used with permission. First Derivatives and its affiliates do not recommend or make any representation as to possible benefits from any securities or investments, or third-party products or services. Investors should undertake their own due diligence regarding securities and investment practices. This material may contain forward-looking statements regarding First Derivatives and its affiliates that are based on the current beliefs and expectations of management, are subject to significant risks and uncertainties, and which may differ from actual results. All data is as of January 10, 2013. First Derivatives disclaims any duty to update this information.
Table of Contents

1 Introduction
2 Sample Data Management Techniques
2.1 Multiple Enumeration Files
2.2 Automating Schema Changes
2.3 Attributes on Splayed Partitioned Tables
2.4 ID Fields - The GUID Data Type
2.5 Combining Real-time and Historical Data
2.6 Persisting Intra-day Data to Disk Intra-day
2.7 Eliminate the Intra-day Process
3 Conclusion
1 INTRODUCTION

This paper illustrates the flexibility of kdb+ and how instances can be customised to meet the practical needs of business applications across different financial asset classes. Given the vast extent to which kdb+ can be customised, this document focuses on some specific cases relating to the capture and storage of intra-day and historical time-series data. Many of the cases described should resonate with those charged with managing large-scale kdb+ installations. These complex systems can involve hundreds of kdb+ processes and/or databases supporting risk, trading and compliance analytics powered by petabytes of data, including tick, transaction, pricing and reference data.

The goal of the paper is to share implementation options with those tasked with overseeing kdb+ setups, in the hope that the options will allow them to scale their deployments in a straightforward manner, maximise performance and enhance the end-user experience. Seven scenarios are presented, focused on tuning the technology using some common and uncommon techniques that allow for a significantly more efficient and performant system. The context in which these techniques would be adopted is discussed, as well as the gains and potential drawbacks of adopting them.

Where appropriate, sample code snippets are provided to illustrate how the technique might be implemented. The code examples are there for illustrative purposes only, and we recommend that you always check the use and effect of any of the samples on a test database before implementing changes to a large dataset or production environment. Sample on-disk test databases can be created using publicly available scripts. Code examples are intended for readers with at least a beginner's level of knowledge of kdb+ and the q language. However, we encourage any interested reader to contact us at [email protected] with questions regarding code syntax or any other aspect of the paper.
2 SAMPLE DATA MANAGEMENT TECHNIQUES

2.1 Multiple Enumeration Files

In splayed databases, the default behaviour is that all columns of type symbol are enumerated against the same list, a file called sym. Enumerating in this context means that rather than containing the actual symbol values, the column, when saved to disk, consists of a list of integers, each of which is the index of the actual value in the list (file) called sym. For more on enumerations, see the kdb+ reference documentation.

This is not always desirable in databases containing multiple tables and numerous columns of type symbol, which may represent, for instance, different types of security identifier, country codes and exchange codes. If the sym file becomes corrupt then all the symbol columns will be lost. By decoupling the columns of type symbol - maintaining a separate enumeration file for each symbol column - corruption or loss of data in one enumeration file will not impact all columns of type symbol.

The q function which enumerates a table is .Q.en. It takes a directory path and a table, and enumerates the table using a file called sym in the directory (if there is no such file it creates it). The enumerate function below takes the same arguments but, rather than enumerating against the sym file, it enumerates against a different file for each column, with the file name matching the column name.

enumerate:{[d;t]                                  / directory, table
 / t becomes a mapping from column name to column (list)
 t:cols[t]!
  {[d;n;c]                                        / directory, name, column
   / if the column is a symbol column, enumerate it; otherwise return it unchanged
   $[11=type c;
    [
     / load the enumeration file if it exists, otherwise initialise an empty one
     $[count key f:` sv d,n;n set get f;n set `symbol$()];
     / add any new values to the enumeration file
     if[not all c in value n;f set value n set distinct value[n],c];
     / enumerate the list (c) against enumeration file n
     n$c];
    c]
  }[d]'[cols t;value flip t];                     / iterate over each column-name/column pair
 flip t}
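As a minimal illustration of the effect, the following sketch calls enumerate for a single table; the trade table, its columns and the use of the current working directory as the database root are hypothetical.

trade:([]sym:`IBM`MSFT`IBM;exch:`N`O`N;price:101.1 35.5 101.2)
hsym[`$string[.z.d],"/trade/"] set enumerate[`:.;trade]
get `:sym    / `IBM`MSFT - enumeration file for the sym column
get `:exch   / `N`O      - enumeration file for the exch column

Each symbol column now has its own enumeration file in the database root, so damage to one file is contained to that column.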
/ the following calls enumerate for each (non-keyed) table in the root namespace,
/ writing the table to the current date directory
{hsym[`$string[.z.d],"/",string[x],"/"] set enumerate[`:.;value x]}
 each tables[`.] where 98=type each get each tables`.

A slight variation on the enumerate function, detailed below, specifies a mapping from column to enumeration file so that certain columns share enumeration files.

/ mapping from column names to enumeration file names
columnfilemapping:`cusip`isin`symbol`market`currency`sector`trader`traderID!
 `products`products`products`markets`markets`markets`people`people

enumerate:{[d;t]                                  / directory, table
 t:cols[t]!
  {[d;n;c]                                        / directory, name, column
   / if there is no mapping for column n, use sym
   enum:`sym^columnfilemapping n;
   $[11=type c;
    [
     $[count key f:` sv d,enum;enum set get f;enum set `symbol$()];
     if[not all c in value enum;f set value enum set distinct value[enum],c];
     enum$c];
    c]
  }[d]'[cols t;value flip t];
 flip t}

2.2 Automating Schema Changes

In some kdb+ installations, table schemas change frequently. This can be difficult to manage if the installation consists of a splayed partitioned database, since each time the schema changes you have to manually change all the historical data to comply with the new schema. This is time-consuming and prone to error (dbmaint.q, in the contributed code section of the KX website, is a useful tool for performing these manual changes). To reduce the administrative overhead, the historical database schema can be updated programmatically as intra-day data is persisted. This can be achieved by invoking the function updatehistoricalschema (below), which connects to the historical database and runs locally defined functions to add and remove tables, add and remove columns, reorder columns
and change their types. It does this by comparing the table schemas in the last partition with those of the other partitions, and making the necessary adjustments to the other partitions. Note that this code is only compatible with kdb+ versions 2.6 and upwards (in previous versions the metadata is read from the first, rather than the last, partition, so minor changes are necessary to get this to work on earlier versions). Note also that column type conversion (the function changecolumntypes) will not work when changing to or from nested types or symbols; this functionality is beyond the scope of this paper.

The updatehistoricalschema function should be invoked once the in-memory data is fully persisted. If kdb+ tick is being used, the function should be called as the last line of the .u.end function in r.q, the real-time database. updatehistoricalschema takes the port of the historical database as an argument, opens a connection to the historical database and calls functions that perform changes to the historical data. Note that the functions are defined in the calling process. This avoids having to load functionality into the historical process, which usually has no functionality defined in it.

updatehistoricalschema:{[con]
 h:hopen con;
 h(addtables;());
 h(removetables;());
 h(addcolumns;());
 h(removecolumns;());
 h(reordercolumns;());
 h(changecolumntypes;())}

The following function simply adds empty tables to any older partitions, based on the contents of the latest partition.

addtables:{.Q.chk`:.}

The following function finds all tables that are not in the latest partition but are in other partitions, and removes those tables.

removetables:{
 t:distinct[raze key each hsym each `$string -1_date]except tables`.;
 {@[system;x;::]}each "rm -r ",/:string[-1_date] cross "/",/:string t}

The next function iterates over each table-column pair in all partitions except for the latest one. It makes sure all columns are present and, if not, creates the column with the default value of the column type in the latest partition.
addcolumns:{
 {[t]
  {[t;c]
   {[t;c;d]
    / default (null) value for each column type
    defaults:" Cefihjsdtz"!("";""),first each "efihjsdtz"$\:();
    / if the column file is missing from this partition, create it, filled with
    / the default value for the column's type in the latest partition
    if[0=type key f:hsym`$string[d],"/",string[t],"/",string c;
     f set count[get hsym`$string[d],"/",string[t],"/sym"]#defaults meta[t][c]`t]
   }[t;c]each -1_date
  }[t]each cols[t]except `date
 }each tables`.}

The following function deletes columns in earlier partitions that are not in the last partition. It does this by iterating over each of the tables in all partitions except for the last one, getting the columns that are in that partition but not in the last partition, and deleting them.

removecolumns:{
 {[t]
  {[t;d]
   / files in this partition's table directory that are not columns of the latest partition
   delcols:key[t:hsym`$string[d],"/",string t]except `.d,cols t;
   {[t;c]hdel`$string[t],"/",string c}[t]each delcols;
   / remove the deleted columns from the partition's .d file
   if[count delcols;
    (`$string[t],"/.d") set get[`$string[t],"/.d"]except delcols]
  }[t]each -1_date
 }each tables`.}

The next function re-orders the columns by iterating over each of the partitions except for the last. It checks that the column order matches that of the latest partition by looking at the .d file. If there is a mismatch, it modifies the .d file to match the column list in the last partition.
reordercolumns:{
 / the .d file specifies the column names and their order
 {[d]
  {[d;t]
   if[not except[cols t;`date]~get f:hsym`$string[d],"/",string[t],"/.d";
    f set cols[t]except `date]
  }[d]each tables`.
 }each -1_date}

The following function iterates over every table-column pair in all partitions except for the last. It checks that the type of the column matches that of the last partition and, if not, casts it to the correct type.

changecolumntypes:{
 {[t]
  {[t;c]
   typ:meta[t][c]`t;
   frst:type get hsym`$string[first date],cpath:"/",string[t],"/",string c;
   lst:type get hsym`$string[last date],cpath;
   / if the type of the column in the first and last partitions differs,
   / and the type in the last partition is not symbol, character or a list,
   / and the type in the first partition is not a generic list, char vector or symbol,
   / convert all previous partitions to the type in the last partition
   if[not[frst=lst]&not[typ in "sc ",.Q.A]&not frst in 0 10 11h;
    {[c;t;d]hsym[`$string[d],c] set t$get hsym`$string[d],c}[cpath;typ]each -1_date]
  }[t]each cols[t]except `date
 }each tables`.}
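For completeness, the call site in the real-time database might look like the sketch below. This is not the stock r.q end-of-day definition; the historical database port (5012) and the use of .Q.hdpf with a sym parted column are assumptions used for illustration.

.u.end:{[d]
 .Q.hdpf[`::5012;`:.;d;`sym];          / persist the day's tables and instruct the HDB to reload
 updatehistoricalschema[`::5012]}      / then bring the older partitions in line with the new schema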
2.3 Attributes on Splayed Partitioned Tables

Attributes are used to speed up table joins and searches on lists. There are four different attributes:

- grouped (`g#)
- parted (`p#)
- unique (`u#)
- sorted (`s#)

To apply the parted attribute to a list, all occurrences of each element must be adjacent to one another. For example, the parted attribute cannot be applied to the list 2 1 2 3, because the two occurrences of 2 are not adjacent; it can, however, be applied to the list 1 1 2 2 3. Sorting a list in ascending or descending order guarantees that the parted attribute can be applied. Likewise, to apply the unique attribute the elements of the list must be unique, and to apply the sorted attribute the elements of the list must be in ascending order. The grouped attribute does not dictate any constraints on the list and can be applied to any list of atoms.

When the unique, parted or grouped attribute is set on a list, q creates a hashtable alongside the list. For a list with the grouped attribute, the hashtable maps each element to the indices where it occurs; for a parted list it maps each element to the index at which it starts; and for a list with the unique attribute it maps each element to its index. The sorted attribute merely marks the list as sorted, so that q knows to use binary search in certain functions (such as =, in, ?, within).

Now we explore the use of attributes on splayed partitioned tables. The most common approach with regard to attributes in splayed partitioned tables is to set the parted attribute on the security identifier column of the table, as this is usually the most commonly queried column. However, we are not limited to just one attribute on splayed partitioned tables. Given the constraints required for the parted and sorted attributes, as explained above, it is rarely possible to apply more than one parted, more than one sorted, or a combination of the two on a given table; there would have to be a specific functional relationship between the two data sets for this to be feasible. That leaves grouped as a viable additional attribute to exist alongside parted or sorted.

To illustrate the performance gain of using attributes, the following query was executed with various attributes applied, where the number of records in t in partition x was 10 million, the number of distinct values of c in partition x was 6,700, and c was of type enumerated symbol. Note that the query itself is not very useful, but it does provide a useful benchmark.

select c from t where date=x,c=y
The query was 18 times faster when column c had the grouped attribute compared with no attribute. With the column sorted and the parted attribute applied, the query was 29 times faster than with no attribute. However, there is a trade-off with disk space: the column with the grouped attribute applied consumed 3 times as much space as the column with no attribute, whereas the parted attribute consumed only 1% more than no attribute. Parted and sorted attributes are significantly more space efficient than grouped and unique.

Suppose the data contains IDs which, for a given date, are unique; then the unique attribute can be applied regardless of any other attributes on the table. Taking the same dataset as in the example above, with c (of type long instead of enumerated symbol) unique for each date and the unique attribute applied, the same query is 42 times faster with the attribute compared with no attribute. However, the column consumes 5 times the amount of disk space. Sorting the table by that same column and setting the sorted attribute results in the same query performance as with the unique attribute, and the same disk space consumed as with no attribute. However, the drawback of using sorted, as with parted for the reasons explained above, is not being able to use the sorted or parted attributes on any other columns.

The following is an approximation of the additional disk space (or memory) overhead, in bytes, of each of the attributes, where n is the size of the list and u is the number of unique elements.

Table 1: Disk space/memory overhead of each attribute

  Attribute   Space overhead
  sorted      0
  unique      16*n
  parted      24*u
  grouped     (24*u)+4*n
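For reference, attributes can be applied directly to a column of a splayed partitioned table on disk by amending the partition's table directory. The sketch below is illustrative only; the database path, table t and column c are hypothetical, and the parted form assumes identical values of c are already stored contiguously (for example because the partition was sorted by c when it was written).

@[`:/path/to/hdb/2012.12.31/t;`c;`p#]   / set the parted attribute on the on-disk column
@[`:/path/to/hdb/2012.12.31/t;`c;`g#]   / or set the grouped attribute, which has no ordering requirement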
2.4 ID Fields - The GUID Data Type

It is often the case that a kdb+ application receives an ID field which does not repeat and contains characters that are not numbers. The question is which data type should be used for such a field. The problem with using enumerated symbols in this case is that the enumeration file becomes too large, slowing down searches and resulting in too much memory being consumed by the enumeration list (see the kdb+ reference documentation for more on splayed tables and enumerations). The problem with using a char vector is that it is not an atom, so attributes cannot be applied, and searches on char vectors are very slow. In this case it is worth considering the GUID type, which can be used for storing 16-byte values (kdb+ version 3.0 onwards).

As the data is persisted to disk, or written to an intra-day in-memory instance, the ID field can be converted to GUID and persisted along with the original ID. The following code shows how this would be done in memory. It assumes that the original ID is a char vector of no more than 16 characters in length.

/ guidfromrawid pads the char vector with spaces on the left,
/ converts it to a byte array, and converts the byte array to a GUID
guidfromrawid:{0x0 sv `byte$#[16-count x;" "],x}

update guid:guidfromrawid each id from `t

Once the GUID equivalent of the ID field is generated, attributes can be applied to the column and queries such as the following can be performed.

select from t where date=d,guid=guidfromrawid["raw-id"]

If the ID is unique for each date, the unique attribute could be considered for the GUID column, but at the expense of more disk space, as illustrated above. If the number of characters in the ID is not known in advance, one should consider converting the rightmost 16 characters to GUID (or the leftmost, depending on which section of the ID changes the most), and keeping the remaining 16-character sections of the ID in a column consisting of GUID lists. An alternative is to convert just the rightmost 16 characters to GUID and use the original ID in the query as follows:

select from t where guid=guidfromrawid["raw-id"],id like "raw-id"

In this case the rightmost 16 characters will not necessarily be unique, therefore parted or grouped might be wise choices of attribute for the GUID column in the splayed partitioned table.
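A minimal sketch of the rightmost-16-characters approach for IDs longer than 16 characters is shown below; the table t, its id column and the example ID value are hypothetical, and guidfromrawid is the function defined above.

update guid:guidfromrawid each -16#'id from `t
select from t where guid=guidfromrawid[-16#"ORDER-20121231-000042"],id like "ORDER-20121231-000042"

Because guidfromrawid pads short inputs, the same expression also handles any IDs that happen to be shorter than 16 characters.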
2.5 Combining Real-time and Historical Data

Typically, intra-day and historical data reside in separate kdb+ processes. However, there are instances when this separation is not desirable:

- The data set may be too small to merit multiple processes.
- Users might want to view all their data in one location.
- It may be desirable to minimise the number of components to manage.
- Performance of intra-day queries may not be a priority (by combining intra-day and historical data into a single process, intra-day queries may have to wait for expensive I/O-bound historical queries to complete).

Moreover, by having separate processes for intra-day and historical data, it is often necessary to introduce a gateway as a single point of contact for querying intra-day and historical data, thereby running three processes for what is essentially a single data set. However, one of the main arguments against combining intra-day and historical data is that large I/O-bound queries on the historical data prevent updates being sent from the feeding process. This results in the TCP/IP buffer filling up, or blocks the feeding process entirely if it is making synchronous calls.

The code snippet below provides a function which simply writes the contents of the in-memory tables to disk in splayed partitioned form, with different names to their in-memory equivalents. The function then memory-maps the splayed partitioned tables.

eod:{
 / get the list of all in-memory, non-keyed tables
 t:tables[`.] where {not[.Q.qp x]&98=type x}each get each tables`.;
 / iterate through each of the tables; x (the argument to eod) is a date
 {[d;t]
  / enumerate the table, sort ascending by the optimised column and apply the parted attribute,
  / saving to a splayed table called <table>_hist
  hsym[`$string[d],"/",string[t],"_hist/"] set
   update `p#optimisedcolumn from `optimisedcolumn xasc .Q.en[`:.;get t];
  / delete the contents of the in-memory table and re-apply the grouped attribute
  update `g#optimisedcolumn from delete from t
 }[x]each t;
 / re-memory-map the splayed partitioned tables
 system"l ."}

The above will result in an additional (splayed partitioned) table being created for each non-keyed in-memory table. When combining intra-day and historical data into one process, it is best to provide functions to users and external applications which handle the intricacies of executing a query across the two tables (as a gateway process would), rather than allowing them to run raw queries.
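A sketch of such a wrapper function is shown below. It assumes an in-memory table trade holding the current day's data and its splayed partitioned counterpart trade_hist holding prior days, with hypothetical columns sym, price and size.

gettrades:{[sd;ed;syms]                   / start date, end date, list of symbols
 / prior days come from the splayed partitioned table
 hist:$[sd<.z.d;
  select date,sym,price,size from trade_hist where date within (sd;ed&.z.d-1),sym in syms;
  ()];
 / the current day comes from the in-memory table
 today:$[ed>=.z.d;
  select date:.z.d,sym,price,size from trade where sym in syms;
  ()];
 hist,today}

A caller simply asks for a date range, for example gettrades[2012.12.01;2012.12.31;`IBM`MSFT], and never needs to know where the boundary between the two tables lies.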
2.6 Persisting Intra-day Data to Disk Intra-day

Let us now assume there are memory limitations and you want the intra-day data to be persisted to disk multiple times throughout the day. The following example assumes the intra-day and historical data reside in the same process, as outlined in the previous section. However, in most cases large volumes of data dictate that separate historical and intra-day processes are required; accordingly, minor modifications can be made to the code below to handle the case of separate intra-day and historical processes.

One way of achieving intra-day persistence to disk is to create a function such as the one below, which is called periodically and appends the in-memory tables to their splayed partitioned counterparts. The function takes the date as an argument and makes use of upsert on a splayed table, which appends to the table (as compared to set, which overwrites).

flush:{[d]
 t:tables[`.] where {not[.Q.qp x]&98=type x}each get each tables`.;
 {[d;t]
  / f is the path to the table on disk
  f:hsym`$string[d],"/",string[t],"_hist/";
  / if the directory does not yet exist, set (create the splayed partitioned table);
  / otherwise upsert (append) the enumerated table
  $[0=type key f;set;upsert][f;.Q.en[`:.;get t]];
  delete from t
 }[d]each t;
 / re-memory-map the data and garbage collect
 system"l .";
 .Q.gc[]}

The above flush function does not sort or set the parted attribute on the data, as this would take too much time and block incoming queries; it would have to re-sort and save the entire partition each time. You could write functionality to be called at end-of-day to sort, set the parted attribute and re-write the data. Alternatively, to simplify things, do not make use of the parted attribute if query performance is not deemed a priority, or use the grouped attribute, which does not require sorting the data (see section 2.3 on Attributes on Splayed Partitioned Tables).

If the component which feeds the process is a tickerplant (see kdb+ tick), it is necessary to truncate the tickerplant log file (the file containing all records received) each time flush is called, otherwise the historical data could end up with duplicates. This can happen if the process saves to disk and then restarts, retrieving those same records from the log file, which would subsequently be saved again. The call to the flush function can be initiated from the tickerplant in the same way it calls the end-of-day function.
If the source of the data is something other than a component which maintains a file of records received, such as a tickerplant, then calling the flush function from within the process is usually the best approach.

2.7 Eliminate the Intra-day Process

In some cases only prior-day data is needed and there is no requirement to run queries on intra-day data. However, you still need to generate the historical data daily. If daily flat files are provided, then simply use a combination of the following functions to load the data and write it to the historical database in splayed partitioned form.

Table 2: Functions for creating splayed partitioned tables

  Function   Brief description
  .Q.dsftg   Load and save a file in chunks
  .Q.hdpf    Save in-memory tables in splayed partitioned format
  .Q.dpft    Same as above, but no splayed partitioned (historical) process specified
  .Q.en      Returns a copy of an in-memory table with symbol columns enumerated
  0:         Load a text file into an in-memory table or dictionary
  set        Writes a table in splayed format
  upsert     Appends to a splayed table

If kdb+ tick is being used, simply turn off the real-time database and write the contents of the tickerplant log file to the historical database as follows:

\l schema.q                               / load the schema
\cd /path/to/hdb                          / change directory to the HDB directory
upd:insert                                / replayed messages are inserted into the in-memory tables
-11!`:/path/to/tickerplantlog             / replay the log file
.Q.hdpf[hdb_port;`:.;.z.d;`partedcolumn]

If there are memory limitations on the box, then upd can be defined to periodically flush the in-memory data to disk as follows:

\l schema.q
\cd /path/to/hdb
flush:{
 / for each table with at least one record, create or append to the splayed table,
 / then empty the in-memory table
 {[t]
  f:hsym`$string[.z.d],"/",string[t],"/";
  $[0=type key f;set;upsert][f;.Q.en[`:.;get t]];
  delete from t
 }each tables[`.] where 0<count each get each tables`.}

upd:{
 insert[x;y];
 / call flush if any table has more than 1 million records
 if[any 1000000<count each get each tables`.;flush[]]}

/ replay the log file
-11!`:/path/to/tickerplantlog
/ write the remaining table contents to disk
flush[]

If kdb+ tick is not being used, the equivalent of the tickerplant log file can be created simply by feeding the data into a q process via a callback function which writes it to a file, and then persisting it to disk as above:

f:`:/path/to/logfile
f set ()                        / initialise the file to an empty list
h:hopen f                       / open a handle to the file
callback:{h enlist(`upd;x;y)}   / x is the table name, y is the data
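From the other side, a feed process could append records through that callback over IPC. The sketch below is illustrative only; the port 5013, the trade table and the record contents are hypothetical.

feedh:hopen 5013                                 / connect to the q process that owns the log file
feedh(`callback;`trade;(.z.n;`IBM;101.5;100))    / append one trade record to its log file for later replay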
3 CONCLUSION

When designing kdb+ applications to tackle the management of diverse and large volumes of content, you must take many business and technical factors into account. Considerations around the performance requirements of the application, network and hardware, personnel constraints, ease of maintenance and scalability all play a role in running a production system. This document has outlined practical options available to system managers for addressing some of the common scenarios they may encounter when architecting and growing their kdb+ deployment.

In summary, the questions that frequently arise when designing robust, high-performance kdb+ applications include:

- What are the appropriate data types for the columns?
- How often is the schema expected to change, and is the need for manual intervention in ensuring data consistency a concern?
- Which fields will be queried most often, and what (if any) attributes should be applied to them? What is the trade-off between performance and disk/memory usage when the attributes are applied, and is it worth the performance gain?
- Is it a concern that symbol columns would share enumeration files and, if so, should the enumerations be decoupled?
- Is it necessary to make intra-day data available? If not, consider conserving memory by, for example, persisting to disk periodically, or writing the contents of a tickerplant log file to splayed format at end-of-day.
- Consider the overhead of maintaining separate intra-day and historical processes. Is it feasible or worthwhile to combine them into one?

As mentioned in the introduction, a key feature of kdb+ is that it is a flexible offering that allows users to extend the functionality of their applications through the versatility of the q language. There are vanilla intra-day and historical data processing architectures presented in existing documentation that cover many standard use cases. However, it is also common for a kdb+ system to quickly establish itself as a success within the firm, resulting in the need to process more types of data and requests from downstream users and applications. It is at this point that a systems manager is often faced with balancing how to ensure the goals of performance and scalability are realised, while at the same time dealing with resource constraints and the need for maintainability. The seven cases and customisation techniques covered in this paper provide examples of what you can do to help achieve those goals.