Enterprise Database Architecture Migration


Enterprise Database Architecture Migration

Teijo Peltoniemi

MSc Project Dissertation for the Degree of Master of Science in Informatics with major in Computer Systems and Software Engineering, minor in Representation and Reasoning

The University of Edinburgh
September

Enterprise Database Architecture Migration

Author: Teijo Peltoniemi, School of Informatics, University of Edinburgh
Academic supervisor: J. Douglas Armstrong PhD, School of Informatics, University of Edinburgh
Industrial supervisors: Terho Oinonen, Toomas Valjakka, TietoEnator Forest Corporation

Abstract

Fenix is a commercial enterprise-level ERP system that depends heavily on the underlying database management system (DBMS). The customer considers relying solely on one DBMS vendor a risk and requires contingency planning. This paper outlines a plan for migrating the DBMS and reviews migration tasks in theory as well as in practice.

An essential part of migration planning is to define the requirements for the new system. This is achieved by analysing the current system and prospective technologies. Another purpose of the analysis is to point out deficiencies in the current architecture and practices, which tend to be outdated in older systems; Fenix is no exception here. Furthermore, Fenix could gain significant performance improvements with new technologies, although these invariably require substantial investment.

Since a large migration project contains a number of uncertainties, piloting is necessary. This paper contains a report of an exemplary migration project carried out on a small partition of the system. The problems encountered during it were mostly tool-oriented; it is highly advisable to acquire a complete migration suite and establish a proper migration environment should the project become actual. The new system is validated by benchmarking it against the old system; this was also part of the mini project.

Table of contents

Abstract
Table of contents
Chapter 1: Introduction
  Fenix
  Enterprise database architecture
  Development
  Migration and project objectives
Chapter 2: Migration process overview
  Types of migration
  Migration planning
    Migration planning activities
    Planning migration in the Oracle way
    Migration planning guidelines
  Database migration and parallel operation
Chapter 3: Analysis of present architecture
  Presentation of the system architecture
    Fenix system
    The back-end system overview
    The database architecture
    Architectural layers
    Database connections
    Distributed transactions
    The framework
    Data Warehousing
    Replication
  Development practices
    Locking and timestamping
    Integrity constraints and indexing
    Triggers
    Stored procedures and dynamic SQL
    Cursors
    Views
  The current situation
    Integrity constraints
    Denormalization
    Enterprise constraints
    Materialized views
Chapter 4: Technology comparison
  Market situation and financial matters
  Memory model
  Data structures
  Performance and tuning
  Cursors and locking
  Temporary tables
  Programming languages
  Datatypes
  Replication
  High availability
  Data warehousing
  Conclusion
Chapter 5: Migration tools
  Migration tasks
  PowerDesigner
  OMWB overview
  Other tools
Chapter 6: Migration
  The setup
    Architecture
    Setting up the architecture
  Migration process
    Invoicing schema
    Migrating the schema
    Creating the database
    Migrating the data
Chapter 7: Schema migration and benchmarking
  Refining the schema
    Declarative integrity checking
  Benchmarking
    Testing environment and programs
    Test database
    Test queries
    Assumptions
    Running the test
    Discussion on the results
Chapter 8: Migration roadmap
  Partition
  Migration activities
    Steps 1-5
  Durations
Chapter 9: Conclusion and further plans
Bibliography
Appendix A: Control file
Appendix B: Format file
Appendix C: Abbreviations

Chapter 1: Introduction

In this chapter I will describe the system at hand and the related database architecture. Furthermore, I will discuss the objectives of this project.

Fenix

Fenix is a tailored Enterprise Resource Planning (ERP) system. ERP systems are used during each stage of the order chain, including order handling, logistics and invoicing, and Fenix is no exception. Other functions include, for example, steering and operational reporting and technical customer service. TietoEnator Forest Ltd. (TE) has continued developing Fenix since 1994 and the development team consists of 60 people working in six locations in three countries. The customer, a large paper company, has dedicated 50 people to developing Fenix. The project is one of the largest of this type conducted in Finland.

Enterprise database architecture

An enterprise-level system has thousands of users around the world, and this places certain performance requirements on the underlying Database Management System (DBMS). It has to be able to process hundreds of concurrent transactions and deal with terabytes of data. A common practice is to distribute the data. High availability must also be provided. Besides traditional Online Transaction Processing (OLTP) functionality, companies require management decision support tools to improve their competitive edge. Data Warehousing (DW) is designed for these purposes and is proven to provide an excellent return on investment (ROI) (Connolly et al. 1999, p.915).

By enterprise database architecture I mean a database architecture that consists of multiple databases residing in multiple servers. Also, it is distributed geographically

and provides high availability by architectural means. Furthermore, it provides DW. Other definitions also exist. For instance, Seacord (2001, p.15) takes a more functional approach and requires support for ad-hoc queries, persistent summaries and complex transactions. These requirements are naturally also included in my definition (DW and data distribution create the need for them).

Development

The actual development project has been finished. Despite moving to the maintenance phase, development continues to meet changing business needs, new rollouts and emerging technology (TietoEnator Corporation 2003). For instance, the user population is constantly growing and scalability issues must be addressed. Fenix has faced many migration projects, for instance changing from two-tier to three-tier in 1996 when BEA Tuxedo was adopted. Other changes include becoming web and XML enabled. Further migration plans are under review.

Migration and project objectives

Fenix runs on Sybase Adaptive Server Enterprise (ASE), a DBMS that has served relatively well to date. The cooperation between Sybase and TE is well established and Sybase is willing to develop its products in the desired directions. However, the customer requires a worst-case plan for DBMS migration. This is mostly contingency planning; the customer regards relying solely on one vendor as a risk. However, preparing for migration has become more and more topical, as Sybase has lost market share. Sybase has not been rated among the three market leaders, which are Oracle, MS SQLServer and IBM DB/2. The first two are considered as alternatives to Sybase. TE Forest has long experience with Oracle and has used it in other projects. MS SQLServer is technologically based on Sybase (version 6) although the systems have grown apart more recently. MS SQLServer still uses T-SQL as the programming language for stored procedures. DB/2 is not very well known in the forestry industry;

not only would it be risky to adopt an unfamiliar technology, but it would be expensive too: the staff would need to be educated to master DB/2. TE does not consider open source options such as MySQL or Postgres compliant, due to the lack of some essential features. For example, MySQL does not support stored procedures, which are essential within the system. Furthermore, TE considers high-class technical support an important factor that is not necessarily available with open source products.

The goal of this project is to draw up a plan for the database architecture migration as well as to investigate how to solve some existing problems and deficiencies in the architecture. Because the migration is a substantial effort anyway, it would be wise to take a progressive approach and take full advantage of the features of the new technology. The plan is summarised in a roadmap that maps actions onto a timescale. The perspective taken here is technical and it addresses technical issues.

There is also a practical component in this project: I will test migration in practice using designated tools. This requires setting up the testing environment. I will conclude the migration by benchmarking the systems.

As database management systems differ considerably at the technical level, it is difficult to produce a general vendor-independent plan. Therefore, I am considering Oracle as the migration DBMS. Despite concentrating on a single vendor, many parts of this paper are general in nature.

Chapter 2: Migration process overview

In this chapter I will discuss issues relating to the migration process. Most of the literature about migration concerns modernising legacy systems and is quite general. Also, relational to OO database conversion is well documented, and material concerning simple MS Access to SQLServer projects is available. However, literature on large-scale database architecture migration projects is rare. It is not surprising that companies want to keep projects of this nature, which pose a number of risks, confidential.

Types of migration

Migration can be carried out in one shot or continuously. The first is applicable when only the schema is migrated to the new DBMS. Also, a relatively small amount of simple data can be transferred. However, the one-shot approach with large systems leads to unacceptably long downtimes.

Migration planning

Migration planning activities

In this section I discuss managerial issues, since they drive the technical design. John Bergey suggests that a successful migration project requires a sound migration plan (2001, p.1). He divides the planning into six sub-actions:

Figure 2.1

As the model above was designed with legacy system modernisation in mind, it has to be adapted for our purposes. The migration management approach generally follows typical project management tasks, including monitoring progress and risks. The relevant inputs for the review include system functionality documentation, non-functional requirements and available funding and resources. The first of these is partially described in this paper (chapter 3) and the last is naturally held in confidence. Non-functional requirements include high availability, high performance and flexibility (TietoEnator Corporation 2003).

As the project is large and contains a number of uncertainties, prototyping is necessary. In fact, considering that the project concentrates on the back-end rather than the front-end, piloting with a mini project would be highly advisable. This would reveal whether the technology is suitable and how fast the developers learn it (Cockburn 1996).

Issues with rolling out include whether the system is taken into use in one release or in increments. The incremental approach is probably the only possible method

considering the amount of work required and the resource constraints in the situation. This is the most common approach for financial reasons: projects of this size require huge investment and there is pressure for early quantifiable benefits (Seacord 2001b, p.1). However, the overall cost of the project will grow if it is not conducted in a single increment. Also, if the chosen approach is incremental, further planning is required to work out how the system is partitioned into increments. This might affect the system design itself.

Support needs are typically divided into two parts: setting up the actual technical support and seeking acceptance for the new system from personnel. The first part is quite straightforward and consists of setting up the help desk and educating the developers. The latter might be trickier: some people might find it stressful to be compelled to learn new technologies.

Planning migration in the Oracle way

There are also other views on migration planning. For instance, Oracle Corporation (2002, p.2-1) suggests in its documentation that migration planning consists of five tasks. The process starts with requirements analysis and ends with risk assessment. This is not a totally different view from the above; the first activities in Bergey's model can be seen to consist of these actions.

Task 1: Determine the requirements of the migration project. The purpose of this task is to clarify technical issues about the source database, including character sets and version. Also, the impact on applications in terms of APIs and connectivity issues is analysed. This information is then used to determine additional requirements for the destination database. Acceptance criteria are also defined here. The end result of the task is a detailed requirements document.

Task 2: Estimating the workload. This is done using various reports produced during the migration. For instance, the tool used may not be able to convert some of the tables

or stored procedures. Time for fixing is then allocated to each of the errors found in the reports.

Task 3: Analysing operational requirements. This task consists of evaluating operational considerations concerning backups and recovery, required downtimes and their impact on business, parallel run requirements and training needs. Time and resources are allocated to all of the tasks. This forms the skeleton of the migration plan.

Task 4: Analysing the application. At this step the application is evaluated in order to determine whether any changes are required. Sometimes rewriting the application is necessary. Connections are also evaluated here; Oracle may require new ODBC or JDBC drivers, and some of the SQL will almost certainly need to be rewritten. Again, times and resources are allocated. Furthermore, the original requirements document is updated.

Task 5: Planning the migration project. Uncertain factors are evaluated at this juncture. Also, financial, personnel and time constraints are defined and the final version of the plan is produced.

Migration planning guidelines

Bergey summarises his earlier studies into a few guidelines:

- Analyse the needs of affected stakeholders to determine the migration schedules and training requirements.
- Define measures for assessing the success of the project.
- Do the planning thoroughly and do not consider it an extra task.
- Involve customers and users. Do not allow implementation to begin before the plan is accepted by all the stakeholders.
- Divide the project into chunks defined by the roll-out plan.
- Put effort into planning and monitoring the migration project.

Although some of the guidelines may seem idealistic, it is quite possible to underestimate the size and difficulty of a migration project. Therefore the importance of planning must be emphasised.

Database migration and parallel operation

Database migration provides a good opportunity to improve the representation of data (Seacord 2001b, p.19). This includes removing redundancies and seeking better performance through better organisation of data. An optimal solution might require partitioning of the data. Seacord (2001b, p.19) discusses how the database schema evolves: the first version is an analogue of the legacy database and it is revised gradually.

The one-shot approach is virtually impossible if the database is large and contains a substantial number of tables. An incremental approach typically requires parallel operation of the old and new systems. However, there are several complications with this and with the issues discussed in the preceding paragraph. Reorganising the data can lead to very complex mappings between the old and the modernised schemas. Also, maintenance is more expensive and performance is affected as well. Despite these negative aspects, parallel operation significantly reduces operational risks. Usually the legacy system is kept as a warm stand-by. The parallel run is carried out with replication (Porrasmaa 2004). Naturally, there are many ways to do this, e.g. using triggers.
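The trigger-based parallel run and the schema mapping it requires can be sketched as follows. This is a minimal illustration, not production code: the legacy table, the modernised two-column form and all names are invented for the example, and a real implementation would use DBMS triggers rather than application callbacks.

```python
# Sketch of trigger-style change capture for a parallel run. A hypothetical
# legacy table stores a combined "name" column; the modernised replica splits
# it into first/last name, illustrating the old-to-new schema mapping.

def split_name(full_name):
    """Map the legacy combined name to the modernised two-column form."""
    first, _, last = full_name.partition(" ")
    return {"first_name": first, "last_name": last}

class ParallelRun:
    """Keeps a modernised replica in step with the legacy master."""
    def __init__(self):
        self.legacy = {}   # legacy schema: id -> {"name": ...}
        self.modern = {}   # modernised schema: id -> {"first_name", "last_name"}

    def on_insert(self, row_id, row):
        # Plays the role of an INSERT trigger on the legacy table.
        self.legacy[row_id] = row
        self.modern[row_id] = split_name(row["name"])

    def on_update(self, row_id, row):
        # UPDATE trigger: reapply the mapping so the replica stays consistent.
        self.legacy[row_id] = row
        self.modern[row_id] = split_name(row["name"])

run = ParallelRun()
run.on_insert(1, {"name": "Teijo Peltoniemi"})
run.on_update(1, {"name": "Douglas Armstrong"})
```

Even in this toy form, the mapping logic lives in one place per table; with heavy reorganisation of data such mappings multiply, which is exactly the complexity cost noted above.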

Chapter 3: Analysis of present architecture

Presentation of the system architecture

I briefly introduced Fenix in the introductory chapter. Now I will analyse the architecture and the environment in which it works. The analysis is the basis for the requirements specification for the new system: the new system must meet the current functional requirements. The analysis also points out deficiencies.

Fenix system

Fenix operates on HP-UX machines with the BEA Tuxedo Transaction Processing monitor (TM) and the Sybase ASE Database Management System (DBMS). Front-ends include Windows-based applications operated mainly via Citrix connections. It is also possible to install the application on the client. Some of the functionality is available via the Internet as well.

The back-end system overview

The system consists of three environments: system test, acceptance test and production. TE developers and testers use system test for unit testing. The customer carries out acceptance testing in the corresponding environment. These environments enable the rigorous configuration management process to be fully adopted, but also increase replication needs. Each of the environments contains multiple databases, some of them being operational online databases and some Data Warehouses (DW) or archiving databases. Steering Reporting functions use the DW extensively while other functions use Online Transaction Processing (OLTP) databases. There are also multiple Tuxedo domains in each of the environments.

The TE testing environment includes two server computers and Tuxedo domains for web applications and for normal use. For unit testing purposes there are naturally DW and OLTP databases available. These are located at the customer's premises. There is also a local database in one of the TE offices, this being a replica of the master OLTP, which is maintained for efficiency reasons.

The acceptance test environment consists of three server machines, five Tuxedo domains and three database environments. These are required for the actual acceptance testing, but also for training, rollout purposes and web applications.

In the production environment, which I will concentrate on in this paper, the OLTP, DW and Budgeting databases each have dedicated servers that are either HP V-class (16 CPU) or K-class (4 CPU). The storage is handled with an HP XP disk array with fibre channels to the servers. One server is allocated for distributed printing facilities. There are also geographically distributed databases for some partitions of the data. This is discussed below. (In addition to the environments discussed above there are also environments for build and version management and development. Discussing these is out of the scope of this paper.)

The database architecture

The current production database architecture consists of the centralised main OLTP database, the DW and a database for Budgeting. There are also warm stand-by databases for the first and the last of these for availability purposes. Some of the data is also held in distributed databases. There are plans to separate this data from the main OLTP into a separate basic data database (BD). This is illustrated inside the dashed circle in Picture 3.1 (a slightly modified version of the TE Fenix Database Architecture presentation (TietoEnator 2000)):

[Picture 3.1: the production database architecture - workstation clients and GUI presentation, application servers and Tuxedo queues, the OLTP with its warm stand-by, Budgeting with its warm stand-by, the BD master with replicated local databases, the Data Warehouse, and operational reporting and ad-hoc connections]

The separated database holds basic data (BD) such as addresses, names, routes and tariffs, which is static in nature and rarely changes. This database is then replicated to local databases distributed geographically. This is known as star topology and is illustrated below in Picture 3.2 (adapted from a TietoEnator presentation (2000)). Rectangles in the picture denote actions. The replication is carried out to gain increased performance and scalability. The pitfall is decreased integrity, and this is why the database is functionally partitioned, leaving business-critical data centralised in the OLTP.

The above picture also illustrates the collaboration within the system. Most of the connections are made through the application servers, but not all: some of the reporting tools might take direct connections, some of the two-tier modules have survived to date and, furthermore, BD is used via a two-tier connection. There are plans to simplify the diverse connection types in the future.

[Picture 3.2: star topology - the OLTP replicated to its warm stand-by, and the BD master replicated to the geographically distributed local databases]

The databases and related responsibilities:

Database server   Location      Description
OLTP              Centralised   The master database for all essential business data
Budgeting         Centralised   The master database for budgeting-related data such as months, price calculations etc.
Basic data        Centralised   The master database for basic data
Local database    Distributed   A replica of the above
Data Warehouse    Centralised   Data Warehouse for Steering reporting
Warm stand-by     Centralised   A full replica of the OLTP; can take over as master immediately when the OLTP crashes

There are also initial plans for further partitioning of the OLTP. I am not discussing these here as the plans are still at an early stage.

Architectural layers

Defining the layered model of the system architecture makes it easier to understand the database connections in the system. There are five main layers: Channels, Interfaces, Integration middleware, Service architecture and Databases, the first naturally being the top layer. In a stricter form of layered architecture, upper layers depend on layers one level lower (Szyperski 1998, p.141). However, this is not the case with Fenix, since upper layers can collaborate with any lower layer. This is called non-strict layering. Non-strict layering increases the number of connection types and decreases the simplicity of the architecture. This seems to be the only possibility due to requirements placed by some technologies used within Fenix. Picture 3.3 illustrates the architectural layers:

[Picture 3.3: the architectural layers - Channels, Interfaces and Integration middleware communicating via FML, ICA, http, in-house protocols, XML, EDIFACT and SQL, on top of the Service architecture and Databases layers]

As the picture suggests, the topmost layer takes either direct connections or indirect connections via lower layers. The Service and Interfaces layers do not connect directly, as this is done via Integration middleware instead. The most common pattern is the sequence Channels - Integration middleware - Service - Integration middleware - Databases.

The Channels layer consists of Windows GUI, Citrix clients, web browsers, mill connections and external connections such as Cognos. External connections typically use in-house proprietary interface languages and protocols. Direct connections are made via the Sybase API (Open Client) or Open Database Connectivity (ODBC). Recently Java Database Connectivity (JDBC) has also been used increasingly, but more on lower layers than on Channels. SQL denotes these three options in the picture.

Tuxedo is used via the Field Manipulation Language (FML). Database tables and columns are mapped into FML fields, which are then used to pass data to and from the layers. The framework includes a number of methods for manipulating FML collections and models (a model stands for a single row). I will discuss the framework more thoroughly below.

eXtensible Markup Language (XML) has also emerged lately and has many uses within Fenix. Mill connections have traditionally been handled with EDI messages (EDIFACT), which XML is replacing at the moment. Standard XML document structures (papiNet) have already existed for some time in the industry.

The Interfaces layer consists of, as the name implies, interfaces to different systems such as papiNet, PartnerWeb and edialogue. The latter two are web-based systems. PartnerWeb makes some of the functionality available via the web. The interface to the client's message subsystem also lies on this layer, as does the Citrix Mainframe interface.
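The column-to-field mapping described above can be sketched in miniature. This is an illustrative analogue only, assuming invented field names and conversion rules; it does not use Tuxedo's actual FML API, but shows the idea of flattening typed database rows into named fields, with the type conversion the text mentions.

```python
# Illustrative sketch of mapping database columns to FML-like fields.
# The field table and its types are hypothetical; in Tuxedo the definitions
# would live in field table header files kept in a repository.

FIELD_TABLE = {
    "invoice_no": int,     # e.g. an FML long
    "amount": float,       # e.g. an FML double; some DBMS types need conversion
    "customer": str,       # e.g. an FML string
}

def row_to_fields(row):
    """Convert one database row (a 'model') into typed FML-like fields."""
    return {name: FIELD_TABLE[name](value)
            for name, value in row.items() if name in FIELD_TABLE}

def rows_to_collection(rows):
    """A 'collection' comprises many rows."""
    return [row_to_fields(r) for r in rows]

fields = row_to_fields({"invoice_no": "42", "amount": "10.50", "customer": "Mill A"})
```

The conversion step in `FIELD_TABLE` stands in for the real issue noted later in this chapter: Tuxedo field types are not entirely compatible with Sybase types, so values must be converted when crossing the layer boundary.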

Integration middleware is a layer on top of Tuxedo-based services and includes the Fenix framework and BEA tools, for instance. The layer is required to enable the services to interpret the calls made from the layers above. For example, XtoF translates XML to FML and vice versa. StreamServe is a printing tool that manages geographically distributed printing. This technology was taken into use recently and is incrementally replacing old in-house tools. StreamServe uses XML as input.

The service architecture is based on Tuxedo services programmed in C++. As mentioned above, Tuxedo is transaction manager (TM) software. Tuxedo is capable of managing distributed transactions and call queues. Only the latter is used with Fenix, and distributed transactions must be managed at the application level. I will discuss this issue in more detail shortly.

Picture 3.4, which is adapted from a TietoEnator presentation (2003), provides a view of the technology layer model. This is more technical than the architectural layer model. SENS is the client's closed network and lies between the Channels and Interfaces layers along with the Extranet and Internet. It can be seen as an extra layer. As the architectural layer model is mostly logical, it is not necessary to introduce another layer for it there. XtoF and BEA WLS form a gateway for XML. This is also the purpose of BEA WTC. The web is enabled with BEA WLS and JOLT gateway technologies.

Picture 3.4

Database connections

As the previous section implies, there are requirements for different types of database connections in the system. These include ODBC, JDBC, in-house technologies and the Sybase API. There are a few remaining parts of the system that run two-tier connections to the database. In addition to these there are also some reporting tools that take direct connections. These include Power++, Crystal Reports and Cognos. Power++ is a Sybase tool whose usage has been deprecated recently after its development and support came to an end. Cognos is a relatively new purchase that provides reporting capabilities accessible through the Internet. At the moment the technology is used to draw up reports from the production databases, including the OLTP and DW, and it does not have a testing environment.

As mentioned above, the usage of JDBC has grown considerably lately. This is because the use of the web has increased and web technologies tend to rely on JDBC. Three-tier Tuxedo connections are the dominating pattern used to access the database. As the HP-UX environment does not support ODBC or JDBC, the Sybase API is used here. BD data is accessed with direct ODBC connections. The connection types:

Connection type   Used by
JDBC              Web applications
ODBC              BD data, old two-tier connections, reporting tools
In-house          Reporting tools
Sybase API        Tuxedo services

Distributed transactions

In a typical distributed environment, distributed transactions form the skeleton of the framework. However, Fenix makes a distinction here. As discussed above, Tuxedo manages distributed transactions between servers. Tuxedo follows the X/Open Distributed Transaction Processing (DTP) reference model, which includes three types of interacting components: applications, resource managers (RMs) and transaction managers (TMs). Typically the course of a transaction is as follows: an application initialises a transaction by requesting it from the TM. The TM then opens it with an RM, that is, the DBMS in this case, but it could be any subsystem that implements transactional data. The application can then directly access the database with a chosen method such as a native programming interface.
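The X/Open flow just described can be sketched as a toy model. The class and method names are illustrative, not Tuxedo's actual API; the point is only the division of responsibilities: the TM allocates the transaction identifier and decides when to commit, while the RM uses that identifier to tie the application's calls to the right transaction.

```python
# Minimal sketch of the DTP flow: application asks the TM to begin, the TM
# opens the transaction with the RM and hands out an identifier (xid), and
# the application's database calls are tagged with that identifier.

import itertools

class ResourceManager:
    def __init__(self):
        self.calls = {}                      # xid -> list of SQL calls
    def start(self, xid):
        self.calls[xid] = []
    def execute(self, xid, sql):
        self.calls[xid].append(sql)          # RM knows which call belongs where
    def commit(self, xid):
        return f"committed {len(self.calls.pop(xid))} calls"

class TransactionManager:
    _ids = itertools.count(1)                # unique transaction identifiers
    def __init__(self, rm):
        self.rm = rm
    def begin(self):                         # cf. Tuxedo's tpbegin
        xid = next(self._ids)
        self.rm.start(xid)                   # TM opens the transaction with the RM
        return xid
    def commit(self, xid):                   # TM decides when to commit
        return self.rm.commit(xid)

rm = ResourceManager()
tm = TransactionManager(rm)
xid = tm.begin()
rm.execute(xid, "UPDATE invoice SET ...")    # direct access by the application
result = tm.commit(xid)
```

In the real model the application-to-TM calls go through the TX interface and the TM-to-RM calls through XA; as discussed below, Fenix uses only the former.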

The TM's responsibilities include allocating a unique identifier for the transaction, passing it to the other parties and also deciding when to commit. After the RM has been provided with the identifier, it can determine which calls belong to which transactions.

Two-phase commit is required when there are multiple RMs, which is not the case with Fenix. Two-phase commit includes voting and completion phases (Coulouris et al. 2001). If a participating service votes to abort during the voting phase, the transaction is ended by the coordinator process, which is typically one of the participating services. In case all the processes have voted for completion and one or more processes fail during the completion phase, the processes ask the coordinator what the result of the voting was and work out the changes against the log file according to the answer. The two-phase commit protocol is designed to tolerate a succession of failures and secures the consistency of data. The pitfall with the protocol relates to performance. As the protocol includes a number of messages passed between participants and the coordinator, it consumes bandwidth and can cause considerable delays, especially if the coordinator fails.

After the application has finished, it calls back the TM and requests commit. Applications and TMs communicate via the (de jure) standard TX interface. The TX interface provides calls for defining the transaction demarcation (the scope of the transaction). RMs and TMs interact via the XA interface, which is supposed to be provided by the database vendor. Picture 3.5 illustrates this:

Picture 3.5

The TX and XA interfaces ensure that the data stays consistent within a distributed system. However, securing consistency in Fenix relies solely on the DBMS: only the TX interface is in use. The lack of the XA interface derives from the early days when Sybase did not support it. Later, after starting to support XA, Sybase did not recommend that it be used. Adopting XA now would require a substantial amount of work, since begin and commit transaction calls would have to be replaced with Tuxedo's tpbegin function.

The developer has to use only local calls if he wants to secure the integrity of the transaction. Calling services in other servers leads to separate database connections, and the integrity property of the transaction is lost. Currently the only way to avoid fragmented transactions is to instruct the developers to use only local calls.

The framework

The framework is built on an in-house ANSI C++ class library called Phobos. The framework caters for two-tier ODBC connections and three-tier Sybase API and ODBC connections. It is implemented according to the typical layer convention where upper layers provide a higher abstraction of lower layers (Szyperski 1998, p.140). This

is a tried and tested pattern with frameworks and provides easy-to-use interfaces for application developers.

The typical course of action in the server begins with an instance of the fxtx class. Its methods include running dynamic SQL or a stored procedure. This class collaborates with a lower-level framework class that in its turn collaborates with ct-library, that is, the Sybase API. ct-library provides different methods for manipulating the data and allows the developer to choose whether to use cursors or other means, for example.

The client calls services via b-mode objects. B-modes convert FML fields, which are returned from the service, into object attributes and vice versa when requesting a service. In the server, FML fields map to database columns. The developer has to provide a bind set when retrieving data. Bind sets are used to determine which FML fields should be used. An FML model maps to one row of data while a collection comprises many rows. There are different field definitions for collections and models; therefore a collection is not just a collection of models. FML field definitions are made in specific header files and they are kept in a repository. The field types are Tuxedo-related and not entirely compatible with the types Sybase provides. Therefore some conversion is required. This is also taken into account in in-house development conventions and standards.

The layer pattern is advantageous for the migration process, as only the classes that access ct-library must be replaced instead of all the application code. In fact, the framework already provides an interface to Oracle.

Data Warehousing

The purpose of the DW is to provide data for management decision support (Connolly et al. 1999, p.914). Fenix is no exception here: the uses of the DW lie in steering reporting. Technical Customer Support (TCS) also uses the DW, as the management monitors the development of user satisfaction this way. The DW is accessed by standard

application code as well as by Cognos PowerPlay cubes (a cube is a multidimensional report).

DW inflow, that is, extraction of data from the OLTP system, happens every night. Following common conventions, the data is first cleansed and processed in a temporary store, which enables summarising, packaging and distributing the data. This processing is carried out in a database located on the same server as the OLTP. Since the DW resides on a different server, the cleansed data is uploaded there via a Tuxedo queue; replication would be another option. Extraction, cleansing and transformation are done with custom-built procedures in the OLTP. Connolly et al. (1999, p. 928) list three alternative approaches: code generators, database data replication tools and dynamic transformation engines. These could improve the processing, since the task is complicated and it can be difficult to program an optimal procedure by hand.

The DW design follows the typical star schema, in which fact tables are surrounded by dimension tables. Fact tables contain factual data, such as invoice-related data; the surrounding dimension tables contain reference data such as customer and product data. The fact table contains a foreign key to each of the dimension tables, so the data can be seen as a multidimensional structure, or cube. Fully exploiting such multidimensional data requires special technology: Online Analytical Processing (OLAP) tools are designed for this purpose (Connolly et al. 1999, p.951). Within Fenix, all the DW-related tools, including the database, are from Sybase, and Sybase does not provide proper OLAP functionality. The Fenix DW is not of the purest form either, as the data in it is updated in addition to being extracted into it; the developers consider this a weakness. Nor is Sybase a pure DW database.
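The star schema described above can be sketched with one fact table and two dimension tables. The table and column names are illustrative only, not Fenix's actual schema, and sqlite3 is used here as a stand-in for Sybase ASE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Dimension tables hold reference data.
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT);
    -- The fact table holds factual (invoice) data and a foreign key
    -- to each dimension table.
    CREATE TABLE fact_invoice (
        invoice_id  INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        product_id  INTEGER REFERENCES dim_product(product_id),
        amount      REAL
    );
""")
con.execute("INSERT INTO dim_customer VALUES (1, 'Acme Oy')")
con.execute("INSERT INTO dim_product VALUES (1, 'Pulp')")
con.execute("INSERT INTO fact_invoice VALUES (100, 1, 1, 250.0)")

# A report query joins the fact table to its dimensions; OLAP tools
# generalise this pattern to slicing and dicing along the dimensions.
row = con.execute("""
    SELECT c.name, p.name, f.amount
    FROM fact_invoice f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    JOIN dim_product  p ON p.product_id  = f.product_id
""").fetchone()
print(row)  # ('Acme Oy', 'Pulp', 250.0)
```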

Replication

Replication is used in multiple places in the architecture. The warm standby databases for the OLTP and Budgeting, and in the future the local BD databases, are all replicas of a corresponding master database. The replication method in Sybase is log based: data is replicated at certain intervals from the master redo log into the replica. This is discussed further in the next chapter.

Development practices

Locking and timestamping

The introduction of row-level locking has been a big improvement in Sybase ASE and has significantly improved the performance of the system. Performance reasons have likewise driven the decision to adopt an optimistic approach to locking, based on timestamping: complex and long transactions against a vast number of large tables would cause significant delays under conservative locking mechanisms (Kroenke 2002, p.305). The timestamping mechanism is implemented at the application level. Update services compare timestamps before proceeding with updates: if the timestamp value passed to the procedure in an FML message is older than the current timestamp of the data set, an error message is raised and the transaction is aborted.

Integrity constraints and indexing

The system relies on loose integrity constraints in the table definitions. Primary key, foreign key and other integrity constraint declarations are avoided, and tables reference each other only implicitly. The uniqueness of primary keys is enforced with unique indexes. In addition to primary key fields, all highly referenced fields are indexed. According to development practice, large tables may be queried only through indexed fields; however, nothing in the DBMS enforces this.
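The application-level timestamp check can be sketched as follows. The table, columns and function are illustrative (Fenix's actual update services work on FML messages and stored procedures), and sqlite3 again stands in for ASE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER, ts INTEGER)")
con.execute("INSERT INTO orders VALUES (1, 10, 100)")

def update_qty(con, order_id, new_qty, ts_seen):
    """Optimistic update: refuse to write if the row changed since it was read."""
    (ts_now,) = con.execute(
        "SELECT ts FROM orders WHERE id = ?", (order_id,)).fetchone()
    if ts_seen < ts_now:
        # Stale timestamp: here Fenix would raise an error message
        # and abort the transaction.
        return False
    con.execute("UPDATE orders SET qty = ?, ts = ts + 1 WHERE id = ?",
                (new_qty, order_id))
    return True

ok = update_qty(con, 1, 20, ts_seen=100)     # timestamps match: update proceeds
stale = update_qty(con, 1, 30, ts_seen=100)  # a prior update bumped ts to 101
print(ok, stale)  # True False
```

No locks are held between reading and updating the row; the timestamp comparison alone detects a conflicting writer, which is the essence of the optimistic approach.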

The data is also heavily denormalized to speed up retrievals, which is quite common practice for a database of this size (Connolly et al. 2002, p.507). Denormalization (controlled redundancy) is not unproblematic, though: it makes the implementation more complex and decreases flexibility. Since many of the tables in the system contain hundreds of thousands or millions of rows, table scans cause massive delays, as a scan leads to table-level locking. This is prevented by using only indexed columns in SQL WHERE clauses. Complicated join operations can also cause a table scan and are therefore divided into multiple joins using temporary tables.

Triggers

Triggers are used in places within Fenix. However, their use has not been recommended, since until recently there were no proper management tools on the market. Triggers that are not properly managed and documented can easily lead to awkward situations.

Stored procedures and dynamic SQL

Dynamic SQL has not been recommended, for performance reasons: dynamic SQL is parsed and optimized every time it is run, whereas stored procedures are parsed and optimized only once. Another downside of a complicated multi-part dynamic SQL program is increased bandwidth overhead, caused by multiple connections to the database. Sybase does support prepared dynamic SQL, which works as a temporary stored procedure. This feature is not used within the system and could be introduced in parts where complicated logic is required. The programming language used in stored procedures is Transact-SQL (T-SQL).

Cursors

Using cursors is not recommended for performance reasons, as they can lead to page- or table-level locking. The issue is not just a company regulation but a well-known restriction in Sybase:

The fact is that cursors introduce a fantastic performance problem in your applications and nearly always need to be avoided. (Rankins et al. 1999, p.160)

However, one could argue that this issue is outdated and that the functioning of cursors has improved (Talebzadeh 2004). This is discussed in more depth in the next chapter.

Views

A view is essentially a dynamic temporary relation: the result of a query over one or more persistent base relations (Connolly et al. 1999, p.101). Views are used to simplify complex operations on the base relations, to provide a security mechanism and to let users access data in a customised way. Currently views are not widely used within the system, although they could be exploited in many places, for instance to hide parts of the data from some users and to reduce complexity for the developers. As operations on views are actually performed on the base relations, extreme care must be taken to ensure that queries do not lead to table scans on large tables; in particular, the view's selection criteria must be indexed if the base relation is large. Moreover, DBMSs typically do not allow update operations on a view defined over multiple base relations.

The current situation

To conclude, I have summarised the issues with the current practices and systems and proposed some solutions to improve the situation. The issues relate mostly to the restricted use of the DBMS: Fenix is database-oriented and database-driven software and could clearly gain from some of the powerful features and functions a modern DBMS provides. The development practices are based on the situation ten years ago and should be modernized.

Integrity constraints

As discussed above, integrity constraint checking is implemented in the application layer. This was a typical design decision some ten years ago, since many

commercial systems did not support constraints fully (Connolly et al. 1999, p.732). Declarative integrity is also believed to reduce performance. The application-level approach is dangerous, however, given the risk of duplication and inconsistencies. DBMS products have evolved since those days and now provide better constraint support and locking schemes. Constraints stored in the catalog can be enforced and controlled centrally; Codd calls this integrity independence (Connolly et al. 1999, p.105). Declarative integrity should at least be tested, and depending on the results the development practices could be changed to encourage its use.

Denormalization

Denormalization can be justified in some of the places where it is currently used, owing to heavy usage of that particular data. However, denormalizing seems to have become a default development practice, with no attempt made to avoid it. Redundant data easily leads to inconsistencies and to redundant effort, which in turn causes network and processing overhead. The schema should therefore be inspected, unnecessary redundancies should be removed, and their use should be discouraged in the future.

Enterprise constraints

Enterprise constraints ensure that updates to relations follow business rules (Connolly et al. 1999, p.276). Including them in the data definition, instead of the application, simplifies development and again allows centralised control and enforcement. Enterprise constraints could be introduced to Fenix. There is processing that is timed, for instance the DW runs, which they could handle. There are also many simple business rules for maximum and minimum values; such rules, which are widely used, not necessarily properly documented, and currently expressed as long if clauses, could be replaced.

Materialized views

One possible way to compensate for the downsides of view resolution is view materialization. Materialized views are temporary tables that are created when the

view is first queried. Materialized views are widely used in data warehousing, replication servers, data visualization and mobile systems (Connolly et al. 2002, p.186) and are worthy of investigation in Fenix as well. The system contains complex data structures and relationships and could benefit from an approach capable of improving data integrity and query optimisation.
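A materialized view can be emulated by storing a query's result in a table and refreshing it on demand. This is a minimal sketch with illustrative names; DBMSs such as Oracle maintain materialized views natively, with automatic refresh options, and sqlite3 here merely stands in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE invoice (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    INSERT INTO invoice VALUES (1, 'Acme', 100.0), (2, 'Acme', 50.0), (3, 'Beta', 70.0);
""")

def refresh_mv(con):
    """(Re)build the 'materialized view': a stored summary of the base table."""
    con.executescript("""
        DROP TABLE IF EXISTS mv_sales_by_customer;
        CREATE TABLE mv_sales_by_customer AS
            SELECT customer, SUM(amount) AS total
            FROM invoice GROUP BY customer;
    """)

refresh_mv(con)
# Readers now query the precomputed table instead of re-running the
# aggregate over the (potentially very large) base relation.
total = con.execute(
    "SELECT total FROM mv_sales_by_customer WHERE customer = 'Acme'"
).fetchone()
print(total)  # (150.0,)
```

The trade-off is staleness: the stored result is only as fresh as its last refresh, which is why materialized views suit data warehousing, where data changes on a known schedule.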

Chapter
Technology comparison

In this chapter I compare Oracle and Sybase. I start with business-oriented discussion and then move to technological aspects, trying throughout to discuss the issues in context. At the end I offer some suggestions.

Market situation and financial matters

Oracle is the dominant database vendor, with a 37% market share (Ryan 2003), and the world's second biggest software vendor. Sybase, on the other hand, is a relatively small player with a 2% share. This does not mean that the company is inactive: Sybase is currently developing new innovations, including JMS-based products and mobile databases. Sybase also claims to provide 15% lower life cycle costs than Oracle, since ASE runs better on smaller computers; in Sybase's tests a four-processor HP computer can process transactions per minute. It seems quite obvious that Sybase is seeking to improve its market share among smaller companies. Furthermore, it has recently partnered with SAP (Sybase Corporation 2003) and provides the database for SAP's smaller packages (Business One). Oracle, on the other hand, has been dominant especially in enterprise computing. Vincent Ryan cites Noel Yuhanna of Forrester Research (Ryan 2003), who claims that Oracle's dynamic cache and job scheduling are superior and that Oracle can deal with concurrent users. Oracle has lately developed its grid computing solutions and provides large-scale products.

These recent reviews imply that there is room on the market for both companies. However, Oracle is the safer bet considering its market share and popularity. The current version of Oracle is 10g, while the current ASE version was launched last year.

Memory model

The Sybase memory model is based on multithreading: all the processes run in a single OS memory space. Oracle, on the other hand, requires separate OS processes for each user, the log writer and so on. This is the main reason why ASE performs better on smaller platforms; Oracle requires more memory and processing capacity.

Data structures

An Oracle database is divided into physical datafiles, which map to tablespaces, the corresponding logical structures (Oracle Corporation 2002c). Datafiles are divided into data blocks, in which the logical units, such as tables and indexes, reside. Tablespaces correspond to Sybase segments. Oracle checks the buffer cache before looking up data in the datafiles, and all retrieved data is stored in the cache. Oracle enables dynamic cache sizing, which yields the best hit ratio (Thakkar 2002). ASE also stores recently retrieved data in a cache, and additionally allows the cache to be divided into distinct caches with objects bound to them. This is a useful feature, as it lets the user prevent important work tables from being flushed from the cache.

Oracle records information about database and operating system files in a control file. In addition to backup metadata, this file holds the database creation timestamp, checkpoint information and Recovery Manager (RMAN) data. When the physical makeup of the database changes, the control file changes as well. Oracle supports multiple copies of control files.

Sybase ASE uses logical devices to store physical data (Sybase Corporation 2000, p.8). A database consists of segments that reside on one or more logical devices. ASE allows the user to decide in which segment data is stored by specifying the segment name in the table definition (ON SEGMENT). Oracle, on the other hand,

allows the user to define in which tablespace the data is stored, so storage decisions made for ASE should apply similarly in Oracle. Whereas Oracle stores system information in the system tablespace, Sybase has a system database for this purpose. Moreover, Oracle keeps some vital data, such as the names and locations of datafiles, in control files, which can be mirrored and archived; Sybase has no similar functionality, and the corresponding data resides in the master database.

Oracle records all operations in the redo log. Typically a database update causes Oracle to write two entries to the redo log: one right after the update and another after the transaction commits. The first consists of the changes to the transaction table undo, the undo data block and the data block to which the updated table maps. The log writer overwrites old redo data when the redo logs fill up. There are always at least two redo logs, and Oracle supports archived redo files and log mirroring. These allow the system to recover completely from instance and media failures from a chosen point in time, at the cost of storage overhead.

Sybase records database activity in a transaction log that resides on a designated device. Transaction logging has caused a serious bottleneck: the log fills up quickly because of transactions that remain open, and a full log reduces the performance of the application. The faulty transaction handling would have to be removed to solve the problem, but it is not easy to find all the black spots in the code. The transaction log device can also be enlarged, but this is hardly a lasting solution.

Performance and tuning

Oracle provides wider configuration options, while ASE is easier to manage. ASE has only one configuration file and offers developers only a restricted interface. Oracle, on the other hand, has many configuration files and is more

open to its users. This also makes Oracle more difficult to use, and it is advisable to use designated tools when tuning the database. In addition to Oracle itself, a number of third-party vendors provide such tools. Tuning tools are available for Sybase as well, but not in similar numbers.

Cursors and locking

Cursors reside either on the server side or on the client side. Client-side cursors are an additional source of delay, as they point back to the database over the network and their processing is affected by the associated latency. This is why the ct-cursor class, the Open Client cursor class, should be used with caution. However, as bandwidths have increased, client-side cursors can be regarded as an option.

The locking problem, on the other hand, is not specific to Sybase but something other DBMSs also have to deal with. Locking typically follows the isolation level conventions of the 1992 ANSI SQL standard. The isolation levels are defined in terms of three serialization violations and whether the system allows them to occur. The following table is from Kroenke (2002, p.309):

Violation Type       Read Uncommitted   Read Committed   Repeatable Read   Serializable
Dirty Read           Possible           Not possible     Not possible      Not possible
Nonrepeatable Read   Possible           Possible         Not possible      Not possible
Phantom Read         Possible           Possible         Possible          Not possible

As the table shows, the most restrictive isolation level is called Serializable, since it does not allow any kind of violation to occur. At the same time it permits the least throughput, as the locking granularity typically has to grow because serialization is

enforced at transaction level. For comparison, at the Read Committed level serialization is enforced only at statement level (Connolly et al. 2002, p.597).

Cursors can be divided into four groups: forward only, static, keyset and dynamic (Kroenke 2002, p.310). The first is the simplest form and allows moving only forward through the dataset; the latter three are scrollable. Furthermore, the last two are able to show updates dynamically, which requires at least dirty-read-level isolation to ensure consistency.

Sybase ASE offers three cursor isolation levels, 0, 1 and 3 (Sybase Corporation), called no read lock, no hold lock and hold lock respectively in Sybase terminology. The first uses no locks and therefore blocks nothing from other applications; the drawback is that cursors at this level are read-only, not updateable. Level 1 and level 3 cursors are updateable. At level 1, the default, ASE locks pages and releases them when the cursor moves out of the page. Level 3 is the strictest form of locking: all base table pages read during the transaction are locked and released only when the transaction ends.

The isolation levels in Oracle are quite similar to those in ASE. Oracle implements two of the ISO isolation levels, Read Committed and Serializable, and also a third level, Read Only, which corresponds to Sybase's no read lock. The first two use row-level locking (Connolly et al. 2002, p.597) and wait if uncommitted transactions are locking the required rows. The difference is that when the earlier transaction releases its locks, Read Committed proceeds with the update while Serializable returns an error, since the operations are not serializable, that is, serially equivalent. Because Oracle records locks in the corresponding data blocks, locks never need to be escalated as they are in Sybase. Lock escalation significantly reduces throughput, while managing fewer locks requires less processing.
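The nonrepeatable-read violation from the isolation table above can be demonstrated with two connections. This is a sketch only: sqlite3 stands in for the DBMS, and its default behaviour here approximates Read Committed, in that each statement outside an open transaction sees the latest committed data:

```python
import os
import sqlite3
import tempfile

# Two connections need a shared database, so use a temporary file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO account VALUES (1, 100)")
writer.commit()

reader = sqlite3.connect(path)
first = reader.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]

# Another transaction updates and commits between the reader's two statements.
writer.execute("UPDATE account SET balance = 80 WHERE id = 1")
writer.commit()

second = reader.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(first, second)  # 100 80 -- the same query returned different results
```

At Repeatable Read or Serializable the second query would have to return the original value (or the transaction would be rejected), which is exactly what the table's "Not possible" entries express.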


More information

Chapter 2 Why Are Enterprise Applications So Diverse?

Chapter 2 Why Are Enterprise Applications So Diverse? Chapter 2 Why Are Enterprise Applications So Diverse? Abstract Today, even small businesses operate in different geographical locations and service different industries. This can create a number of challenges

More information

Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework

Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Many corporations and Independent Software Vendors considering cloud computing adoption face a similar challenge: how should

More information

Data warehouse and Business Intelligence Collateral

Data warehouse and Business Intelligence Collateral Data warehouse and Business Intelligence Collateral Page 1 of 12 DATA WAREHOUSE AND BUSINESS INTELLIGENCE COLLATERAL Brains for the corporate brawn: In the current scenario of the business world, the competition

More information

Outline. Failure Types

Outline. Failure Types Outline Database Management and Tuning Johann Gamper Free University of Bozen-Bolzano Faculty of Computer Science IDSE Unit 11 1 2 Conclusion Acknowledgements: The slides are provided by Nikolaus Augsten

More information

Maximum Availability Architecture

Maximum Availability Architecture Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine Oracle Maximum Availability Architecture White Paper April 2010 Maximum Availability Architecture Oracle Best Practices For High Availability

More information

Cloud Service Model. Selecting a cloud service model. Different cloud service models within the enterprise

Cloud Service Model. Selecting a cloud service model. Different cloud service models within the enterprise Cloud Service Model Selecting a cloud service model Different cloud service models within the enterprise Single cloud provider AWS for IaaS Azure for PaaS Force fit all solutions into the cloud service

More information

Transaction Management Overview

Transaction Management Overview Transaction Management Overview Chapter 16 Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Transactions Concurrent execution of user programs is essential for good DBMS performance. Because

More information

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha

More information

DATABASE MANAGEMENT SYSTEM

DATABASE MANAGEMENT SYSTEM REVIEW ARTICLE DATABASE MANAGEMENT SYSTEM Sweta Singh Assistant Professor, Faculty of Management Studies, BHU, Varanasi, India E-mail: sweta.v.singh27@gmail.com ABSTRACT Today, more than at any previous

More information

Database-driven library system

Database-driven library system Database-driven library system Key-Benefits of CADSTAR 12.1 Characteristics of database-driven library system KEY-BENEFITS Increased speed when searching for parts You can edit/save a single part (instead

More information

A Framework for Developing the Web-based Data Integration Tool for Web-Oriented Data Warehousing

A Framework for Developing the Web-based Data Integration Tool for Web-Oriented Data Warehousing A Framework for Developing the Web-based Integration Tool for Web-Oriented Warehousing PATRAVADEE VONGSUMEDH School of Science and Technology Bangkok University Rama IV road, Klong-Toey, BKK, 10110, THAILAND

More information

low-level storage structures e.g. partitions underpinning the warehouse logical table structures

low-level storage structures e.g. partitions underpinning the warehouse logical table structures DATA WAREHOUSE PHYSICAL DESIGN The physical design of a data warehouse specifies the: low-level storage structures e.g. partitions underpinning the warehouse logical table structures low-level structures

More information

Configuring Backup Settings. Copyright 2009, Oracle. All rights reserved.

Configuring Backup Settings. Copyright 2009, Oracle. All rights reserved. Configuring Backup Settings Objectives After completing this lesson, you should be able to: Use Enterprise Manager to configure backup settings Enable control file autobackup Configure backup destinations

More information

The IBM Cognos Platform for Enterprise Business Intelligence

The IBM Cognos Platform for Enterprise Business Intelligence The IBM Cognos Platform for Enterprise Business Intelligence Highlights Optimize performance with in-memory processing and architecture enhancements Maximize the benefits of deploying business analytics

More information

ORACLE DATABASE 10G ENTERPRISE EDITION

ORACLE DATABASE 10G ENTERPRISE EDITION ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.

More information

A McKnight Associates, Inc. White Paper: Effective Data Warehouse Organizational Roles and Responsibilities

A McKnight Associates, Inc. White Paper: Effective Data Warehouse Organizational Roles and Responsibilities A McKnight Associates, Inc. White Paper: Effective Data Warehouse Organizational Roles and Responsibilities Numerous roles and responsibilities will need to be acceded to in order to make data warehouse

More information

What Is Specific in Load Testing?

What Is Specific in Load Testing? What Is Specific in Load Testing? Testing of multi-user applications under realistic and stress loads is really the only way to ensure appropriate performance and reliability in production. Load testing

More information

IBM Cognos 8 Business Intelligence Analysis Discover the factors driving business performance

IBM Cognos 8 Business Intelligence Analysis Discover the factors driving business performance Data Sheet IBM Cognos 8 Business Intelligence Analysis Discover the factors driving business performance Overview Multidimensional analysis is a powerful means of extracting maximum value from your corporate

More information

Eloquence Training What s new in Eloquence B.08.00

Eloquence Training What s new in Eloquence B.08.00 Eloquence Training What s new in Eloquence B.08.00 2010 Marxmeier Software AG Rev:100727 Overview Released December 2008 Supported until November 2013 Supports 32-bit and 64-bit platforms HP-UX Itanium

More information

Data Warehousing and OLAP Technology for Knowledge Discovery

Data Warehousing and OLAP Technology for Knowledge Discovery 542 Data Warehousing and OLAP Technology for Knowledge Discovery Aparajita Suman Abstract Since time immemorial, libraries have been generating services using the knowledge stored in various repositories

More information

Oracle Database 11g: New Features for Administrators DBA Release 2

Oracle Database 11g: New Features for Administrators DBA Release 2 Oracle Database 11g: New Features for Administrators DBA Release 2 Duration: 5 Days What you will learn This Oracle Database 11g: New Features for Administrators DBA Release 2 training explores new change

More information

Performance rule violations usually result in increased CPU or I/O, time to fix the mistake, and ultimately, a cost to the business unit.

Performance rule violations usually result in increased CPU or I/O, time to fix the mistake, and ultimately, a cost to the business unit. Is your database application experiencing poor response time, scalability problems, and too many deadlocks or poor application performance? One or a combination of zparms, database design and application

More information

GEOG 482/582 : GIS Data Management. Lesson 10: Enterprise GIS Data Management Strategies GEOG 482/582 / My Course / University of Washington

GEOG 482/582 : GIS Data Management. Lesson 10: Enterprise GIS Data Management Strategies GEOG 482/582 / My Course / University of Washington GEOG 482/582 : GIS Data Management Lesson 10: Enterprise GIS Data Management Strategies Overview Learning Objective Questions: 1. What are challenges for multi-user database environments? 2. What is Enterprise

More information

Towards Heterogeneous Grid Database Replication. Kemian Dang

Towards Heterogeneous Grid Database Replication. Kemian Dang Towards Heterogeneous Grid Database Replication Kemian Dang Master of Science Computer Science School of Informatics University of Edinburgh 2008 Abstract Heterogeneous database replication in the Grid

More information

ETL-EXTRACT, TRANSFORM & LOAD TESTING

ETL-EXTRACT, TRANSFORM & LOAD TESTING ETL-EXTRACT, TRANSFORM & LOAD TESTING Rajesh Popli Manager (Quality), Nagarro Software Pvt. Ltd., Gurgaon, INDIA rajesh.popli@nagarro.com ABSTRACT Data is most important part in any organization. Data

More information

Tier Architectures. Kathleen Durant CS 3200

Tier Architectures. Kathleen Durant CS 3200 Tier Architectures Kathleen Durant CS 3200 1 Supporting Architectures for DBMS Over the years there have been many different hardware configurations to support database systems Some are outdated others

More information

Application Performance Testing Basics

Application Performance Testing Basics Application Performance Testing Basics ABSTRACT Todays the web is playing a critical role in all the business domains such as entertainment, finance, healthcare etc. It is much important to ensure hassle-free

More information

Intellicyber s Enterprise Integration and Application Tools

Intellicyber s Enterprise Integration and Application Tools Intellicyber s Enterprise Integration and Application Tools The IDX product suite provides Intellicyber s customers with cost effective, flexible and functional products that provide integration and visibility

More information

Sybase Replication Server 15.6 Real Time Loading into Sybase IQ

Sybase Replication Server 15.6 Real Time Loading into Sybase IQ Sybase Replication Server 15.6 Real Time Loading into Sybase IQ Technical White Paper Contents Executive Summary... 4 Historical Overview... 4 Real Time Loading- Staging with High Speed Data Load... 5

More information

Backup and Recovery for SAP Environments using EMC Avamar 7

Backup and Recovery for SAP Environments using EMC Avamar 7 White Paper Backup and Recovery for SAP Environments using EMC Avamar 7 Abstract This white paper highlights how IT environments deploying SAP can benefit from efficient backup with an EMC Avamar solution.

More information

Oracle BI EE Implementation on Netezza. Prepared by SureShot Strategies, Inc.

Oracle BI EE Implementation on Netezza. Prepared by SureShot Strategies, Inc. Oracle BI EE Implementation on Netezza Prepared by SureShot Strategies, Inc. The goal of this paper is to give an insight to Netezza architecture and implementation experience to strategize Oracle BI EE

More information

SOLUTION BRIEF. JUST THE FAQs: Moving Big Data with Bulk Load. www.datadirect.com

SOLUTION BRIEF. JUST THE FAQs: Moving Big Data with Bulk Load. www.datadirect.com SOLUTION BRIEF JUST THE FAQs: Moving Big Data with Bulk Load 2 INTRODUCTION As the data and information used by businesses grow exponentially, IT organizations face a daunting challenge moving what is

More information

Oracle 11g Database Administration

Oracle 11g Database Administration Oracle 11g Database Administration Part 1: Oracle 11g Administration Workshop I A. Exploring the Oracle Database Architecture 1. Oracle Database Architecture Overview 2. Interacting with an Oracle Database

More information

SQL Server 2005 Features Comparison

SQL Server 2005 Features Comparison Page 1 of 10 Quick Links Home Worldwide Search Microsoft.com for: Go : Home Product Information How to Buy Editions Learning Downloads Support Partners Technologies Solutions Community Previous Versions

More information

When to consider OLAP?

When to consider OLAP? When to consider OLAP? Author: Prakash Kewalramani Organization: Evaltech, Inc. Evaltech Research Group, Data Warehousing Practice. Date: 03/10/08 Email: erg@evaltech.com Abstract: Do you need an OLAP

More information

Business Application Services Testing

Business Application Services Testing Business Application Services Testing Curriculum Structure Course name Duration(days) Express 2 Testing Concept and methodologies 3 Introduction to Performance Testing 3 Web Testing 2 QTP 5 SQL 5 Load

More information

Using DataDirect Connect for JDBC with Oracle Real Application Clusters (RAC)

Using DataDirect Connect for JDBC with Oracle Real Application Clusters (RAC) Using DataDirect Connect for JDBC with Oracle Real Application Clusters (RAC) Introduction In today's e-business on-demand environment, more companies are turning to a Grid computing infrastructure for

More information

Extraction Transformation Loading ETL Get data out of sources and load into the DW

Extraction Transformation Loading ETL Get data out of sources and load into the DW Lection 5 ETL Definition Extraction Transformation Loading ETL Get data out of sources and load into the DW Data is extracted from OLTP database, transformed to match the DW schema and loaded into the

More information

EII - ETL - EAI What, Why, and How!

EII - ETL - EAI What, Why, and How! IBM Software Group EII - ETL - EAI What, Why, and How! Tom Wu 巫 介 唐, wuct@tw.ibm.com Information Integrator Advocate Software Group IBM Taiwan 2005 IBM Corporation Agenda Data Integration Challenges and

More information

Rackspace Cloud Databases and Container-based Virtualization

Rackspace Cloud Databases and Container-based Virtualization Rackspace Cloud Databases and Container-based Virtualization August 2012 J.R. Arredondo @jrarredondo Page 1 of 6 INTRODUCTION When Rackspace set out to build the Cloud Databases product, we asked many

More information

OLTP Meets Bigdata, Challenges, Options, and Future Saibabu Devabhaktuni

OLTP Meets Bigdata, Challenges, Options, and Future Saibabu Devabhaktuni OLTP Meets Bigdata, Challenges, Options, and Future Saibabu Devabhaktuni Agenda Database trends for the past 10 years Era of Big Data and Cloud Challenges and Options Upcoming database trends Q&A Scope

More information

Oracle Database 11 g Performance Tuning. Recipes. Sam R. Alapati Darl Kuhn Bill Padfield. Apress*

Oracle Database 11 g Performance Tuning. Recipes. Sam R. Alapati Darl Kuhn Bill Padfield. Apress* Oracle Database 11 g Performance Tuning Recipes Sam R. Alapati Darl Kuhn Bill Padfield Apress* Contents About the Authors About the Technical Reviewer Acknowledgments xvi xvii xviii Chapter 1: Optimizing

More information

Data Warehouse as a Service. Lot 2 - Platform as a Service. Version: 1.1, Issue Date: 05/02/2014. Classification: Open

Data Warehouse as a Service. Lot 2 - Platform as a Service. Version: 1.1, Issue Date: 05/02/2014. Classification: Open Data Warehouse as a Service Version: 1.1, Issue Date: 05/02/2014 Classification: Open Classification: Open ii MDS Technologies Ltd 2014. Other than for the sole purpose of evaluating this Response, no

More information

StreamServe Persuasion SP5 Microsoft SQL Server

StreamServe Persuasion SP5 Microsoft SQL Server StreamServe Persuasion SP5 Microsoft SQL Server Database Guidelines Rev A StreamServe Persuasion SP5 Microsoft SQL Server Database Guidelines Rev A 2001-2011 STREAMSERVE, INC. ALL RIGHTS RESERVED United

More information

SharePlex for SQL Server

SharePlex for SQL Server SharePlex for SQL Server Improving analytics and reporting with near real-time data replication Written by Susan Wong, principal solutions architect, Dell Software Abstract Many organizations today rely

More information

BENEFITS OF AUTOMATING DATA WAREHOUSING

BENEFITS OF AUTOMATING DATA WAREHOUSING BENEFITS OF AUTOMATING DATA WAREHOUSING Introduction...2 The Process...2 The Problem...2 The Solution...2 Benefits...2 Background...3 Automating the Data Warehouse with UC4 Workload Automation Suite...3

More information

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments

Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Leveraging EMC Fully Automated Storage Tiering (FAST) and FAST Cache for SQL Server Enterprise Deployments Applied Technology Abstract This white paper introduces EMC s latest groundbreaking technologies,

More information

Oracle 11g New Features - OCP Upgrade Exam

Oracle 11g New Features - OCP Upgrade Exam Oracle 11g New Features - OCP Upgrade Exam This course gives you the opportunity to learn about and practice with the new change management features and other key enhancements in Oracle Database 11g Release

More information

Emerging Technologies Shaping the Future of Data Warehouses & Business Intelligence

Emerging Technologies Shaping the Future of Data Warehouses & Business Intelligence Emerging Technologies Shaping the Future of Data Warehouses & Business Intelligence Appliances and DW Architectures John O Brien President and Executive Architect Zukeran Technologies 1 TDWI 1 Agenda What

More information

University Data Warehouse Design Issues: A Case Study

University Data Warehouse Design Issues: A Case Study Session 2358 University Data Warehouse Design Issues: A Case Study Melissa C. Lin Chief Information Office, University of Florida Abstract A discussion of the design and modeling issues associated with

More information

INTRODUCTION ADVANTAGES OF RUNNING ORACLE 11G ON WINDOWS. Edward Whalen, Performance Tuning Corporation

INTRODUCTION ADVANTAGES OF RUNNING ORACLE 11G ON WINDOWS. Edward Whalen, Performance Tuning Corporation ADVANTAGES OF RUNNING ORACLE11G ON MICROSOFT WINDOWS SERVER X64 Edward Whalen, Performance Tuning Corporation INTRODUCTION Microsoft Windows has long been an ideal platform for the Oracle database server.

More information

How to Enhance Traditional BI Architecture to Leverage Big Data

How to Enhance Traditional BI Architecture to Leverage Big Data B I G D ATA How to Enhance Traditional BI Architecture to Leverage Big Data Contents Executive Summary... 1 Traditional BI - DataStack 2.0 Architecture... 2 Benefits of Traditional BI - DataStack 2.0...

More information

An Oracle White Paper March 2014. Best Practices for Real-Time Data Warehousing

An Oracle White Paper March 2014. Best Practices for Real-Time Data Warehousing An Oracle White Paper March 2014 Best Practices for Real-Time Data Warehousing Executive Overview Today s integration project teams face the daunting challenge that, while data volumes are exponentially

More information

1 File Processing Systems

1 File Processing Systems COMP 378 Database Systems Notes for Chapter 1 of Database System Concepts Introduction A database management system (DBMS) is a collection of data and an integrated set of programs that access that data.

More information

Making Open Source BI Viable for the Enterprise. An Alternative Approach for Better Business Decision-Making. White Paper

Making Open Source BI Viable for the Enterprise. An Alternative Approach for Better Business Decision-Making. White Paper Making Open Source BI Viable for the Enterprise An Alternative Approach for Better Business Decision-Making White Paper Aligning Business and IT To Improve Performance Ventana Research 6150 Stoneridge

More information

SOLUTION BRIEF. Advanced ODBC and JDBC Access to Salesforce Data. www.datadirect.com

SOLUTION BRIEF. Advanced ODBC and JDBC Access to Salesforce Data. www.datadirect.com SOLUTION BRIEF Advanced ODBC and JDBC Access to Salesforce Data 2 CLOUD DATA ACCESS In the terrestrial world of enterprise computing, organizations depend on advanced JDBC and ODBC technologies to provide

More information