Patel Group of Institutions Advance Database Management System MCA SEM V UNIT -1


Q-1: Explain the Three-Schema Architecture of a Database.
Ans: The goal of the three-schema architecture, illustrated in Figure 2.2, is to separate the user applications from the physical database. In this architecture, schemas can be defined at the following three levels:

The internal level has an internal schema, which describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage and access paths for the database.

The conceptual level has a conceptual schema, which describes the structure of the whole database for a community of users. The conceptual schema hides the details of physical storage structures and concentrates on describing entities, data types, relationships, user operations, and constraints. Usually, a representational data model is used to describe the conceptual schema when a database system is implemented. This implementation conceptual schema is often based on a conceptual schema design in a high-level data model.

The external or view level includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user group is interested in and hides the rest of the database from that user group. As in the previous level, each external schema is typically implemented using a representational data model, possibly based on an external schema design in a high-level data model.

See the figure in the book.

Q-2: Component Modules of a Database System.
Ans: Note: See Figure 2.3 in the book. The figure is divided into two parts. The top part of the figure refers to the various users of the database environment and their interfaces. The lower part shows the internals of the DBMS responsible for storage of data and processing of transactions.

Consider the top part of Figure 2.3 first. It shows interfaces for the DBA staff, casual users who work with interactive interfaces to formulate queries, application programmers who create programs using some host programming languages, and parametric users who do data entry work by supplying parameters to predefined transactions. The DBA staff works on defining the database and tuning it by making changes to its definition using the DDL and other privileged commands. The DDL compiler processes schema definitions, specified in the DDL, and stores descriptions of the schemas (meta-data) in the DBMS catalog. The catalog includes information such as the names and sizes of files, names and data types of data items, storage details of each file, mapping information among schemas, and constraints.

Casual users and persons with an occasional need for information from the database interact using some form of interface, which we call the interactive query interface.

We have not explicitly shown any menu-based or form-based interaction that may be used to generate the interactive query automatically. These queries are parsed and validated for correctness of the query syntax, the names of files and data elements, and so on by a query compiler that compiles them into an internal form. The query optimizer is concerned with the rearrangement and possible reordering of operations and the elimination of redundancies.

Application programmers write programs in host languages such as Java, C, or C++ that are submitted to a precompiler. The precompiler extracts DML commands from an application program written in a host programming language. These commands are sent to the DML compiler for compilation into object code for database access. The rest of the program is sent to the host language compiler.

In the lower part of Figure 2.3, the runtime database processor executes (1) the privileged commands, (2) the executable query plans, and (3) the canned transactions with runtime parameters. It works with the system catalog and may update it with statistics. It also works with the stored data manager, which in turn uses basic operating system services for carrying out low-level input/output (read/write) operations between the disk and main memory. The runtime database processor handles other aspects of data transfer, such as management of buffers in the main memory. Some DBMSs have their own buffer management module, while others depend on the OS for buffer management. We have shown the concurrency control and backup and recovery systems separately as a module in this figure.

They are integrated into the working of the runtime database processor for purposes of transaction management.

Q-3: Database System Utilities.
Ans: Common utilities have the following types of functions:

Loading. A loading utility is used to load existing data files, such as text files or sequential files, into the database. Usually, the current (source) format of the data file and the desired (target) database file structure are specified to the utility, which then automatically reformats the data and stores it in the database.

Backup. A backup utility creates a backup copy of the database, usually by dumping the entire database onto tape or another mass storage medium. The backup copy can be used to restore the database in case of catastrophic disk failure. Incremental backups are also often used, where only the changes since the previous backup are recorded. Incremental backup is more complex, but it saves storage space.

Database storage reorganization. This utility can be used to reorganize a set of database files into different file organizations and create new access paths to improve performance.

Performance monitoring. Such a utility monitors database usage and provides statistics to the DBA. The DBA uses the statistics in making decisions such as whether or not to reorganize files, or whether to add or drop indexes to improve performance.

Q-4: Buffering of Blocks.
Ans: Note: See the figure in the book. When several blocks need to be transferred from disk to main memory and all the block addresses are known, several buffers can be reserved in main memory to speed up the transfer. While one buffer is being read or written, the CPU can process data in the other buffer.

(Figure) Processes A and B are running concurrently in an interleaved fashion, whereas processes C and D are running concurrently in a parallel fashion. When a single CPU controls multiple processes, parallel execution is not possible. However, the processes can still run concurrently in an interleaved way. Buffering is most useful when processes can run concurrently in a parallel fashion, either because a separate disk I/O processor is available or because multiple CPU processors exist.

(Figure) Reading and processing can proceed in parallel when the time required to process a disk block in memory is less than the time required to read the next block and fill a buffer. The CPU can start processing a block once its transfer to main memory is completed; at the same time, the disk I/O processor can be reading and transferring the next block into a different buffer. This technique is called double buffering.
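
The following C sketch illustrates the double-buffering idea using POSIX asynchronous I/O (aio_read): while the CPU processes the block already in one buffer, the read of the next block into the other buffer is in flight. This is only an illustrative sketch, not part of the notes; the file name, block size, and process_block routine are assumptions. On Linux it would typically be linked with -lrt.

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 4096                        /* assumed disk block size */

static void process_block(const char *buf, ssize_t n) {
    /* placeholder for the CPU work done on a block that is already in memory */
    (void)buf; (void)n;
}

int main(void) {
    char buffers[2][BLOCK_SIZE];               /* the two buffers */
    int fd = open("datafile.bin", O_RDONLY);   /* assumed data file name */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb req;
    memset(&req, 0, sizeof req);
    req.aio_fildes = fd;
    req.aio_buf    = buffers[0];
    req.aio_nbytes = BLOCK_SIZE;
    req.aio_offset = 0;
    aio_read(&req);                            /* start filling buffer 0 */

    off_t offset = 0;
    int cur = 0;
    for (;;) {
        const struct aiocb *const list[1] = { &req };
        aio_suspend(list, 1, NULL);            /* wait for the outstanding read */
        if (aio_error(&req) != 0) break;
        ssize_t n = aio_return(&req);
        if (n <= 0) break;                     /* end of file */

        int next = 1 - cur;
        offset += n;
        memset(&req, 0, sizeof req);           /* issue the read of the NEXT block ... */
        req.aio_fildes = fd;
        req.aio_buf    = buffers[next];
        req.aio_nbytes = BLOCK_SIZE;
        req.aio_offset = offset;
        aio_read(&req);

        process_block(buffers[cur], n);        /* ... while the CPU processes the current one */
        cur = next;
    }
    close(fd);
    return 0;
}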

Q-5: Placing File Records on Disk.
Ans:
1. Records and Record Types
Data is usually stored in the form of records. Each record consists of a collection of related data values or items, where each value is formed of one or more bytes and corresponds to a particular field of the record. For example, an EMPLOYEE record type may be defined using the C programming language notation as the following structure:

struct employee {
    char name[30];
    char ssn[9];
    int  salary;
    int  job_code;
    char department[20];
};

2. Files, Fixed-Length Records, and Variable-Length Records
A file is a sequence of records. In many cases, all records in a file are of the same record type. If every record in the file has exactly the same size (in bytes), the file is said to be made up of fixed-length records. If different records in the file have different sizes, the file is said to be made up of variable-length records. A file may have variable-length records for several reasons:

a. The file records are of the same record type, but one or more of the fields are of varying size (variable-length fields).

For example, the Name field of EMPLOYEE can be a variable-length field.

b. The file records are of the same record type, but one or more of the fields may have multiple values for individual records; such a field is called a repeating field, and a group of values for the field is often called a repeating group.

c. The file records are of the same record type, but one or more of the fields are optional; that is, they may have values for some but not all of the file records (optional fields).

d. The file contains records of different record types and hence of varying size (mixed file). This would occur if related records of different types were clustered (placed together) on disk blocks.

3. Record Blocking and Spanned versus Unspanned Records
The records of a file must be allocated to disk blocks because a block is the unit of data transfer between disk and memory. When the block size is larger than the record size, each block will contain numerous records, although some files may have unusually large records that cannot fit in one block. A pointer at the end of the first block points to the block containing the remainder of the record, in case it is not the next consecutive block on disk. This organization is called spanned because records can span more than one block. Whenever a record is larger than a block, we must use a spanned organization. If records are not allowed to cross block boundaries, the organization is called unspanned.
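
As a small worked sketch (not from the notes), the blocking factor and the number of blocks needed under an unspanned organization can be computed from the fixed record size of the employee record type shown above; the block size and record count below are assumed values.

#include <stdio.h>

struct employee {                              /* same layout as the record type above */
    char name[30];
    char ssn[9];
    int  salary;
    int  job_code;
    char department[20];
};

int main(void) {
    int  B = 512;                              /* assumed block size in bytes */
    int  R = (int)sizeof(struct employee);     /* fixed record size, including any padding */
    long r = 10000;                            /* assumed number of records in the file */

    int  bfr = B / R;                          /* blocking factor: records per block (unspanned) */
    long b   = (r + bfr - 1) / bfr;            /* blocks needed = ceiling(r / bfr) */

    printf("R = %d bytes, bfr = %d records/block, b = %ld blocks\n", R, bfr, b);
    printf("unused space per block = %d bytes\n", B - bfr * R);
    return 0;
}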

Q-6: Explain the Client/Server Architecture, Two-Tier Architecture, and Three-Tier Architecture.
Ans: See the figure in the book. The client/server architecture was developed to deal with computing environments in which a large number of PCs, workstations, file servers, printers, database servers, Web servers, e-mail servers, and other software and equipment are connected via a network. The idea is to define specialized servers with specific functionalities. For example, it is possible to connect a number of PCs or small workstations as clients to a file server that maintains the files of the client machines. Another machine can be designated as a printer server by being connected to various printers; all print requests by the clients are forwarded to this machine. Web servers and e-mail servers also fall into the specialized server category. The resources provided by specialized servers can be accessed by many client machines. The client machines provide the user with the appropriate interfaces to utilize these servers, as well as with local processing power to run local applications.

Two-Tier Client/Server Architectures for DBMSs
In such an architecture, the server is often called a query server or transaction server because it provides these two functionalities. In an RDBMS, the server is also often called an SQL server. The user interface programs and application programs can run on the client side. When DBMS access is required, the program establishes a connection to the DBMS (which is on the server side); once the connection is created, the client program can communicate with the DBMS. A standard called Open Database Connectivity (ODBC) provides an application programming interface (API), which allows client-side programs to call the DBMS, as long as both client and server machines have the necessary software installed. Most DBMS vendors provide ODBC drivers for their systems. A client program can actually connect to several RDBMSs and send query and transaction requests using the ODBC API, which are then processed at the server sites. Any query results are sent back to the client program, which can process and display the results as needed. A minimal client-side sketch using the ODBC API is shown below.
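
The following C fragment is a minimal, assumption-laden sketch of the two-tier pattern just described: a client program opens an ODBC connection to the server-side DBMS, submits one SQL query, and reads the result. The data source name "HRDB", the credentials, and the employee table are illustrative assumptions, and error handling is reduced to a bare minimum.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void) {
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLCHAR name[31]; SQLLEN len;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* connect to the DBMS on the server side; "HRDB" is an assumed data source name */
    if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"HRDB", SQL_NTS,
                                  (SQLCHAR *)"user", SQL_NTS,
                                  (SQLCHAR *)"password", SQL_NTS))) {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* send a query; it is compiled, optimized, and executed at the server site */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM employee WHERE salary > 50000", SQL_NTS);
    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof name, &len);

    /* results come back to the client program, which displays them */
    while (SQLFetch(stmt) == SQL_SUCCESS)
        printf("%s\n", name);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}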

Three-Tier and n-Tier Architectures
Many Web applications use an architecture called the three-tier architecture, which adds an intermediate layer between the client and the database server. This intermediate layer or middle tier is called the application server or the Web server, depending on the application. This server plays an intermediary role by running application programs and storing business rules (procedures or constraints) that are used to access data from the database server. It can also improve database security by checking a client's credentials before forwarding a request to the database server. Clients contain GUI interfaces and some additional application-specific business rules. The intermediate server accepts requests from the client, processes the request and sends database queries and commands to the database server, and then acts as a conduit for passing (partially) processed data from the database server to the clients, where it may be processed further and filtered to be presented to users in GUI format. Thus, the user interface, application rules, and data access act as the three tiers.

UNIT - 2

Q-1: Physical Database Design in Relational Databases.
Ans: Physical design is an activity where the goal is not only to create the appropriate structuring of data in storage, but also to do so in a way that guarantees good performance. Many factors affect physical database design in an RDBMS, as follows.

A. Analyzing the Database Queries and Transactions. Before undertaking the physical database design, we must have a good idea of the intended use of the database by defining, in a high-level form, the queries and transactions that are expected to run on the database. For each retrieval query, the following information about the query would be needed:
1. The files that will be accessed by the query.
2. The attributes on which any selection conditions for the query are specified.

3. Whether the selection condition is an equality, an inequality, or a range condition.
4. The attributes on which any join conditions, or conditions to link multiple tables or objects, for the query are specified.
5. The attributes whose values will be retrieved by the query.

The attributes listed in items 2 and 4 above are candidates for the definition of access structures, such as indexes, hash keys, or sorting of the file.

For each update operation or update transaction, the following information would be needed:
1. The files that will be updated.
2. The type of operation on each file (insert, update, or delete).
3. The attributes on which selection conditions for a delete or update are specified.
4. The attributes whose values will be changed by an update operation.

Again, the attributes listed in item 3 are candidates for access structures on the files, because they would be used to locate the records that will be updated or deleted. On the other hand, the attributes listed in item 4 are candidates for avoiding an access structure, since modifying them will require updating the access structures.

B. Analyzing the Expected Frequency of Invocation of Queries and Transactions. Besides identifying the characteristics of expected retrieval queries and update transactions, we must consider their expected rates of invocation. This frequency information, along with the attribute information collected on each query and transaction, is used to compile a cumulative list of the expected frequency of use for all queries and transactions. This is expressed as the expected frequency of using each attribute in each file as a selection attribute or a join attribute, over all the queries and transactions.

Generally, for large volumes of processing, the following informal rule can be used: approximately 80 percent of the processing is accounted for by only 20 percent of the queries and transactions.

C. Analyzing the Time Constraints of Queries and Transactions. Some queries and transactions may have stringent performance constraints. For example, a transaction may have the constraint that it should terminate within 5 seconds on 95 percent of the occasions when it is invoked, and that it should never take more than 20 seconds.

D. Analyzing the Expected Frequencies of Update Operations. A minimum number of access paths should be specified for a file that is frequently updated, because updating the access paths themselves slows down the update operations. For example, if a file that has frequent record insertions has 10 indexes on 10 different attributes, each of these indexes must be updated whenever a new record is inserted. The overhead of updating 10 indexes can slow down the insert operations.

E. Analyzing the Uniqueness Constraints on Attributes. Access paths should be specified on all candidate key attributes, or sets of attributes, that are either the primary key of a file or unique attributes. The existence of an index (or other access path) makes it sufficient to search only the index when checking this uniqueness constraint, since all values of the attribute will exist in the leaf nodes of the index. For example, when inserting a new record, if a key attribute value of the new record already exists in the index, the insertion of the new record should be rejected, since it would violate the uniqueness constraint on the attribute.

Q-2: Overview of Database Tuning in Relational Systems.
Ans: Monitoring and revising the physical database design constantly is an activity referred to as database tuning. The goals of tuning are as follows:
To make applications run faster.
To improve (lower) the response time of queries and transactions.
To improve the overall throughput of transactions.

DBMSs can internally collect the following statistics:
Sizes of individual tables.
Number of distinct values in a column.
The number of times a particular query or transaction is submitted and executed in an interval of time.
The times required for different phases of query and transaction processing (for a given set of queries or transactions).

These and other statistics create a profile of the contents and use of the database. Other information obtained from monitoring the database system activities and processes includes the following:
Storage statistics. Data about the allocation of storage into tablespaces, indexspaces, and buffer pools.
I/O and device performance statistics. Total read/write activity (paging) on disk extents and disk hot spots.

Query/transaction processing statistics. Execution times of queries and transactions, and optimization times during query optimization.
Locking/logging related statistics. Rates of issuing different types of locks, transaction throughput rates, and log record activity.
Index statistics. Number of levels in an index, number of noncontiguous leaf pages, and so on.

Tuning a database involves dealing with the following types of problems:
How to avoid excessive lock contention, thereby increasing concurrency among transactions.
How to minimize the overhead of logging and unnecessary dumping of data.
How to optimize the buffer size and the scheduling of processes.
How to allocate resources such as disks, RAM, and processes for most efficient utilization.

Tuning Indexes
The initial choice of indexes may have to be revised for the following reasons:
Certain queries may take too long to run for lack of an index.
Certain indexes may not get utilized at all.
Certain indexes may undergo too much updating because the index is on an attribute that undergoes frequent changes.

Most DBMSs have a command or trace facility, which can be used by the DBA to ask the system to show how a query was executed: what operations were performed, in what order, and what secondary access structures (indexes) were used. By analyzing these execution plans, it is possible to diagnose the causes of the above problems.

Some indexes may be dropped and some new indexes may be created based on the tuning analysis. The goal of tuning is to dynamically evaluate the requirements, which sometimes fluctuate seasonally or during different times of the month or week, and to reorganize the indexes and file organizations to yield the best overall performance. Dropping and building new indexes is an overhead that can be justified in terms of performance improvements. Updating of a table is generally suspended while an index is dropped or created; this loss of service must be accounted for. Besides dropping or creating indexes and changing from a nonclustered to a clustered index and vice versa, rebuilding the index may improve performance.

Q-3: Database Security and Its Issues.
Ans:
1. Types of Security
Various legal and ethical issues regarding the right to access certain information; for example, some information may be deemed to be private and cannot be accessed legally by unauthorized organizations or persons. In the United States, there are numerous laws governing the privacy of information.
Policy issues at the governmental, institutional, or corporate level as to what kinds of information should not be made publicly available; for example, credit ratings and personal medical records.

System-related issues, such as the system levels at which various security functions should be enforced; for example, whether a security function should be handled at the physical hardware level, the operating system level, or the DBMS level.
The need in some organizations to identify multiple security levels and to categorize the data and users based on these classifications; for example, top secret, secret, confidential, and unclassified.

Threats to Databases
Loss of integrity. Database integrity refers to the requirement that information be protected from improper modification. Modification of data includes creation, insertion, updating, changing the status of data, and deletion. Integrity is lost if unauthorized changes are made to the data by either intentional or accidental acts.
Loss of availability. Database availability refers to making objects available to a human user or a program to which they have a legitimate right.
Loss of confidentiality. Database confidentiality refers to the protection of data from unauthorized disclosure. The impact of unauthorized disclosure of confidential information can range from violation of the Data Privacy Act to the jeopardization of national security. Unauthorized, unanticipated, or unintentional disclosure could result in loss of public confidence, embarrassment, or legal action against the organization.

It is now customary to refer to two types of database security mechanisms:

Discretionary security mechanisms. These are used to grant privileges to users, including the capability to access specific data files, records, or fields in a specified mode (such as read, insert, delete, or update).
Mandatory security mechanisms. These are used to enforce multilevel security by classifying the data and users into various security classes (or levels).

2. Control Measures
Four main control measures are used to provide security of data in databases:
Access control
Inference control
Flow control
Data encryption

A security problem common to computer systems is that of preventing unauthorized persons from accessing the system itself, either to obtain information or to make malicious changes in a portion of the database. The security mechanism of a DBMS must include provisions for restricting access to the database system as a whole. This function, called access control, is handled by creating user accounts and passwords to control the login process by the DBMS.

Statistical databases are used to provide statistical information or summaries of values based on various criteria. For example, a database for population statistics may provide statistics based on age groups, income levels, household size, education levels, and other criteria.

Statistical database users such as government statisticians or market research firms are allowed to access the database to retrieve statistical information about a population, but not to access the detailed confidential information about specific individuals. Security for statistical databases must ensure that information about individuals cannot be accessed. It is sometimes possible to deduce or infer certain facts concerning individuals from queries that involve only summary statistics on groups; consequently, this must not be permitted either. This problem is called statistical database security, or inference control.

Another security issue is that of flow control, which prevents information from flowing in such a way that it reaches unauthorized users. A final control measure is data encryption, which is used to protect sensitive data (such as credit card numbers) that is transmitted via some type of communications network. Encryption can be used to provide additional protection for sensitive portions of a database as well. The data is encoded using some coding algorithm.

3. Database Security and the DBA
1. Account creation. This action creates a new account and password for a user or a group of users to enable access to the DBMS.
2. Privilege granting. This action permits the DBA to grant certain privileges to certain accounts.

3. Privilege revocation. This action permits the DBA to revoke (cancel) certain privileges that were previously given to certain accounts.
4. Security level assignment. This action consists of assigning user accounts to the appropriate security clearance level.

4. Access Control, User Accounts, and Database Audits
Whenever a person or a group of persons needs to access a database system, the individual or group must first apply for a user account. The DBA will then create a new account number and password for the user if there is a legitimate need to access the database. The user must log in to the DBMS by entering the account number and password whenever database access is needed. The DBMS checks that the account number and password are valid; if they are, the user is permitted to use the DBMS and to access the database.

The database system must also keep track of all operations on the database that are applied by a certain user throughout each login session, which consists of the sequence of database interactions that a user performs from the time of logging in to the time of logging off. When a user logs in, the DBMS can record the user's account number and associate it with the computer or device from which the user logged in. All operations applied from that computer or device are attributed to the user's account until the user logs off. It is particularly important to keep track of update operations that are applied to the database so that, if the database is tampered with, the DBA can determine which user did the tampering.

To keep a record of all updates applied to the database and of the particular user who applied each update, we can modify the system log. The system log includes an entry for each operation applied to the database that may be required for recovery from a transaction failure or system crash. We can expand the log entries so that they also include the account number of the user and the online computer or device ID that applied each operation recorded in the log. If any tampering with the database is suspected, a database audit is performed, which consists of reviewing the log to examine all accesses and operations applied to the database during a certain time period. When an illegal or unauthorized operation is found, the DBA can determine the account number used to perform the operation. A sketch of such an expanded log entry is shown below.
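
The following is a minimal C sketch (not from the notes) of what one expanded, audit-ready log entry might look like as a record; the field names and sizes are illustrative assumptions.

#include <time.h>

/* One entry of the expanded system log used for auditing.          */
/* Besides the usual recovery fields, it records WHO applied the    */
/* operation (account number) and FROM WHERE (device ID).           */
struct audit_log_entry {
    long    lsn;              /* log sequence number                        */
    time_t  timestamp;        /* when the operation was applied             */
    long    transaction_id;   /* transaction that issued the operation      */
    char    operation[12];    /* "read", "insert", "update", "delete", ...  */
    char    table_name[32];   /* relation affected                          */
    char    old_value[256];   /* before image (for UNDO and tamper analysis)*/
    char    new_value[256];   /* after image (for REDO)                     */
    long    account_number;   /* user account that applied the operation    */
    char    device_id[32];    /* computer or device the user logged in from */
};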

Q-4: Granting and Revoking Privileges (Discretionary Access Control).
Ans:
1. Types of Discretionary Privileges
The account level. At this level, the DBA specifies the particular privileges that each account holds independently of the relations in the database.
The relation (or table) level. At this level, the DBA can control the privilege to access each individual relation or view in the database.

The privileges at the account level apply to the capabilities provided to the account itself and can include the CREATE SCHEMA or CREATE TABLE privilege, the CREATE VIEW privilege, the ALTER privilege, the DROP privilege, the MODIFY privilege, and the SELECT privilege. The second level of privileges applies to the relation level, whether the relations are base relations or virtual (view) relations. These privileges are defined for SQL2. In the following discussion, the term relation may refer either to a base relation or to a view, unless we explicitly specify one or the other. Privileges at the relation level specify, for each user, the individual relations on which each type of command can be applied. Some privileges also refer to individual columns (attributes) of relations. SQL2 commands provide privileges at the relation and attribute level only. This model of discretionary privileges is known as the access matrix model.

2. Specifying Privileges through the Use of Views
The mechanism of views is an important discretionary authorization mechanism in its own right.

3. Revoking of Privileges
In some cases it is desirable to grant a privilege to a user temporarily; the privilege can then be revoked when it is no longer needed.

4. Propagation of Privileges Using the GRANT OPTION
Whenever the owner A of a relation R grants a privilege on R to another account B, the privilege can be given to B with or without the GRANT OPTION. If the GRANT OPTION is given, this means that B can also grant that privilege on R to other accounts. Suppose that B is given the GRANT OPTION by A and that B then grants the privilege on R to a third account C, also with the GRANT OPTION. In this way, privileges on R can propagate to other accounts without the knowledge of the owner of R.

If the owner account A now revokes the privilege granted to B, all the privileges that B propagated based on that privilege should automatically be revoked by the system.

Q-5: Role-Based Access Control, Multilevel Security, and Mandatory Access Control.
Ans: Mandatory access control (MAC) would typically be combined with the discretionary access control mechanisms. It is important to note that most commercial DBMSs currently provide mechanisms only for discretionary access control. However, the need for multilevel security exists in government, military, and intelligence applications, as well as in many industrial and corporate applications. Typical security classes are top secret (TS), secret (S), confidential (C), and unclassified (U), where TS is the highest level and U the lowest. For simplicity, we will use a system with these four security classification levels, where TS ≥ S ≥ C ≥ U.

The commonly used model for multilevel security, known as the Bell-LaPadula model, classifies each subject (user, account, program) and object (relation, tuple, column, view, operation) into one of the security classifications TS, S, C, or U. We will refer to the clearance (classification) of a subject S as class(S) and to the classification of an object O as class(O). Two restrictions are enforced on data access based on the subject/object classifications:

1. A subject S is not allowed read access to an object O unless class(S) ≥ class(O). This is known as the simple security property.
2. A subject S is not allowed to write an object O unless class(S) ≤ class(O). This is known as the star property (or *-property).

The first restriction is intuitive and enforces the obvious rule that no subject can read an object whose security classification is higher than the subject's security clearance. The second restriction is less intuitive: it prohibits a subject from writing an object at a lower security classification than the subject's security clearance.

To incorporate multilevel security notions into the relational database model, it is common to consider attribute values and tuples as data objects. Hence, each attribute A is associated with a classification attribute C in the schema, and each attribute value in a tuple is associated with a corresponding security classification. In addition, in some models, a tuple classification attribute TC is added to the relation attributes to provide a classification for each tuple as a whole. The model we describe here is known as the multilevel model, because it allows classifications at multiple security levels. A multilevel relation schema R with n attributes would be represented as:

R(A1, C1, A2, C2, ..., An, Cn, TC)

The apparent key of a multilevel relation is the set of attributes that would have formed the primary key in a regular (single-level) relation. It is possible to store a single tuple in the relation at a higher classification level and produce the corresponding tuples at a lower-level classification through a process known as filtering.
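
A small C sketch (an illustration, not part of the notes) of how the simple security property and the star property above can be checked before a read or write is allowed; the clearance levels are encoded as an ordered enumeration.

#include <stdio.h>
#include <stdbool.h>

/* Security classes ordered U < C < S < TS, matching TS >= S >= C >= U. */
enum sec_class { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

/* Simple security property: no read up.  Read allowed iff class(S) >= class(O). */
bool may_read(enum sec_class subject, enum sec_class object)  { return subject >= object; }

/* Star property: no write down.  Write allowed iff class(S) <= class(O). */
bool may_write(enum sec_class subject, enum sec_class object) { return subject <= object; }

int main(void) {
    enum sec_class user  = SECRET;          /* clearance of the subject       */
    enum sec_class tuple = CONFIDENTIAL;    /* classification of the object   */

    printf("read  allowed: %s\n", may_read(user, tuple)  ? "yes" : "no");  /* yes: S >= C          */
    printf("write allowed: %s\n", may_write(user, tuple) ? "yes" : "no");  /* no: would write down */
    return 0;
}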

This leads to the concept of polyinstantiation, where several tuples can have the same apparent key value but have different attribute values for users at different clearance levels. See the figure in the book.

Role-Based Access Control
Role-based access control (RBAC) emerged rapidly in the 1990s as a proven technology for managing and enforcing security in large-scale, enterprise-wide systems. Its basic notion is that privileges and other permissions are associated with organizational roles, rather than with individual users. Individual users are then assigned to appropriate roles. Roles can be created using the CREATE ROLE and DESTROY ROLE commands. For example, a company may have roles such as sales account manager, purchasing agent, mailroom clerk, department manager, and so on. Multiple individuals can be assigned to each role. RBAC can be used with traditional discretionary and mandatory access controls; it ensures that only authorized users in their specified roles are given access to certain data or resources.

Q-6: Encryption and PKI.
Ans: Encryption is the conversion of data into a form, called a ciphertext, which cannot be easily understood by unauthorized persons. It enhances security and privacy when access controls are bypassed, because in cases of data loss or theft, encrypted data cannot be easily understood by unauthorized persons. With this background, we adhere to the following standard definitions:

Ciphertext: Encrypted (enciphered) data.
Plaintext (or cleartext): Intelligible data that has meaning and can be read or acted upon without the application of decryption.
Encryption: The process of transforming plaintext into ciphertext.
Decryption: The process of transforming ciphertext back into plaintext.

1. The Data Encryption and Advanced Encryption Standards
The Data Encryption Standard (DES) is a system developed by the U.S. government for use by the general public. DES can provide end-to-end encryption on the channel between sender A and receiver B. The algorithm derives its strength from the repeated application of two techniques, substitution and permutation, for a total of 16 cycles. Plaintext (the original form of the message) is encrypted as blocks of 64 bits.

2. Symmetric Key Algorithms
A symmetric key is one key that is used for both encryption and decryption. By using a symmetric key, fast encryption and decryption is possible for routine use with sensitive data in the database. A message encrypted with a secret key can be decrypted only with the same secret key. Algorithms used for symmetric key encryption are called secret-key algorithms. Since secret-key algorithms are mostly used for encrypting the content of a message, they are also called content-encryption algorithms.
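
To make the symmetric-key idea concrete, here is a deliberately toy C example (not from the notes, and not a real cipher such as DES or AES): the same secret key is used both to encrypt and to decrypt, which is exactly the defining property of a symmetric algorithm. A real system would use a vetted algorithm and library instead.

#include <stdio.h>
#include <string.h>

/* Toy symmetric "cipher": XOR the message with a repeating secret key.  */
/* Applying the SAME function with the SAME key a second time decrypts.  */
/* This is for illustration only; it offers no real security.            */
static void xor_with_key(unsigned char *data, size_t len, const char *key) {
    size_t klen = strlen(key);
    for (size_t i = 0; i < len; i++)
        data[i] ^= (unsigned char)key[i % klen];
}

int main(void) {
    unsigned char msg[] = "salary = 90000";          /* plaintext (assumed example) */
    const char *secret_key = "s3cr3t";               /* shared secret key           */
    size_t len = strlen((char *)msg);

    xor_with_key(msg, len, secret_key);              /* encrypt                     */
    printf("ciphertext bytes: ");
    for (size_t i = 0; i < len; i++) printf("%02x", msg[i]);
    printf("\n");

    xor_with_key(msg, len, secret_key);              /* decrypt with the same key   */
    printf("recovered plaintext: %s\n", msg);
    return 0;
}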

3. Public (Asymmetric) Key Encryption
A public key encryption scheme, or infrastructure, has six ingredients:
1. Plaintext. This is the data or readable message that is fed into the algorithm as input.
2. Encryption algorithm. This algorithm performs various transformations on the plaintext.
3. and 4. Public and private keys. These are a pair of keys that have been selected so that if one is used for encryption, the other is used for decryption. The exact transformations performed by the encryption algorithm depend on the public or private key that is provided as input. For example, if a message is encrypted using the public key, it can only be decrypted using the private key.
5. Ciphertext. This is the scrambled message produced as output. It depends on the plaintext and the key. For a given message, two different keys will produce two different ciphertexts.
6. Decryption algorithm. This algorithm accepts the ciphertext and the matching key and produces the original plaintext.

As the name suggests, the public key of the pair is made public for others to use, whereas the private key is known only to its owner. A general-purpose public key cryptographic algorithm relies on one key for encryption and a different but related key for decryption. The essential steps are as follows:

1. Each user generates a pair of keys to be used for the encryption and decryption of messages.
2. Each user places one of the two keys in a public register or other accessible file. This is the public key. The companion key is kept private.
3. If a sender wishes to send a private message to a receiver, the sender encrypts the message using the receiver's public key.
4. When the receiver receives the message, he or she decrypts it using the receiver's private key. No other recipient can decrypt the message, because only the receiver knows his or her private key.

4. Digital Signatures
A digital signature is an example of using encryption techniques to provide authentication services in electronic commerce applications. Like a handwritten signature, a digital signature is a means of associating a mark unique to an individual with a body of text. The mark should be unforgeable, meaning that others should be able to check that the signature really comes from the originator. A digital signature consists of a string of symbols. If a person's digital signature were always the same for each message, then one could easily counterfeit it by simply copying the string of symbols. Thus, signatures must be different for each use.

UNIT - 3

Q-1: Recovery Techniques Based on Deferred Update.
Ans: The idea behind deferred update is to defer or postpone any actual updates to the database on disk until the transaction completes its execution successfully and reaches its commit point. During transaction execution, the updates are recorded only in the log and in the cache buffers. After the transaction reaches its commit point and the log is force-written to disk, the updates are recorded in the database. If a transaction fails before reaching its commit point, there is no need to undo any operations, because the transaction has not affected the database on disk in any way. We can state a typical deferred update protocol as follows:

1. A transaction cannot change the database on disk until it reaches its commit point.
2. A transaction does not reach its commit point until all its REDO-type log entries are recorded in the log and the log buffer is force-written to disk.

Notice that step 2 of this protocol is a restatement of the write-ahead logging (WAL) protocol. Because the database is never updated on disk until after the transaction commits, there is never a need to UNDO any operations. REDO is needed in case the system fails after a transaction commits but before all its changes are recorded in the database on disk. In this case, the transaction operations are redone from the log entries during recovery.

For multiuser systems with concurrency control, the concurrency control and recovery processes are interrelated. Consider a system in which concurrency control uses strict two-phase locking, so the locks on items remain in effect until the transaction reaches its commit point; after that, the locks can be released. This ensures strict and serializable schedules. Assuming that [checkpoint] entries are included in the log, a possible recovery algorithm for this case is one we call RDU_M (Recovery using Deferred Update in a Multiuser environment): the write operations of committed transactions are redone from the log, while transactions that did not commit are simply ignored.
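
The following C sketch (an illustration under simplifying assumptions, not the textbook's full RDU_M algorithm) shows the core idea of NO-UNDO/REDO recovery with deferred update: scan the log, and re-apply the after-images of write entries only for transactions that have a commit entry in the log.

#include <stdio.h>

#define MAX_TXNS 100

enum entry_type { LOG_WRITE, LOG_COMMIT };

struct log_entry {
    enum entry_type type;
    int  txn;           /* transaction id                    */
    char item[16];      /* database item written (for WRITE) */
    int  new_value;     /* after image (for WRITE)           */
};

/* Hypothetical hook that would re-apply an after-image to the database on disk. */
static void apply_to_database(const char *item, int new_value) {
    printf("REDO: set %s = %d\n", item, new_value);
}

/* Deferred-update (NO-UNDO/REDO) recovery: redo writes of committed transactions only. */
static void redo_recovery(const struct log_entry *log, int n) {
    int committed[MAX_TXNS] = {0};

    /* Pass 1: find which transactions reached their commit point. */
    for (int i = 0; i < n; i++)
        if (log[i].type == LOG_COMMIT)
            committed[log[i].txn] = 1;

    /* Pass 2: redo the writes of committed transactions in log order.               */
    /* Writes of uncommitted transactions never reached the disk, so they are ignored. */
    for (int i = 0; i < n; i++)
        if (log[i].type == LOG_WRITE && committed[log[i].txn])
            apply_to_database(log[i].item, log[i].new_value);
}

int main(void) {
    struct log_entry log[] = {
        { LOG_WRITE,  1, "X", 50 },
        { LOG_WRITE,  2, "Y", 20 },
        { LOG_COMMIT, 1, "",  0  },
        { LOG_WRITE,  2, "Z", 70 },   /* T2 never commits: its writes are ignored */
    };
    redo_recovery(log, (int)(sizeof log / sizeof log[0]));
    return 0;
}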

Q-2: Recovery Techniques Based on Immediate Update.
Ans: In these techniques, when a transaction issues an update command, the database on disk can be updated immediately, without any need to wait for the transaction to reach its commit point. Notice that it is not a requirement that every update be applied immediately to disk; it is just possible that some updates are applied to disk before the transaction commits. Provisions must be made for undoing the effect of update operations that have been applied to the database by a failed transaction. This is accomplished by rolling back the transaction and undoing the effect of the transaction's write_item operations. Therefore, the UNDO-type log entries, which include the old value (BFIM) of the item, must be stored in the log. If the transaction is allowed to commit before all its changes are written to the database, we have the most general case, known as the UNDO/REDO recovery algorithm.

Q-3: Recovery in Distributed Databases.
Ans: For concurrency control and recovery purposes, numerous problems arise in a distributed DBMS environment that are not encountered in a centralized DBMS environment. These include the following:

Dealing with multiple copies of the data items. The concurrency control method is responsible for maintaining consistency among these copies. The recovery method is responsible for making a copy consistent with other copies if the site on which the copy is stored fails and recovers later.

Failure of individual sites. The DDBMS should continue to operate with its running sites, if possible, when one or more individual sites fail. When a site recovers, its local database must be brought up to date with the rest of the sites before it rejoins the system.

Failure of communication links. The system must be able to deal with the failure of one or more of the communication links that connect the sites. An extreme case of this problem is that network partitioning may occur. This breaks up the sites into two or more partitions, where the sites within each partition can communicate only with one another and not with sites in other partitions.

Distributed commit. Problems can arise with committing a transaction that is accessing databases stored on multiple sites if some sites fail during the commit process. The two-phase commit protocol (see Section 23.6) is often used to deal with this problem; a simplified sketch of it follows this list.

Distributed deadlock. Deadlock may occur among several sites, so techniques for dealing with deadlocks must be extended to take this into account.
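
The following self-contained C sketch simulates the two-phase commit idea mentioned above under strong simplifying assumptions (the participant sites are just local functions, and no failures, timeouts, or forced log writes are modeled): the coordinator first collects votes in a prepare phase, and only if every participant votes YES does it send commit in the second phase.

#include <stdio.h>
#include <stdbool.h>

#define NUM_SITES 3

/* Phase 1 at a participant site: the site decides whether it can guarantee a commit. */
/* Here the vote is simulated; a real site would vote NO after a local failure.       */
static bool prepare(int site) {
    bool vote_yes = true;   /* set to false to simulate a site that cannot commit */
    printf("site %d votes %s\n", site, vote_yes ? "YES" : "NO");
    return vote_yes;
}

static void commit_at(int site) { printf("site %d: COMMIT\n", site); }
static void abort_at(int site)  { printf("site %d: ABORT\n", site);  }

int main(void) {
    /* Phase 1: the coordinator asks every site to prepare and collects the votes. */
    bool all_yes = true;
    for (int s = 0; s < NUM_SITES; s++)
        if (!prepare(s))
            all_yes = false;

    /* Phase 2: commit everywhere only if ALL sites voted YES; otherwise abort everywhere. */
    for (int s = 0; s < NUM_SITES; s++) {
        if (all_yes) commit_at(s);
        else         abort_at(s);
    }
    return 0;
}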

1. Distributed Concurrency Control Based on a Distinguished Copy of a Data Item
To deal with replicated data items in a distributed database, a number of concurrency control methods have been proposed that extend the concurrency control techniques for centralized databases. A number of different methods are based on the idea of a distinguished copy, but they differ in their method of choosing the distinguished copies. In the primary site technique, all distinguished copies are kept at the same site. A modification of this approach is the primary site with a backup site. Another approach is the primary copy method, where the distinguished copies of the various data items can be stored at different sites.

Primary Site Technique. In this method a single primary site is designated to be the coordinator site for all database items. Hence, all locks are kept at that site, and all requests for locking or unlocking are sent there. This method is thus an extension of the centralized locking approach. The advantage of this approach is that it is a simple extension of the centralized approach and thus is not overly complex. However, it has certain inherent disadvantages. One is that all locking requests are sent to a single site, possibly overloading that site and causing a system bottleneck.

A second disadvantage is that failure of the primary site paralyzes the system, since all locking information is kept at that site. This can limit system reliability and availability.

Primary Site with Backup Site. This approach addresses the second disadvantage of the primary site method by designating a second site to be a backup site. All locking information is maintained at both the primary and the backup sites. In case of primary site failure, the backup site takes over as the primary site, and a new backup site is chosen. This simplifies the process of recovery from failure of the primary site, since the backup site takes over and processing can resume after a new backup site is chosen and the lock status information is copied to that site.

Primary Copy Technique. This method attempts to distribute the load of lock coordination among various sites by having the distinguished copies of different data items stored at different sites. Failure of one site affects any transactions that are accessing locks on items whose primary copies reside at that site, but other transactions are not affected. This method can also use backup sites to enhance reliability.

2. Distributed Recovery
The recovery process in distributed databases is quite involved; we give only a very brief idea of some of the issues here. In some cases it is quite difficult even to determine whether a site is down without exchanging numerous messages with other sites. For example, suppose that site X sends a message to site Y and expects a response from Y but does not receive it.

There are several possible explanations:
The message was not delivered to Y because of communication failure.
Site Y is down and could not respond.
Site Y is running and sent a response, but the response was not delivered.
Without additional information or the sending of additional messages, it is difficult to determine what actually happened.

Q-4: Types of Single-Level Ordered Indexes.
Ans: There are several types of ordered indexes.

1. Primary Indexes
A primary index is an ordered file whose records are of fixed length with two fields, and it acts as an access structure to efficiently search for and access the data records in a data file. The first field is of the same data type as the ordering key field, called the primary key of the data file, and the second field is a pointer to a disk block (a block address). The total number of entries in the index is the same as the number of disk blocks in the ordered data file. The first record in each block of the data file is called the anchor record of the block, or simply the block anchor.

Indexes can also be characterized as dense or sparse. A dense index has an index entry for every search key value (and hence every record) in the data file. A sparse (or nondense) index, on the other hand, has index entries for only some of the search values. A sparse index has fewer entries than the number of records in the file. Thus, a primary index is a nondense (sparse) index, since it includes an entry for each disk block of the data file, keyed on its anchor record, rather than for every search value (or every record).

The index file for a primary index occupies a much smaller space than does the data file, for two reasons. First, there are fewer index entries than there are records in the data file. Second, each index entry is typically smaller in size than a data record because it has only two fields; consequently, more index entries than data records can fit in one block. A major problem with a primary index, as with any ordered file, is insertion and deletion of records. With a primary index, the problem is compounded because if we attempt to insert a record in its correct position in the data file, we must not only move records to make space for the new record but also change some index entries, since moving records will change the anchor records of some blocks. A small sketch of how a primary index is searched is given below.
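
A minimal C sketch (an illustration with assumed data, not from the notes) of searching a sparse primary index: binary-search the index entries on the block-anchor key, pick the block whose anchor is the largest key not exceeding the search key, and then only that one block needs to be read and scanned for the record.

#include <stdio.h>

/* One primary index entry: <anchor key value, block address>. */
struct index_entry {
    int  anchor_key;   /* primary key of the first record (anchor) in the block */
    long block_addr;   /* disk block address (here just a number)               */
};

/* Binary search over the sparse index: return the address of the block that   */
/* must contain the record with the given key, if that record exists at all.   */
static long find_block(const struct index_entry *idx, int n, int key) {
    int lo = 0, hi = n - 1, ans = -1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (idx[mid].anchor_key <= key) { ans = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    return (ans >= 0) ? idx[ans].block_addr : -1;   /* -1: key precedes the first anchor */
}

int main(void) {
    /* One entry per data block, ordered by anchor key (assumed example values). */
    struct index_entry idx[] = { {1, 100}, {35, 101}, {72, 102}, {110, 103} };
    int  key = 80;
    long blk = find_block(idx, 4, key);
    printf("record with key %d, if present, is in block %ld\n", key, blk);
    /* Only that single block is then transferred to a buffer and searched for the record. */
    return 0;
}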

2. Clustering Indexes
If file records are physically ordered on a nonkey field, which does not have a distinct value for each record, that field is called the clustering field and the data file is called a clustered file. We can create a different type of index, called a clustering index, to speed up retrieval of all the records that have the same value for the clustering field. A clustering index is also an ordered file with two fields; the first field is of the same type as the clustering field of the data file, and the second field is a disk block pointer. There is one entry in the clustering index for each distinct value of the clustering field, and it contains the value and a pointer to the first block in the data file that has a record with that value for its clustering field.

3. Secondary Indexes
A secondary index provides a secondary means of accessing a data file for which some primary access already exists. The secondary index may be created on a field that is a candidate key and has a unique value in every record, or on a nonkey field with duplicate values. The index is again an ordered file with two fields. The first field is of the same data type as some nonordering field of the data file that is an indexing field. The second field is either a block pointer or a record pointer. First we consider a secondary index access structure on a key (unique) field that has a distinct value for every record. Such a field is sometimes called a secondary key; in the relational model, this would correspond to any UNIQUE key attribute or to the primary key attribute of a table. A secondary index usually needs more storage space and a longer search time than does a primary index, because of its larger number of entries. However, the improvement in search time for an arbitrary record is much greater for a secondary index than for a primary index, since we would have to do a linear search on the data file if the secondary index did not exist.

Q-5: Multilevel Indexes.
Ans: A multilevel index considers the index file, which we will now refer to as the first (or base) level of a multilevel index, as an ordered file with a distinct value for each K(i). Therefore, by considering the first-level index file as a sorted data file, we can create a primary index for the first level; this index to the first level is called the second level of the multilevel index. We can repeat this process for the second level. The third level, which is a primary index for the second level, has an entry for each second-level block, so the number of third-level entries is r3 = ⌈r2/fo⌉, where fo is the fan-out (the blocking factor of the index). Notice that we require a second level only if the first level needs more than one block of disk storage, and, similarly, we require a third level only if the second level needs more than one block. The multilevel scheme described here can be used on any type of index, whether it is primary, clustering, or secondary.
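
The following C sketch (with assumed numbers, not from the notes) computes how many levels such a multilevel index needs: each level is built on top of the previous one, with r(i+1) = ceiling(r(i)/fo), until a level fits in a single block.

#include <stdio.h>

int main(void) {
    long r1 = 100000;   /* assumed number of first-level index entries                  */
    long fo = 68;       /* assumed fan-out: index entries per block (blocking factor)   */

    long entries = r1;
    int  levels  = 1;
    /* Keep building a primary index on top of the previous level until one block suffices. */
    while (entries > fo) {
        entries = (entries + fo - 1) / fo;   /* r(i+1) = ceiling(r(i) / fo) */
        levels++;
        printf("level %d has %ld entries\n", levels, entries);
    }
    printf("the multilevel index needs %d levels\n", levels);
    return 0;
}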

UNIT - 4

Q-1: Temporal Database Concepts.
Ans: Temporal databases, in the broadest sense, encompass all database applications that require some aspect of time when organizing their information.

1. Time Representation, Calendars, and Time Dimensions
For temporal databases, time is considered to be an ordered sequence of points in some granularity that is determined by the application. For example, suppose that some temporal application never requires time units that are less than one second. Then, each time point represents one second using this granularity. In reality, each second is


More information

Chapter 12 File Management. Roadmap

Chapter 12 File Management. Roadmap Operating Systems: Internals and Design Principles, 6/E William Stallings Chapter 12 File Management Dave Bremer Otago Polytechnic, N.Z. 2008, Prentice Hall Overview Roadmap File organisation and Access

More information

Database Security. Chapter 21

Database Security. Chapter 21 Database Security Chapter 21 Introduction to DB Security Secrecy: Users should not be able to see things they are not supposed to. E.g., A student can t see other students grades. Integrity: Users should

More information

Security and Authorization. Introduction to DB Security. Access Controls. Chapter 21

Security and Authorization. Introduction to DB Security. Access Controls. Chapter 21 Security and Authorization Chapter 21 Database Management Systems, 3ed, R. Ramakrishnan and J. Gehrke 1 Introduction to DB Security Secrecy: Users should not be able to see things they are not supposed

More information

INTRODUCTION TO DATABASE SYSTEMS

INTRODUCTION TO DATABASE SYSTEMS 1 INTRODUCTION TO DATABASE SYSTEMS Exercise 1.1 Why would you choose a database system instead of simply storing data in operating system files? When would it make sense not to use a database system? Answer

More information

Database and Data Mining Security

Database and Data Mining Security Database and Data Mining Security 1 Threats/Protections to the System 1. External procedures security clearance of personnel password protection controlling application programs Audit 2. Physical environment

More information

Chapter 14: Recovery System

Chapter 14: Recovery System Chapter 14: Recovery System Chapter 14: Recovery System Failure Classification Storage Structure Recovery and Atomicity Log-Based Recovery Remote Backup Systems Failure Classification Transaction failure

More information

Security Digital Certificate Manager

Security Digital Certificate Manager System i Security Digital Certificate Manager Version 5 Release 4 System i Security Digital Certificate Manager Version 5 Release 4 Note Before using this information and the product it supports, be sure

More information

Database Management Systems

Database Management Systems Database Management Systems UNIT -1 1.0 Introduction and brief history to Database 1.1 Characteristics of database 1.2 Difference between File System & DBMS. 1.3 Advantages of DBMS 1.4 Functions of DBMS

More information

Database Management. Chapter Objectives

Database Management. Chapter Objectives 3 Database Management Chapter Objectives When actually using a database, administrative processes maintaining data integrity and security, recovery from failures, etc. are required. A database management

More information

Record Storage and Primary File Organization

Record Storage and Primary File Organization Record Storage and Primary File Organization 1 C H A P T E R 4 Contents Introduction Secondary Storage Devices Buffering of Blocks Placing File Records on Disk Operations on Files Files of Unordered Records

More information

Objectives. Distributed Databases and Client/Server Architecture. Distributed Database. Data Fragmentation

Objectives. Distributed Databases and Client/Server Architecture. Distributed Database. Data Fragmentation Objectives Distributed Databases and Client/Server Architecture IT354 @ Peter Lo 2005 1 Understand the advantages and disadvantages of distributed databases Know the design issues involved in distributed

More information

ICOM 6005 Database Management Systems Design. Dr. Manuel Rodríguez Martínez Electrical and Computer Engineering Department Lecture 2 August 23, 2001

ICOM 6005 Database Management Systems Design. Dr. Manuel Rodríguez Martínez Electrical and Computer Engineering Department Lecture 2 August 23, 2001 ICOM 6005 Database Management Systems Design Dr. Manuel Rodríguez Martínez Electrical and Computer Engineering Department Lecture 2 August 23, 2001 Readings Read Chapter 1 of text book ICOM 6005 Dr. Manuel

More information

Overview. SSL Cryptography Overview CHAPTER 1

Overview. SSL Cryptography Overview CHAPTER 1 CHAPTER 1 Note The information in this chapter applies to both the ACE module and the ACE appliance unless otherwise noted. The features in this chapter apply to IPv4 and IPv6 unless otherwise noted. Secure

More information

Introduction to Database Systems. Chapter 1 Introduction. Chapter 1 Introduction

Introduction to Database Systems. Chapter 1 Introduction. Chapter 1 Introduction Introduction to Database Systems Winter term 2013/2014 Melanie Herschel melanie.herschel@lri.fr Université Paris Sud, LRI 1 Chapter 1 Introduction After completing this chapter, you should be able to:

More information

Security Digital Certificate Manager

Security Digital Certificate Manager IBM i Security Digital Certificate Manager 7.1 IBM i Security Digital Certificate Manager 7.1 Note Before using this information and the product it supports, be sure to read the information in Notices,

More information

Secure Network Communications FIPS 140 2 Non Proprietary Security Policy

Secure Network Communications FIPS 140 2 Non Proprietary Security Policy Secure Network Communications FIPS 140 2 Non Proprietary Security Policy 21 June 2010 Table of Contents Introduction Module Specification Ports and Interfaces Approved Algorithms Test Environment Roles

More information

Trusted RUBIX TM. Version 6. Multilevel Security in Trusted RUBIX White Paper. Revision 2 RELATIONAL DATABASE MANAGEMENT SYSTEM TEL +1-202-412-0152

Trusted RUBIX TM. Version 6. Multilevel Security in Trusted RUBIX White Paper. Revision 2 RELATIONAL DATABASE MANAGEMENT SYSTEM TEL +1-202-412-0152 Trusted RUBIX TM Version 6 Multilevel Security in Trusted RUBIX White Paper Revision 2 RELATIONAL DATABASE MANAGEMENT SYSTEM Infosystems Technology, Inc. 4 Professional Dr - Suite 118 Gaithersburg, MD

More information

File Management. Chapter 12

File Management. Chapter 12 Chapter 12 File Management File is the basic element of most of the applications, since the input to an application, as well as its output, is usually a file. They also typically outlive the execution

More information

Data Management Policies. Sage ERP Online

Data Management Policies. Sage ERP Online Sage ERP Online Sage ERP Online Table of Contents 1.0 Server Backup and Restore Policy... 3 1.1 Objectives... 3 1.2 Scope... 3 1.3 Responsibilities... 3 1.4 Policy... 4 1.5 Policy Violation... 5 1.6 Communication...

More information

Database Security. Sarajane Marques Peres, Ph.D. University of São Paulo www.each.usp.br/sarajane

Database Security. Sarajane Marques Peres, Ph.D. University of São Paulo www.each.usp.br/sarajane Database Security Sarajane Marques Peres, Ph.D. University of São Paulo www.each.usp.br/sarajane Based on Elsmari x Navathe / Silberschatz, Korth, Sudarshan s books Types of security Legal and ethical

More information

INFO/CS 330: Applied Database Systems

INFO/CS 330: Applied Database Systems INFO/CS 330: Applied Database Systems Introduction to Database Security Johannes Gehrke johannes@cs.cornell.edu http://www.cs.cornell.edu/johannes Introduction to DB Security Secrecy:Users should not be

More information

Microsoft SQL Server for Oracle DBAs Course 40045; 4 Days, Instructor-led

Microsoft SQL Server for Oracle DBAs Course 40045; 4 Days, Instructor-led Microsoft SQL Server for Oracle DBAs Course 40045; 4 Days, Instructor-led Course Description This four-day instructor-led course provides students with the knowledge and skills to capitalize on their skills

More information

Chapter 1 File Organization 1.0 OBJECTIVES 1.1 INTRODUCTION 1.2 STORAGE DEVICES CHARACTERISTICS

Chapter 1 File Organization 1.0 OBJECTIVES 1.1 INTRODUCTION 1.2 STORAGE DEVICES CHARACTERISTICS Chapter 1 File Organization 1.0 Objectives 1.1 Introduction 1.2 Storage Devices Characteristics 1.3 File Organization 1.3.1 Sequential Files 1.3.2 Indexing and Methods of Indexing 1.3.3 Hash Files 1.4

More information

Chapter 12 Databases, Controls, and Security

Chapter 12 Databases, Controls, and Security Systems Analysis and Design in a Changing World, sixth edition 12-1 Chapter 12 Databases, Controls, and Security Table of Contents Chapter Overview Learning Objectives Notes on Opening Case and EOC Cases

More information

Secure cloud access system using JAR ABSTRACT:

Secure cloud access system using JAR ABSTRACT: Secure cloud access system using JAR ABSTRACT: Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that

More information

B.Com(Computers) II Year DATABASE MANAGEMENT SYSTEM UNIT- V

B.Com(Computers) II Year DATABASE MANAGEMENT SYSTEM UNIT- V B.Com(Computers) II Year DATABASE MANAGEMENT SYSTEM UNIT- V 1 1) What is Distributed Database? A) A database that is distributed among a network of geographically separated locations. A distributed database

More information

Optimizing Performance. Training Division New Delhi

Optimizing Performance. Training Division New Delhi Optimizing Performance Training Division New Delhi Performance tuning : Goals Minimize the response time for each query Maximize the throughput of the entire database server by minimizing network traffic,

More information

Chapter 10. Cloud Security Mechanisms

Chapter 10. Cloud Security Mechanisms Chapter 10. Cloud Security Mechanisms 10.1 Encryption 10.2 Hashing 10.3 Digital Signature 10.4 Public Key Infrastructure (PKI) 10.5 Identity and Access Management (IAM) 10.6 Single Sign-On (SSO) 10.7 Cloud-Based

More information

Storage and File Structure

Storage and File Structure Storage and File Structure Chapter 10: Storage and File Structure Overview of Physical Storage Media Magnetic Disks RAID Tertiary Storage Storage Access File Organization Organization of Records in Files

More information

HIPAA Security COMPLIANCE Checklist For Employers

HIPAA Security COMPLIANCE Checklist For Employers Compliance HIPAA Security COMPLIANCE Checklist For Employers All of the following steps must be completed by April 20, 2006 (April 14, 2005 for Large Health Plans) Broadly speaking, there are three major

More information

æ A collection of interrelated and persistent data èusually referred to as the database èdbèè.

æ A collection of interrelated and persistent data èusually referred to as the database èdbèè. CMPT-354-Han-95.3 Lecture Notes September 10, 1995 Chapter 1 Introduction 1.0 Database Management Systems 1. A database management system èdbmsè, or simply a database system èdbsè, consists of æ A collection

More information

Chapter 8 A secure virtual web database environment

Chapter 8 A secure virtual web database environment Chapter 8 Information security with special reference to database interconnectivity Page 146 8.1 Introduction The previous three chapters investigated current state-of-the-art database security services

More information

Basic Concepts of Database Systems

Basic Concepts of Database Systems CS2501 Topic 1: Basic Concepts 1.1 Basic Concepts of Database Systems Example Uses of Database Systems - account maintenance & access in banking - lending library systems - airline reservation systems

More information

Reference Guide for Security in Networks

Reference Guide for Security in Networks Reference Guide for Security in Networks This reference guide is provided to aid in understanding security concepts and their application in various network architectures. It should not be used as a template

More information

DATABASDESIGN FÖR INGENJÖRER - 1DL124

DATABASDESIGN FÖR INGENJÖRER - 1DL124 1 DATABASDESIGN FÖR INGENJÖRER - 1DL124 Sommar 2005 En introduktionskurs i databassystem http://user.it.uu.se/~udbl/dbt-sommar05/ alt. http://www.it.uu.se/edu/course/homepage/dbdesign/st05/ Kjell Orsborn

More information

National Identity Exchange Federation (NIEF) Trustmark Signing Certificate Policy. Version 1.1. February 2, 2016

National Identity Exchange Federation (NIEF) Trustmark Signing Certificate Policy. Version 1.1. February 2, 2016 National Identity Exchange Federation (NIEF) Trustmark Signing Certificate Policy Version 1.1 February 2, 2016 Copyright 2016, Georgia Tech Research Institute Table of Contents TABLE OF CONTENTS I 1 INTRODUCTION

More information

SecureDoc Disk Encryption Cryptographic Engine

SecureDoc Disk Encryption Cryptographic Engine SecureDoc Disk Encryption Cryptographic Engine FIPS 140-2 Non-Proprietary Security Policy Abstract: This document specifies Security Policy enforced by SecureDoc Cryptographic Engine compliant with the

More information

Content Teaching Academy at James Madison University

Content Teaching Academy at James Madison University Content Teaching Academy at James Madison University 1 2 The Battle Field: Computers, LANs & Internetworks 3 Definitions Computer Security - generic name for the collection of tools designed to protect

More information

DBMS Questions. 3.) For which two constraints are indexes created when the constraint is added?

DBMS Questions. 3.) For which two constraints are indexes created when the constraint is added? DBMS Questions 1.) Which type of file is part of the Oracle database? A.) B.) C.) D.) Control file Password file Parameter files Archived log files 2.) Which statements are use to UNLOCK the user? A.)

More information

low-level storage structures e.g. partitions underpinning the warehouse logical table structures

low-level storage structures e.g. partitions underpinning the warehouse logical table structures DATA WAREHOUSE PHYSICAL DESIGN The physical design of a data warehouse specifies the: low-level storage structures e.g. partitions underpinning the warehouse logical table structures low-level structures

More information

Oracle. Brief Course Content This course can be done in modular form as per the detail below. ORA-1 Oracle Database 10g: SQL 4 Weeks 4000/-

Oracle. Brief Course Content This course can be done in modular form as per the detail below. ORA-1 Oracle Database 10g: SQL 4 Weeks 4000/- Oracle Objective: Oracle has many advantages and features that makes it popular and thereby makes it as the world's largest enterprise software company. Oracle is used for almost all large application

More information

VICTORIA UNIVERSITY OF WELLINGTON Te Whare Wānanga o te Ūpoko o te Ika a Māui

VICTORIA UNIVERSITY OF WELLINGTON Te Whare Wānanga o te Ūpoko o te Ika a Māui VICTORIA UNIVERSITY OF WELLINGTON Te Whare Wānanga o te Ūpoko o te Ika a Māui School of Engineering and Computer Science Te Kura Mātai Pūkaha, Pūrorohiko PO Box 600 Wellington New Zealand Tel: +64 4 463

More information

HIPAA Privacy & Security White Paper

HIPAA Privacy & Security White Paper HIPAA Privacy & Security White Paper Sabrina Patel, JD +1.718.683.6577 sabrina@captureproof.com Compliance TABLE OF CONTENTS Overview 2 Security Frameworks & Standards 3 Key Security & Privacy Elements

More information

Database Security. The Need for Database Security

Database Security. The Need for Database Security Database Security Public domain NASA image L-1957-00989 of people working with an IBM type 704 electronic data processing machine. 1 The Need for Database Security Because databases play such an important

More information

OpenHRE Security Architecture. (DRAFT v0.5)

OpenHRE Security Architecture. (DRAFT v0.5) OpenHRE Security Architecture (DRAFT v0.5) Table of Contents Introduction -----------------------------------------------------------------------------------------------------------------------2 Assumptions----------------------------------------------------------------------------------------------------------------------2

More information

Chapter 16: Recovery System

Chapter 16: Recovery System Chapter 16: Recovery System Failure Classification Failure Classification Transaction failure : Logical errors: transaction cannot complete due to some internal error condition System errors: the database

More information

MS SQL Performance (Tuning) Best Practices:

MS SQL Performance (Tuning) Best Practices: MS SQL Performance (Tuning) Best Practices: 1. Don t share the SQL server hardware with other services If other workloads are running on the same server where SQL Server is running, memory and other hardware

More information

Introduction: Database management system

Introduction: Database management system Introduction Databases vs. files Basic concepts Brief history of databases Architectures & languages Introduction: Database management system User / Programmer Database System Application program Software

More information

Transaction Management Overview

Transaction Management Overview Transaction Management Overview Chapter 16 Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Transactions Concurrent execution of user programs is essential for good DBMS performance. Because

More information

Chapter 11 I/O Management and Disk Scheduling

Chapter 11 I/O Management and Disk Scheduling Operating Systems: Internals and Design Principles, 6/E William Stallings Chapter 11 I/O Management and Disk Scheduling Dave Bremer Otago Polytechnic, NZ 2008, Prentice Hall I/O Devices Roadmap Organization

More information

6. Storage and File Structures

6. Storage and File Structures ECS-165A WQ 11 110 6. Storage and File Structures Goals Understand the basic concepts underlying different storage media, buffer management, files structures, and organization of records in files. Contents

More information

Module 7 Security CS655! 7-1!

Module 7 Security CS655! 7-1! Module 7 Security CS655! 7-1! Issues Separation of! Security policies! Precise definition of which entities in the system can take what actions! Security mechanism! Means of enforcing that policy! Distributed

More information

What is a database? COSC 304 Introduction to Database Systems. Database Introduction. Example Problem. Databases in the Real-World

What is a database? COSC 304 Introduction to Database Systems. Database Introduction. Example Problem. Databases in the Real-World COSC 304 Introduction to Systems Introduction Dr. Ramon Lawrence University of British Columbia Okanagan ramon.lawrence@ubc.ca What is a database? A database is a collection of logically related data for

More information

Topics. Introduction to Database Management System. What Is a DBMS? DBMS Types

Topics. Introduction to Database Management System. What Is a DBMS? DBMS Types Introduction to Database Management System Linda Wu (CMPT 354 2004-2) Topics What is DBMS DBMS types Files system vs. DBMS Advantages of DBMS Data model Levels of abstraction Transaction management DBMS

More information

REMOTE BACKUP-WHY SO VITAL?

REMOTE BACKUP-WHY SO VITAL? REMOTE BACKUP-WHY SO VITAL? Any time your company s data or applications become unavailable due to system failure or other disaster, this can quickly translate into lost revenue for your business. Remote

More information

Operating Systems CSE 410, Spring 2004. File Management. Stephen Wagner Michigan State University

Operating Systems CSE 410, Spring 2004. File Management. Stephen Wagner Michigan State University Operating Systems CSE 410, Spring 2004 File Management Stephen Wagner Michigan State University File Management File management system has traditionally been considered part of the operating system. Applications

More information

Introduction. Introduction: Database management system. Introduction: DBS concepts & architecture. Introduction: DBS versus File system

Introduction. Introduction: Database management system. Introduction: DBS concepts & architecture. Introduction: DBS versus File system Introduction: management system Introduction s vs. files Basic concepts Brief history of databases Architectures & languages System User / Programmer Application program Software to process queries Software

More information

Chapter 6: Physical Database Design and Performance. Database Development Process. Physical Design Process. Physical Database Design

Chapter 6: Physical Database Design and Performance. Database Development Process. Physical Design Process. Physical Database Design Chapter 6: Physical Database Design and Performance Modern Database Management 6 th Edition Jeffrey A. Hoffer, Mary B. Prescott, Fred R. McFadden Robert C. Nickerson ISYS 464 Spring 2003 Topic 23 Database

More information

Restore and Recovery Tasks. Copyright 2009, Oracle. All rights reserved.

Restore and Recovery Tasks. Copyright 2009, Oracle. All rights reserved. Restore and Recovery Tasks Objectives After completing this lesson, you should be able to: Describe the causes of file loss and determine the appropriate action Describe major recovery operations Back

More information

Big Data Technology Map-Reduce Motivation: Indexing in Search Engines

Big Data Technology Map-Reduce Motivation: Indexing in Search Engines Big Data Technology Map-Reduce Motivation: Indexing in Search Engines Edward Bortnikov & Ronny Lempel Yahoo Labs, Haifa Indexing in Search Engines Information Retrieval s two main stages: Indexing process

More information

HIPAA Security Alert

HIPAA Security Alert Shipman & Goodwin LLP HIPAA Security Alert July 2008 EXECUTIVE GUIDANCE HIPAA SECURITY COMPLIANCE How would your organization s senior management respond to CMS or OIG inquiries about health information

More information

MOC 20462C: Administering Microsoft SQL Server Databases

MOC 20462C: Administering Microsoft SQL Server Databases MOC 20462C: Administering Microsoft SQL Server Databases Course Overview This course provides students with the knowledge and skills to administer Microsoft SQL Server databases. Course Introduction Course

More information

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching 1 Project Proposal Data Storage / Retrieval with Access Control, Security and Pre- Presented By: Shashank Newadkar Aditya Dev Sarvesh Sharma Advisor: Prof. Ming-Hwa Wang COEN 241 - Cloud Computing Page

More information

Key Management Interoperability Protocol (KMIP)

Key Management Interoperability Protocol (KMIP) (KMIP) Addressing the Need for Standardization in Enterprise Key Management Version 1.0, May 20, 2009 Copyright 2009 by the Organization for the Advancement of Structured Information Standards (OASIS).

More information

CLOUD COMPUTING SECURITY ARCHITECTURE - IMPLEMENTING DES ALGORITHM IN CLOUD FOR DATA SECURITY

CLOUD COMPUTING SECURITY ARCHITECTURE - IMPLEMENTING DES ALGORITHM IN CLOUD FOR DATA SECURITY CLOUD COMPUTING SECURITY ARCHITECTURE - IMPLEMENTING DES ALGORITHM IN CLOUD FOR DATA SECURITY Varun Gandhi 1 Department of Computer Science and Engineering, Dronacharya College of Engineering, Khentawas,

More information

Microsoft SQL Database Administrator Certification

Microsoft SQL Database Administrator Certification Microsoft SQL Database Administrator Certification Training for Exam 70-432 Course Modules and Objectives www.sqlsteps.com 2009 ViSteps Pty Ltd, SQLSteps Division 2 Table of Contents Module #1 Prerequisites

More information

Data Replication in Privileged Credential Vaults

Data Replication in Privileged Credential Vaults Data Replication in Privileged Credential Vaults 2015 Hitachi ID Systems, Inc. All rights reserved. Contents 1 Background: Securing Privileged Accounts 2 2 The Business Challenge 3 3 Solution Approaches

More information

The Classical Architecture. Storage 1 / 36

The Classical Architecture. Storage 1 / 36 1 / 36 The Problem Application Data? Filesystem Logical Drive Physical Drive 2 / 36 Requirements There are different classes of requirements: Data Independence application is shielded from physical storage

More information

Savitribai Phule Pune University

Savitribai Phule Pune University Savitribai Phule Pune University Centre for Information and Network Security Course: Introduction to Cyber Security / Information Security Module : Pre-requisites in Information and Network Security Chapter

More information

Introduction to Database Systems. Module 1, Lecture 1. Instructor: Raghu Ramakrishnan raghu@cs.wisc.edu UW-Madison

Introduction to Database Systems. Module 1, Lecture 1. Instructor: Raghu Ramakrishnan raghu@cs.wisc.edu UW-Madison Introduction to Database Systems Module 1, Lecture 1 Instructor: Raghu Ramakrishnan raghu@cs.wisc.edu UW-Madison Database Management Systems, R. Ramakrishnan 1 What Is a DBMS? A very large, integrated

More information