Combined units of BCA-II & BCA-III Presented by: Aakanksha Jain
CONTENT 1. Distributed Database Design 2. Architecture of Distributed Database Processing System 3. Data Communication Concepts 4. Concurrency Control and Recovery
Distributed database design A distributed database is a set of multiple, logically related database systems, physically distributed over several sites and connected by a computer network, usually under centralized site control. Distributed database design refers to the following problem: given a database and its workload, how should the database be split and allocated to sites so as to optimize a certain objective function? There are two issues: (i) Data fragmentation, which determines how the data should be fragmented. (ii) Data allocation, which determines how the fragments should be allocated to sites.
Architecture of Distributed Processing system Distributed Processing architectures are generally developed depending on three parameters − Distribution − It states the physical distribution of data across the different sites. Autonomy − It indicates the distribution of control of the database system and the degree to which each constituent DBMS can operate independently. Heterogeneity − It refers to the uniformity or dissimilarity of the data models, system components and databases.
Architectural Models
Client - Server Architecture for DDBMS This is a two-level architecture where the functionality is divided into servers and clients. The server functions primarily encompass data management, query processing, optimization and transaction management. Client functions mainly include the user interface, though they also have some functions like consistency checking and transaction management. The two different client - server architectures are − 1. Single Server, Multiple Clients 2. Multiple Servers, Multiple Clients
Peer-to-Peer Architecture for DDBMS In these systems, each peer acts both as a client and a server for imparting database services. The peers share their resources with other peers and coordinate their activities. This architecture generally has four levels of schemas − Global Conceptual Schema − Depicts the global logical view of data. Local Conceptual Schema − Depicts logical data organization at each site. Local Internal Schema − Depicts physical data organization at each site. External Schema − Depicts the user view of data.
Multi - DBMS Architectures This is an integrated database system formed by a collection of two or more autonomous database systems. A multi-DBMS can be expressed through six levels of schemas − 1. Multi-database View Level − Depicts multiple user views, each comprising a subset of the integrated distributed database. 2. Multi-database Conceptual Level − Depicts the integrated multi-database, comprising global logical multi-database structure definitions. 3. Multi-database Internal Level − Depicts the data distribution across different sites and the multi-database to local data mapping. 4. Local database View Level − Depicts the public view of local data. 5. Local database Conceptual Level − Depicts local data organization at each site. 6. Local database Internal Level − Depicts physical data organization at each site.
Design Alternatives The distribution design alternatives for the tables in a DDBMS are as follows − • Non-replicated and non-fragmented • Fully replicated • Partially replicated • Fragmented • Mixed
Non-replicated & Non-fragmented In this design alternative, different tables are placed at different sites. Data is placed in close proximity to the site where it is used most. It is most suitable for database systems where the percentage of queries needing to join information in tables placed at different sites is low. If an appropriate distribution strategy is adopted, this design alternative helps to reduce the communication cost during data processing.
Fully Replicated In this design alternative, one copy of all the database tables is stored at each site. Since each site has its own copy of the entire database, queries are very fast, requiring negligible communication cost. On the contrary, the massive redundancy in data incurs a huge cost during update operations. Hence, this is suitable for systems where a large number of queries must be handled while the number of database updates is low.
Partially Replicated Copies of tables or portions of tables are stored at different sites. The distribution of the tables is done in accordance with the frequency of access. This takes into consideration the fact that the frequency of accessing the tables varies considerably from site to site. The number of copies of the tables (or portions) depends on how frequently the access queries execute and the sites which generate those queries.
Fragmented In this design, a table is divided into two or more pieces referred to as fragments or partitions, and each fragment can be stored at different sites. This considers the fact that it seldom happens that all data stored in a table is required at a given site. Moreover, fragmentation increases parallelism and provides better disaster recovery. Here, there is only one copy of each fragment in the system, i.e. no redundant data. The three fragmentation techniques are − • Vertical fragmentation • Horizontal fragmentation • Hybrid fragmentation
Mixed Distribution This is a combination of fragmentation and partial replication. Here, the tables are initially fragmented in any form (horizontal or vertical), and then these fragments are partially replicated across the different sites according to the frequency of accessing the fragments.
Fragmentation Fragmentation is the task of dividing a table into a set of smaller tables. The subsets of the table are called fragments. Fragmentation can be of three types: horizontal, vertical, and hybrid (a combination of horizontal and vertical). Horizontal fragmentation can further be classified into two techniques: primary horizontal fragmentation and derived horizontal fragmentation. Fragmentation should be done in such a way that the original table can be reconstructed from the fragments whenever required. This requirement is called “constructiveness.”
Advantages of Fragmentation • Since data is stored close to the site of usage, efficiency of the database system is increased. • Local query optimization techniques are sufficient for most queries since data is locally available. • Since irrelevant data is not available at the sites, security and privacy of the database system can be maintained. Disadvantages of Fragmentation • When data from different fragments are required, the access times may be very high. • In case of recursive fragmentation, the job of reconstruction will need expensive techniques. • Lack of back-up copies of data at different sites may render the database ineffective in case of failure of a site.
Vertical Fragmentation In vertical fragmentation, the fields or columns of a table are grouped into fragments. In order to maintain constructiveness, each fragment should contain the primary key field(s) of the table. Vertical fragmentation can be used to enforce privacy of data. For example, let us consider that a University database keeps records of all registered students in a Student table having the following schema. Regd_No Name Course Address Semester Fees Marks Now, the fees details are maintained in the accounts section. In this case, the designer will fragment the database as follows −
Vertical Fragmentation CREATE TABLE STD_FEES AS SELECT Regd_No, Fees FROM STUDENT; Reconstruction of vertical fragmentation is performed by using Full Outer Join operation on fragments.
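As a quick check of the reconstruction claim, here is a minimal sketch using Python's sqlite3 module. The table follows the Student schema above, but the rows and the STD_INFO fragment name are invented for illustration. Because every vertical fragment keeps the Regd_No key, joining the fragments on that key rebuilds the original rows:

```python
import sqlite3

# In-memory database with a cut-down Student schema (values invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STUDENT "
            "(Regd_No INTEGER PRIMARY KEY, Name TEXT, Course TEXT, Fees INTEGER)")
con.executemany("INSERT INTO STUDENT VALUES (?, ?, ?, ?)",
                [(1, "Asha", "BCA", 5000), (2, "Ravi", "BCA", 6000)])

# Vertical fragments: each keeps the primary key (constructiveness).
con.execute("CREATE TABLE STD_FEES AS SELECT Regd_No, Fees FROM STUDENT")
con.execute("CREATE TABLE STD_INFO AS SELECT Regd_No, Name, Course FROM STUDENT")

# Reconstruction: join the fragments on the shared primary key.
rows = con.execute(
    "SELECT i.Regd_No, i.Name, i.Course, f.Fees "
    "FROM STD_INFO i JOIN STD_FEES f ON i.Regd_No = f.Regd_No "
    "ORDER BY i.Regd_No").fetchall()
print(rows)  # the original STUDENT tuples
```

A full outer join is only needed when fragments might not cover the same key set; here a plain join suffices since both fragments were cut from the same rows.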
Horizontal Fragmentation Horizontal fragmentation groups the tuples of a table in accordance with the values of one or more fields. Horizontal fragmentation should also conform to the rule of constructiveness. Each horizontal fragment must have all columns of the original base table. For example, in the student schema, if the details of all students of the Computer Science course need to be maintained at the School of Computer Science, then the designer will horizontally fragment the database as follows − CREATE TABLE COMP_STD AS SELECT * FROM STUDENT WHERE COURSE = "Computer Science"; Reconstruction of horizontal fragmentation can be performed using the UNION operation on fragments.
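The horizontal case can be sketched the same way, again with sqlite3 and invented rows; the OTHER_STD fragment name is an assumption. Each fragment keeps all columns, so a UNION of the fragments rebuilds the table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE STUDENT (Regd_No INTEGER PRIMARY KEY, Name TEXT, Course TEXT)")
con.executemany("INSERT INTO STUDENT VALUES (?, ?, ?)",
                [(1, "Asha", "Computer Science"), (2, "Ravi", "Commerce")])

# Horizontal fragments: rows split by a predicate, all columns retained.
con.execute("CREATE TABLE COMP_STD AS "
            "SELECT * FROM STUDENT WHERE Course = 'Computer Science'")
con.execute("CREATE TABLE OTHER_STD AS "
            "SELECT * FROM STUDENT WHERE Course <> 'Computer Science'")

# Reconstruction: UNION of the fragments.
rows = con.execute("SELECT * FROM COMP_STD UNION SELECT * FROM OTHER_STD "
                   "ORDER BY Regd_No").fetchall()
print(rows)
```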
Hybrid Fragmentation In hybrid fragmentation, a combination of horizontal and vertical fragmentation techniques are used. This is the most flexible fragmentation technique since it generates fragments with minimal extraneous information. However, reconstruction of the original table is often an expensive task. Hybrid fragmentation can be done in two alternative ways − • At first, generate a set of horizontal fragments; then generate vertical fragments from one or more of the horizontal fragments. • At first, generate a set of vertical fragments; then generate horizontal fragments from one or more of the vertical fragments.
Distribution Transparency Distribution transparency is the property of distributed databases by virtue of which the internal details of the distribution are hidden from the users. The DDBMS designer may choose to fragment tables, replicate the fragments and store them at different sites. However, since users are oblivious of these details, they find the distributed database as easy to use as any centralized database. The three dimensions of distribution transparency are − • Location transparency • Fragmentation transparency • Replication transparency
Hybrid Fragmentation • Hybrid fragmentation can be achieved by performing horizontal and vertical partitioning together. • Mixed fragmentation is a grouping of rows and columns of a relation. Example: Consider the following Employee table.

Emp_ID | Emp_Name | Emp_Address | Emp_Age | Emp_Salary
101    | Surendra | Baroda      | 25      | 15000
102    | Jaya     | Pune        | 37      | 12000
103    | Jayesh   | Pune        | 47      | 10000
Hybrid Fragmentation Fragment 1: SELECT * FROM Employee WHERE Emp_Age < 40 Fragment 2: SELECT * FROM Employee WHERE Emp_Address = 'Pune' AND Emp_Salary < 14000 Reconstruction of hybrid fragmentation: the original relation is reconstructed by performing UNION and FULL OUTER JOIN operations on the fragments.
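Using the Employee rows above, the two-step hybrid split can be traced with a sqlite3 sketch (fragment names such as Young_Info are invented): a horizontal split on Emp_Age first, then a vertical split of the young fragment, with JOIN plus UNION for reconstruction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (Emp_ID INTEGER PRIMARY KEY, Emp_Name TEXT, "
            "Emp_Address TEXT, Emp_Age INTEGER, Emp_Salary INTEGER)")
con.executemany("INSERT INTO Employee VALUES (?, ?, ?, ?, ?)",
                [(101, "Surendra", "Baroda", 25, 15000),
                 (102, "Jaya", "Pune", 37, 12000),
                 (103, "Jayesh", "Pune", 47, 10000)])

# Step 1: horizontal fragments on Emp_Age.
con.execute("CREATE TABLE Young_Emp AS SELECT * FROM Employee WHERE Emp_Age < 40")
con.execute("CREATE TABLE Senior_Emp AS SELECT * FROM Employee WHERE Emp_Age >= 40")
# Step 2: vertical split of the young fragment (both pieces keep Emp_ID).
con.execute("CREATE TABLE Young_Pay AS SELECT Emp_ID, Emp_Salary FROM Young_Emp")
con.execute("CREATE TABLE Young_Info AS "
            "SELECT Emp_ID, Emp_Name, Emp_Address, Emp_Age FROM Young_Emp")

# Reconstruction: JOIN the vertical pieces, then UNION with the other fragment.
rows = con.execute(
    "SELECT i.Emp_ID, i.Emp_Name, i.Emp_Address, i.Emp_Age, p.Emp_Salary "
    "FROM Young_Info i JOIN Young_Pay p ON i.Emp_ID = p.Emp_ID "
    "UNION SELECT * FROM Senior_Emp ORDER BY Emp_ID").fetchall()
print(rows)  # all three Employee rows, rebuilt from four fragments
```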
Data communication concepts Data communication refers to the exchange of data between a source and a receiver via some form of transmission medium, such as a wire cable. Data communication is said to be local if the communicating devices are in the same building or a similarly restricted geographical area. A data communication system may collect data from remote locations through data transmission circuits, and then output processed results to remote locations. The different data communication techniques presently in widespread use evolved gradually, either to improve the techniques already existing or to replace them with better options and features.
Components of data communication system A communication system has the following components: 1. Message: the information or data to be communicated. It can consist of text, numbers, pictures, sound, video or any combination of these. 2. Sender: the device/computer that generates and sends the message. 3. Receiver: the device or computer that receives the message. The location of the receiver computer is generally different from that of the sender computer; the distance between sender and receiver depends upon the type of network used between them. 4. Medium: the channel or physical path through which the message is carried from sender to receiver. The medium can be wired, like twisted pair wire, coaxial cable or fiber-optic cable, or wireless, like laser, radio waves, and microwaves.
Concurrency Control and Recovery Concurrency control (CC) is a process to ensure that data is updated correctly and appropriately when multiple transactions are concurrently executed in DBMS (Connolly & Begg, 2015). Distributed Databases encounter a number of concurrency control and recovery problems which are not present in centralized databases. Some of them are listed below: • Dealing with multiple copies of data items • Failure of individual sites • Communication link failure • Distributed commit • Distributed deadlock
Concurrency Control 1. Dealing with multiple copies of data items: The concurrency control mechanism must maintain global consistency. Likewise, the recovery mechanism must recover all copies and maintain consistency after recovery. 2. Failure of individual sites: Database availability must not be affected by the failure of one or two sites, and the recovery scheme must recover them before they are available for use. 3. Communication link failure: This failure may create a network partition which would affect database availability even though all database sites may be running. 4. Distributed commit: A transaction may be fragmented and its parts executed by a number of sites. This requires a two- or three-phase commit approach for transaction commit.
Concurrency Control 5. Distributed deadlock: Since transactions are processed at multiple sites, two or more sites may get involved in a deadlock. This must be resolved in a distributed manner. Concurrency control protocols can be broadly divided into two categories − • Lock-based protocols • Timestamp-based protocols
Concurrency Control Protocol 1. Lock-based Protocols Database systems equipped with lock-based protocols use a mechanism by which any transaction cannot read or write data until it acquires an appropriate lock on it. Locks are of two kinds − • Binary Locks − A lock on a data item can be in two states; it is either locked or unlocked. • Shared/Exclusive − This type of locking mechanism differentiates the locks based on their uses. If a lock is acquired on a data item to perform a write operation, it is an exclusive lock. Allowing more than one transaction to write on the same data item would lead the database into an inconsistent state. Read locks are shared because no data value is being changed.
Continue.. 1. Binary Locks: A lock is a mechanism that ensures that the integrity of data is maintained. A binary lock can have two states or values: locked and unlocked (or 1 and 0, for simplicity). A distinct lock is associated with each database item X. If the value of the lock on X is 1, item X cannot be accessed by a database operation that requests the item. If the value of the lock on X is 0, the item can be accessed when requested. We refer to the current value (or state) of the lock associated with item X as LOCK(X). There are two operations in binary locking: (i) lock_item(X) (ii) unlock_item(X)
Continue.. 1. lock_item(X): A transaction requests access to an item X by first issuing a lock_item(X) operation. If LOCK(X) = 1, the transaction is forced to wait. If LOCK(X) = 0, it is set to 1 (the transaction locks the item) and the transaction is allowed to access item X. 2. unlock_item(X): When the transaction has finished using the item, it issues an unlock_item(X) operation, which sets LOCK(X) to 0 (unlocks the item) so that X may be accessed by other transactions. Hence, a binary lock enforces mutual exclusion on the data item; i.e., at a time only one transaction can hold the lock.
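The two operations above can be sketched in Python. This is an illustrative single-process model; the class name and the use of a condition variable for the wait are implementation choices, not part of the protocol description:

```python
import threading

class BinaryLockManager:
    """Illustrative binary locks: LOCK(X) is 0 (free) or 1 (held)."""
    def __init__(self):
        self._lock = {}                       # item -> 0 or 1
        self._cv = threading.Condition()

    def lock_item(self, x):
        with self._cv:
            while self._lock.get(x, 0) == 1:  # LOCK(X) = 1 -> wait
                self._cv.wait()
            self._lock[x] = 1                 # LOCK(X) := 1, item acquired

    def unlock_item(self, x):
        with self._cv:
            self._lock[x] = 0                 # LOCK(X) := 0
            self._cv.notify_all()             # wake any waiting transactions

mgr = BinaryLockManager()
mgr.lock_item("X")    # transaction T1 locks X; other lock_item("X") calls would block
mgr.unlock_item("X")  # T1 is done; X may now be locked by another transaction
```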
Continue.. 2. Shared/Exclusive Locking: Shared lock: A shared lock is placed when we are reading the data. Multiple shared locks can be placed on a data item simultaneously, but while any shared lock is held, no exclusive lock can be placed. These locks are referred to as read locks and denoted by 'S'. If a transaction T has obtained a shared lock on data item X, then T can read X but cannot write X. For example, when two transactions are reading Steve's account balance, let them read by placing shared locks; but if at the same time another transaction wants to update Steve's account balance by placing an exclusive lock, do not allow it until the reading is finished.
Continue.. Exclusive lock: An exclusive lock is placed when we want to both read and write the data. Once this lock is placed on a data item, no other lock (shared or exclusive) can be placed on it until the exclusive lock is released. For example, when a transaction wants to update Steve's account balance, let it do so by placing an X lock on it; but if a second transaction wants to read the data (S lock), don't allow it, and if another transaction wants to write the data (X lock), don't allow that either. These locks are referred to as write locks and denoted by 'X'. If a transaction T has obtained an exclusive lock on data item X, then T can both read and write X. Only one exclusive lock can be placed on a data item at a time; this means multiple transactions cannot modify the same data simultaneously.
Continue.. Lock Compatibility Matrix

       |   S   |   X
  -----+-------+-------
    S  | True  | False
    X  | False | False

How to read this matrix: In the first row, when an S lock is held, another S lock can be acquired (True), but no exclusive lock can be acquired (False). In the second row, when an X lock is held, neither an S nor an X lock can be acquired, so both entries are False.
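The matrix can be turned into a tiny grant check, a sketch of how a lock manager might consult it (the function and table names here are illustrative, not a real lock-manager API):

```python
# Compatibility matrix from the slide: (held, requested) -> grantable?
COMPAT = {("S", "S"): True, ("S", "X"): False,
          ("X", "S"): False, ("X", "X"): False}

def can_grant(held_modes, requested):
    """A request is granted only if it is compatible with every held lock."""
    return all(COMPAT[(held, requested)] for held in held_modes)

print(can_grant(["S", "S"], "S"))  # True  - readers share the item
print(can_grant(["S"], "X"))       # False - a writer waits for readers
print(can_grant(["X"], "S"))       # False - readers wait for the writer
print(can_grant([], "X"))          # True  - nothing held, grant anything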
TIME STAMP BASED PROTOCOL A timestamp is used to link a time with some event, in particular a transaction. To ensure serializability, we associate each transaction with a timestamp: in simple words, we order the transactions based on their time of arrival, and the protocol is deadlock-free. For each data item, two timestamps are maintained. Read timestamp − timestamp of the youngest transaction that has performed a read operation on the data item. Write timestamp − timestamp of the youngest transaction that has performed a write operation on the data item. Let transaction T's timestamp be denoted by TS(T), the read timestamp of data item X by R-timestamp(X), and the write timestamp of data item X by W-timestamp(X).
TIMESTAMP BASED PROTOCOL The protocol works as follows − • If a transaction issues a read operation: if TS(T) < W-timestamp(X), the read request is rejected; else the read is executed and R-timestamp(X) is updated. • If a transaction issues a write operation: if TS(T) < R-timestamp(X) or TS(T) < W-timestamp(X), the write request is rejected; else the write is executed and W-timestamp(X) is updated.
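The two rules above can be traced with a short sketch; the dict-based data item and the function names are illustrative choices, and "rejected" means the transaction would be rolled back and restarted with a new timestamp:

```python
# Sketch of the basic timestamp-ordering checks described above.
def ts_read(ts, item):
    if ts < item["W"]:                 # TS(T) < W-timestamp(X)
        return False                   # read rejected -> roll back T
    item["R"] = max(item["R"], ts)     # update R-timestamp(X)
    return True

def ts_write(ts, item):
    if ts < item["R"] or ts < item["W"]:
        return False                   # write rejected -> roll back T
    item["W"] = ts                     # update W-timestamp(X)
    return True

X = {"R": 0, "W": 0}
print(ts_write(5, X))  # True  - T with TS 5 writes X
print(ts_read(3, X))   # False - an older T (TS 3) cannot read the newer value
print(ts_read(7, X))   # True  - a newer T (TS 7) reads; R-timestamp becomes 7
```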
TIMESTAMP BASED PROTOCOL Thomas' Write Rule Under the basic protocol, if TS(Ti) < W-timestamp(X), the write operation is rejected and Ti is rolled back. Thomas' write rule modifies the timestamp-ordering rules to make the schedule view serializable: instead of rolling Ti back, the obsolete 'write' operation itself is simply ignored.
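A sketch of the modified write check under Thomas' write rule; the return labels 'reject'/'ignore'/'apply' are illustrative names, not standard terminology:

```python
# Thomas' write rule: an out-of-date write (TS(T) < W-timestamp(X))
# is silently ignored instead of rolling the transaction back.
def ts_write_thomas(ts, item):
    if ts < item["R"]:
        return "reject"    # a younger transaction already read X -> roll back
    if ts < item["W"]:
        return "ignore"    # obsolete write: skip it, transaction continues
    item["W"] = ts
    return "apply"

X = {"R": 0, "W": 10}
print(ts_write_thomas(5, X))   # 'ignore' - the basic protocol would reject here
print(ts_write_thomas(12, X))  # 'apply'  - W-timestamp(X) becomes 12
```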
Need of Recovery Media failure, e.g. disk-head crash: part of the persistent store is lost and needs to be restored. Transactions in progress may be using this area, so uncommitted transactions must be aborted. System failure, e.g. crash: main memory is lost. The persistent store is not lost but may have been changed by uncommitted transactions; also, committed transactions' effects may not yet have reached persistent objects. Transaction abort: need to undo any changes made by the aborted transaction.
Need of Recovery When a DBMS recovers from a crash, it should maintain the following − • It should check the states of all the transactions, which were being executed. • A transaction may be in the middle of some operation; the DBMS must ensure the atomicity of the transaction in this case. • It should check whether the transaction can be completed now or it needs to be rolled back. • No transactions would be allowed to leave the DBMS in an inconsistent state.
Recovery with Concurrent Transactions Checkpoint Keeping and maintaining logs in real time and in a real environment may fill up all the memory space available in the system. As time passes, the log file may grow too big to be handled at all. Checkpointing is a mechanism where all the previous logs are removed from the system and stored permanently on a storage disk. A checkpoint declares a point before which the DBMS was in a consistent state and all the transactions were committed. Recovery When a system with concurrent transactions crashes and recovers, it behaves in the following manner − • The recovery system reads the logs backwards from the end to the last checkpoint. • It maintains two lists, an undo-list and a redo-list.
Recovery with Concurrent Transactions • If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just <Tn, Commit>, it puts the transaction in the redo-list. • If the recovery system sees a log with <Tn, Start> but no commit or abort log, it puts the transaction in the undo-list. All the transactions in the undo-list are then undone and their logs are removed. All the transactions in the redo-list are redone and their logs are retained.
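The backward scan that builds the two lists can be sketched as follows; the tuple-based log format and function name are assumptions for illustration:

```python
def classify(log):
    """Scan the log backwards, splitting transactions into undo/redo lists."""
    undo, redo, committed = [], [], set()
    for txn, action in reversed(log):
        if action == "Commit":
            committed.add(txn)          # seen <Tn, Commit> on the way back
        elif action == "Start":
            # <Tn, Start> with a Commit -> redo; without one -> undo
            (redo if txn in committed else undo).append(txn)
    return undo, redo

# Crash happens after T1 committed but while T2 was still running.
log = [("T1", "Start"), ("T1", "Commit"), ("T2", "Start")]
undo, redo = classify(log)
print(undo, redo)  # ['T2'] ['T1']
```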
Reference • https://www.tutorialspoint.com • Philip Bernstein, Vassos Hadzilacos, Nathan Goodman, Concurrency Control and Recovery in Database Systems, First Edition. • Database System Concepts, Design and Application, Pearson Education Limited, 1995, 2005. • https://tutorialink.com • https://www.geeksforgeeks.org
Thank you

Distributed Database Management System

  • 1.
    Combined units ofBCA -II & BCA –III Presented by: Aakanksha jain
  • 2.
    CONTENT Distributed Database Design1 Architectureof Distributed database processing system2 Data Communication concept3 Concurrency control and recovery4
  • 3.
    Distributed database design DistributedDatabase Designs are nothing but multiple, logically related Database systems, physically distributed over several sites, using a Computer Network, which is usually under a centralized site control. Distributed database design refers to the following problem: Given a database and its workload, how should the database be split and allocated to sites so as to optimize certain objective function There are two issues: (i) Data fragmentation which determines how the data should be fragmented. (ii) Data allocation which determines how the fragments should be allocated.
  • 4.
    Architecture of Distributed Processingsystem Distributed Processing architectures are generally developed depending on three parameters − Distribution − It states the physical distribution of data across the different sites. Autonomy − It indicates the distribution of control of the database system and the degree to which each constituent DBMS can operate independently. Heterogeneity − It refers to the uniformity or dissimilarity of the data models, system components and databases.
  • 5.
  • 6.
    Client - ServerArchitecture for DDBMS This is a two-level architecture where the functionality is divided into servers and clients. The server functions primarily encompass data management, query processing, optimization and transaction management. Client functions include mainly user interface. However, they have some functions like consistency checking and transaction man agement. The two different client - server architecture are − 1. Single Server Multiple Client 2. Multiple Server Multiple Client
  • 7.
    Peer- to-Peer Architecturefor DDBMS In these systems, each peer acts both as a client and a server for imparting database services. The peers share their resource with other p eers and co-ordinate their activities. This architecture generally has four levels of schemas − Global Conceptual Schema − Depicts the global logical view of data. Local Conceptual Schema − Depicts logical data organization at each site. Local Internal Schema − Depicts physical data organization at each site. External Schema − Depicts user view of data.
  • 8.
  • 9.
    Multi - DBMSArchitectures This is an integrated database system formed by a collection of two or more autonomous database systems. Multi-DBMS can be expressed through six levels of schemas − 1. Multi-database View Level − Depicts multiple user views comprising of subsets of the integrated distributed database. 2. Multi-database Conceptual Level − Depicts integrated multi-databa se that comprises of global logical multi-database structure definitions 3. Multi-database Internal Level − Depicts the data distribution across different sites and multi-database to local data mapping. 4. Local database View Level − Depicts public view of local data. 5. Local database Conceptual Level − Depicts local data organization at each site. 6. Local database Internal Level − Depicts physical data organization at each site.
  • 10.
    Design Alternatives The distributiondesign alternatives for the tables in a DDBMS are as follows − • Non-replicated and non-fragmented • Fully replicated • Partially replicated • Fragmented • Mixed
  • 11.
    Design Alternatives The distributiondesign alternatives for the tables in a DDBMS are as follows − • Non-replicated and non-fragmented • Fully replicated • Partially replicated • Fragmented • Mixed
  • 12.
    Non-replicated & Non-fragmented Inthis design alternative, different tables are placed at different sites. Data is placed so that it is at a close proximity to the site where it is used most. It is most suitable for database systems where the percentage of queries needed to join information in tables placed at different sites is low. If an appropriate distribution strategy is adopted, then this design alternative helps to reduce the communication cost during data processing.
  • 13.
    Fully Replicated In thisdesign alternative, at each site, one copy of all the database tables is stored. Since, each site has its own copy of the entire database, queries are very fast requiring negligible communication cost. On the contrary, the massive redundancy in data requires huge cost during update operations. Hence, this is suitable for systems where a large number of queries is required to be handled whereas the number of data base updates is low.
  • 14.
    Partially Replicated Copies oftables or portions of tables are stored at different sites. The distribution of the tables is done in accordance to the frequency of access. This takes into consideration the fact that the frequency of accessing the tables vary considerably from site to site. The number of copies of the tables (or portions) depends on how frequently the access queries execute and the site which generate the access queries.
  • 15.
    Fragmented In this design,a table is divided into two or more pieces referred to as fragments or partitions, and each fragment can be stored at different sites. This considers the fact that it seldom happens that all data stored in a table is required at a given site. Moreover, fragmentation increases parallelism and provides better disaster recovery. Here, there is only one copy of each fragment in the system, i.e. no redundant data. The three fragmentation techniques are − • Vertical fragmentation • Horizontal fragmentation • Hybrid fragmentation
  • 16.
    Fragmented In this design,a table is divided into two or more pieces referred to as fragments or partitions, and each fragment can be stored at different sites. This considers the fact that it seldom happens that all data stored in a table is required at a given site. Moreover, fragmentation increases parallelism and provides better disaster recovery. Here, there is only one copy of each fragment in the system, i.e. no redundant data. The three fragmentation techniques are − • Vertical fragmentation • Horizontal fragmentation • Hybrid fragmentation
  • 17.
    Mixed Distribution This isa combination of fragmentation and partial replications. Here, the tables are initially fragmented in any form (horizontal or vertical), and then these fragments are partially replicated across the different sites according to the frequency of accessing the fragments.
  • 18.
    Fragmentation Fragmentation is thetask of dividing a table into a set of smaller tables. The subsets of the table are called fragments. Fragmentation can be of three types: horizontal, vertical, and hybrid (combination of horizontal and vertical). Horizontal fragmentation can further be classified into two techniques: primary horizontal fragmentation and derived horizontal fragmentation. Fragmentation should be done in a way so that the original table can be reconstructed from the fragments. This is needed so that the original table can be reconstructed from the fragments whenever required. This requirement is called “constructiveness.”
  • 19.
    Advantages of Fragmentation •Since data is stored close to the site of usage, efficiency of the database system is increased. • Local query optimization techniques are sufficient for most queries since data is locally available. • Since irrelevant data is not available at the sites, security and privacy of the database system can be maintained. • When data from different fragments are required, the access speeds may be very high. • In case of recursive fragmentations, the job of reconstruction will need expensive techniques. • Lack of back-up copies of data in different sites may render the database ineffective in case of failure of a site. Disadvantages of Fragmentation
  • 20.
    Vertical Fragmentation In vertical fragmentation, the fields or columns of a table are grouped into fragments. In order to maintain constructiveness, each fragment should contain the primary key field(s) of the table. Vertical fragmentation can be used to enforce the privacy of data. For example, consider a University database that keeps records of all registered students in a STUDENT table with the schema STUDENT(Regd_No, Name, Course, Address, Semester, Fees, Marks). Now, the fees details are maintained in the accounts section. In this case, the designer will fragment the database as follows −
    Vertical Fragmentation CREATE TABLE STD_FEES AS SELECT Regd_No, Fees FROM STUDENT; Reconstruction of a vertical fragmentation is performed by using a Full Outer Join operation on the fragments.
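This reconstruction can be sketched with SQLite. The second fragment's name (STD_INFO), the abbreviated schema, and the sample rows are our own assumptions; since every primary-key value appears in both fragments, a plain inner join gives the same result as a full outer join here.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE STUDENT (Regd_No INTEGER PRIMARY KEY, Name TEXT, Course TEXT, Fees INTEGER);
INSERT INTO STUDENT VALUES (1, 'Asha', 'BCA', 50000), (2, 'Ravi', 'BSc', 40000);
-- Vertical fragments: each keeps the primary key so the table is reconstructible
CREATE TABLE STD_FEES AS SELECT Regd_No, Fees FROM STUDENT;
CREATE TABLE STD_INFO AS SELECT Regd_No, Name, Course FROM STUDENT;
""")
# Reconstruction: join the fragments on the shared primary key
rows = con.execute("""
    SELECT i.Regd_No, i.Name, i.Course, f.Fees
    FROM STD_INFO i JOIN STD_FEES f ON i.Regd_No = f.Regd_No
    ORDER BY i.Regd_No
""").fetchall()
print(rows)  # each original STUDENT row reappears intact
```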
    Horizontal Fragmentation Horizontal fragmentation groups the tuples of a table in accordance with the values of one or more fields. Horizontal fragmentation should also conform to the rule of constructiveness: each horizontal fragment must have all columns of the original base table. For example, in the student schema, if the details of all students of the Computer Science course need to be maintained at the School of Computer Science, then the designer will horizontally fragment the database as follows − CREATE TABLE COMP_STD AS SELECT * FROM STUDENT WHERE Course = 'Computer Science'; Reconstruction of a horizontal fragmentation can be performed using the UNION operation on the fragments.
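The UNION reconstruction can likewise be sketched with SQLite. The complementary fragment OTHER_STD, the trimmed schema, and the sample rows are our own assumptions, not part of the slide.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE STUDENT (Regd_No INTEGER PRIMARY KEY, Name TEXT, Course TEXT);
INSERT INTO STUDENT VALUES
  (1, 'Asha',  'Computer Science'),
  (2, 'Ravi',  'Commerce'),
  (3, 'Meena', 'Computer Science');
-- Horizontal fragments: the rows are partitioned by the Course predicate
CREATE TABLE COMP_STD  AS SELECT * FROM STUDENT WHERE Course =  'Computer Science';
CREATE TABLE OTHER_STD AS SELECT * FROM STUDENT WHERE Course <> 'Computer Science';
""")
# Reconstruction: UNION of the fragments restores the original relation
rows = con.execute(
    "SELECT * FROM COMP_STD UNION SELECT * FROM OTHER_STD ORDER BY Regd_No"
).fetchall()
```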
    Hybrid Fragmentation In hybrid fragmentation, a combination of horizontal and vertical fragmentation techniques is used. This is the most flexible fragmentation technique since it generates fragments with minimal extraneous information. However, reconstruction of the original table is often an expensive task. Hybrid fragmentation can be done in two alternative ways − • First generate a set of horizontal fragments, then generate vertical fragments from one or more of the horizontal fragments. • First generate a set of vertical fragments, then generate horizontal fragments from one or more of the vertical fragments.
    Distribution Transparency Distribution transparency is the property of distributed databases by virtue of which the internal details of the distribution are hidden from the users. The DDBMS designer may choose to fragment tables, replicate the fragments and store them at different sites. However, since users are oblivious of these details, they find the distributed database as easy to use as any centralized database. The three dimensions of distribution transparency are − • Location transparency • Fragmentation transparency • Replication transparency
    Hybrid Fragmentation • Hybrid fragmentation can be achieved by performing horizontal and vertical partitioning together. • Mixed fragmentation is a grouping of rows and columns of a relation. Example: Consider the following Employee table, which consists of employee information.
    Emp_ID  Emp_Name  Emp_Address  Emp_Age  Emp_Salary
    101     Surendra  Baroda       25       15000
    102     Jaya      Pune         37       12000
    103     Jayesh    Pune         47       10000
    Hybrid Fragmentation Fragmentation 1: SELECT * FROM Employee WHERE Emp_Age < 40 Fragmentation 2: SELECT * FROM Employee WHERE Emp_Address = 'Pune' AND Emp_Salary < 14000 Reconstruction of Hybrid Fragmentation: The original relation in hybrid fragmentation is reconstructed by performing UNION and FULL OUTER JOIN operations on the fragments.
    Data communication concepts Data communication refers to the exchange of data between a source and a receiver via a transmission medium such as a wire cable. Data communication is said to be local if the communicating devices are in the same building or a similarly restricted geographical area. A data communication system may collect data from remote locations through data transmission circuits, and then output processed results to remote locations. The different data communication techniques presently in widespread use evolved gradually, either to improve existing data communication techniques or to replace them with better options and features.
    Components of a data communication system A communication system has the following components: 1. Message: the information or data to be communicated. It can consist of text, numbers, pictures, sound or video, or any combination of these. 2. Sender: the device/computer that generates and sends the message. 3. Receiver: the device or computer that receives the message. The location of the receiver computer is generally different from that of the sender computer; the distance between sender and receiver depends upon the type of network used between them. 4. Medium: the channel or physical path through which the message is carried from sender to receiver. The medium can be wired, like twisted pair wire, coaxial cable or fiber-optic cable, or wireless, like laser, radio waves and microwaves.
    Concurrency Control and Recovery Concurrency control (CC) is a process to ensure that data is updated correctly and appropriately when multiple transactions are concurrently executed in a DBMS (Connolly & Begg, 2015). Distributed databases encounter a number of concurrency control and recovery problems which are not present in centralized databases. Some of them are listed below: • Dealing with multiple copies of data items • Failure of individual sites • Communication link failure • Distributed commit • Distributed deadlock
    Concurrency Control 1. Dealing with multiple copies of data items: Concurrency control must maintain global consistency; likewise, the recovery mechanism must recover all copies and maintain consistency after recovery. 2. Failure of individual sites: Database availability must not be affected by the failure of one or two sites, and the recovery scheme must recover them before they are made available for use. 3. Communication link failure: This failure may create a network partition which would affect database availability even though all database sites may be running. 4. Distributed commit: A transaction may be fragmented and executed at a number of sites. This requires a two- or three-phase commit approach for transaction commit.
    Concurrency Control 5. Distributed deadlock: Since transactions are processed at multiple sites, two or more sites may get involved in a deadlock. This must be resolved in a distributed manner. Concurrency control protocols can be broadly divided into two categories − • Lock-based protocols • Timestamp-based protocols
    Concurrency Control Protocols 1. Lock-based Protocols Database systems equipped with lock-based protocols use a mechanism by which a transaction cannot read or write data until it acquires an appropriate lock on it. Locks are of two kinds − • Binary Locks − A lock on a data item can be in two states; it is either locked or unlocked. • Shared/Exclusive − This type of locking mechanism differentiates the locks based on their use. If a lock is acquired on a data item to perform a write operation, it is an exclusive lock: allowing more than one transaction to write on the same data item would lead the database into an inconsistent state. Read locks are shared, because no data value is being changed.
    Continue.. 1. Binary Locks: A lock is a mechanism that ensures that the integrity of data is maintained. A binary lock can have two states or values: locked and unlocked (or 1 and 0, for simplicity). A distinct lock is associated with each database item X. If the value of the lock on X is 1, item X cannot be accessed by a database operation that requests the item. If the value of the lock on X is 0, the item can be accessed when requested. We refer to the current value (or state) of the lock associated with item X as LOCK(X). There are two operations in binary locking: (i) lock_item(X) and (ii) unlock_item(X).
    Continue.. 1. lock_item(X): A transaction requests access to an item X by first issuing a lock_item(X) operation. If LOCK(X) = 1, the transaction is forced to wait. If LOCK(X) = 0, it is set to 1 (the transaction locks the item) and the transaction is allowed to access item X. 2. unlock_item(X): When the transaction is through using the item, it issues an unlock_item(X) operation, which sets LOCK(X) to 0 (unlocks the item) so that X may be accessed by other transactions. Hence, a binary lock enforces mutual exclusion on the data item; i.e., at any time only one transaction can hold the lock.
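The two operations can be sketched in Python as a toy, single-process stand-in using a condition variable; the class and method names are ours, not from any DBMS.

```python
import threading

class BinaryLock:
    """Toy binary lock for one data item X: LOCK(X) is 0 (free) or 1 (held)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._state = 0                    # LOCK(X) = 0: unlocked

    def lock_item(self):
        with self._cond:
            while self._state == 1:        # item is locked: the transaction waits
                self._cond.wait()
            self._state = 1                # acquire: set LOCK(X) to 1

    def unlock_item(self):
        with self._cond:
            self._state = 0                # release: set LOCK(X) back to 0
            self._cond.notify()            # wake one waiting transaction

lock_x = BinaryLock()
lock_x.lock_item()      # transaction T1 now has exclusive access to X
# ... T1 reads and/or writes X ...
lock_x.unlock_item()    # X may now be locked by another transaction
```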
    Continue.. 2. Shared/Exclusive Locking: Shared lock: A shared lock is placed when we are reading the data. Multiple shared locks can be placed on a data item, but while a shared lock is held no exclusive lock can be placed. These locks are referred to as read locks, and denoted by 'S'. If a transaction T has obtained a shared lock on data item X, then T can read X, but cannot write X. Multiple shared locks can be placed simultaneously on a data item. For example, when two transactions are reading Steve’s account balance, let them read by placing shared locks; but if at the same time another transaction wants to update Steve’s account balance by placing an exclusive lock, do not allow it until the reading is finished.
    Continue.. Exclusive lock: An exclusive lock is placed when we want to both read and write the data. This lock allows both the read and the write operation. Once this lock is placed on the data, no other lock (shared or exclusive) can be placed on it until the exclusive lock is released. For example, when a transaction wants to update Steve’s account balance, let it do so by placing an X lock on it; but if a second transaction wants to read the data (S lock), don’t allow it, and if another transaction wants to write the data (X lock), don’t allow that either. These locks are referred to as write locks, and denoted by 'X'. If a transaction T has obtained an exclusive lock on data item X, then T can both read and write X. Only one exclusive lock can be placed on a data item at a time; this means multiple transactions do not modify the same data simultaneously.
    Continue.. Lock Compatibility Matrix
         |   S   |   X
    -----+-------+-------
      S  | True  | False
      X  | False | False
    How to read this matrix: There are two rows. The first row says that when an S lock is held, another S lock can be acquired (marked True) but no exclusive lock can be acquired (marked False). The second row says that when an X lock is held, neither an S nor an X lock can be acquired, so both are marked False.
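The matrix above can be captured as a small lookup table; the names COMPATIBLE and can_grant are ours, for illustration only.

```python
# Lock compatibility matrix as a lookup table: may a new request of mode
# `requested` be granted while a lock of mode `held` is already in place?
COMPATIBLE = {
    ('S', 'S'): True,    # two readers can share the item
    ('S', 'X'): False,   # a writer must wait for readers
    ('X', 'S'): False,   # a reader must wait for the writer
    ('X', 'X'): False,   # two writers can never coexist
}

def can_grant(held: str, requested: str) -> bool:
    return COMPATIBLE[(held, requested)]
```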
    TIMESTAMP BASED PROTOCOL A timestamp is used to associate a time with some event, in particular a transaction. To ensure serializability, we associate each transaction with a timestamp. In simple words, we order the transactions based on their time of arrival, and there is no deadlock. For each data item, two timestamps are maintained. Read timestamp − the timestamp of the youngest transaction which has performed a read operation on the data item. Write timestamp − the timestamp of the youngest transaction which has performed a write operation on the data item. Let the transaction T’s timestamp be denoted by TS(T), the read timestamp of data item X by R-timestamp(X), and the write timestamp of data item X by W-timestamp(X).
    TIMESTAMP BASED PROTOCOL The protocol works as follows − • If a transaction issues a read operation: if TS(T) < W-timestamp(X), the read request is rejected; else the operation is executed and the read timestamp is updated. • If a transaction issues a write operation: if TS(T) < R-timestamp(X) or TS(T) < W-timestamp(X), the write request is rejected; else the operation is executed and the write timestamp is updated.
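The two rules can be written down directly. This is a sketch: the function name, its signature, and the convention that a rejected operation leaves the timestamps unchanged are our assumptions.

```python
def timestamp_check(op, ts_t, r_ts, w_ts):
    """Basic timestamp ordering; returns (granted, new_r_ts, new_w_ts).

    ts_t = TS(T); r_ts = R-timestamp(X); w_ts = W-timestamp(X).
    """
    if op == 'read':
        if ts_t < w_ts:                      # X was overwritten by a younger T
            return False, r_ts, w_ts         # reject: T must be rolled back
        return True, max(r_ts, ts_t), w_ts   # execute, update read timestamp
    if op == 'write':
        if ts_t < r_ts or ts_t < w_ts:       # a younger T already read/wrote X
            return False, r_ts, w_ts         # reject
        return True, r_ts, ts_t              # execute, update write timestamp
    raise ValueError(f"unknown operation: {op}")
```

For instance, a transaction with TS(T) = 5 trying to read an item with W-timestamp(X) = 10 is rejected, while one with TS(T) = 12 is allowed and advances R-timestamp(X) to 12.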
    TIMESTAMP BASED PROTOCOL Thomas' Write Rule The basic rule states that if TS(Ti) < W-timestamp(X), then the write operation is rejected and Ti is rolled back. The timestamp-ordering rules can be modified to make the schedule view serializable: under Thomas' write rule, instead of rolling Ti back, the obsolete 'write' operation itself is simply ignored.
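Compared with the basic write rule, only the TS(Ti) < W-timestamp(X) branch changes; this sketch (names are ours) returns an action instead of a granted/rejected flag to make the 'ignore' case visible.

```python
def write_with_thomas_rule(ts_t, r_ts, w_ts):
    """Write under Thomas' rule: obsolete writes are ignored, not rolled back."""
    if ts_t < r_ts:
        return 'reject', r_ts, w_ts     # a younger transaction already read X
    if ts_t < w_ts:
        return 'ignore', r_ts, w_ts     # obsolete write: skip it, T continues
    return 'apply', r_ts, ts_t          # perform the write, advance W-timestamp(X)
```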
    Need of Recovery Media failure (e.g. disc-head crash): part of the persistent store is lost and must be restored; transactions in progress may be using this area, so uncommitted transactions must be aborted. System failure (e.g. crash): main memory is lost. The persistent store is not lost, but may have been changed by uncommitted transactions; also, committed transactions’ effects may not yet have reached the persistent objects. Transaction abort: we need to undo any changes made by the aborted transaction.
    Need of Recovery When a DBMS recovers from a crash, it should maintain the following − • It should check the states of all the transactions which were being executed. • A transaction may be in the middle of some operation; the DBMS must ensure the atomicity of the transaction in this case. • It should check whether the transaction can be completed now or needs to be rolled back. • No transaction should be allowed to leave the DBMS in an inconsistent state.
    Recovery with Concurrent Transactions Checkpoint Keeping and maintaining logs in real time and in a real environment may fill all the memory space available in the system, and as time passes the log file may grow too big to be handled at all. A checkpoint is a mechanism where all the previous logs are removed from the system and stored permanently on a storage disk. A checkpoint declares a point before which the DBMS was in a consistent state and all the transactions were committed. Recovery When a system with concurrent transactions crashes and recovers, it behaves in the following manner − • The recovery system reads the logs backwards from the end to the last checkpoint. • It maintains two lists, an undo-list and a redo-list.
    Recovery with Concurrent Transactions • If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just <Tn, Commit>, it puts the transaction in the redo-list. • If the recovery system sees a log with <Tn, Start> but no commit or abort record, it puts the transaction in the undo-list. All the transactions in the undo-list are then undone and their logs are removed. All the transactions in the redo-list are redone and their logs are saved.
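A minimal sketch of the classification step: we scan forward over a plain Python list rather than backwards over an on-disk log, and the (transaction, event) record format is our own assumption.

```python
def build_recovery_lists(log):
    """Scan the log (a list of (txn, event) records) after a crash and
    classify each transaction into the undo-list or the redo-list."""
    started, committed, aborted = set(), set(), set()
    for txn, event in log:
        if event == 'start':
            started.add(txn)
        elif event == 'commit':
            committed.add(txn)
        elif event == 'abort':
            aborted.add(txn)
    redo_list = sorted(committed)                        # redo committed work
    undo_list = sorted(started - committed - aborted)    # in flight at the crash
    return undo_list, redo_list

log = [('T1', 'start'), ('T1', 'commit'),
       ('T2', 'start'),                    # T2 never committed: must be undone
       ('T3', 'start'), ('T3', 'abort')]
undo, redo = build_recovery_lists(log)
```

Here T1 lands on the redo-list, T2 on the undo-list, and T3 (already aborted before the crash) on neither.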
    Reference • https://www.tutorialspoint.com • Bernstein, P. A., Hadzilacos, V., & Goodman, N., Concurrency Control and Recovery in Database Systems, First Edition. • Database System Concepts, Design and Application, Pearson Education Limited, 1995, 2005. • https://tutorialink.com • https://www.geeksforgeeks.org