
Objective of Distributed Database Design - DDBMS

 


In the design of data distribution, the following objectives should be taken into account:

Processing Locality: Distributing data to maximize processing locality corresponds to the simple principle of placing data as close as possible to the applications that use them.

Designing data distribution for maximum locality can be done by counting the number of local and remote references corresponding to each candidate fragmentation and fragment allocation, and selecting the best solution among them.
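The counting step above can be sketched as follows; the fragment names, application sites, access counts, and candidate allocations are hypothetical, chosen only to illustrate how candidates are compared.

```python
# Score candidate fragment allocations by processing locality:
# count local vs. remote references and keep the best candidate.

# refs[app] = list of (fragment, access_count); app_site[app] = site running it.
refs = {
    "payroll":   [("emp_f1", 80), ("emp_f2", 20)],
    "reporting": [("emp_f1", 10), ("emp_f2", 60)],
}
app_site = {"payroll": "S1", "reporting": "S2"}

# Candidate allocations: fragment -> site where it is stored.
candidates = [
    {"emp_f1": "S1", "emp_f2": "S1"},
    {"emp_f1": "S1", "emp_f2": "S2"},
]

def locality_score(allocation):
    """Return (local_refs, remote_refs) for one candidate allocation."""
    local = remote = 0
    for app, accesses in refs.items():
        for frag, count in accesses:
            if allocation[frag] == app_site[app]:
                local += count   # fragment stored at the application's site
            else:
                remote += count  # access crosses the network
    return local, remote

# Select the candidate with the most local references.
best = max(candidates, key=lambda a: locality_score(a)[0])
```

With these figures the second candidate wins: placing `emp_f2` at S2 keeps the reporting application's heaviest accesses local.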

Availability & Reliability of Distributed Data: A high degree of availability for read-only applications is achieved by storing multiple copies of the same information; the system must be able to switch to an alternative copy when the one that should be accessed under normal conditions is not available.

Reliability is also achieved by storing multiple copies of the same information, since it is possible to recover from crashes or from the physical destruction of one of the copies by using the other, still available, copies.
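The switch-to-an-alternative-copy behaviour can be sketched as below; the site names, replica list, and failure set are hypothetical.

```python
# Failover read: try each replica of a fragment in turn, and fail
# only when every copy is unreachable.

class SiteUnavailable(Exception):
    pass

# replicas[fragment] = ordered list of sites holding a copy.
replicas = {"emp_f1": ["S1", "S3", "S4"]}
down_sites = {"S1"}  # assume the preferred site S1 has crashed

def read_fragment(fragment):
    """Return the fragment's data from the first available replica."""
    for site in replicas[fragment]:
        if site in down_sites:
            continue  # this copy is unavailable; fall through to the next
        return f"data of {fragment} from {site}"
    raise SiteUnavailable(f"no available copy of {fragment}")
```

A read that would normally go to S1 is transparently served by S3; only if S3 and S4 also fail does the read itself fail.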

Workload Distribution: The distribution of workload over the sites is an important feature of distributed computer systems. Workload distribution is done in order to take advantage of the different powers or utilisation of the computers at each site, and to maximize the degree of parallelism in the execution of applications. Since workload distribution might negatively affect processing locality, it is necessary to consider the trade-off between them in the design of data distribution.
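One way to make that trade-off concrete is to score each candidate allocation by its local references minus a penalty for imbalanced site loads; the weights, sites, and access figures below are hypothetical, chosen only to show the tension between the two objectives.

```python
# Trade-off sketch: reward processing locality, penalize workload
# concentration at a single site.

SITES = ["S1", "S2"]
app_site = {"payroll": "S1", "reporting": "S2"}
refs = {"payroll":   [("f1", 80), ("f2", 20)],
        "reporting": [("f1", 40), ("f2", 60)]}

candidates = [
    {"f1": "S1", "f2": "S1"},  # everything at S1: best payroll locality
    {"f1": "S1", "f2": "S2"},  # data spread over both sites
]

def local_refs(alloc):
    """References an application makes to fragments stored at its own site."""
    return sum(n for app, accesses in refs.items()
               for frag, n in accesses if alloc[frag] == app_site[app])

def load_imbalance(alloc):
    """Spread between busiest and idlest site, assuming work executes
    at the site where the accessed fragment is stored."""
    load = {s: 0 for s in SITES}
    for accesses in refs.values():
        for frag, n in accesses:
            load[alloc[frag]] += n
    return max(load.values()) - min(load.values())

def score(alloc, alpha=1.0, beta=0.5):
    """Combined objective; the weights alpha and beta are arbitrary."""
    return alpha * local_refs(alloc) - beta * load_imbalance(alloc)

best = max(candidates, key=score)
```

The first candidate concentrates all 200 references at S1; once the imbalance penalty is applied, the spread allocation scores higher despite some remote accesses.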

Storage Costs and Availability: Database distribution should reflect the cost and availability of storage at the different sites. It is possible to have specialized sites in the network for data storage, or conversely to have sites which do not support mass storage at all.
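This constraint can be sketched as a simple placement rule; the site names, capability flags, and per-gigabyte costs are hypothetical.

```python
# Cost-aware placement: sites without mass storage are excluded, and
# among the remaining sites the cheapest one stores the fragment.

sites = {
    "S1": {"mass_storage": True,  "cost_per_gb": 0.10},
    "S2": {"mass_storage": True,  "cost_per_gb": 0.04},  # specialized storage site
    "S3": {"mass_storage": False, "cost_per_gb": None},  # site with no mass storage
}

def cheapest_storage_site(sites):
    """Pick the lowest-cost site among those that can hold the data."""
    eligible = {s: p for s, p in sites.items() if p["mass_storage"]}
    return min(eligible, key=lambda s: eligible[s]["cost_per_gb"])
```

In practice this rule would be weighed against the locality and workload objectives above rather than applied in isolation.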