No, it’s not virus replication I’m referring to here; it’s database replication. For those who don’t know what replication is: when information is shared, the process of keeping the duplicated sources of data in sync is called replication.
Let’s say I have a team updating a database and another team that must work on the same database but, unfortunately, not from the office; they are out in the field.
The concept is easy: make a copy of the database on the field team’s handheld/mobile devices and let them update it. When they come back to the office, we detect the changes between the office database and the field database and import only those changes. These changes are usually called the delta records.
This process is called replication, and it has caused its share of problems in the past.
DBMSs allow you to replicate a part of the database or the entire database.
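To make the delta idea concrete, here is a minimal sketch in Python. It assumes each table is a plain mapping from primary key to record; compute_delta and apply_delta are illustrative names, not the API of any particular DBMS.

```python
# Minimal sketch of delta detection between the office copy and the
# field copy. Each "table" here is a dict keyed by primary key; the
# function names are illustrative, not a DBMS API.

def compute_delta(office, field):
    """Return the delta records: rows inserted, updated, or deleted on the field copy."""
    inserted = {pk: row for pk, row in field.items() if pk not in office}
    deleted = [pk for pk in office if pk not in field]
    updated = {pk: row for pk, row in field.items()
               if pk in office and office[pk] != row}
    return inserted, updated, deleted

def apply_delta(office, inserted, updated, deleted):
    """Import only the changes back into the office database."""
    office.update(inserted)
    office.update(updated)
    for pk in deleted:
        office.pop(pk, None)

# The field team returns; we sync only the delta, not the whole database.
office = {1: "site A surveyed", 2: "pipe #2 pending"}
field  = {1: "site A surveyed", 2: "pipe #2 repaired", 3: "new hydrant logged"}
inserted, updated, deleted = compute_delta(office, field)
apply_delta(office, inserted, updated, deleted)
assert office == field
```

Note that this naive comparison cannot tell a field edit from an office edit made while the team was away; real replication engines track timestamps or row versions to detect such conflicts.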
Back to the Roots
So replication was invented by the database vendors for this main reason: keeping things in sync. Why don’t field users update the database directly over the Internet and save all this delta-changes hassle? Performance is one reason; security and consistency are others.
Let’s Enhance this Architecture
Let’s go back to the roots and try to re-invent the wheel here. Why is updating the office database from the field slow? Because whatever database software you are using is designed for high interactivity between client and server, which works perfectly on a LAN but is slow over a limited Internet connection.
If we redesigned this software, or at least created an interface for thin clients that relies on compressing heavy objects, sending named references instead of the heavy objects themselves, or serializing the objects, all of this combined would create a more convenient environment for field users and a centralized, up-to-date database.
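Here is a rough sketch of what such a thin-client exchange could look like, assuming an illustrative JSON-plus-zlib wire format; pack_changes, unpack_changes, and the photo_ref field are hypothetical names, not part of any existing DBMS interface.

```python
# Illustrative thin-client exchange: serialize the delta records and
# compress them before sending over a slow link. JSON + zlib is an
# assumed wire format, not a specific DBMS feature.
import json
import zlib

def pack_changes(changes):
    """Client side: serialize and compress the delta records."""
    raw = json.dumps(changes).encode("utf-8")
    return zlib.compress(raw, level=9)

def unpack_changes(payload):
    """Server side: decompress and deserialize the incoming delta."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

# Instead of shipping a heavy object (say, a site photo), the client
# sends a named reference the server already knows how to resolve.
changes = [
    {"table": "inspections", "pk": 42, "op": "update",
     "values": {"status": "repaired", "photo_ref": "photos/42-front.jpg"}},
]
payload = pack_changes(changes)
print(len(payload), "bytes on the wire")
assert unpack_changes(payload) == changes
```

Sending the named reference "photos/42-front.jpg" instead of the photo bytes is the "named objects instead of heavy ones" idea from above.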
Comments

Well, replication can also be used to validate changes before they're applied to the main database. Say employee A made some additions: his additions are not directly reflected in the database and instead have to be approved by a QA member. Replication is still needed in that case :) (See the sketch after these comments.)
True. That's another valid use for replication.
Also, if you are talking GIS/SDE, this is solved through the concept of versioning too: employees don't post their version to DEFAULT unless it is verified by the QA team.
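To illustrate the staging idea from the first comment, here is a minimal Python sketch: field deltas land in a pending queue and reach the main database only after a QA member approves them. All names (submit, review, pending) are hypothetical.

```python
# Hypothetical sketch of QA-gated replication: staged deltas are applied
# to the main database only after approval.
pending = []   # deltas awaiting review
main_db = {}   # the "main" database, keyed by primary key

def submit(delta, author):
    """An employee's changes are staged, not applied directly."""
    pending.append({"author": author, "delta": delta, "approved": False})

def review(index, approve):
    """A QA member approves or rejects a staged delta."""
    entry = pending[index]
    entry["approved"] = approve
    if approve:
        main_db.update(entry["delta"])  # only now does it reach the main DB

submit({42: "new pipeline segment"}, author="employee A")
assert 42 not in main_db       # invisible until QA signs off
review(0, approve=True)
assert main_db[42] == "new pipeline segment"
```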