November 19, 2016
Fog of… Data? Researchers are looking into ways to make data more secure and more distributed by breaking it into shards and encoding them across various endpoint devices.
My problem with this is two-fold: First, to be both secure and reliable, each piece of data on a Fog-of-Data system needs to be encoded individually rather than as entire rows. This ensures that specific data points cannot be traced back to specific individuals. Second, to stay accessible, each piece of data needs to be replicated many times over. So we are talking about a distributed database roughly 100 times the size of an equivalent all-in-one database. Database, table, column, and checksum metadata need to be stored with each piece of data, increasing the raw volume by about an order of magnitude (10x). You then need roughly 10 copies of each piece stored in different locations, so that various failures do not prevent retrieval. That is another factor of 10, for a total of about 100x, or two orders of magnitude, to give the distributed database enough spread and resiliency to serve the data. The original database would be replaced by a much larger reference database that describes where every piece can be found, along with the information needed to reintegrate it into reports and other outputs.
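The overhead described above can be sketched in a few lines of Python. This is a minimal illustration, not any real Fog-of-Data system: the record fields, function names, and the replica count of 10 are assumptions drawn from the estimates in this post. Each cell value is wrapped with its database, table, and column names plus a checksum, then replicated across hypothetical storage locations.

```python
import hashlib
import json

# Assumed value from the post's resiliency estimate: ~10 copies per record.
REPLICA_COUNT = 10

def wrap_cell(db: str, table: str, column: str, value: str) -> dict:
    """Attach schema metadata and an integrity checksum to one cell value."""
    return {
        "database": db,
        "table": table,
        "column": column,
        "value": value,
        "check": hashlib.sha256(value.encode()).hexdigest(),
    }

def replicate(record: dict, copies: int = REPLICA_COUNT) -> list:
    """Return the replicas that would be spread across endpoint devices."""
    return [dict(record, replica=i) for i in range(copies)]

# Wrap and replicate a single cell, then compare raw vs. stored size.
raw = "42"
record = wrap_cell("sales", "orders", "quantity", raw)
replicas = replicate(record)

raw_bytes = len(raw)
stored_bytes = sum(len(json.dumps(r)) for r in replicas)
print(f"raw: {raw_bytes} B, stored: {stored_bytes} B, "
      f"blow-up: ~{stored_bytes // raw_bytes}x")
```

Note that for a tiny value like this, the per-cell metadata dominates and the blow-up is even worse than the 100x estimated above; the 10x metadata figure is more plausible for larger values, but the general shape of the overhead is the same.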
However, with that said, there is plenty of left-over data storage space out in public, and the scheme could work if the data is properly distributed, split up, and made available for retrieval.
I think combining this with other proven solutions, including public key infrastructure (PKI) and detailed retrieval instructions, will make for a secure future.