Blockchain Solves Big Data Deficiency

Monday, October 17, 2016

Blockchain technology is widely accepted as a type of distributed ledger that is decentralized, permissionless, transparent, and unchangeable. These features allow blockchain technology to be employed for digital asset transactions, and it is gradually being used in the field of big data analysis.

 

Transactional and analytics processing tasks constitute very different workloads, involving high-volume, high-velocity, fast-changing, and massive data. There is a rising demand to identify and extract the relevant information from this ocean of data.

 

So far, blockchain data can be classified into digital asset production information, digital asset flow information, digital asset over-the-counter transaction information, etc. Enterprises can mine, classify, and analyze blockchain transactions to address security, market status and forecasting, preventive maintenance, and competitive-advantage estimation. A traditional relational database is inherently deficient for the data analysis projects mentioned above, but blockchain technology can solve these issues.

 

Comparison between a traditional relational database and blockchain data:

 

The primary problem: security - Traditional relational database passwords are easy to crack.

 

Data on a blockchain employs secure hash algorithms and multi-signatures, and its storage is decentralized and distributed. After the data is sliced for processing, the slices are sent to different servers, and permissions are computed across multiple nodes, which means no single third party can obtain the data.
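The slicing-and-hashing idea above can be illustrated with a minimal sketch. This is a hypothetical example, not an actual blockchain protocol: it splits data into chunks, assigns each chunk to a server, and lets any node recompute the SHA-256 digest to detect tampering.

```python
import hashlib

def slice_and_hash(data: bytes, num_servers: int = 3):
    """Split data into slices, hash each with SHA-256, and assign to servers."""
    size = max(1, len(data) // num_servers)
    slices = [data[i:i + size] for i in range(0, len(data), size)]
    assignments = []
    for idx, chunk in enumerate(slices):
        digest = hashlib.sha256(chunk).hexdigest()
        # (server id, data slice, integrity hash)
        assignments.append((idx % num_servers, chunk, digest))
    return assignments

def verify(chunk: bytes, digest: str) -> bool:
    """A node recomputes the hash to detect tampering."""
    return hashlib.sha256(chunk).hexdigest() == digest
```

Because each server holds only a slice plus its hash, no single party sees the whole dataset, yet any consumer can verify integrity slice by slice.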

 

The second problem: cost - A traditional database demands expensive servers and hardware, with high hardware wear and loss.

 

Most blockchain data is stored in the cloud, and cloud storage technology keeps developing. Traditional cloud storage reduces hardware costs, but other problems remain, such as storing ever-increasing data volumes and renting data centers. Blockchain cloud storage can gradually automate operations on both structured and unstructured data, with no need to establish data centers in advance, and can be configured automatically through handy APIs based on users' real-time situations.

 

The third problem: data loss in high-throughput environments.

 

Recently, blockchain protocols have used new consensus models to exchange vast data volumes in a few seconds, with scalable data models that reduce the operating load. With the advantage of the distributed ledger, the integrity of blockchain data is not affected by the shutdown of one node, since other nodes can restore the same data. Normally, big data analysis includes data collection, data storage, data processing, information retrieval, and accuracy evaluation. Blockchain clearly helps users solve issues of data quality, data storage, and management.
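The node-redundancy claim can be sketched in a few lines. This is a simplified illustration (hypothetical `Node` class, no real consensus protocol): every entry is broadcast to all online nodes, and a failed node restores its ledger from the longest copy held by a peer.

```python
class Node:
    """A minimal ledger replica; `online` simulates node availability."""
    def __init__(self):
        self.ledger = []
        self.online = True

def broadcast(nodes, entry):
    """Append an entry to every node that is currently online."""
    for n in nodes:
        if n.online:
            n.ledger.append(entry)

def restore(failed, nodes):
    """Bring a failed node back by copying the longest peer ledger."""
    source = max((n for n in nodes if n.online and n is not failed),
                 key=lambda n: len(n.ledger))
    failed.ledger = list(source.ledger)
    failed.online = True
```

Shutting down one replica loses nothing, because the entry history survives on the remaining nodes and is copied back on recovery.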

 

Let’s talk about two other critical steps:

 

1. Data Mining

Data mining is a process of Knowledge Discovery in Databases (KDD). First, prepare the data based on requirements, then select samples from a database. Next, evaluate the selected samples’ integrity and consistency, then clean out redundant data and noise. In addition, use statistical methods to fill in missing data, and reduce the data volume with database projection and other strategies. Finally, choose the right KDD algorithm, select parameters, and construct a model, so that subsequent data selection and accumulation can be automated with this model.
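Two of the cleaning steps above (removing redundant data and filling in missing values) can be sketched as follows. This is a minimal illustration with hypothetical helper names, not a full KDD pipeline:

```python
from statistics import mean

def clean(rows):
    """Drop exact duplicate rows (redundant data) while preserving order."""
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def fill_missing(values):
    """Fill None entries with the mean of the observed values
    (a simple statistical imputation method)."""
    observed = [v for v in values if v is not None]
    m = mean(observed)
    return [m if v is None else v for v in values]
```

Real pipelines would add noise filtering and projection, but the shape is the same: prepare, clean, impute, then hand the reduced sample to the chosen KDD algorithm.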

 

2. Predictive Analysis

Predictive analysis encompasses statistical techniques, modeling, and data mining to analyze current and historical facts and make predictions about the future.

 

Predictive analysis is classified into two types.

 

(1) Find the right model from historical experience or a highly precise mathematical derivation, then directly process the data and make predictions with it.

 

(2) Randomly select samples from a database, and conduct training and testing with a machine learning model, while continuing to optimize the parameters for the best result. In the end, choose the best-fit predictive model based on the test error.
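The second approach can be sketched with a tiny train/test workflow. This is a hypothetical example (a one-parameter model y ≈ k·x with a grid search), not any specific library's API:

```python
import random

def train_test_split(samples, test_ratio=0.3, seed=42):
    """Randomly split samples into training and testing sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def fit_best_slope(train, test, candidates):
    """Grid-search a slope k for y ≈ k*x on the training set,
    then report the mean squared error on the held-out test set."""
    def mse(k, data):
        return sum((y - k * x) ** 2 for x, y in data) / len(data)
    best = min(candidates, key=lambda k: mse(k, train))
    return best, mse(best, test)
```

The loop over `candidates` stands in for parameter optimization; in practice the candidate set (or a gradient method) would be much richer, but model selection by held-out test error works the same way.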

 

Author Bio:

Hong: Data Science in Silicon Valley
