Hadoop Distributed File System

HDFS is built around the idea that the most efficient data-processing pattern is write-once, read-many-times.

Can someone give a real-world example of how HDFS writes once and reads many times? I want to understand this basic concept deeply.

1 answer

HDFS applications need a write-once-read-many access model for files. A file, once created, written, and closed, need not be changed. This assumption simplifies data-coherency issues and enables high-throughput data access. A MapReduce application or a web-crawler application fits perfectly with this model. (Source: HDFS Design)
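To make this concrete, here is a minimal sketch using the Hadoop Java FileSystem API. It shows the two phases of the model: the file is created, written, and closed exactly once, and afterwards any number of readers (e.g. parallel MapReduce tasks) can open and scan it. The path `/data/crawl/page-visits.log` is hypothetical, and the sketch assumes a Hadoop client classpath and a cluster configured via `fs.defaultFS`:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf); // connects to the cluster named by fs.defaultFS

        Path logFile = new Path("/data/crawl/page-visits.log"); // hypothetical example path

        // Write phase: the file is created, written, and closed exactly once.
        try (FSDataOutputStream out = fs.create(logFile)) {
            out.write("2024-01-01T00:00:00 GET /index.html\n"
                    .getBytes(StandardCharsets.UTF_8));
        }
        // After close() the file is effectively immutable: HDFS offers no API
        // to overwrite bytes in the middle of an existing file, which is the
        // simplification the design document refers to.

        // Read phase: many independent readers can now open the same file
        // concurrently without any coordination or locking.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(logFile), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

This matches the web-crawler example from the design document: a crawler writes each fetched page to HDFS once, and downstream analysis jobs then read those files many times.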



