In a Hadoop cluster, how do we contribute a limited/specific amount of storage as a slave (DataNode) to the cluster?

We can do this easily by creating a partition of exactly the size we want to contribute on the DataNode's storage device and giving only that partition to the master (NameNode).

rishabhsharma
Nov 3, 2020
Hadoop Cluster

🎯Step 1: In this task we first attach an external 10 GB hard disk to the DataNode (slave).

We have now added a 10 GB hard disk to the slave.

We can confirm it by running the command:

#fdisk -l 
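
lsblk also gives a quick tree view of the block devices attached to the system, which makes the new 10 GB disk easy to spot:

#lsblk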

🎯Step 2: Since the storage device we attached is new, we have to follow three steps before we can use it:

  1. Create Physical Partition
  2. Format
  3. Mount

✔️Creating the physical partition:
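
A minimal sketch of the partition creation, assuming the new disk shows up as /dev/sdb (check the fdisk -l output above for the actual device name):

#fdisk /dev/sdb

Inside the interactive fdisk prompt press n (new partition), p (primary), 1 (partition number), accept the default first and last sectors with Enter to use the whole 10 GB disk, and finally press w to write the partition table and exit.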

So we have created the partition /dev/sdb1.

✔️Formatting the partition /dev/sdb1
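
For example, assuming we format it with the ext4 filesystem (any filesystem supported by the OS will do):

#mkfs.ext4 /dev/sdb1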

✔️Mounting the partition /dev/sdb1 on the directory /dn
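
A minimal sketch, assuming the mount point /dn does not exist yet:

#mkdir /dn
#mount /dev/sdb1 /dn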

We can clearly see that it is successfully mounted on the /dn directory.
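
We can also verify the mount and its size from the command line:

#df -h /dn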

🎯Step 3: Now open the hdfs-site.xml file and provide the /dn directory as the directory to be contributed to the master.
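
For illustration, the property to place inside the <configuration> block of hdfs-site.xml would look roughly like this (the exact property name depends on the Hadoop version: older releases use dfs.data.dir, newer ones dfs.datanode.data.dir):

<property>
    <name>dfs.data.dir</name>
    <value>/dn</value>
</property>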

🎯Step 4: Start the DataNode daemon service

#hadoop-daemon.sh start datanode
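
We can verify that the daemon actually came up by listing the running Java processes with jps; a DataNode entry should appear:

#jps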

🎯Step 5: Check the report to see how much capacity is configured

#hadoop dfsadmin -report
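
If the report is long, we can filter out just the capacity lines (the report prints a Configured Capacity field for the cluster and for each DataNode):

#hadoop dfsadmin -report | grep "Configured Capacity"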

We can clearly see that only the limited 10 GB partition is contributed to the master as the DataNode's configured capacity.

In this simple way, we can contribute a limited/specific amount of storage to the cluster.

THANK YOU 😇
