Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

Vanshita Mittal
3 min read · Nov 3, 2020

In this task (7.1-a):

Ashish Ranjan and I, Vanshita Mittal, created a setup to integrate LVM with Hadoop and also provide elasticity to DataNode storage.

LVM :

LVM stands for Logical Volume Management. Using LVM, we can turn one or more hard drives into physical volumes. These physical volumes are combined into a volume group, which acts as a new hard drive. We can then create partitions inside this volume group, known as logical volumes.

A volume group does not exist physically, but it works just like a hard drive. We can extend or reduce the logical volumes inside it as per our needs.

✨ TASK :

Integrating LVM with Hadoop and providing Elasticity to DataNode Storage.

✨ STEPS :

Step 1: Configure the master node and start the NameNode service.
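
For reference, with core-site.xml and hdfs-site.xml already configured on the master, the NameNode can be formatted and started roughly like this (these are Hadoop 1.x style commands; adjust for your version):

hadoop namenode -format
hadoop-daemon.sh start namenode
jps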

Currently, no DataNode is connected.
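
The cluster report can be checked with (Hadoop 1.x syntax; newer versions use hdfs dfsadmin -report):

hadoop dfsadmin -report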

Step 2: Now, attach one extra hard disk to the slave node.

Now, to check that the new disk is detected:
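
Either of the following should list the newly attached disk (the device name /dev/sdb used below is an assumption for this setup):

fdisk -l
lsblk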

Step 3: Now, create a physical volume from this disk.
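
A rough sketch, assuming the new disk shows up as /dev/sdb:

pvcreate /dev/sdb
pvdisplay /dev/sdb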

Step 4: Now, create a volume group from this PV.
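
For example, assuming the volume group is named dnvg (the name is an assumption):

vgcreate dnvg /dev/sdb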

Now, we can see that the PV is allocated to the volume group.
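
This can be verified with:

pvdisplay /dev/sdb
vgdisplay dnvg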

Step 5: Create a logical volume of size 20 GiB.
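
A sketch, assuming the LV is named dnlv inside the dnvg volume group:

lvcreate --size 20G --name dnlv dnvg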

Now, display the logical volume using:
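
With the assumed names from the previous step:

lvdisplay /dev/dnvg/dnlv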

Step 6: Now, format this LV.
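
For example, formatting it as ext4 (the filesystem choice is an assumption):

mkfs.ext4 /dev/dnvg/dnlv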

Step 7: Mount the logical volume on the DataNode directory and check it, as sketched below.
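
A rough sketch, assuming the DataNode directory is /dn1 (the path is an assumption; it should match dfs.data.dir in hdfs-site.xml):

mkdir /dn1
mount /dev/dnvg/dnlv /dn1

The mount can then be checked using: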

df -h

Step 8: Now, start the DataNode service and see that almost 20 GiB of storage is shared with the NameNode.
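
A sketch, assuming dfs.data.dir on the slave points to /dn1 and Hadoop 1.x style commands:

hadoop-daemon.sh start datanode
hadoop dfsadmin -report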

Step 9: Now, for elasticity, extend the size of the LV.
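
For example, to grow the LV by 5 GiB:

lvextend --size +5G /dev/dnvg/dnlv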

So, we have increased the size by 5 GiB.

But the mounted storage size is still 20 GiB, so we need to resize the formatted storage. For this:
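
For an ext4 filesystem, the resize can be done online while the LV is still mounted:

resize2fs /dev/dnvg/dnlv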

Now, the mounted size is 25 GiB.

Step 10: Now, check the storage shared with the NameNode again.

Now, the shared storage is almost 25 GB.

🎉 Task is successfully completed !!

Thank You !!
