RAID5 Array Setup and Recovery on Gentoo using mdadm

RAID is all set up and working! Finally! Very happy with it.

Sorry, this is going to look like a lot of rubbish to most people, but here’s how I got my RAID working, along with the diagnostic output!

I’m pretty sure it’d be useful to someone.

If you don’t find the below interesting, I think you’d like this at least:

RAID as seen from Windows XP

# genkernel --no-clean --lvm all
# echo dm-mod >> /etc/modules.autoload.d/kernel-2.6
# echo raid5 >> /etc/modules.autoload.d/kernel-2.6
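
## If you want to double-check the modules are actually available before going
## any further, something like this should do it (on some kernels the RAID5
## personality lives in a combined raid456 module, so grep for both):

# modprobe dm-mod
# modprobe raid5
# lsmod | egrep 'dm_mod|raid456|raid5'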

# blockdev --report /dev/sd*
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0   1000204886016   /dev/sda
rw   256   512  1024         63   1000202241024   /dev/sda1
rw   256   512  4096          0   1000204886016   /dev/sdb
rw   256   512  1024         63   1000202241024   /dev/sdb1
rw   256   512  4096          0   1000204886016   /dev/sdc
rw   256   512  1024         63   1000202241024   /dev/sdc1
rw   256   512  4096          0   1000204886016   /dev/sdd
rw   256   512  1024         63   1000202241024   /dev/sdd1
rw   256   512  4096          0   1000204886016   /dev/sde
rw   256   512  1024         63   1000202241024   /dev/sde1
rw   256   512  4096          0   1000204886016   /dev/sdf
rw   256   512  1024         63   1000202241024   /dev/sdf1

# cfdisk /dev/sda
# cfdisk /dev/sdb
# cfdisk /dev/sdc
# cfdisk /dev/sdd
# cfdisk /dev/sde
# cfdisk /dev/sdf
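
## I did the partitioning by hand in cfdisk: one partition per disk, type fd
## (Linux raid autodetect). If you don't fancy doing that six times, cloning
## the first disk's partition table with sfdisk is a common shortcut
## (device names assumed to match the ones above):

# for d in sdb sdc sdd sde sdf; do sfdisk -d /dev/sda | sfdisk /dev/$d; done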

# cd /dev/
# mkdir /dev/md
# for i in `seq 0 11`; do mknod /dev/md/$i b 9 $i; ln -s md/$i md$i; done

# mdadm --create /dev/md0 \
        --level=5 \
        --bitmap=internal \
        --raid-devices=6 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
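
## Sanity check: mdadm --examine shows what was written to each member's
## superblock (output varies by mdadm version, so treat this as a rough check):

# mdadm --examine /dev/sda1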

## This command will show you the health of your entire RAID

# watch cat /proc/mdstat        
Every 2.0s: cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[6] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      4883799680 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
      [>....................]
      recovery =  0.8% (8015744/976759936) finish=287.3min speed=56179K/sec
      bitmap: 0/233 pages [0KB], 2048KB chunk

unused devices: <none>
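
## The initial resync takes a few hours. If it's crawling along, the md resync
## speed limits can be raised; the value below is just an example, not a
## recommendation:

# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# echo 50000 > /proc/sys/dev/raid/speed_limit_min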

## This command will give you detailed info on a specific RAID

# mdadm --detail /dev/md0 
/dev/md0:
        Version : 0.90
  Creation Time : Mon Jul 20 02:09:49 2009
     Raid Level : raid5
     Array Size : 4883799680 (4657.55 GiB 5001.01 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jul 20 02:11:50 2009
          State : active, degraded, recovering
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 653762f3:3e562b56:1499062a:2e44beb5
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       6       8       81        5      spare rebuilding   /dev/sdf1

# mdadm --detail --scan >> /etc/mdadm.conf
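
## The line that ends up in /etc/mdadm.conf should look roughly like this
## (UUID taken from the --detail output above; exact fields depend on your
## mdadm version):

ARRAY /dev/md0 level=raid5 num-devices=6 UUID=653762f3:3e562b56:1499062a:2e44beb5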

-------- LVM --------

## Scan for existing volume groups (nothing will be found yet)

# vgscan

## Activate Volume Groups (VG)

# vgchange -a y
# pvcreate /dev/md0
# vgcreate vg00 /dev/md0
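
## The extent count in the lvcreate below came from asking LVM how many
## physical extents the VG has; something like this shows it (newer lvcreate
## builds also take -l 100%FREE if you'd rather not count extents by hand):

# vgdisplay vg00 | grep PE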

# lvcreate -l 1192333 -n vg vg00
# mke2fs -j /dev/vg00/vg
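
## The volume obviously needs mounting before the df below; I put it on /raid,
## something along these lines:

# mkdir -p /raid
# mount /dev/vg00/vg /raid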

# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/hda1               65G    39G    23G  64% /
udev                    11M   205k    11M   2% /dev
shm                    1.6G      0   1.6G   0% /dev/shm
/dev/mapper/vg00-vg    5.0T   201M   4.7T   1% /raid
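
## To get it mounted again after a reboot, an /etc/fstab line along these
## lines should do (mount options are just my preference):

/dev/vg00/vg    /raid    ext3    noatime    0 0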

## That’s it!

## To add a failed drive back in, use:

# mdadm --manage /dev/md0 -a /dev/sdc1
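
## If md still lists the old drive as faulty, kick it out first and then add
## it back, keeping an eye on the rebuild (device name assumed as above):

# mdadm --manage /dev/md0 --remove /dev/sdc1
# mdadm --manage /dev/md0 -a /dev/sdc1
# watch cat /proc/mdstat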
