
Monday, December 12, 2011

EC2 RAID10 for MongoDB

Running MongoDB on a RAID10 (software RAID) array in EC2 is done on top of EBS volumes. I'll show you how to

  • create the RAID10 array on 8 EBS volumes
  • (re)start mdadm on the RAID device after a reboot
  • mount the RAID10 device and start using it
I didn't use any config files for the RAID devices, so you will need to know how the devices are mapped and what UUID the RAID10 array has.
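
The commands below assume the eight EBS volumes are already attached to the instance as /dev/sdj through /dev/sdq. If they aren't yet, a minimal sketch with the EC2 API tools could look like the following (the volume and instance IDs are placeholders for your own):

# create and attach one 25 GiB volume per RAID member (repeat for /dev/sdj .. /dev/sdq)
# vol-xxxxxxxx / i-xxxxxxxx are placeholders for your own volume and instance IDs
ec2-create-volume -s 25 -z us-east-1a
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdj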

Initial creation of the RAID array

# you will need to have your ebs volumes attached to the server
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=8 /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq

# now create a file system 
mkfs.xfs /dev/md0

# create the mount point and mount the drive
mkdir -p /mnt/mongo/data
mount /dev/md0 /mnt/mongo/data
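
Optionally, remounting with noatime avoids an access-time write for every read, a common tweak for database volumes on EBS (not required):

# optional: disable access-time updates on the data volume
mount -o remount,noatime /mnt/mongo/data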


# Obtain information about the array
mdadm --detail /dev/md0 # query detail

/dev/md0:
        Version : 0.90
  Creation Time : Wed Oct 26 19:37:16 2011
     Raid Level : raid10
     Array Size : 104857344 (100.00 GiB 107.37 GB)
  Used Dev Size : 26214336 (25.00 GiB 26.84 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Dec 12 15:56:48 2011
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

           UUID : 144894cd:3b083374:1fa88d23:e4200572
         Events : 0.30

    Number   Major   Minor   RaidDevice State
       0       8      144        0      active sync   /dev/sdj
       1       8      160        1      active sync   /dev/sdk
       2       8      176        2      active sync   /dev/sdl
       3       8      192        3      active sync   /dev/sdm
       4       8      208        4      active sync   /dev/sdn
       5       8      224        5      active sync   /dev/sdo
       6       8      240        6      active sync   /dev/sdp
       7      65        0        7      active sync   /dev/sdq

# note the UUID and the devices
# Start the mongo database
/etc/init.d/mongod start
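
The init script will only use the array if mongod's dbpath points at it. A minimal /etc/mongod.conf sketch (the config and log paths are assumptions for a typical package install, adjust to yours):

# store data on the RAID10 array mounted above
dbpath=/mnt/mongo/data
# log to a file and run as a daemon
logpath=/var/log/mongo/mongod.log
logappend=true
fork=true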

Shut down (reboot) the server

# restart the array device - you need to have the ebs volumes re-attached!
mdadm -Av /dev/md0 --uuid=144894cd:3b083374:1fa88d23:e4200572  /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdr: Device or resource busy
mdadm: /dev/sdr has wrong uuid.
mdadm: cannot open device /dev/sds: Device or resource busy
mdadm: /dev/sds has wrong uuid.
mdadm: /dev/sdj is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdk is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdl is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdm is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdn is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdo is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdp is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sdq is identified as a member of /dev/md0, slot 7.
mdadm: added /dev/sdk to /dev/md0 as 1
mdadm: added /dev/sdl to /dev/md0 as 2
mdadm: added /dev/sdm to /dev/md0 as 3
mdadm: added /dev/sdn to /dev/md0 as 4
mdadm: added /dev/sdo to /dev/md0 as 5
mdadm: added /dev/sdp to /dev/md0 as 6
mdadm: added /dev/sdq to /dev/md0 as 7
mdadm: added /dev/sdj to /dev/md0 as 0
mdadm: /dev/md0 has been started with 8 drives.
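
Before mounting, it's worth confirming the array came back clean:

# all 8 members should show as up ([UUUUUUUU]) and the state as clean
cat /proc/mdstat
mdadm --detail /dev/md0 | grep State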

# now you can mount the array
mount /dev/md0 /mnt/mongo/data/

# start the mongo database
/etc/init.d/mongod start
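
Since there is no mdadm.conf, the three post-reboot steps can be wrapped in a small script; this is just the commands above glued together (UUID and paths are from this setup, adjust for yours):

#!/bin/bash
# reassemble the RAID10 array by UUID, mount it and start mongod
set -e
UUID=144894cd:3b083374:1fa88d23:e4200572
mdadm -Av /dev/md0 --uuid=$UUID /dev/sd*
mount /dev/md0 /mnt/mongo/data
/etc/init.d/mongod start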