  1. #1
    Just Joined! (Join Date: Aug 2012 | Posts: 2)

    df -l is incorrect after mdadm --grow


    Hello everyone!

    I have searched and searched and can't find an answer to this. There are lots of suggestions but so far, nothing has helped.

    The problem: df -l is incorrect after mdadm --grow

    The symptom:

    Code:
    -> cat /proc/mdstat
    
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid1 sdf1[0] sdg1[1]
          244138944 blocks [2/2] [UU]
          
    md1 : active raid6 sde1[3] sdc1[4] sdb1[2] sda1[1] sdd1[0]
          2930282304 blocks level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
    Code:
    -> df -l
    
    Filesystem           1K-blocks      Used Available Use% Mounted on
    /dev/md0              48060232  29107612  16511256  64% /
    none                   4055604       352   4055252   1% /dev
    none                   4096748         0   4096748   0% /dev/shm
    none                   4096748       400   4096348   1% /var/run
    none                   4096748         0   4096748   0% /var/lock
    /dev/sdg2             48128344    184344  45499200   1% /tmp
    /dev/md1             1769073308 271594200 1407615332  17% /export

    Things I have tried already:
    • rebooting (of course)
    • sync
    • checking /etc/mdadm/mdadm.conf (copied output from mdadm --detail --scan)


    I am seriously at a loss; this is the first time I have ever grown a RAID array (two of them, actually), so I don't know what else to try. I will keep searching, of course, but I hope someone here has an answer or something new for me to try.

    Also, will this information being wrong cause the system to think a filesystem is full when it really isn't? (It thinks md0 is 64% full.)
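
    For reference, here is a quick way to compare what the md device reports against what the filesystem itself reports. This is only a minimal sketch and assumes the filesystems on md0 and md1 are ext3/ext4 (the filesystem type is not shown in this thread); for other filesystems the last command would differ.

    Code:
    # array size as mdadm sees it
    -> mdadm --detail /dev/md1 | grep 'Array Size'
    
    # size of the block device in bytes
    -> blockdev --getsize64 /dev/md1
    
    # size the ext filesystem thinks it has (Block count x Block size)
    -> tune2fs -l /dev/md1 | grep -iE 'block (count|size)'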

  2. #2
    Linux Newbie hagfish52 (Join Date: Dec 2011 | Location: Asheville, NC | Posts: 225)
    Just guessing, but maybe the discrepancy is caused by the swap partition on each array, and there is not actually anything wrong.

  3. #3
    Just Joined! (Join Date: Aug 2012 | Posts: 2)
    Quote Originally Posted by hagfish52:
    Just guessing, but maybe the discrepancy is caused by the swap partition on each array, and there is not actually anything wrong.
    What do you mean by "swap partition" in this case? I don't think you mean the actual Linux swap, because that is a partition all to itself and not part of any RAID array. The same goes for /tmp, which is also a partition of its own, because I don't want to waste processor power by putting either of those in a RAID.

    After a little more research on my system, I don't think anything is actually wrong. Everything I look at reports the correct size for these devices except the df command. For some strange reason it still thinks there are far fewer blocks on the md0 and md1 devices than there really are. Very strange.
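
    One point worth noting here: df reports the size of the mounted filesystem, not the size of the underlying block device, so mdadm --grow by itself does not change what df shows; the filesystem also has to be grown to fill the enlarged device. Assuming /export on /dev/md1 is ext3/ext4 (the filesystem type is not stated in the thread), a minimal sketch would be:

    Code:
    # grow the ext3/ext4 filesystem to fill the enlarged md device
    # (resize2fs can do this online, while the filesystem is mounted)
    -> resize2fs /dev/md1
    
    # re-check what df reports
    -> df -l /export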

