  1. #1
    Just Joined!
    Join Date
    Oct 2012
    Posts
    1

    Buffalo XFS data rescue project. Advice wanted.


    How do I repair a bad, missing, or corrupt XFS superblock on a Buffalo LS-W1.0TGL/R1 RAID0?

    Backstory:

    A user has lost access to data stored on a Buffalo LS-W1.0TGL/R1 NAS drive. The unit has the default filesystem setup, using the entire available drive space as RAID0 storage.

    The drive contains the primary and only copies of digital photos of her children since their birth, and other sentimental items. She thought that by saving it on the device she was performing a "backup". If the data still resides on the drives, this is the only copy of the data in existence.

    The data partition (table listing is below) uses an XFS filesystem.

    I have been using Ubuntu 12.04 Secure Remix installed on a computer, and also Rescue Remix as a LiveCD.

    ATTEMPTS THUS FAR:

    After plugging the drives into Windows 7 and Mac OS, just to see if I would get lucky, I ran Matrox Kryptonite, which balked, since its XFS support is weak. XFS plus Mac BSD Unix made my eyes bleed.

    So, I rolled up my sleeves and used my trusty Ubuntu laptop.

    ddrescue on Ubuntu and related tools have been used. I rigged up direct SATA-to-USB cabling to one of the two drives inside the NAS and connected via USB.

    sudo ddrescue -r 3 -C /dev/sdx /media/rescuedata/image1.img /media/rescuedata/logfile
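    (In hindsight, the ddrescue manual says -C / --complete-only just limits the rescue domain to blocks already listed in the logfile; it does not concatenate one drive onto another. A cleaner per-member pass would be something like the following sketch, where /dev/sdc and /dev/sdd stand in for wherever the two members enumerate:)

    # one image and one logfile per RAID0 member; -r 3 retries bad sectors 3 times
    sudo ddrescue -r 3 /dev/sdc /media/rescuedata/sdc.img /media/rescuedata/sdc.log
    sudo ddrescue -r 3 /dev/sdd /media/rescuedata/sdd.img /media/rescuedata/sdd.log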

    The results were not useful. I ran ddrescue as follows: one drive at a time, rescuing files individually from each image, and also trying to append one drive to another (-C). None of the images generated by ddrescue were mountable. I tried to run mmls imagefile -b,
    but the mount command would not run. I tried:
    sudo mount -o loop,offset=16384 /media/rescuedata/image1.img /media/rescuedata2
    and
    sudo mount -t xfs -o loop,offset=16384 /media/rescuedata/image1.img /media/rescuedata2
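    (For the record, the offset can be computed straight from the fdisk listing further down rather than guessed: the data partition /dev/sdc6 starts at sector 14024808, and at 512 bytes per sector that is 7180701696 bytes. Assuming image1.img is a whole-disk image of one member, the attempt would look like this; a lone RAID0 half still may not mount, but at least the offset would be right:)

    # byte offset of /dev/sdc6 = start sector * sector size
    echo $(( 14024808 * 512 ))   # 7180701696
    sudo mount -t xfs -o ro,loop,offset=7180701696 /media/rescuedata/image1.img /media/rescuedata2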

    The problem is that mmls appears to only support: dos, mac, bsd, sun, gpt.
    mmls -t gpt -i raw /media/rescuedata/rescue1.img
    yields:
    "Invalid Magic Value" (and the rest varies per partition type: bsd/gpt/dos/sun, etc.),
    so I can't really see the offset, and therefore can't really mount the image.
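    (Alternatively, since fdisk reads the DOS partition table on these drives just fine, kpartx, from the multipath-tools package, can map each partition inside an image without any offset math at all:)

    # create /dev/mapper/loop0p1..p6 nodes for the partitions inside the image
    sudo kpartx -av /media/rescuedata/image1.img
    # ...then work with /dev/mapper/loop0p6 directly; tear down when done:
    sudo kpartx -d /media/rescuedata/image1.img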

    I am running Foremost right now, and that is going to take a while, so I thought I would ask the forums for their thoughts. I will update this post if Foremost does the job; otherwise, assume this is an open issue.


    The xfs_repair/xfs_check utilities yielded no joy.
    The output of xfs_check is:
    xfs_check /dev/sdx
    xfs_check: unexpected XFS SB magic number 0x4449534b
    xfs_check: WARNING filesystem V1 dirs, limited functionality provided.
    xfs_check: read failed: Invalid argument
    xfs_check: data size check failed
    cache_node_purge:refcount was 1, not zero (node=0xa0a4900)
    xfs_check: cannot read root inode (22)
    bad superblock magic number 4449534b, giving up
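    (One clue here: 0x4449534b is ASCII "DISK", while a real XFS superblock starts with "XFSB", 0x58465342. So xfs_check was apparently looking at something other than the start of the filesystem. A quick way to see what actually sits at a candidate offset, with the device name as a stand-in:)

    # dump the first 64 bytes of the data partition; a healthy XFS superblock starts with "XFSB"
    sudo dd if=/dev/sdc6 bs=512 count=1 2>/dev/null | hexdump -C | head -4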

    And when I try xfs_repair I get:
    xfs_repair -n /dev/sdx (runs for hours)
    Phase 1 - find and verify superblock...
    bad primary superblock - bad magic number !!!
    attempting to find secondary superblock...
    ...found candidate secondary superblock...
    error reading superblock 115 -- seek to offset 493913702400 failed
    unable to verify superblock, continuing...
    [or on another pass...]
    ...found candidate secondary superblock...
    error reading superblock 117 -- seek to offset 502503505920 failed
    unable to verify superblock, continuing...
    [etc.; the above appears a few times, until finally]
    ...Sorry, could not find valid secondary superblock
    Exiting now.
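    (Those failing seeks are telling: superblock 115 lies past the end of the 492 GB data partition, and superblock 117 past the end of the whole 500.1 GB drive. That is exactly what you would expect if the filesystem spans a stripe roughly twice the size of one member. Quick sanity check:)

    # superblock 117's offset is beyond the whole 500107862016-byte drive...
    echo $(( 502503505920 > 500107862016 ))       # prints 1 (true)
    # ...and superblock 115's is beyond the 480480021-block (1K) data partition
    echo $(( 493913702400 > 480480021 * 1024 ))   # prints 1 (true)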



    Here is the fdisk listing for one of the two drives (both are identical):

    Disk /dev/sdc: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xf1bab5ab

       Device Boot      Start        End      Blocks   Id  System
    /dev/sdc1               63    2008124     1004031   83  Linux
    /dev/sdc2          2008125   12016619    5004247+   83  Linux
    /dev/sdc4         12016620  976768064  482375722+    5  Extended
    /dev/sdc5         12016683   14024744     1004031   82  Linux swap / Solaris
    /dev/sdc6         14024808  974984849   480480021   83  Linux
    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER

    Disk /dev/sdc - 500 GB / 465 GiB - CHS 60801 255 63

    The harddisk (500 GB / 465 GiB) seems too small! (< 1261 GB / 1175 GiB)
    Check the harddisk size: HD jumpers settings, BIOS detection...

    The following partitions can't be recovered:
         Partition                 Start        End    Size in sectors
      Linux                     873   1  1  120506 248 38  1921919744
    > Linux                   33769   1  1  153402 248 38  1921919744
    [ Continue ]
    XFS 6.2+ - bitmap version, 984 GB / 916 GiB
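    (The 984 GB figure TestDisk reports lines up with a RAID0 stripe of the two data partitions: /dev/sdc6 is 480480021 1K blocks, and two of those together come to about 984 GB, which is also why TestDisk complains the single 500 GB disk "seems too small". Checking the arithmetic:)

    echo $(( 480480021 * 1024 * 2 ))   # 984023083008 bytes, ~984 GB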


    [Next I will be assembling a mock-up RAID1 or RAID0 array with USB cables after I return from the Holy GeekStube, MicroCenter. I thought I would try UFS Explorer on various OS platforms to see if it makes a difference.]
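    (The software equivalent of that mock-up would be to assemble the two rescued images as a superblock-less striped md array. This is only a sketch: Buffalo's actual chunk size and member order are unknown to me, so the chunk value and the loop ordering may take trial and error, and everything is attached read-only:)

    # attach each image's data partition at the computed offset, read-only
    sudo losetup -r -o 7180701696 /dev/loop1 /media/rescuedata/sdc.img
    sudo losetup -r -o 7180701696 /dev/loop2 /media/rescuedata/sdd.img
    # build a RAID0 with no md superblock; 64k chunk is a guess, swap loop order if it fails
    sudo mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/loop1 /dev/loop2
    sudo mount -t xfs -o ro /dev/md0 /media/rescuedata2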

    [I have a couple of other ideas, but at this point it is becoming a question of time invested; this is a reasonably slow process, and some of it is necessarily trial and error.]

    Bryan Grant Atlanta GA

    PS:
    Why does Buffalo use XFS on consumer devices? Most home users never think to buy and use a UPS, so why set them up for failure and data loss by selecting XFS? I'm just saying:
    "Due to balance of speed and safety, there is the possibility to lose data on abrupt power loss (outdated journal metadata), but not filesystem consistence"

  2. #2
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, or in a galaxy far, far away.
    Posts
    11,175
    I don't know if Ubuntu has XFS support out of the box, so you may have to install the tools for it. RAID-0, unfortunately, is not a safe format: it stripes data across two drives with no redundancy, so if one is munged, both are! You might be able to restore the data if you remove the drives from the Buffalo box, install them (are they SATA or IDE drives?) in an appropriate array/enclosure/docking bay, use the Linux drive management tool (Disk Utility) to mount them, and then run fsck.xfs or xfs_repair to fix them. Without the drives in hand, I am just swagging this...
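    Something like this, assuming a Debian-ish setup (and note that fsck.xfs is basically a no-op; xfs_repair is the tool that does the real work):

    sudo apt-get install xfsprogs   # userspace XFS tools: xfs_repair, xfs_check, xfs_db
    sudo xfs_repair -n /dev/md0     # -n inspects only, changes nothing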

    Sorry, but I don't think you have a magic bullet here - just a lot of hard work to restore her data!
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
