
For many years I used Solaris, OpenSolaris and OpenIndiana and was always very impressed with the Zettabyte File System, ZFS. Recently I shifted my three older workstations to Ubuntu 11.04 or Zorin OS 5. I built a new machine with Zorin and wanted to use the ZFS RAID setup from the old OpenIndiana workstation on it. The new build would take the four disks from that file server.

There seems to be considerable confusion in some circles regarding the use of ZFS on platforms other than Oracle's Solaris or Solaris Express. The licensing, as explained by the developers at ZFS on Linux, allows one to use ZFS, but the code cannot be distributed as part of the Linux kernel.

For my new machine I had a 64 GB SSD and four 2 TB drives that I wanted to use as storage. What I discovered was just how easy it is to install Zorin on the SSD and use ZFS to create a raidz1 storage pool across the four drives.

The Zorin installation was straightforward and uneventful. For those of you unfamiliar with Zorin, it is an Ubuntu variant with a GNOME interface. The pleasant surprise was that the new motherboard supported the 6 Gb/s (SATA III) transfer rate of the SSD, so the OS was very responsive and snappy. This is double the speed of the SATA II-only motherboard it replaced.

With Zorin in place, the next issue was how to use the four-disk array effectively. I tried to find Linux drivers for the hardware RAID configuration the new motherboard claimed to support, but Google could not help me and I gave up on the idea of hardware RAID. I then looked at the Disk Utility application and tried to figure out how to assemble the four drives into an array that way. This was not intuitive. That is a kinder way of saying it was not obvious, and I did not like all the steps involved, despite the fact that I had done this before and made it work.

So I took a look at the ZFS on Linux site and decided to give that a try. I had enough experience with zfs and storage pools to know that this is, as the name implies, the last word in storage systems. If anyone cares to debate this claim I would really like to hear about any of the high-end alternatives available on Linux.

Again I was pleasantly surprised and quite impressed with just how far the ZFS on Linux project has come. I was able to download the ZFS source, then compile and install it exactly as noted in the documentation. Once ZFS was installed, it was pretty much standard zfs and zpool commands to set things up in a way that takes advantage of both the SSD and the four-disk array. Even though my /home/ivan directory is on the SSD, I wanted all bulk storage on the slower but much larger drive array.
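The project documentation covers the build in detail; as a rough sketch, and assuming the source tarballs of that era (the version numbers here are illustrative, not necessarily the ones I used), it went something like this:

# Build and install the SPL (Solaris Porting Layer) first, then ZFS itself
tar xzf spl-0.6.0.tar.gz && cd spl-0.6.0
./configure && make && sudo make install
cd .. && tar xzf zfs-0.6.0.tar.gz && cd zfs-0.6.0
./configure && make && sudo make install

# Load the kernel module
sudo modprobe zfs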

I created the initial storage pool by invoking zpool create, using -f to force, with the traditional pool name of tank. Since I had four disks I chose raidz and then listed the devices to be part of the pool.

zpool create -f tank raidz /dev/sda /dev/sdb /dev/sdd /dev/sde
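As the update in the comments below shows, the /dev/sdX names can shift when new hardware is plugged in, so a more robust variant builds the pool from the persistent paths under /dev/disk/by-id instead. A sketch (the by-id names here are placeholders, not my actual drives):

zpool create -f tank raidz /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4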

I verified the status...

zpool status tank
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

I created a file system, tank/downloads, to replace the Downloads directory typically found in a user's home directory. ZFS mounts new file systems automatically under the pool's path, but since I wanted a slightly different location it was necessary to override the default by setting the mountpoint property.

zfs create tank/downloads
zfs set mountpoint=/home/ivan/Downloads tank/downloads
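To confirm the override took effect, the property can be read back (a quick sanity check, not part of my original session):

zfs get mountpoint tank/downloads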

Since the Downloads directory was empty, ZFS did not complain. If /home/ivan/Downloads had not been empty, ZFS would have refused to mount over it. I created a few more file systems and then decided to do a few checks.

df -h -t zfs

Filesystem                   Size  Used Avail Use% Mounted on
tank                         5.4T     0  5.4T   0% /tank
tank/repository              5.4T     0  5.4T   0% /tank/repository
tank/downloads               5.4T  476M  5.4T   1% /home/ivan/Downloads
tank/documents               5.4T     0  5.4T   0% /home/ivan/Documents
tank/documents/professional  5.4T     0  5.4T   0% /home/ivan/Documents/professional
tank/documents/tessaract     5.4T     0  5.4T   0% /home/ivan/Documents/tessaract
tank/workspace               5.4T     0  5.4T   0% /home/ivan/workspace
tank/software                5.4T     0  5.4T   0% /home/ivan/software
tank/pictures                5.4T     0  5.4T   0% /home/ivan/pictures
tank/notes                   5.4T     0  5.4T   0% /home/ivan/notes

Some of the file systems in this listing were not in exactly the configuration I wanted. Fortunately it is very easy to adjust them afterwards to obtain the desired layout, as sketched below.
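For example, a dataset can be renamed or remounted elsewhere with a single command each. A sketch (these particular names are illustrative, not the actual adjustments I made):

# Rename a dataset within the pool
zfs rename tank/software tank/packages

# Move a dataset's mount point
zfs set mountpoint=/home/ivan/Pictures tank/pictures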

ZFS includes a facility called scrub that performs a very detailed check of data integrity and repairs any damage it finds. The time this operation takes grows with the amount of data in the pool. In this case, since the disks (indeed the entire system) were brand new and nearly empty, it finished almost instantly, as the scan line in the output shows. Note that scrubs should be automated and run on a weekly basis or so; see the sketch after the output below.

zpool scrub tank
root@zaarexx:~# zpool status
  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Tue Jul 12 22:41:20 2011
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
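Automating the weekly scrub is a one-line cron job; a minimal sketch, assuming it goes in root's crontab and that zpool lives in /sbin:

# Scrub the tank pool every Sunday at 2 AM
0 2 * * 0 /sbin/zpool scrub tank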

From this exercise I have the following results: a raidz1 array of four 2 TB disks giving a usable storage capacity of 5.4 terabytes. The data is striped with single parity across all four disks, so I can lose one disk in the array and still have all my data. The SSD holds the OS, so a major upgrade should be a matter of installing the new OS with a standard home directory and then importing the ZFS storage pool, as sketched below.
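After a fresh OS install, the existing pool can be picked up again with zpool import, and if a disk fails, getting back to full redundancy is a single command. A sketch (assuming sdb failed and its replacement appears as /dev/sdf, both hypothetical):

# Re-attach the existing pool after an OS reinstall
zpool import tank

# Replace a failed disk; ZFS resilvers onto the new device automatically
zpool replace tank sdb /dev/sdf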

Any comments or corrections are appreciated.

Comments about this article
written by: ivank2139 on 2011-07-18 20:14:21
After plugging in an external multi-card reader and a 500 GB external drive, zfs got confused: zpool status reported one drive as having lost its label. After some work, I backed up the RAID to the external drive, reinstalled the OS with all hardware plugged in, and recreated the pool as a raidz2 with the same devices. Once the restore finished, I had a system that should be fairly reliable and stable. I used Deja-Dup for the backup and it seems to be working fine.

I also experimented with some of the zfs send and receive commands for backups, as well as Java archive tools.
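For reference, the basic send/receive pattern looks roughly like this (the snapshot name and the backup pool are illustrative, not from my actual session):

# Snapshot a dataset, then stream the snapshot to another pool
zfs snapshot tank/documents@backup-20110718
zfs send tank/documents@backup-20110718 | zfs receive backup/documents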