
I’m a storage noob setting up RAID for my first home NAS. Looking for some answers to questions.


So, I’ve got a desktop that I don’t use anymore that I’m going to try to turn into a NAS.

It’s got a Core i5 and 8GB of RAM (might upgrade), which I believe should be acceptable. I just bought two 7200RPM 14TB WD drives that I’d like to put in RAID 1, 5, or 6 (I’m also planning to expand in the future). I’m planning to run OpenMediaVault on this NAS (OMV uses Linux mdadm software RAID). The NAS will be used primarily for local cloud storage and in-home streaming.
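For context on what OMV does under the hood, here is a minimal sketch of creating a two-drive mdadm mirror by hand. The device names (/dev/sdb, /dev/sdc) and mount point are assumptions, and OMV’s web UI normally handles all of this for you:

```shell
# Verify which devices are the new 14TB drives before touching anything.
lsblk

# Create a two-disk RAID1 (mirror) array. Device names are assumptions!
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Format and mount it.
mkfs.ext4 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```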

So here are some of my questions:

1. Desktop spec-wise, it should be okay, right?
2. Let’s say I have Drive A and Drive B. If Drive A fails in RAID 5, I rebuild the array, and it gets functioning again, can I rebuild again if Drive B fails later? From my understanding I can, because RAID 5 only can’t be rebuilt if multiple drives fail at the same time?
3. Considering my plans to expand my storage later, if I choose, say, RAID 5 and later down the road want to change to RAID 6, is that possible? Is it possible with other RAID levels?
4. OpenMediaVault seems to support multiple filesystems. I was thinking of using ext4, but I see it also supports BTRFS, XFS, and ZFS (through an addon). Do these other filesystems have pros or cons compared to standard ext4?

Thanks for any helpful answers and ideas.


Edit: Thank you all for the great info, everyone. I think I’m a little in over my head with RAID at this point in time. I will go with a simpler approach to get some data redundancy until I learn more: maybe running one of my drives and doing weekly backups to my second drive in case the main drive fails.


4 Comments

  1. >Edit: Thank you all for the great info everyone. I think I’m a little in over my head for RAID at this point in time.

    Just in case, you can read up on basic information about the RAID types, with the pros and cons of each: [https://www.starwindsoftware.com/blog/back-to-basics-raid-types](https://www.starwindsoftware.com/blog/back-to-basics-raid-types)

  2. Regarding 2: yes. In fact, I’m doing a 4x10TB to 4x18TB rebuild of a RAID5 right now, and that’s exactly how it works: hard-swap disk 1, wait 15 hours, hard-swap disk 2, wait 15 hours, and so on until you’ve swapped all 4 drives.

    Regarding 1: TBH, I’d recommend you don’t recycle that old PC but buy a nice dedicated NAS enclosure from Synology or QNAP. I did the same the first time: built a beautiful custom NAS, started a large read of many files, it overheated and burned several disks beyond repair, and I lost 10 years of slowly collected data. It covered everything from when I was 12 to when I was 24, so all my life at the time. Or at least back it up somehow 😀 That DIY choice was basically the biggest data event of my life and completely changed my attitude towards hoarding. I basically gave up the idea that we can keep data forever.

    Regarding 4: do not use btrfs. My second lifetime crash was when my btrfs was perfectly readable but corrupt to the point that no writing and no repair were possible. I had around 6TB at the time, everything collected since the first crash (age 24 to 31, lol), and at least I could still read it, but btrfs is insane in that corruption of one bit somewhere can render the entire array unwritable. I understand their argument, but it just can’t be right in the real world that perfect consistency supersedes working around one-bit corruptions when they are detected. If you listen to the bit-rot fanatics, it’s the worst thing in the world and happens all the time, but in two decades of storage I’ve had one overheat and one anti-bit-rot lock, and never spotted a file that wasn’t what I expected it to be… so cool your drives and ditch anti-bit-rot 😀

    So now I do ext4 RAID5 on a 4-bay Synology with almost no custom apps installed, to avoid large scanning events with a chance of overheating, and with the important stuff auto-backed up to an external drive. Keep it simple and standard, be clear about the criticality of your data, and be ready to lose some.

  3. Edit: As u/Armbian_Werner stated, you should really run ECC memory with ZFS to prevent bit rot.


    Yeah, you definitely jumped into the deep end here without doing your research.


    RAID5 requires a minimum of three hard drives (one drive’s worth of capacity holds parity, which you lose for actual storage). RAID5 should not be used on drives this large due to the possibility of a second failure during a rebuild.

    RAID6 requires a minimum of four hard drives, two of which hold parity data, meaning you only get two disks’ worth of actual storage. That means if you buy 4x 14TB, with RAID6 you only have 28TB usable, and really it’s closer to 25TB usable.
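    The arithmetic behind those numbers (the drop from 28TB to ~25TB is just vendor terabytes, 10^12 bytes, versus the tebibytes, 2^40 bytes, that your OS reports):

    ```shell
    # RAID6 usable capacity is (n - 2) * disk_size; assumed 4x 14TB drives.
    n=4; disk_tb=14
    echo "usable: $(( (n - 2) * disk_tb )) TB"   # usable: 28 TB
    # 28 vendor TB expressed in TiB: 28 * 10^12 / 2^40 ≈ 25.5,
    # which is where "closer to 25TB usable" comes from.
    ```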

    That’s your first problem. You’re not building any RAID5 or RAID6 array with two hard drives.


    **ERC / TLER**

    Your second problem is that the 10TB+ Easystore drives do not appear to support ERC / TLER.

    Unfortunately, r/datahoarder hasn’t maintained a compendium documenting whether any drives found inside recent Easystore products support ERC / TLER.

    ERC / TLER is typically reserved for NAS-branded drives (WD Red, Seagate IronWolf) and enterprise products. When supported and enabled in the drive firmware, ERC / TLER limits drive error recovery to 7 seconds, which permits a software RAID or hardware RAID controller to handle error recovery itself. Without ERC / TLER support, drives will drop out of an array, leading to long rebuild times, or to a failed array if drives fail completely or bad sectors exist on multiple drives.
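    You can check for ERC support yourself with smartctl; /dev/sda here is a placeholder for your actual drive:

    ```shell
    # Query the drive's SCT Error Recovery Control state
    # (prints the current timeouts, or reports it as disabled/unsupported):
    smartctl -l scterc /dev/sda

    # If supported, cap read/write recovery at 7.0 seconds (value in deciseconds).
    # Some drives reset this on power cycle, so it may need re-applying at boot.
    smartctl -l scterc,70,70 /dev/sda
    ```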



    What does this mean? With the exception of ZFS and BTRFS, most software RAID and hardware RAID implementations do not control both the block device layer and the filesystem layer, and therefore require hard drives with ERC / TLER.

    ZFS and BTRFS are different in that both of those filesystems control both the block layer and filesystem layer.

    BTRFS is not stable or mature, many have experienced data loss. You should avoid it like the plague.

    ZFS is stable, but as of today you cannot gradually add more hard drives to grow a vdev underlying a ZFS RAID-Z array. The hidden cost of this is discussed below.


    ZFS’s hidden cost is felt by home users who do not have the budget to buy 8, 10, 12, 16+ hard drives up front. Matt Ahrens has been working to implement ZFS online RAID expansion at iXsystems [for 3+ years](https://www.youtube.com/watch?v=Njt82e_3qVo). It’s still alpha/beta and won’t be merged into OpenZFS until sometime around August 2022. Even when it lands there will be a performance penalty: the stripe width of existing data does not increase until you clear all previously used space (moving data completely off the array to another device, deleting the existing files on the array, moving the data back, etc.).

    If you plan to go the route of XFS or ext4 for your filesystem, you will be running it on top of a block device controlled by Linux mdadm or a hardware RAID controller. XFS and ext4 write to the block layer but have no idea how it is controlled; conversely, mdadm or a RAID controller does not know what the filesystem is doing. This means your hard drives must include ERC / TLER support to play nicely with the block layer and, subsequently, the filesystem layer.

  4. Raid5 needs at least three disks (of the same size): one disk’s worth of capacity stores parity information and two store data. If one disk fails, the array can be rebuilt onto a replacement disk.

    With only two disks, just Raid0, Raid1, and JBOD are possible.

    Changing RAID levels is a tricky/difficult/impossible(?) task, so you might save yourself some headaches by planning long-term.
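    That said, mdadm specifically can migrate between some levels with `--grow` while the array stays online (still slow, and you want a backup first). A sketch assuming an existing two-disk RAID1 at /dev/md0 and a new disk /dev/sdd:

    ```shell
    # Add a third disk, then reshape the RAID1 mirror into a RAID5 across all three:
    mdadm --add /dev/md0 /dev/sdd
    mdadm --grow /dev/md0 --level=5 --raid-devices=3

    # The reshape takes many hours; progress shows up here:
    cat /proc/mdstat
    ```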

    If your data is at all important, I would not go for Raid5 but for Raid6, since there is still a chance that a second drive fails before the rebuild completes (note that this can take up to days depending on the amount of data), in which case all your data WILL be lost.
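    To put a rough number on that rebuild window: a rebuild must read or write every sector, so with an assumed 150MB/s sustained speed, a 14TB drive gives:

    ```shell
    # Rebuild time ≈ disk size / sustained rebuild speed (both values assumed).
    size_tb=14; speed_mb_s=150
    hours=$(( size_tb * 1000 * 1000 / speed_mb_s / 3600 ))
    echo "about ${hours} hours"   # about 25 hours, best case on an idle array
    ```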

    ZFS is pretty cool but works best combined with ECC memory, AFAIK. It also has built-in RAID levels (RAIDZ2 and so on), so mdadm is no longer needed. Even built-in encryption is possible.
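    For illustration, a RAID-Z2 pool plus an encrypted dataset looks roughly like this (the pool name `tank` and the device names are assumptions):

    ```shell
    # Double-parity pool across four disks (analogous to RAID6):
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Encryption is enabled per-dataset, at creation time:
    zfs create -o encryption=on -o keyformat=passphrase tank/private
    ```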