Server upgrade! Using ZFS for the first time with 6x12TB RAIDZ2 array.

21 Comments

  1. Lately, I have been moving away from ZFS. I have gone from 2TB to 3TB to 4TB sets over the years.

    With 8TB and 10TB drives, I am rethinking how I handle data failure. Erasure coding plus syncing remotely is the current thought process, although I haven’t finalized it yet.

    Erasure coding worked fine back in the Usenet days and recovered data quite well. In this space I have par2 and minio in mind.

    For syncing, I am testing syncthing and rclone on docker-compose.

    Overall, I’m trying to keep the setup simple for the long haul and to be able to access it on the go. The setup is incomplete at this point.

    I’m moving away from ZFS mainly due to overall experience. Some weaknesses being:

    – all drives have to be online to access a subset of the data
    – portability
    – time: the ZFS routine of scrubbing, snapshots, and backups takes a good amount of time
    – energy: disk scrubbing consumes time and energy
    – money: upgrades are steep money-wise
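
    For the erasure-coding-plus-remote-sync idea above, one rough sketch of what it could look like with par2 and rclone; the directory layout and the rclone remote name ("backup-remote") are placeholders, not part of the original plan:

      # Create ~10% recovery data next to the files it protects (par2cmdline)
      par2 create -r10 /mnt/archive/photos/recovery.par2 /mnt/archive/photos/*

      # Push the data plus its .par2 recovery files to a remote
      rclone sync /mnt/archive backup-remote:archive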

  2. It’s OK until it’s not, I agree, but I would take the chance.

  3. Check for firmware updates. SC60 on the 10TB IronWolf drives is **bad**.
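
    To check which firmware a drive is currently running before hunting for an update, smartctl will report it; a quick sketch, assuming smartmontools is installed and the drive shows up as /dev/sda:

      # Prints the drive identity block, including the "Firmware Version" field
      smartctl -i /dev/sda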

  4. Dang dude! That’s REALLY similar to the build I just completed.

    Virtualization/NAS build:
    Fractal Node 804 case,
    AsRock x470d4u w/ ryzen 3900x,
    Samsung 970 Evo,
    6x WD red 4TB (in raidz2)

  5. May I recommend doubling up on the NVMe disk? You do *not* want to run ZFS without a log device, and you REALLY do not want to deal with what happens to a ZFS array if the log device fails during a recovery.

    You want something that looks like this:

    tank
      mirror
        /dev/sda
        /dev/sdb
      mirror
        /dev/sdc
        /dev/sdd
      mirror
        /dev/sde
        /dev/sdf
    cache
      /dev/nvme01p2
      /dev/nvme02p2
    log
      mirror
        /dev/nvme01p1
        /dev/nvme02p1

    or like this

    tank
      raid-z
        /dev/sda
        /dev/sdb
        /dev/sdc
        /dev/sdd
        /dev/sde
        /dev/sdf
    cache
      /dev/nvme01p2
      /dev/nvme02p2
    log
      mirror
        /dev/nvme01p1
        /dev/nvme02p1

    depending on how performance-oriented you want to be. You could also go raidz2, which would fall somewhere in between, but it probably isn’t worth it with only 6 disks.

    The log device only needs to be quite small: never larger than your total amount of RAM, and usually a lot less. The formula is “pool write speed in MB/s * sync interval” (which defaults to 5s), so a pool that can absorb writes at 1,000 MB/s with the default 5-second interval needs only about 5 GB of log.

    The cache can improve read performance quite a bit, particularly on small-file reads, and especially if you have highly parallel reads going on (a multi-user NAS is the obvious case). If this is just a Plex server, that probably doesn’t matter much, but _really_ don’t skip the log device.
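
    One way to express the first layout as a single command; a minimal sketch that assumes the same /dev/sdX names as above and Linux-style NVMe partition names (nvme0n1p1, nvme1n1p1, …), so substitute your own devices (ideally /dev/disk/by-id paths):

      # Three mirrored data vdevs, a striped L2ARC cache, and a mirrored SLOG;
      # ashift=12 is the commonly recommended setting for 4K-sector drives.
      zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        cache /dev/nvme0n1p2 /dev/nvme1n1p2 \
        log mirror /dev/nvme0n1p1 /dev/nvme1n1p1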

  6. I’ve been seeing these 12TB ironwolf drives posted everywhere, are they currently the “best” drives for data hoarders?

  7. For media files I would not use ZFS: it is inflexible (you can’t easily enlarge a pool), all disks have to run all the time, and if 3 disks fail, your data is lost completely.

    For a few years now I have been recommending snapraid, in your case with two parity disks. Here is how it works: you format the disks to NTFS, Ext4, XFS or whatever, put your data on them, leave two disks empty, create a config file, and run snapraid sync.

    You now have two parity disks protecting your data disks, but the data disks are all independent.

    Say you watch a movie from disk 2; the other five disks can spin down.

    You still want a view as if all your data disks were one large volume? Use mergerfs. You want writes to go to the disk with the most free space, automatically? Mergerfs will redirect your writes accordingly.

    Now something happens and you lose a disk. You add a new one, but while rebuilding it, another one bites the dust. Then, unlikely as it is, another one! With RAID you would have nothing salvageable left. With snapraid you still have independent disks with independent filesystems.

    A drawback is that you have to run snapraid sync to add new files into the parity; it doesn’t run in the background. Also, read speeds depend on the single disk you read from.

    I have 8x 8TB data disks in two 4-bay JBOD USB3 enclosures, and two 8TB parity disks in single USB3 enclosures.

    Every now and then I mount the parity disks, run snapraid sync or snapraid scrub, and unmount them again.

    Usually only one of the 10 drives is spinning at a time.
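
    A minimal sketch of the snapraid side of such a setup, assuming six data disks mounted at /mnt/disk1../mnt/disk6 and two parity disks at /mnt/parity1 and /mnt/parity2 (the mount points are placeholders):

      # /etc/snapraid.conf
      parity    /mnt/parity1/snapraid.parity
      2-parity  /mnt/parity2/snapraid.2-parity

      # Keep a couple of copies of the content (index) file on different disks
      content /mnt/disk1/snapraid.content
      content /mnt/disk2/snapraid.content

      data d1 /mnt/disk1/
      data d2 /mnt/disk2/
      data d3 /mnt/disk3/
      data d4 /mnt/disk4/
      data d5 /mnt/disk5/
      data d6 /mnt/disk6/

    Then update the parity after adding files, and check it occasionally:

      snapraid sync
      snapraid scrub

    And for the single-volume view with writes landing on the disk with the most free space, a mergerfs mount along these lines (verify the option names against your mergerfs version):

      mergerfs -o category.create=mfs,moveonenospc=true \
        /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5:/mnt/disk6 /mnt/storage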