
I Blame Each and Every One of You….


43 Comments

  1. How much power does something like this use?

  2. I for one do not share everyone’s excitement with Backblaze storage pods. I do understand that for their business model to scale they needed to manufacture their own hardware. But for a home user to drool over it – I just do not get it. Quality of the first generations of the pods was pretty poor. Maybe the 4th generation and onward is on par with the likes of SuperMicro but it is still a far cry from the big leagues.

    For the past couple of years we live in the golden age of home data hoarding. Enterprise has moved its storage systems to SAS 12Gbps and beyond. The secondary market is filled with “older” 6Gbps hardware which can be had for literally pennies. We used to have a somewhat similar situation with used hardware availability after industry wide upgrade from 3Gbps to 6Gbps. Only this current 6Gbps generation “legacy” hardware is more “future proof”. It does not have all those issues plaguing the previous 3Gbps generation devices like lacking support for 2Tb+ drive sizes, etc.


    Personally I happen to like Hitachi, so my personal progression went from 8 drives inside the server, to one Hitachi DF-F800-RKAK 15-bay enclosure, then I added another one, and when I needed to add more drives – I moved everything to a Hitachi DF-F800-RKAKX high density 48-bay unit.


    DF-F800-RKAK regularly sells for under $150 shipped

    DF-F800-RKAKX is less common but if you are willing to wait a few weeks – it can be found around $300 shipped

  3. Now do the same thing with the new 1 TB micro SD card

  4. You have a fever, and the only prescription, is more ~~cowbell~~ hard drives.

  5. I accept your blame, and as an act of contrition, am willing to take this burden from your shoulders. Send it to me, oh ye that I have wronged, so that I may bear the consequences of my unforgivable actions.

  6. When I see a case with huge numbers of bays I always think it would let me wring every last bit of life out of some old drives. Then it occurred to me, that if I fully populated one of these with 250 GB drives (of which I still have a pile) that it would only add up to 11.25 TB – so I could back it all up to a single 12 TB external.
    Sounds like a plan! _A really uneconomic one, admittedly._
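The arithmetic in that comment checks out; a quick sketch (assuming the 45-bay pod from the original post):

```python
# Sanity check: 45 bays of 250 GB drives vs. one 12 TB external.
bays = 45          # assumed bay count of the pod in the original post
drive_tb = 0.25    # 250 GB per drive

total_tb = bays * drive_tb
print(total_tb)    # 11.25 -- fits on a single 12 TB external
```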

  7. Lot of questions, I’ll see if I can answer a bunch of them.

    Original / Current Hardware

    – I originally paid a bit over $500 for the case a while back with a ‘Best Offer’ on eBay and then another $100 or so to get it shipped here. The unit originally came with the case, 6 120mm fans, 9 AC-SAB-5PMBP SATA backplane expanders, a SuperMicro X9SCM-F mobo, an i3-2100 CPU, 8 GB RAM, 3 Syba PCI Express controller cards and a pair of crazy ‘mini redundant’ Zippy MRG-5800V4V power supplies (there are 2 PSUs crammed into a normal PSU enclosure, so there are really 4 PSUs).

    – I pulled pretty much everything other than the backplane expanders and the power supplies. I planned to replace the power supplies originally, but found they did not have the normal PSU wiring and had some proprietary wiring needed to connect to all the fans and backplanes. Still have the proper connections to run those and the motherboard, so I just left them. The original fans were _insanely_ loud (42dBA each) and moved a crazy 96 CFM each. I pulled those and replaced them with some Apevia AF512S fans that only hit 24 dBA and move close to 60 CFM. Even with the drop in CFM, I’ve not had any heat issues. I swapped in a Gigabyte Z370P D3 Mobo ($90), 240 GB NVMe for the OS ($60), 16GB DDR4 (Had Spare), i7-8700K CPU ($360), and a pair of RocketRaid 2740 RAID Cards ($150 Each Used) to connect up all the expanders. I had to also pick up SATA Male to Male adapters ($30) to connect the Mini SAS to SATA cables from the 2740’s to long SATA leads going to each expander. Probably all in for the server without the drives I’m at about $1000. Not counting the full cost of the case, because I resold most of the original parts I pulled out of it and got back a good chunk of that initial $500.

    – For the drives, I didn’t just pull the trigger on all of them at once. Over the last 18 months or so, every time I’d see some of the WD/Seagate External 8 TB drives go on sale for ~$130 I’d pick up a handful and shuck them. Ended up with pretty much all WD80EMAZ/WD80EFAX’s with some Seagate ST8000DM004’s thrown in as well. The Seagates are SMR, which is fine for my current use case, but could be a problem if I decide to do something else with the system. As I see WD external 10TB’s go on sale, I’ve started picking up one or two of them at a time to swap them in for the SMR drives and will then resell the ST8000’s on eBay to get back a good portion of the 10TB cost.

    Power Usage

    – Even when scanning all drives at once, it peaks around 350-400 watts; idle operation is a bit under 200w. If I had to average power draw, it’s probably about 300w 24×7. At my power rate ($0.12/kWh), I’m probably using about $26/month worth of power. But I have solar panels on my roof covering more than that, so I’m not really sweating it. 🙂
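The monthly figure is easy to verify; a quick sketch using the numbers given (~300 W average draw, $0.12/kWh, 30-day month):

```python
# Verify the commenter's power-cost estimate.
avg_watts = 300
rate_per_kwh = 0.12
hours_per_month = 24 * 30

kwh_per_month = avg_watts / 1000 * hours_per_month   # 216 kWh
cost_per_month = kwh_per_month * rate_per_kwh        # ~$25.92

print(f"{kwh_per_month:.0f} kWh/month -> ${cost_per_month:.2f}/month")
```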

    Drive Labeling/Identification

    – I wrestled with this a bit, but you may notice some tape with an arrow and ‘A1’ on that front hard drive row rail. I did the same with B1/C1 for the 2nd and 3rd rails and then simply named the drives in each row A01-A15, B01-B15, C01-C15 based on the slot I put them in. This way I can easily tell which drive(s) are missing / acting up when I need to.
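The row/slot naming scheme described here is simple to generate programmatically; a sketch (row letters and 15-slot rows taken from the comment):

```python
# Generate the slot labels A01-A15, B01-B15, C01-C15 described above.
rows = "ABC"         # one letter per 15-bay row
slots_per_row = 15

labels = [f"{row}{slot:02d}" for row in rows
          for slot in range(1, slots_per_row + 1)]

print(len(labels))            # 45 labels, one per bay
print(labels[0], labels[-1])  # A01 C15
```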


    – After replacing the fans, noise level isn’t that bad. I can actually sit in that room and watch TV without it bothering me. Before I changed out the fans though, forget it. As for heat, not really a problem so far. Raises the ambient temp in the room a few degrees. Even with the case buttoned up and everything running full tilt I’ve not seen any drives hit above 40C, and they’re usually sitting in the low 30C range.

    What do I need all that Storage for?

    – If I’m honest with myself, I don’t really. It started off to see if I could consolidate a few systems’ worth of drives I was using for ‘Proof of Capacity’ crypto mining of something called [‘BurstCoin’](https://www.burst-coin.org/) into a single machine, and then it just morphed into this beast. BurstCoin mining is actually pretty low impact on the hardware. Basically you fill up the drive with ‘mining solutions’ once, and then every time a new block is up, you just have to do a full scan of the drive looking for the fastest solutions. With my setup, scanning takes about 40 seconds to scan _all_ the drives in parallel, and with BurstCoin the blocks happen on average once every 4 minutes. So about 80% of the time everything is sitting idle, waiting for the next block. I’ll keep doing that for now, but the initial (and ultimate) plan is to use the machine for [IPFS](https://github.com/ipfs/ipfs) / [Filecoin](https://filecoin.io/).

    – Right now the backplane expanders are the primary bottleneck, limiting me to about 50MB/s of read speed on each drive. I can’t complain though because it reads them all in parallel at a whopping 2000 MB/s, and is scanning 340TB in 40 seconds. I _could_ probably see if I could find some SATA III backplane expanders and get more speed, but I really think that’s even more overkill than I’ve gone already. 🙂
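Those throughput numbers are internally consistent once you account for how Proof-of-Capacity mining reads plots: each scan reads only one "scoop" of every plot, so "scanning 340 TB" means actually reading roughly 83 GB. A rough sketch (the 1/4096 scoop fraction is the standard for Burst-style plots and is an assumption here, not stated in the comment):

```python
# Why "scanning 340 TB in 40 seconds" squares with ~2000 MB/s aggregate reads:
# Burst-style PoC mining reads one "scoop" (1/4096 of each plot) per block.
plot_tb = 340
scoop_fraction = 1 / 4096          # standard scoop count (assumption)
aggregate_mb_per_s = 2000

gb_read = plot_tb * 1000 * scoop_fraction      # ~83 GB actually read per scan
scan_s = gb_read * 1000 / aggregate_mb_per_s   # ~42 s, close to the observed 40 s

print(f"~{gb_read:.0f} GB per scan, ~{scan_s:.0f} s at {aggregate_mb_per_s} MB/s")
```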

    Are you making any money off BurstCoin?

    – I wish I could say I was making enough money for sports cars and mansions, but I’d be lying. I probably make about 1800-2000 Burst a week, which may sound like a lot, but each is only worth about $0.004. If you factor in power costs, I’m only making a ‘profit’ of a few bucks a month. As a lot of you may know, the entire cryptocurrency market is a crapshow right now, and everything is way down from past highs (BurstCoin was about $0.10 each about a year ago). But for me, as long as I’m covering my power costs, I’m good. I don’t really look at it as a money maker, but more a fun (if borderline crazy) hobby that makes me a few $’s a month that could possibly have more value in the future.
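Taking the figures at face value (~1900 Burst/week midpoint, reading "0.004 cents" as $0.004 each, which the profit claim implies, against the ~$26/month power estimate earlier in the comment), the "few bucks a month" claim checks out:

```python
# Rough profit check using the figures from the comment.
burst_per_week = 1900      # midpoint of the stated 1800-2000 Burst/week
usd_per_burst = 0.004      # reading "0.004 cents" as $0.004 (assumption)
power_usd_month = 26       # from the power-usage estimate above

revenue_month = burst_per_week * usd_per_burst * 52 / 12   # ~$33/month
profit_month = revenue_month - power_usd_month             # ~$7/month

print(f"~${revenue_month:.0f} revenue, ~${profit_month:.0f} profit per month")
```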

    – Since I’ve been pretty keen on only buying parts at the absolute lowest price I could find, or waiting till things were on sale, worst case, I dismantle everything and eBay it all. I’d probably recoup 1/3-1/2 my costs.

    RAID/Redundancy/Storage Pool/OS

    – Right now I don’t have any RAID/redundancy built in, since it’s not needed for my use case. The mining is actually more efficient, and easier to set up, when addressing every drive individually. If any drive does fail (none have yet, knock on wood), I can pull it without impacting anything else. Eventually, if I do want to do anything on the RAID/JBOD front, the RocketRaid 2740’s have a ton of capabilities I can leverage.
    – As for the OS, since I’m using this to mine Burst, and there are Windows-based miners, I just have Win10 Pro running on it at the moment. I did have to do a bit of finagling to make all the drives accessible through folder mount points on Windows since there aren’t 45 drive letters available, but everything is working fine.

    Any major issues I ran into?

    – The heat/noise isn’t the best, but I’ve minimized it enough for my use case that it’s not really an issue, and I can be in that room without it bothering me.
    – Weight – This thing is ridiculously heavy, to the point I don’t think I could move it without pulling a good portion of the drives out. Made sure I had a sturdy table underneath it.
    – There’s not a whole lot of room for the motherboard in the case, and with the power supply setup I had to use a really low-profile cooler to fit everything in there.
    – Need to clean up the cabling a bit, but there’s not a whole lot of room.

    Would I do it again?

    – Probably, because I love messing with hardware and had a blast building this thing out slowly over the last year or so. I almost certainly will never need that much storage for normal ‘home’ purposes, and I’ll likely never ‘mine’ enough BurstCoin or FileCoin to cover all the costs. But to me, it’s not about that; the journey is just as fun as the final product, and the amount of time I got to spend having fun buried in this thing more than makes up for it. 🙂

    Happy to answer any other questions folks have…

  8. Wowser. Would love one of those myself. What type of storage pool(s) are you gonna use? JBOD? Multiple ZFS arrays?

  9. Very cool OP!!

    Please think about resiliency in how you set this up. Backblaze deploys these 20 at a time with custom Reed-Solomon code to manage it. Early Storage Pods had different software and assumed **the entire pod was a failure unit**, meaning they didn’t swap dead drives but would swap an entire node when a certain number of drives had failed.

    What I’m getting at is that having just one of these, you need to really consider how you are going to protect that amount of data and restore it in case of failure.

    But it looks cool as hell and will yield you tons of usable space! Good luck and keep us all updated!!

  10. That is some zombie apocalypse level shit.

  11. Welcome to the club. I ordered a 24-bay cheapo chassis from Newegg, then found this sub later that week… returned it unopened and got me a 48-bay Chenbro off eBay for pretty much the same money. Life’s been good; I went up to half capacity PDQ and I’m glad I have ample room to grow before I need to start retiring perfectly good drives.

  12. Looks nice! This is possibly the dream OP. What’s going on on the software side?

  13. Pardon my ignorance, but are there special server motherboards that allow for that many sata connections, or do servers just work differently as far as connecting them?

    I’ve only ever worked with regular motherboards and stuff…so I don’t really know about this stuff…lol

  14. Label each drive after every datahoarder that caused you to go this far.

  15. Looks like a good use of lots of hard drives.

    Lots of people have old hard drives which could be included in arrays like this.

  16. This will likely be me in a few years when my FreeNAS is full.

  17. > I got off ebay…

    I keep looking for these using every possible keyword I can think of on eBay without any success. I need to find a good solution to replace my 2x full SC846 chassis.

  18. So how do these work as far as backup, heat, and performance?

    Which RAID do you run on it?

    I don’t see any StoragePod on ebay (?)

  19. What’s the power draw on something like that during usage?

  20. Once you get it running solid, I’d love to know the power usage and loudness numbers.

  21. What is your usecase for that? That is an awful lot of storage.

    edit: I asked almost instinctively. I know, I know, he’s storing ISOs. Disregard.

  22. I purposely limit myself (aside from my wallet) to not let it get this far or further.

    But it’s a thing of beauty: symmetry with a little asymmetry.

  23. A used StoragePod (I think 3.0 or 4.5) I got off eBay, replaced the bulk of the hardware in, and then filled with 40 x 8 TB & 5 x 10 TB…

    This is certainly more than I planned on. But you guys are just very convincing. 🙂

    EDIT: Responded to a bunch of questions throughout the thread here: https://www.reddit.com/r/DataHoarder/comments/avmeb8/i_blame_each_and_every_one_of_you/ehgt5qs/