
My Pi-based experimental Ceph storage cluster


19 Comments

  1. Here is some code for deploying ceph-rook to a Kubernetes cluster with Traefik 2.

    Do not use this as-is; it won’t work unless you have a very similar setup:


    Just chucking it out there, as it may have some snippets that are useful to some here.
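    The commenter’s snippet isn’t reproduced above, so as a rough, generic sketch (not their actual config): a rook-ceph install usually starts with the Rook operator via Helm, then a CephCluster resource. Chart and namespace names below follow the Rook documentation defaults; `cluster.yaml` is a placeholder for your own cluster spec.

    ```shell
    # Add the Rook release chart repository and install the operator
    helm repo add rook-release https://charts.rook.io/release
    helm repo update
    helm install --create-namespace --namespace rook-ceph \
      rook-ceph rook-release/rook-ceph

    # The Ceph cluster itself is declared as a CephCluster custom
    # resource; cluster.yaml here stands in for your own spec.
    kubectl apply -f cluster.yaml
    ```

    Exposing the Ceph dashboard through Traefik 2 would then be an IngressRoute pointing at the dashboard service, which is exactly the kind of setup-specific piece the commenter is warning about.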

  2. Is it possible to set this up to provide NAS-like features (Windows shares, folder permissions, etc.)?

  3. A second-hand workstation or server with lots of cores and lots of RAM running lots of VMs would be cheaper and faster than a pile of Pis.

  4. Definitely appreciate you posting this here. I’ve been considering trying to figure out a Pi file server setup to put in my basement, where it is cooler than my living room, where my desk is right at a window. Just hadn’t gone digging very far yet.

    Off to watch some videos!

  5. Annnnnnnd I want to build one now. Order up the Pis!

  6. That case on the bottom right is exactly like my external 1.5 TB WD I bought in 2009, and it’s still working.

  7. Would this work in a VM like VirtualBox? Not everything works as easily as we would like. Also, are the files split up between physical disks, or are they kept whole?

    Cool project!

  8. Cool homelab setup. If you’re interested in learning about the configuration and setup of Ceph, I might recommend using the same hardware setup to learn the same about k8s using MicroK8s. This is a pretty great little learning platform for cluster stuff… clearly unsuitable for production, but a great working model for learning and troubleshooting.

  9. Neat. Can I get some info on the tower chassis you’re using for the Pis, please?

  10. I just decommissioned my experimental Pi Ceph cluster. It sounds like mine was quite a bit more performant than yours, as I could saturate a gigabit connection with it. I used 4/8GB Pi 4s and had 6 OSDs and a total of 9 nodes. I found that Monitors and Managers really needed /var to be mounted on an SSD for performance to be reasonable. CephFS was basically a no-go: I had it working, but performance for anything metadata-heavy was just awful, even when the MDS machines were on faster x64 machines with SSDs. For RBD it was quite usable, though. I ultimately decommissioned it because stability was not great. I had OSDs going down every few days, and even though they recovered on their own about 75% of the time, the thing was constantly in recovery.

    It was fun and I got some good experience with Ceph but now I just want some stable and fast storage so I’ve moved everything over to a ZFS fileserver. Now what do I do with 9 RPi4s…

  11. I’ve wanted to try this to learn about Ceph. I do have one question, and I think I know the answer, but do the nodes have to be homogeneous, i.e. the same OS, disk size, software version, that sort of thing? Or can it all be just all over the place (within reason)? I’d figure the software needs to be either the exact same version or close enough. Or does an outdated node just restrict the feature set?

    Thanks in advance.
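    One hedged note on the version question: Ceph can report which release each running daemon is, so mixed versions are at least easy to see, and upgrades are generally done one release at a time. A quick check could look like this (output shapes will vary by cluster):

    ```shell
    # Show the Ceph release each running daemon reports; a homogeneous
    # cluster prints a single version per daemon type (mon, osd, mds...)
    ceph versions

    # Per-daemon host metadata (OS, kernel, architecture) for the OSDs,
    # useful for spotting heterogeneous nodes
    ceph osd metadata
    ```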

  12. So a colleague tried this, but they simply didn’t have enough RAM. How much do those Pis have, and have you had issues?

  13. What have you noticed due to the memory constraints of the Pi?

    Very interested, since all the literature says RAM is the bottleneck.

    Overall specs and test scenarios… like removing a node, replacing a node, replacing a disk. I say node… I mean OSD. (It is only one physical server, though… so a multi-server node setup is not on the table.)
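    The “removing an OSD” scenario mentioned above can be sketched with standard Ceph commands (the OSD id `3` below is a placeholder). The cluster rebalances after the `out` step, so the usual practice is to wait for health to return before purging:

    ```shell
    # Mark OSD 3 out so data migrates off it
    ceph osd out 3

    # Watch recovery until the cluster reports healthy again
    ceph -s

    # Stop the daemon on its host, then remove the OSD entirely
    systemctl stop ceph-osd@3
    ceph osd purge 3 --yes-i-really-mean-it
    ```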

  14. Me no like stacked or smashed-together spinning disks. Get your spread on!

  15. Are those Pi 3s?

    Isn’t the storage/networking simply too slow for this to be useful?

  16. Do you have a short explanation for my friend who doesn’t understand this?