Please help mirror this climate data before it vanishes!

22 Comments

  1. So this is still a sticky… yet nothing has come from it?

    Can we chalk this up to a giant wasted effort and partisan paranoia, and move on?

  2. If someone created a torrent, this would be a much nicer, easier, more scalable way of storing the data in a distributed, secure manner. Let me know if someone does 🙂

  3. Is this still a thing? I’m surprised this politically driven stuff is still a sticky in this sub.

  4. Why would we waste HDD space on “Weather” lol! Dull as dishwater xD

  5. In terms of storing this stuff, is anyone trying/using de-duplication? It probably won’t give much, since a lot of the data blocks would be fairly unique, but it just might help a bit, and every bit counts with what we’re trying to achieve.

  6. In terms of storing this stuff, has anyone tried using any of the various implementations of dedupe? It might only go so far, since most data blocks will be unique, but every bit counts.
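
    A quick way to test how much dedupe would buy: chunk the files into fixed-size blocks, hash each block, and compare unique bytes to total bytes. Here is a minimal Python sketch of that estimate (the dataset path and block size are hypothetical, and fixed-size chunking understates what content-defined dedupe, as in borg or restic, would find, so treat the result as a lower bound):

    ```python
    import hashlib
    import os

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size blocks (hypothetical choice)
    DATA_DIR = "climate_data"     # hypothetical path to the downloaded dataset

    seen = set()        # hashes of blocks already counted once
    total = unique = 0  # byte counters

    for root, _dirs, files in os.walk(DATA_DIR):
        for name in files:
            with open(os.path.join(root, name), "rb") as f:
                while block := f.read(BLOCK_SIZE):
                    total += len(block)
                    digest = hashlib.sha256(block).digest()
                    if digest not in seen:
                        seen.add(digest)
                        unique += len(block)

    print(f"total: {total / 1e9:.1f} GB, unique: {unique / 1e9:.1f} GB")
    print(f"potential dedupe savings: {100 * (1 - unique / max(total, 1)):.1f}%")
    ```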

  7. Okay, so for anyone at a uni with an unlimited Google Drive account, the thing to do is open up a free Google Cloud instance at [Google Cloud Compute](https://cloud.google.com/compute/). Then ssh in, grab [rclone](https://rclone.org/), use rclone to connect to Google Drive, and use [FUSE](https://www.reddit.com/r/DataHoarder/comments/598pb2/tutorial_how_to_make_an_encrypted_acd_backup_on/) to mount it. Then cd into the folder that is mounted as the drive folder and use screen so you can run wget in the background while you do other stuff. Then start using wget to get the files. [Progress](http://nathaneaston.com/pr.png)
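
    For the download step itself, here is a minimal Python sketch of the same idea, assuming the Google Drive remote is already mounted (e.g. via `rclone mount`) and assuming a plain-text list of URLs; the paths and file names are hypothetical:

    ```python
    import os
    import urllib.request

    MOUNT_DIR = os.path.expanduser("~/gdrive/climate")  # hypothetical rclone mount point
    URL_LIST = "urls.txt"                               # hypothetical list, one URL per line

    os.makedirs(MOUNT_DIR, exist_ok=True)

    with open(URL_LIST) as f:
        for url in (line.strip() for line in f):
            if not url:
                continue
            # name the local copy after the last path segment of the URL
            dest = os.path.join(MOUNT_DIR, url.rsplit("/", 1)[-1] or "index.html")
            if os.path.exists(dest):  # skip files already grabbed, so reruns resume
                continue
            print("fetching", url)
            urllib.request.urlretrieve(url, dest)  # download straight into the mount
    ```

    Running this under screen (or nohup) keeps the transfer alive after you disconnect from the instance, which is what the comment uses screen and wget for.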

  8. Oh look, another sub turned political and the mods are actually embracing it rather than doing their jobs. Time to unsubscribe & block.

  9. What is the plan after we have grabbed the data?

    I would love to help by getting a cloud storage solution or something. But we can’t store and host this forever. (At least I can’t :))

    We really need a university or other large organization to consolidate all of these files.

  10. I don’t know much about data mirroring/storage but I’ve got about 2 TB free and basic knowledge of wget etc. If someone gives me detailed instructions I can save about that much.

  11. I’m just a lurker; I haven’t got the capacity, but I poked some people who do. http://imgur.com/RDo90ki
    Can anybody get this going?

  12. Just a question: is there a way someone could make a browser app that connects to a server and can tell whether someone has mirrored a webpage or not?

    To take it further, you could design the files to be encrypted and archived. When someone mirrors a page, the app uses p2p software and basically tells everyone that someone has mirrored it. If you ever try to visit that webpage and it is now offline, the app could check whether any peers archived it and let you download it to view. At the same time, you could use the app to see if there are any pages not yet archived.

    With an app like this, the pros would be easier archiving as a community-based effort, and it would let webpages virtually operate as if they were still up, thanks to the app and the users currently streaming them. Because they’re encrypted, they can’t be tampered with, if that would ever be an issue.

    The cons would be that users keep streaming webpages, so, as with running torrent software, it would use bandwidth, even if not a lot (people won’t be streaming off you 24/7). And if there were some conspiracy to sabotage the project, it could be trivial for someone to create fake streams and get around the encryption if they had the tools for it (which means the app would have to be fairly complex, and I’m not knowledgeable in this area).

    If you had such software, you could even have users scan the pages of currently targeted websites and globally announce what exists; that way even people who aren’t archiving could contribute by listing existing pages, and others could basically run bots to archive those pages for them until they hit a data limit.

    Don’t mind me, just taking a shit and have plenty of time to overthink things.
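
    A minimal sketch of the announce/lookup part of that idea, using the SHA-256 of a page’s URL as the key and a plain dict standing in for a real peer index or DHT (every name, address, and URL here is hypothetical):

    ```python
    import hashlib

    def page_key(url: str) -> str:
        """Address a page by the SHA-256 of its URL."""
        return hashlib.sha256(url.encode("utf-8")).hexdigest()

    # A real implementation would use a DHT or tracker; a dict stands in here.
    peer_index: dict[str, list[str]] = {}

    def announce(url: str, peer_addr: str) -> None:
        """A peer announces that it holds an archived copy of url."""
        peer_index.setdefault(page_key(url), []).append(peer_addr)

    def lookup(url: str) -> list[str]:
        """When the live page is gone, ask which peers archived it."""
        return peer_index.get(page_key(url), [])

    announce("https://example.org/some-dataset", "203.0.113.7:6881")
    print(lookup("https://example.org/some-dataset"))  # -> ['203.0.113.7:6881']
    ```

    Note that hashing or signing the archived content itself, rather than just encrypting it, is what would actually make tampering detectable.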

  13. Anyone got a rough guess at the total size they are looking for?

  14. Why can’t someone just make torrents of these? That’s what they’re there for, and it would make it incredibly easy for others to contribute.

  15. I have an unlimited Google Drive from my uni; how can I help out with this?

  16. What magnitude of data sizes are we talking about here? I’m just asking to find out whether I even have the resources to make any effort on my part worthwhile.

  17. ##I will **NOT** be cowed by the globalists **ANY LONGER**.

    ##ENOUGH **IS** ENOUGH, NO MORE SPACE FOR THE HDJ!!