Curious if anyone has suggestions on how to merge two large video collections

I have a fellow data-hoarder friend who was recently notified that G Suite will no longer offer unlimited storage next year, so we're looking at replicating our video collections to each other for some redundancy in case either of us suffers a total loss (we both run UNRAID and have set up a VPN). We can read and write to each other's collection, but the challenge is that there's a good bit of overlap between our collections, and we have over 120 TB to filter through. So far we've dumped our file info into a spreadsheet that we're sorting through manually, but it's bloody time-consuming. Any suggestions for apps we could run to index our collections and pick the best version of duplicate movie files (working smart rather than hard on this)?

We can run a Windows VM if needed, depending on the app recommendation, but we'd prefer something native to Linux, either in a VM or as a Docker container. What would you do in our shoes? We also thought about doing a local transfer of our collections, but we don't have the spare storage for that yet. Hoping to snag some more 16 TB WD Easystore drives on Black Friday! 🙂
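
For concreteness, here is a minimal sketch of the kind of indexing pass described above, assuming Python 3 and `ffprobe` (from FFmpeg) on the PATH; the extension list and CSV layout are placeholder assumptions to adapt:

```python
#!/usr/bin/env python3
"""Walk a collection tree and dump per-file video metadata to a CSV."""
import csv
import json
import os
import subprocess
import sys

VIDEO_EXTS = {".mkv", ".mp4", ".avi", ".m4v"}  # placeholder; adjust to your library


def probe(path):
    """Ask ffprobe for resolution, bitrate, and duration; return Nones on failure."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=width,height,bit_rate",
        "-show_entries", "format=duration",
        "-of", "json", path,
    ]
    try:
        info = json.loads(
            subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        )
        stream = (info.get("streams") or [{}])[0]
        # bit_rate is often missing at the stream level (e.g. in MKV containers)
        return (
            stream.get("width"),
            stream.get("height"),
            stream.get("bit_rate"),
            info.get("format", {}).get("duration"),
        )
    except (subprocess.CalledProcessError, json.JSONDecodeError):
        return None, None, None, None


def main(root, out_csv):
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "size_bytes", "width", "height", "bit_rate", "duration_s"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if os.path.splitext(name)[1].lower() in VIDEO_EXTS:
                    full = os.path.join(dirpath, name)
                    writer.writerow([full, os.path.getsize(full), *probe(full)])


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run once per collection (e.g. `python3 index.py /mnt/user/movies movies_a.csv`, paths hypothetical) to get two CSVs that can be compared in a spreadsheet or by a script.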


2 Comments

  1. Was there a standard naming format between the two data sets?

    If so, it seems like a fairly trivial task to identify the duplicates and then compare them on criteria like file size, resolution, or bitrate. This is the type of thing spreadsheet apps like Excel or Google Sheets were designed to handle (a rough matching sketch in that spirit follows these comments).

  2. If you can get the files under the same directory tree (with a common parent directory), on LAN or local storage, then you can use my software “[cbird](https://github.com/scrubbbbs/cbird)” to do a deep index and look for dups; it will find exact copies as well as near-dups like resolution/quality differences. Be warned that it will take a long time to index (the rate is about 600 fps for 1080p on an 8-core machine).
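
Bridging the two comments above: below is a minimal sketch that pairs likely duplicates across two CSVs from the earlier indexing script and applies the file-size/resolution/bitrate comparison suggested in comment 1. The `norm_key` title normalization and the `quality` ranking are crude assumptions that will need tuning for real naming schemes:

```python
#!/usr/bin/env python3
"""Match likely duplicates across two index CSVs and flag the better copy."""
import csv
import re
import sys
from collections import defaultdict


def norm_key(path):
    """Reduce a filename to a crude 'title year' key; assumes Title.Year-ish naming."""
    name = re.sub(r"\.[^.]+$", "", path.replace("\\", "/").rsplit("/", 1)[-1]).lower()
    year = re.search(r"(19|20)\d{2}", name)
    title = re.sub(r"[^a-z0-9]+", " ", name[: year.start()] if year else name).strip()
    return f"{title} {year.group(0)}" if year else title


def load(csv_path, owner):
    """Load one index CSV, tagging each row with which collection it came from."""
    with open(csv_path, newline="") as fh:
        return [dict(row, owner=owner) for row in csv.DictReader(fh)]


def quality(row):
    """Rank by resolution first, then bitrate, then file size."""
    to_int = lambda v: int(v) if v and v.isdigit() else 0
    return (to_int(row["height"]), to_int(row["bit_rate"]), to_int(row["size_bytes"]))


def main(csv_a, csv_b):
    groups = defaultdict(list)
    for row in load(csv_a, "A") + load(csv_b, "B"):
        groups[norm_key(row["path"])].append(row)
    for key, rows in sorted(groups.items()):
        if len(rows) > 1:  # same title in both (or duplicated within one) collection
            best = max(rows, key=quality)
            print(f"{key}: keep [{best['owner']}] {best['path']}")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Ranking height before bitrate encodes a preference for resolution over encode quality; reorder the tuple in `quality()` if that's not the trade-off you want.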