
How do I migrate all my archives from an encrypted Mac FS to an encrypted Linux FS without making a giant mess? (And what FS on Linux?)

**TLDR:** I’ve decided to finally be done with Mac OS and go back to Linux. 🙂 How do I move my files?

---

Problem is that all my files are on encrypted HFS+ and APFS containers/volumes, which, as far as I can tell, nothing on Linux can read directly.

And of course, barring a very convincing reason to do otherwise, I will be re-encrypting everything, presumably with LUKS, which the Mac is not interested in reading.

So how do I reliably get TBs of data from one to the other?

Last time I moved a lot of stuff it was from Google Drive to a local machine. Admittedly much less in terms of storage, though there was a high number of individual files to deal with. I had a lot of problems then that I do not want to repeat. I was not aware of rclone back then, so I used the “takeout” method to get the files, and there were many corrupted downloads because there is no way to verify anything. Also *all* my file names got severely truncated, which resulted in data loss (multiple items with the same name) and an inability to find things. Of course metadata evaporated. It was a big cluster fuck 2 or 3 years ago and I still have some very messy, redundant corners of storage as a result.

Now that I understand things a bit better, I would very much appreciate some sort of verification step that I can *see* was done, not just “well, the task finished and there are no errors, so I guess everything was OK.” Because that is the path that leads me to make another copy just to be sure, and then I have total fucking chaos with so many copies. (Copies are not backups; they are merely confusing.)
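
What I picture is something along the lines of a checksum manifest: hash everything on the source, copy, then check the copy against the manifest so there is a concrete “all files matched” result to look at. A rough sketch with made-up paths; `sha256sum` comes with GNU coreutils on Linux, while stock macOS only ships `shasum -a 256`, which writes a compatible format:

    # On the Mac: build a manifest of every file under the source tree.
    # (Stock macOS has no sha256sum; shasum -a 256 output is compatible.)
    cd /Volumes/Archive
    find . -type f -exec shasum -a 256 {} + > ~/archive.sha256

    # On the Linux box, after the transfer: check every file against it.
    # --quiet prints only mismatches, so silence plus the echo means success.
    cd /mnt/archive
    sha256sum --check --quiet ~/archive.sha256 && echo "all files verified"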

So in service of this task, I have a new 8TB HDD that can be used to rotate files and a raspberry pi that can be set to conduct long running tasks. I am guessing some sort of network-based clone/transfer is in order but not sure how to go about it. `rsync` comes to mind first but I am not extremely confident in my mastery of it. I really don’t want to end up like I did last time, with excessive copies whose integrity I do not trust.
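
If `rsync` over the network is the move, I gather the shape of it is roughly this (the Pi address and paths are hypothetical, and this assumes a modern rsync 3.x on both ends; the rsync that ships with macOS is ancient, so it likely needs a fresh install, e.g. from Homebrew):

    # First pass: archive mode, preserving hard links and extended attributes,
    # keeping partial files so an interrupted transfer can resume.
    rsync -aHX --partial --info=progress2 \
        /Volumes/Archive/ pi@192.168.1.50:/mnt/archive/

    # Second pass with full checksums (-c): re-reads every file on both ends
    # and would retransfer anything whose content differs. An itemized dry run
    # that prints nothing is the visible "everything matched" signal.
    rsync -aHXc --dry-run --itemize-changes \
        /Volumes/Archive/ pi@192.168.1.50:/mnt/archive/

(The trailing slashes matter: `/Volumes/Archive/` copies the directory’s contents rather than the directory itself.)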

All of my storage is plain USB enclosures, just like god made them. No NAS or anything. It was never planned; it sort of just happened that way. I’ll at least be putting some of it on the pi in a permanent way so I don’t have all these HDDs just hanging off my computer as I do now. I do none of the fancy file maintenance stuff I read about here. I’m not averse to it, I just have never gotten to it on the list of things to learn about.

I also need to consider the matter of file systems. Maybe you have picked up on it, but I am not super knowledgeable, and to be honest I don’t have it in me to do a deep dive on disk architecture any time soon. Should I stick with `ext4` or go for something more modern? I see Manjaro (which I’m planning on) has some native support for `zfs`, but between that, OpenZFS and `btrfs` it’s hard for me to know if any of them would be a rational choice, and if so which.
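
From what I’ve gathered so far, `ext4` on top of LUKS is the boring, well-trodden default, while `btrfs` adds checksumming of data at rest and ZFS lives out of the kernel tree. Whatever the filesystem, my understanding is the LUKS side on the new 8 TB drive goes roughly like this (the device name is a placeholder; check `lsblk` first, since `luksFormat` wipes its target):

    # DANGER: luksFormat destroys everything on the device it is pointed at.
    sudo cryptsetup luksFormat /dev/sdX1           # create the LUKS container
    sudo cryptsetup open /dev/sdX1 archive         # unlock as /dev/mapper/archive
    sudo mkfs.ext4 -L archive /dev/mapper/archive  # filesystem inside the container
    sudo mkdir -p /mnt/archive
    sudo mount /dev/mapper/archive /mnt/archive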

Excluding regular personal documents and such, the *hoarding* component of my storage is about 15 TB, which doesn’t sound like much, but because I am mostly interested in smaller files there is a *lot* going on in there.

* Backups of personal files, things stored online, etc. Of course.

* There is some video and music; this is my lowest priority. Easily replaceable. If this process is slow I might just delete a lot of it; it would be fast enough to re-torrent.

* Photos – mostly archival/junk, but stuff I do want to keep. I really do *not* want to lose the metadata. These need deduplication, and if there were some way to work that into the move it would be great (see the sketch after this list).

* **Semi-archival**: ebooks, text files, PDFs, datasets, zips, databases, scrapes and some multimedia. Small in volume but very numerous, and again metadata is crucial if I am ever to make use of it. This is about half my storage; it has nothing by way of backup, and that makes me anxious. Difficult or impossible to replace, to the extent that I even know what I have. I have been better at collecting than cataloguing. A few different projects/interests I dip into periodically; these files spend most of their lives at rest. Once or twice a year I catch a wind and work on them, by which I mean accumulate thousands more files. This section, luckily, does not suffer from massive duplication issues. 🙂 Every 20 KB text file is unique.
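
On the photo deduplication mentioned above: `fdupes` (or its faster fork `jdupes`) finds byte-identical files by comparing sizes and then full hashes, and a report-only pass is safe to run before anything gets deleted. A sketch with a made-up path:

    # Report duplicate sets without touching anything; -r recurses into
    # subdirectories, -S prints the size of the files in each set.
    fdupes -rS /mnt/archive/photos > ~/photo-dupes.txt

    # After reviewing the report, an interactive pass: fdupes asks which
    # copy to keep in each duplicate set before deleting the rest.
    fdupes -rd /mnt/archive/photos

Note this only catches bit-identical copies; near-duplicates (resized or re-encoded photos) need a content-aware tool, which is a separate rabbit hole.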

I am a casual torrenter and a casual self-hoster but have no high demands related to either of these. Once I get a server set up properly I expect I might be torrenting more due to convenience. But I will never be a Plex god like some of the folks here.

So I am pretty overwhelmed here; not sure what to do or how to plan.

– How should I think about this?

– Is it reasonable to fold deduplication, *minor* sorting and a backup plan for my, um, archives into the system migration?

It feels like a never-ending task that will keep me stuck on this end-of-life Mac for the rest of time.

And what file system should I go to?


2 Comments

  1. A similar thing happened to me. I wanted to sell my MacBook Pro and buy a newer model. All my information was stored on two encrypted external hard drives, and I was really concerned that my new laptop would not be able to read those files.

    So I got a Linux machine with enough space and rsynced my drives over to it:

    rsync -arP /Volumes/Name_of_External_Disk username@IP_Address:/Path/To/Directory/To/Save/All/These/Data/Name_of_External_Disk/

    -a is archive

    -r is recursive (already implied by -a)

    -P shows progress and keeps partial files so a broken transfer can resume later

    On my Linux machine, I kept track of the data on my two external hard drives using two separate directories.

    All of this took a few days to complete. After I got the new MacBook, I realized that one external drive could be read but the other could not be accessed. Fortunately, I had backed up the data to my Linux machine.

  2. Why not set up a file share on either the Mac or Linux machine, connect to it with the other one, then copy the files? (Sketch below.)

    Maybe hash all the files first and verify they’re all good after the transfer. I believe rhash is compatible with both Mac and Linux and supports recursively hashing every file in a directory.
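
    A sketch of the share route, with a hypothetical Mac IP, share name and user; on the Linux side SMB mounts need the cifs-utils package:

        # Assuming the Mac is sharing a folder over SMB (System Settings ->
        # General -> Sharing -> File Sharing), mount it from Linux:
        sudo apt install cifs-utils   # provides mount.cifs on Debian-family distros
        sudo mkdir -p /mnt/macshare
        sudo mount -t cifs //192.168.1.20/My_Disk /mnt/macshare \
            -o username=macuser,iocharset=utf8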
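
    And a manifest-and-verify pass with rhash could look like this (paths made up; rhash has to be installed on both machines, e.g. via Homebrew on the Mac and apt on Linux):

        # On the source machine: recursively hash everything, relative paths.
        cd /Volumes/My_Disk
        rhash --sha256 -r . -o ~/archive.sha256

        # On the destination, after the copy completes: verify the manifest.
        cd /mnt/macshare-copy
        rhash -c ~/archive.sha256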