I finally got the initial download of 16 TB (207.7K addons) done (it took a few weeks), and the continuous download and indexing of the workshop is now running without supervision.
I'm working on a GMAD parser (GMA is the Workshop's addon archive format) to generate a file index plus SHA-256 hashes for every addon, so I can look for dupes (probably a lot of repacked addons), but it's going to take some more time.
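For anyone curious, here is a minimal sketch in Python of what that index generation could look like, assuming the header layout used by the open-source gmad tool (ident, version byte, SteamID, timestamp, optional required-content strings, name/description/author, then a file table followed by the raw file contents). Function and field names are mine, not from my actual parser:

```python
import hashlib
import struct


def read_cstring(f):
    """Read a null-terminated string from the file object."""
    out = bytearray()
    while (b := f.read(1)) not in (b"", b"\x00"):
        out.extend(b)
    return out.decode("utf-8", errors="replace")


def index_gma(path):
    """Return a list of (name, size, sha256) for every file inside a .gma archive."""
    entries = []
    with open(path, "rb") as f:
        assert f.read(4) == b"GMAD", "not a GMA archive (maybe still LZMA-wrapped?)"
        version = f.read(1)[0]
        f.read(8)  # SteamID (unused here)
        f.read(8)  # update timestamp
        if version > 1:
            while read_cstring(f):  # required-content list, empty string terminates
                pass
        read_cstring(f)  # addon name
        read_cstring(f)  # addon description
        read_cstring(f)  # addon author
        f.read(4)        # addon version (int32, unused)

        # File table: uint32 file number (0 = end of table), name, int64 size, uint32 CRC.
        table = []
        while struct.unpack("<I", f.read(4))[0] != 0:
            name = read_cstring(f)
            size, _crc = struct.unpack("<qI", f.read(12))
            table.append((name, size))

        # File contents follow the table in the same order; hash them in chunks.
        for name, size in table:
            h = hashlib.sha256()
            remaining = size
            while remaining > 0:
                chunk = f.read(min(remaining, 1 << 20))
                if not chunk:
                    raise EOFError("truncated archive")
                h.update(chunk)
                remaining -= len(chunk)
            entries.append((name, size, h.hexdigest()))
    return entries
```

Dumping those per-file hashes into the database should make repacked addons show up as near-identical file sets even when the outer archive hashes differ.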
PART 2: [https://www.reddit.com/r/DataHoarder/comments/p0jb8m/garrys_mod_workshop_archiving_part_2/](https://www.reddit.com/r/DataHoarder/comments/p0jb8m/garrys_mod_workshop_archiving_part_2/)
If anyone wants to access it, feel free.
The file structure is (using AddonID 1337420 as an example): 1/3/3/7/1337420.<UpdateTimestamp>.gma (or .bin)
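In other words, the first four digits of the addon ID become nested directories. A tiny illustration (the helper name and root path are just placeholders):

```python
from pathlib import Path


def addon_path(root, addon_id, update_ts, ext="gma"):
    """Build the storage path, e.g. addon 1337420 -> 1/3/3/7/1337420.<ts>.gma"""
    aid = str(addon_id)
    return Path(root, *aid[:4], f"{aid}.{update_ts}.{ext}")


print(addon_path("/mnt/workshop", 1337420, 1620000000))
# /mnt/workshop/1/3/3/7/1337420.1620000000.gma
```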
The images are only stored once; they follow the same schema but without the timestamp. Some of the addon files (especially the older ones) are GMAD archives, BUT!! they're wrapped in a layer of LZMA compression (just `cat input.gma | lzma -d - > output.gma`).
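The same unwrapping can be done in a script by checking for the GMAD magic bytes first. A small sketch (assuming Python's standard lzma module handles the legacy .lzma stream, which it auto-detects):

```python
import lzma
import shutil


def ensure_raw_gma(src, dst):
    """Copy src to dst, transparently decompressing legacy LZMA-wrapped archives."""
    with open(src, "rb") as f:
        is_raw = f.read(4) == b"GMAD"
    # lzma.open auto-detects .lzma/.xz streams; equivalent to piping through `lzma -d`
    opener = open if is_raw else lzma.open
    with opener(src, "rb") as f, open(dst, "wb") as out:
        shutil.copyfileobj(f, out)
```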
I'm storing metadata about every addon + version in a MongoDB (yeah, it's not the best, but I needed something I could quickly dump the JSON from the Steam Web API into).
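Roughly, the pipeline looks like the sketch below (using the public GetPublishedFileDetails endpoint of the Steam Web API; the database/collection names and the version-keyed `_id` scheme are placeholders, not necessarily what I actually use):

```python
import requests
from pymongo import MongoClient

API = "https://api.steampowered.com/ISteamRemoteStorage/GetPublishedFileDetails/v1/"


def store_addon_details(addon_ids, mongo_uri="mongodb://localhost:27017"):
    """Fetch workshop metadata from the Steam Web API and dump the JSON into MongoDB as-is."""
    payload = {"itemcount": len(addon_ids)}
    for i, aid in enumerate(addon_ids):
        payload[f"publishedfileids[{i}]"] = aid
    resp = requests.post(API, data=payload, timeout=30).json()

    coll = MongoClient(mongo_uri)["workshop"]["addon_versions"]
    for doc in resp["response"]["publishedfiledetails"]:
        # one document per addon version, keyed by ID + update timestamp
        doc["_id"] = f'{doc["publishedfileid"]}.{doc.get("time_updated", 0)}'
        coll.replace_one({"_id": doc["_id"]}, doc, upsert=True)


store_addon_details([1337420])
```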
I'm still thinking about making a front-end so people can look up and download deleted addons.