So I started the Google Takeout process because I want to move my photos from Google Photos to Immich.

Google emailed me saying the archives are ready… uh… I have to download 81 zip files, each 2GB big… 😬

Is there an easy way to download all of these files? Or do I have to click “download” 81 times and hope the downloads don’t get interrupted?

  • TrumpetX@programming.dev · 13 hours ago

    I just clicked download on 3 at a time until I was done. Could it be automated? Sure. But clicking 3 at a time is way faster than figuring it out.

    I HIGHLY recommend immich-go if you haven’t found it yet: https://github.com/simulot/immich-go

    Enjoy immich!! Great self hosting project.

  • Sidyctism II.@discuss.tchncs.de · 10 hours ago

    I recently went through the same process. Luckily only 6 zip files, and all but one (the one with my emails) were pretty much empty.

    What really pissed me off: because I hadn’t logged in to Google with my current device yet, after I let Google create the takeout it came back with “ehh, I don’t know this phone yet, better wait a week to download.”
    A week later: “ohh, seems like we only store takeouts for a week, guess you gotta do it again :(”
    Rinse, repeat, and after 3 weeks I could finally get the only thing I cared about anyway: my emails.

  • taxon@lemmy.world · 10 hours ago

    This worked pretty well for me, although constructing the CLI command took a little elbow grease. This video proved to be very helpful.

    Here are the commands I ended up using:

    Testing Upload Process

    Before performing the actual upload, test the process with this command:

    immich-go.exe -server http://[server.ip:port] -key [put your apikey here w/o brackets] upload -dry-run -google-photos takeout-*.zip
    

    Actual Upload

    Once testing is successful, perform the full upload:

    immich-go.exe -server http://[server.ip:port] -key [put your apikey here w/o brackets] upload -google-photos takeout-*.zip
    

    Remove Duplicates

    If you’ve previously uploaded photos before syncing your phone, remove duplicates with:

    immich-go.exe -server http://[server.ip:port] -key [apikey] duplicate -yes
    
    • paequ2@lemmy.today (OP) · 5 hours ago

      Remove Duplicates

      Excellent! This is my next question.

      I’ve already partially synced my Google Photos library by installing Immich on my Android phone and enabling Immich backups. But I see that the oldest photo in Google Photos is way older than what Immich has.

      So now I’m worried that when I run immich-go with the full takeout archives, I’m going to get a ton of duplicates because half of my library is already on immich.

      What’s the duplicate command? I can’t find it in the CLI…

      $ immich-go duplicate --help
      Error: unknown command "duplicate" for "immich-go"
      Run 'immich-go --help' for usage.
      unknown command "duplicate" for "immich-go"
      
      $ immich-go  --help
      An alternative to the immich-CLI command that doesn't depend on nodejs installation. It tries its best for importing google photos takeout archives.
      
      Usage:
        immich-go [command]
      
      Available Commands:
        archive     Archive various sources of photos to a file system
        completion  Generate the autocompletion script for the specified shell
        help        Help about any command
        stack       Update Immich for stacking related photos
        upload      Upload photos to an Immich server from various sources
        version     Give immich-go version
      
      Flags:
        -h, --help               help for immich-go
        -l, --log-file string    Write log messages into the file
            --log-level string   Log level (DEBUG|INFO|WARN|ERROR), default INFO (default "INFO")
            --log-type string    Log formatted  as text of JSON file (default "text")
        -v, --version            version for immich-go
      
      Use "immich-go [command] --help" for more information about a command.
      
      $ immich-go version
      immich-go version:0.27.0,  commit:64221e90df743148a8795994af51552d9b40604f, date:2025-06-29T06:22:46Z
      
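      For what it’s worth, the help output above (0.27.0) only lists archive, stack and upload, so the standalone duplicate command from the older CLI appears to be gone in that version; in the restructured CLI, Google Photos imports go through an upload subcommand, and the upload step is meant to check assets against what the server already has. The flags below are assumptions based on current docs, not the exact invocation anyone in this thread used, so confirm them with immich-go upload from-google-photos --help first.

      # Sketch only: server URL and API key are placeholders, and flag names
      # may differ between immich-go versions -- check --help before running.
      # Dry run first to see what would be uploaded or skipped:
      immich-go upload from-google-photos \
          --server=http://your-server:2283 \
          --api-key=YOUR_API_KEY \
          --dry-run \
          takeout-*.zip
      # If the dry run looks right, repeat the command without --dry-run.
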
  • walden@wetshav.ing · 22 hours ago

    Do it again, but select 50GB chunks. This will produce fewer files.

    Use immich-go to do the importing.

  • deadbeef79000@lemmy.nz · 19 hours ago (edited)

    The asinine route I took was to authorise Google to deliver the file(s) to my OneDrive and then use OneDrive sync to download them.

    There are benefits: the files are ‘backed up’ on another cloud, and the process is entirely independent of you having a browser session.

  • skoberlink@lemmy.world · 22 hours ago

    I have tried to solve this many times as I want to regularly back up my Google content - mostly the images for the same purpose you mention.

    Unfortunately there is no good solution I’ve ever come up with or found. I even looked into scripting with something like puppeteer. It requires regular confirmation of your authentication and I just haven’t found a good way to solve that since there’s no API access. It also won’t let you use any CLI tools like wget. You could probably figure out how to pull some token or cookie to give to the CLI, but you’d have to do it so often that it’s more of a pain than just manually downloading.
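
    To make the cookie idea concrete, the manual version would look roughly like this. It’s only a sketch: it assumes you’ve exported your Google session cookies to a cookies.txt file with a browser extension and copied the download link out of the browser (both are placeholders), and those cookies go stale quickly, which is exactly the pain described above.

    # Sketch only: the URL and cookie file are placeholders; Google's
    # short-lived session cookies are why this doesn't automate well.
    wget --load-cookies cookies.txt \
         --content-disposition \
         --continue \
         "https://takeout.google.com/<download-link-copied-from-the-browser>"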

    My solution currently is to run a Firefox browser in a container on my server to download them. It acts as a sort of session manager (like tmux or zellij for the command line) so that if the PC I’m using goes to sleep or something, the downloads continue. Then I just check in occasionally through the day. Plus I wanted them on the server anyway; downloading them there directly saves me from having to transfer them over afterwards.
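
    For anyone wanting to replicate the browser-in-a-container setup, a minimal sketch using the linuxserver.io Firefox image looks something like the following; the image, port, and download path are my assumptions, not necessarily what is being used above.

    # Sketch: run a web-accessible Firefox on the server and download there.
    docker run -d --name takeout-firefox \
        -p 3000:3000 \
        -v /srv/takeout-downloads:/config/Downloads \
        --shm-size=1g \
        lscr.io/linuxserver/firefox
    # then open http://server-ip:3000 and start the Takeout downloads from inside it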

    Switching to .tgz will let you make up to 50GB files which at least means fewer iterations and longer time between interactions (so I can actually do something useful in the meantime).

    I sincerely hope someone proves me wrong and has a way to do this but I’ve searched a lot. I know other people want to solve it but I’ve never seen anyone with a solution.

  • gedaliyah@lemmy.world · 20 hours ago

    Also, you will need to do some preprocessing of your files before importing to immich. Something like this to fix the metadata. I can’t remember which one I used, because there are a few out there.
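
    As a concrete example of the kind of fix those tools do, one commonly cited exiftool recipe copies dates and GPS coordinates from the Takeout JSON sidecars back into the image files. This is a sketch, not necessarily the tool referenced above: it assumes the older photo.jpg.json sidecar naming (newer takeouts may name them photo.jpg.supplemental-metadata.json), and note that immich-go reads these sidecars itself, so this step may be unnecessary with that route.

    # Sketch only: assumes sidecars named like IMG_1234.jpg.json next to each image.
    exiftool -r -d %s \
      -tagsfromfile "%d%F.json" \
      "-DateTimeOriginal<PhotoTakenTimeTimestamp" \
      "-GPSLatitude<GeoDataLatitude" \
      "-GPSLatitudeRef<GeoDataLatitude" \
      "-GPSLongitude<GeoDataLongitude" \
      "-GPSLongitudeRef<GeoDataLongitude" \
      -ext jpg -ext jpeg -ext png -ext heic \
      -overwrite_original -progress /path/to/extracted/takeout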

  • gedaliyah@lemmy.world · 20 hours ago

    Well, you won’t like it. If you have very fast internet and a managed downloader, then you may be able to get all of the files. Google seems to throttle the speeds to make large takeouts almost impossible to download in the limited time allowed.

    For this size of download, your best option is to get a subscription to a compatible service (Dropbox, etc.) to transfer the files, which will happen much more quickly than downloading them yourself. Then download the files from that service at your leisure, and cancel the service afterwards.

    It’s pretty backwards, but it’s really the best option for large takeouts (over 5 gigs or so).

  • Matt The Horwood@lemmy.horwood.cloud · 18 hours ago

    Have you looked at rclone?

    You can plug rclone into your Google Drive and then use rclone copy to download all your photos for import into Immich; the setup is also very easy.
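
    As a rough sketch of that route, assuming the takeout archives were delivered to Google Drive and you have created a drive remote called gdrive with rclone config (the remote name and paths are placeholders):

    # Sketch only: remote name, Drive folder, and local path are placeholders.
    rclone config                          # interactive: add a "drive" remote, e.g. named gdrive
    rclone copy gdrive:Takeout /srv/takeout --progress
    # then point immich-go at the downloaded archives as described above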