rclone | "rsync for cloud storage" - Google Drive and more | Cloud Storage library

 by rclone | Go | Version: v1.62.2 | License: MIT

kandi X-RAY | rclone Summary

rclone is a Go library typically used in Storage, Cloud Storage, and Amazon S3 applications. rclone has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers.

            Support

              rclone has a medium active ecosystem.
              It has 38,853 stars, 3,531 forks and 570 watchers.
              It had no major release in the last 12 months.
              There are 809 open issues and 4,077 closed issues. On average, issues are closed in 183 days. There are 100 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of rclone is v1.62.2.

            Quality

              rclone has 0 bugs and 0 code smells.

            Security

              rclone has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              rclone code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              rclone is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              rclone releases are available to install and integrate.
              Installation instructions are available. Examples and code snippets are not available.
              It has 181706 lines of code, 7336 functions and 832 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionality of libraries and avoid rework. It currently covers the most popular Java, JavaScript and Python libraries, so no verified functions are listed for rclone.

            rclone Key Features

            No Key Features are available at this moment for rclone.

            rclone Examples and Code Snippets

            No Code Snippets are available at this moment for rclone.

            Community Discussions

            QUESTION

            How to upload from Digital Ocean Storage to Google Cloud Storage, directly, programmatically, without rclone
            Asked 2022-Mar-27 at 16:18

            I want to migrate files from Digital Ocean Storage to Google Cloud Storage programmatically, without rclone.

            I know the exact location of the file in Digital Ocean Storage (DOS), and I have a signed URL for Google Cloud Storage (GCS).

            How can I modify the following code so I can copy the DOS file directly into GCS without an intermediate download to my computer?

            ...

            ANSWER

            Answered 2022-Mar-27 at 16:18

            Google's Storage Transfer Service should be an answer for this type of problem (particularly because DigitalOcean Spaces, like most object stores, is S3-compatible). But (!) I think (I'm unfamiliar with it and unsure) it can't be used for this configuration.

            There is no way to transfer files from a source to a destination without some form of intermediate transfer, but what you can do is use memory rather than file storage as the intermediary. Memory is generally more constrained than file storage, and if you wish to run multiple transfers concurrently, each will consume some amount of it.

            It's curious that you're using signed URLs. Generally, signed URLs are provided by a 3rd party to limit access to 3rd-party buckets. If you own the destination bucket, it will be easier to use Google Cloud Storage buckets directly from one of Google's client libraries, such as the Python Client Library.

            The Python examples include uploading from file and from memory. It will likely be best to stream the files into Cloud Storage if you'd prefer not to create intermediate files. Here's a Python example.
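            As an illustration of that suggestion (not the example linked above), here is a minimal, hedged Python sketch that copies an object from DigitalOcean Spaces into GCS using memory as the intermediary; the endpoint, bucket and object names are placeholders and credentials are assumed to be configured:

# Hedged sketch: copy an object from DigitalOcean Spaces (S3-compatible) into
# Google Cloud Storage using memory as the intermediary, never a local file.
# Endpoint, bucket and key names are placeholders; credentials are assumed to
# be configured for both boto3 and google-cloud-storage.
import boto3
from google.cloud import storage

spaces = boto3.client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
)

# Read the object into memory (as the answer notes, each concurrent transfer
# consumes memory proportional to the object size).
data = spaces.get_object(Bucket="my-space", Key="path/to/file.bin")["Body"].read()

gcs = storage.Client()
gcs.bucket("my-gcs-bucket").blob("path/to/file.bin").upload_from_string(data)

            For very large objects, a chunked or streaming upload would be preferable to holding the whole object in memory, which is exactly the trade-off the answer describes.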

            Source https://stackoverflow.com/questions/71637044

            QUESTION

            SFTP How to List Large # of Files
            Asked 2022-Feb-01 at 06:36

            I am currently working on loading files from SFTP into a GCS bucket. I can do it for a limited number of files in any given SFTP directory by getting the list of files and iterating over their absolute paths. However, if the directory has too many files (or files within another folder), I am not able to do a simple ls and get the list of files to download from SFTP. The following is the working code to get the list of files in any given directory recursively from SFTP:

            ...

            ANSWER

            Answered 2022-Jan-31 at 17:00

            You can get a file list quickly by executing the find(1) command over ssh:
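            The original command isn't preserved above; as an illustration, here is a hedged Python sketch that runs find over SSH with paramiko (host, credentials and the remote path are placeholders, and the server must allow command execution rather than SFTP only):

# Hedged sketch: list every file under a remote directory by running find(1)
# over SSH, which is much faster than walking the tree with per-file SFTP calls.
# Host, credentials and the remote path are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftp.example.com", username="user", password="secret")

_, stdout, _ = client.exec_command("find /remote/dir -type f")
file_list = stdout.read().decode().splitlines()
client.close()

print(len(file_list), "files found")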

            Source https://stackoverflow.com/questions/70928978

            QUESTION

            Powershell/CMD is not recognizing PATH variable
            Asked 2021-Dec-24 at 06:22

            Windows 11/ Powershell 7.2.1

            I've added the following value to both the user PATH and the system PATH: C:\Program Files\rclone\rclone-v1.57.0-windows-amd64\rclone.exe

            When I try to run rclone from Powershell or cmd I get the following message:

            PS C:\Windows\System32> rclone rclone: The term 'rclone' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

            I successfully ran refreshenv and, just to be sure, restarted Windows. After running $env:path -split ";" I can see C:\Program Files\rclone\rclone-v1.57.0-windows-amd64\rclone.exe is set correctly.

            When I run rclone from within the program folder I get this notice.

            PS C:\Program Files\rclone\rclone-v1.57.0-windows-amd64> rclone rclone: The term 'rclone' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

            Suggestion [3,General]: The command rclone was not found, but does exist in the current location. PowerShell does not load commands from the current location by default. If you trust this command, instead type: ".\rclone". See "get-help about_Command_Precedence" for more details.

            After setting rclone on PATH it still isn't "seen", what am I doing wrong here?

            ...

            ANSWER

            Answered 2021-Dec-24 at 06:22

            You have to specify the directory containing rclone.exe, not the path of the executable. You should add C:\Program Files\rclone\rclone-v1.57.0-windows-amd64 to the PATH environment variable, not C:\Program Files\rclone\rclone-v1.57.0-windows-amd64\rclone.exe.
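            As a hedged illustration of the same point (hypothetical paths, shown in Python only to demonstrate the rule): PATH entries must be directories, and command resolution only works once the directory itself is listed.

# Hedged illustration with hypothetical paths: PATH entries must be directories,
# not full paths to executables.
import os
import shutil

rclone_dir = "C:\\Program Files\\rclone\\rclone-v1.57.0-windows-amd64"  # the directory, not ...\rclone.exe

# Prepend the directory to PATH for this process only.
os.environ["PATH"] = rclone_dir + os.pathsep + os.environ.get("PATH", "")

# Resolves to ...\rclone.exe once the *directory* is on PATH; prints None otherwise.
print(shutil.which("rclone"))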

            Source https://stackoverflow.com/questions/70469614

            QUESTION

            Copy Azure Files to Amazon-S3 bucket
            Asked 2021-Dec-23 at 14:39

            I would like to copy a directory (with all its subfolders, files, etc.) from Azure file storage (not Azure blob storage) to an AWS S3 bucket using PowerShell.

            So : Azure Files -> Amazon Web Services (AWS) S3

            What I tried :

            • using Rclone but rclone only takes into account blob and not file storage for the moment (see here)

            • use of azcopy but azcopy does not allow the following combination Azure Files (SAS) -> Amazon Web Services (AWS) S3 (Access Key)

            The process must not go through a local location (Virtual machine).

            Any ideas? Thanks!

            ...

            ANSWER

            Answered 2021-Dec-23 at 14:39

            I thought of an alternative that works.

            The idea is

            • I mount the Azure file storage as a disk. So it's not really "local" but rather a network file share. (here)
            • I use Rclone to copy from the local path of the mounted disk to S3 (see the sketch below).
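            A minimal, hedged sketch of that second step, assuming the share is already mounted at a local path and an rclone S3 remote named s3remote is already configured (both names are hypothetical):

# Hedged sketch: copy from a locally mounted Azure file share to S3 with rclone.
# The mount point and the "s3remote" remote name are placeholders; an S3 remote
# is assumed to be configured in rclone.conf already.
import subprocess

mounted_share = "Z:\\"                     # drive letter where the Azure file share is mounted
destination = "s3remote:my-bucket/prefix"  # rclone remote:bucket/prefix

subprocess.run(["rclone", "copy", mounted_share, destination, "-P"], check=True)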

            Source https://stackoverflow.com/questions/70449760

            QUESTION

            Automatic Syncing between two Google Drive Shared Drives
            Asked 2021-Dec-18 at 05:13

            Question: I have two Google Shared Drives (Team Drives), let's say Movies and MoviesBackup. I need to backup whatever I upload to Movies into MoviesBackup. For this, I need an unidirectional sync from Movies to MoviesBackup, once daily.

            What I have tried: I know of Rclone and have used its command-line interface to sync between two Shared Drives. If Rclone could be used from Google Apps Script, I would set a daily trigger, but I can find no way to do so.

            Any other solutions that work will be appreciated.

            ...

            ANSWER

            Answered 2021-Dec-18 at 05:13

            Although I'm not sure whether this is the direct solution to your issue, in your situation, how about the following sample script? This sample script uses a Google Apps Script library. (Ref)

            When this library is used, a source folder can be copied to a specific destination folder. And, when files in the source folder are updated, the updated files are copied to the destination folder, overwriting the existing copies.

            Usage: 1. Install library.

            Please install the library in your Google Apps Script project. The method of installing it can be seen here.

            2. Sample script.

            Please copy and paste the following script into the script editor of your Google Apps Script project and save it. This library uses the Drive API, so please enable the Drive API under Advanced Google services.

            And, please set the source and destination folder IDs to the following object.

            Source https://stackoverflow.com/questions/70358447

            QUESTION

            How to create a folder in Google Drive Sync created cloud directory?
            Asked 2021-Dec-16 at 19:33

            This question assumes you have used Google Drive Sync, or at least know what files it creates in your cloud drive.

            While using rclone to sync a local Ubuntu directory to a Google Drive (a.k.a. gdrive) location, I found that rclone wasn't able to create a directory there (error googleapi: Error 500: Internal Error, internalError; the Google Cloud Platform API console revealed that the gdrive API call drive.files.create was failing).

            By location I mean the root of the directory structure that the Google Drive Sync app creates in the cloud (e.g. Computers/laptopName/(syncedFolder1, syncedFolder2, ...)). In the current case, the gdrive sync app (famously unavailable on Linux) was running from a separate Windows machine. It was in this location that rclone wasn't able to create a dir.

            Forget rclone. Trying to manually create the folder in the web app also fails as follows.

            Working...

            Could not execute action

            Why is this happening, and how can I achieve this - making a directory in the cloud location where gdrive sync has put all my synced folders?

            ...

            ANSWER

            Answered 2021-Dec-16 at 19:33

            Basically you can't. I found an explanation here

            If I am correct in my suspicion, there are a few things you have to understand:

            1. Even though you may be able to create folders inside the Computers isolated containers, doing so will immediately create that folder not only in your cloud, but on that computer/device. Any changes to anything inside the Computers container will automatically be synced to the device/computer the container is linked to- just like any change on the device/computer side is also synced to the cloud.
            2. It is not possible to create anything at the "root" level of each container in the cloud. If that were permitted then the actual preferences set in Backup & Sync would have to be magically altered to add that folder to the preferences. Thus this is not done.

            So while folders inside the synced folders may be created, no new modifications may be made in the "root" dir

            Source https://stackoverflow.com/questions/70384555

            QUESTION

            Snakemake: Use checkpoint and function to aggregate unknown number of files using wildcards
            Asked 2021-Dec-07 at 05:19

            Before this, I checked this, snakemake's documentation, this, and this. Maybe they actually answered this question but I just didn't understand it.

            In short, I create in one rule a number of files from other files, that both conform to a wildcard format. I don't know how many of these I create, since I don't know how many I originally download.

            In all of the examples I've read so far, the output is directory("the/path"), while I have a "the/path/{id}.txt". So this, I guess, modifies how I call the checkpoints in the function itself, and the use of expand.

            The rules in question are:

            download_mv

            textgrid_to_ctm_txt

            get_MV_IDs

            merge_ctms

            The order of the rules should be:

            download_mv (creates {MV_ID}.TEX and .wav files, though not necessarily the same amount)

            textgrid_to_ctm_txt (creates matching .txt and .ctm from {MV_ID}.TEX)

            get_MV_IDs (should make a list of the .ctm files)

            merge_ctms (should concatenate the ctm files)

            kaldi_align (from the .wav and .txt directories creates one ctm file)

            analyse_align (compares the ctm file from kaldi_align to the merge_ctms output)

            upload_print_results

            I have tried making the outputs of download_mv directories and then trying to get the IDs, but I got different errors then. Now with snakemake --dryrun I get

            ...

            ANSWER

            Answered 2021-Dec-07 at 05:19

            I can see the reason why you got the error:

            You use an input function in rule merge_ctms to access the files generated by the checkpoint, but merge_ctms doesn't have a wildcard in its output file name, so snakemake doesn't know which value should be filled into MV_ID in your checkpoint.

            I'm also a bit confused about the way you use the checkpoint. Since you are not sure how many .TEX files will be downloaded (I guess), shouldn't you use the directory that stores the .TEX files as the output of the checkpoint, then use glob_wildcards to find out how many .TEX files you downloaded?

            An alternative solution I can think of is to let download_mv become your checkpoint and set its output as the directory containing the .TEX files; then, in the input function, replace the .TEX files with .ctm files to do the format conversion (see the sketch below).
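            As an illustration only, here is a hedged Snakemake sketch of that idea; the paths, shell commands, and the intermediate rule that turns each .TEX into a .ctm are hypothetical:

# Hedged Snakemake sketch: download_mv becomes a checkpoint whose output is the
# directory of downloaded .TEX files, and an input function maps them to .ctm
# files. Assumes another rule produces ctm/{mv_id}.ctm from each .TEX file.
import os

checkpoint download_mv:
    output:
        directory("downloads")
    shell:
        "bash scripts/download_mv.sh {output}"

def ctms_to_merge(wildcards):
    # Evaluated only after the checkpoint has run, so the .TEX files exist by now.
    tex_dir = checkpoints.download_mv.get(**wildcards).output[0]
    ids = glob_wildcards(os.path.join(tex_dir, "{mv_id}.TEX")).mv_id
    return expand("ctm/{mv_id}.ctm", mv_id=ids)

rule merge_ctms:
    input:
        ctms_to_merge
    output:
        "merged/all.ctm"
    shell:
        "cat {input} > {output}"

            Because merge_ctms takes its input from a function that queries the checkpoint, snakemake re-evaluates the DAG after download_mv finishes and only then expands the list of .ctm files.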

            Source https://stackoverflow.com/questions/70247422

            QUESTION

            PHP string concat with slashes and variables
            Asked 2021-Dec-04 at 18:14

            I am trying to exec an rclone command via a PHP script. The plain text version of the call looks like this:

            ...

            ANSWER

            Answered 2021-Dec-04 at 18:02

            If the command below works fine:

            rclone copy /media/storage/Events/01//999/001 events:testing-events-lowres/Events/01/999/001 --size-only --exclude /HiRes/* --include /Thumbs/* --include /Preview/* -P --checkers 64 --transfers 8 --config /home/admin/.config/rclone/rclone.conf -v --log-file=/www/html/admin/scripts/upload_status/001.json --use-json-log

            Then below is the related PHP code, assuming your variables contain the correct values. The mistakes were in the concatenation: missing . (dots) and " (quotes).

            exec("rclone copy ".$baseDir."/".$mainEventCode."/".$eventCode." events:testing-events-lowres/Events/01/".$mainEventCode."/".$eventCode." --size-only --exclude /HiRes/* --include /Thumbs/* --include /Preview/* -P --checkers 64 --transfers 8 --config /www/html/admin/scrips/rclone.conf -v --log-file=$directoryName/".$eventCode.".json --use-json-log");

            Source https://stackoverflow.com/questions/70228103

            QUESTION

            AWS log driver not picking up `tail -f` to Docker image `stdout`
            Asked 2021-Dec-03 at 19:10

            I have an Alpine Docker image I'm deploying using Fargate. This is the command it runs:

            ...

            ANSWER

            Answered 2021-Dec-03 at 19:10

            I figured out a solution with help from this answer to a similar question, though I'm still unsure why tail -f wasn't working for me.

            In order to ensure that Fargate instantiates the image with a virtual terminal, I added this line to the ContainerDefinition inside my TaskDefinition in my CloudFormation template:

            Source https://stackoverflow.com/questions/70190600

            QUESTION

            How to configure new google drive to resemble the old google backup and sync?
            Asked 2021-Nov-15 at 20:53

            I'm on OSX Big Sur, previously using Google Backup and Sync to sync files between my computer and google drive.

            I have set up Backup and Sync to sync any files in the folder /Users/doe/ODrive, which contains 16 GB of files.

            After migrating to the new google drive since backup and sync got deprecated, I see a different behaviour.

            The new google drive by default works like rclone. It creates a virtual drive under /Volumes/GoogleDrive and at the same time makes a symbolic link to /Users/doe/Google Drive for quick access.

            Here's my problem:

            1. If I choose to access any files offline, it starts downloading them to disk, taking unnecessary disk space, since I already have all the files downloaded on disk but in a different location, /Users/doe/ODrive. How do I tell Google Drive to use those files and not download anything?

            2. There's a preference setting in the new Google Drive that lets you choose your desired directory location for Google Drive. If I change that preference from the current setting /Volumes/GoogleDrive ---> /Users/doe/ODrive, will that mess up my ODrive folder and its content? I'd rather die than lose its content.

            3. What's the difference between Folders from my computer and Folders from Drive? Isn't this two-way communication like Backup and Sync was?

            ...

            ANSWER

            Answered 2021-Nov-15 at 20:53

            I did a bit of research & testing on my end and here's what I have found:

            1. If I choose to access any files offline, it starts downloading them to disk, taking unnecessary disk space, since I already have all the files downloaded on disk but in a different location, /Users/doe/ODrive. How do I tell Google Drive to use those files and not download anything?
            Findings:

            It seems like this is not possible. If you want to tell Google Drive to use those files and not download anything, the only option you have is to select the Stream Files option and then add the folder /Users/doe/ODrive in the My MacBook Pro preferences. This way, the files from your ODrive will be uploaded back to your drive instead. But there's a catch: the uploaded files will now be duplicates, because the Google Drive app will treat this as a new upload. Also, if you have Google Docs, Sheets, Slides or Forms in your ODrive, the app seems not to upload these files back and will show you an error on the app's activity screen.

            Once the folder /Users/doe/ODrive in the My MacBook Pro preferences has been successfully added and synced, you will see the ODrive folder at drive.google.com > Computers (left side) > My MacBook Pro > ODrive. At the same time, the ODrive files are backed up and synced from your Drive to your computer and will also be available for offline use.

            2. There's a preference setting in the new Google Drive that lets you choose your desired directory location for Google Drive. If I change that preference from the current setting /Volumes/GoogleDrive ---> /Users/doe/ODrive, will that mess up my ODrive folder and its content? I'd rather die than lose its content.
            Findings:

            No. The Google Drive app will show you a message to reset the folder back to the default, because it has to be an empty folder before you attempt to change and save the default directory location.

            3. What's the difference between Folders from my computer and Folders from Drive? Isn't this two-way communication like Backup and Sync was?
            Findings:

            From my observation, Folders from my computer is the section where you can see and access all of the synced folders that you've added from the Google Drive app, in the My MacBook Pro preferences. You can then view these folders and their synced files at drive.google.com > Computers (left side option) > My MacBook Pro.

            Source https://stackoverflow.com/questions/69978015

            Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install rclone

            Please see the rclone website for:
            Installation
            Documentation & configuration
            Changelog
            FAQ
            Storage providers
            Forum
            ...and more

            Support

            For questions and support, please see the rclone website and forum.



            Consider Popular Cloud Storage Libraries

            minio by minio
            rclone by rclone
            flysystem by thephpleague
            boto by boto
            Dropbox-Uploader by andreafabrizi

            Try Top Libraries by rclone

            rclone-webui-react by rclone (JavaScript)
            passwordcheck by rclone (Go)
            debughttp by rclone (Go)
            rclone-js-api by rclone (JavaScript)
            rclone-test-plugin by rclone (JavaScript)