cloud-pipeline | Cloud-agnostic genomics analysis and scientific computation | Continuous Deployment library

 by epam | Java | Version: Current | License: Apache-2.0

kandi X-RAY | cloud-pipeline Summary

cloud-pipeline is a Java library typically used in DevOps, Continuous Deployment, and Docker applications. cloud-pipeline has no reported bugs or vulnerabilities, has a build file available, has a permissive license, and has low support. You can download it from GitHub.

The Cloud Pipeline solution wraps AWS, GCP, and Azure compute and storage resources into a single service, providing an easy and scalable approach to accomplishing a wide range of scientific tasks. Cloud Pipeline provides a web-based GUI and also a CLI, which exposes most of the GUI features. It supports Amazon Web Services, Google Cloud Platform, and Microsoft Azure as cloud providers for running computations and storing data.

            kandi-support Support

              cloud-pipeline has a low active ecosystem.
              It has 124 star(s) with 58 fork(s). There are 19 watchers for this library.
              It had no major release in the last 6 months.
              There are 670 open issues and 645 have been closed. On average, issues are closed in 309 days. There are 141 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of cloud-pipeline is current.

            kandi-Quality Quality

              cloud-pipeline has 0 bugs and 0 code smells.

            kandi-Security Security

              cloud-pipeline has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              cloud-pipeline code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              cloud-pipeline is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              cloud-pipeline releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions are available. Examples and code snippets are not available.
              It has 333822 lines of code, 21681 functions and 5021 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed cloud-pipeline and discovered the below as its top functions. This is intended to give you an instant insight into cloud-pipeline implemented functionality, and help decide if they suit your requirements.
            • Parses a single pipeline run.
            • Lists the versions of a bucket.
            • Associates a filter field.
            • Merges parameters from runVO into the default configuration.
            • Commits a container.
            • Determines whether a given permission is granted within the given set of permissions.
            • Reads a list of ACLs from a list of object IDs.
            • Matches the common system parameters for the given run.
            • Builds an email message.
            • Checks the status of a tool.

            cloud-pipeline Key Features

            No Key Features are available at this moment for cloud-pipeline.

            cloud-pipeline Examples and Code Snippets

            No Code Snippets are available at this moment for cloud-pipeline.

            Community Discussions

            QUESTION

            How to properly extract endpoint id from gcp_resources of a Vertex AI pipeline on GCP?
            Asked 2022-Feb-14 at 21:24

            I am using a GCP Vertex AI pipeline (KFP) with google-cloud-aiplatform==1.10.0, kfp==1.8.11, and google-cloud-pipeline-components==0.2.6. In a component I am getting a gcp_resources value (see the documentation):

            ...

            ANSWER

            Answered 2022-Feb-14 at 21:24

            In this case that is the best way to extract the information, but I recommend using the yarl library for parsing complex URIs.

            You can see this example:
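            A minimal sketch of that approach with yarl (the gcp_resources JSON layout and the resourceUri format below are assumptions, not the linked example):

            import json
            from yarl import URL

            def extract_endpoint_id(gcp_resources: str) -> str:
                # assumption: gcp_resources is the JSON-serialized list of resources
                # emitted by the endpoint-creation component
                resource = json.loads(gcp_resources)["resources"][0]
                # assumed layout: .../locations/<region>/endpoints/<ENDPOINT_ID>
                return URL(resource["resourceUri"]).name  # last path segment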

            Source https://stackoverflow.com/questions/71101070

            QUESTION

            ModelUploadOp step failing with custom prediction container
            Asked 2022-Feb-07 at 13:09

            I am currently trying to deploy a Vertex pipeline to achieve the following:

            1. Train a custom model (from a custom training Python package) and dump model artifacts (the trained model and the data preprocessor that will be used at prediction time). This step is working fine, as I can see new resources being created in the storage bucket.

            2. Create a model resource via ModelUploadOp. This step fails for some reason when specifying serving_container_environment_variables and serving_container_ports, with the error message in the errors section below. This is somewhat surprising, as they are both needed by the prediction container, and environment variables are passed as a dict as specified in the documentation.
              This step works just fine using gcloud commands:

            ...

            ANSWER

            Answered 2022-Feb-04 at 09:10

            After some time researching the problem, I stumbled upon this GitHub issue. The problem originated from a mismatch between the google_cloud_pipeline_components and kubernetes_api docs. In this case, serving_container_environment_variables is typed as an Optional[dict[str, str]] whereas it should have been typed as an Optional[list[dict[str, str]]]. A similar mismatch can be found for the serving_container_ports argument as well. Passing arguments following the Kubernetes documentation did the trick:
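            A minimal sketch of the argument shape that worked, assuming google_cloud_pipeline_components 0.2.x; the project, bucket, and image values are placeholders:

            from google_cloud_pipeline_components import aiplatform as gcc_aip

            model_upload_op = gcc_aip.ModelUploadOp(
                project="my-project",                                # placeholder
                display_name="my-model",                             # placeholder
                artifact_uri="gs://my-bucket/model",                 # placeholder
                serving_container_image_uri="gcr.io/my-project/predictor:latest",  # placeholder
                # Kubernetes-style EnvVar entries: a list of name/value dicts, not a flat dict
                serving_container_environment_variables=[
                    {"name": "MODEL_DIR", "value": "gs://my-bucket/model"},
                ],
                # Kubernetes-style ContainerPort entries: a list of dicts
                serving_container_ports=[{"containerPort": 8080}],
            )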

            Source https://stackoverflow.com/questions/70968460

            QUESTION

            Vertex AI Model Batch prediction, issue with referencing existing model and input file on Cloud Storage
            Asked 2021-Dec-21 at 14:35

            I'm struggling to correctly set up a Vertex AI pipeline which does the following:

            1. Read data from an API, store it to GCS, and use it as input for batch prediction.
            2. Get an existing model (video classification on Vertex AI).
            3. Create a batch prediction job with the input from point 1.
              As will be seen, I don't have much experience with Vertex Pipelines/Kubeflow, so I'm asking for help/advice; hopefully it's just some beginner mistake. This is the gist of the code I'm using as the pipeline:
            ...

            ANSWER

            Answered 2021-Dec-21 at 14:35

            I'm glad you solved most of your main issues and found a workaround for model declaration.

            For your input.output observation on gcs_source_uris, the reason behind it is the way the function/class returns the value. If you dig inside the classes/methods of google_cloud_pipeline_components, you will find that it implements a structure that allows you to use .outputs on the returned value of the called function.

            If you go to the implementation of one of the components of the pipeline, you will find that it returns an output array from the convert_method_to_component function. So, in order to have that implemented in your custom class/function, your function should return a value that can be referenced as an attribute. Below is a basic implementation of it.
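            A minimal sketch of that pattern with a KFP lightweight component (the names and the GCS path are illustrative, not taken from the question):

            from typing import NamedTuple
            from kfp.v2 import dsl

            @dsl.component
            def prepare_input() -> NamedTuple("Outputs", [("gcs_source_uris", str)]):
                # placeholder body: would normally pull data from the API and write it to GCS
                from collections import namedtuple
                outputs = namedtuple("Outputs", ["gcs_source_uris"])
                return outputs("gs://my-bucket/batch_input.jsonl")

            @dsl.pipeline(name="batch-prediction-demo")
            def pipeline():
                prep_task = prepare_input()
                # downstream steps can now reference prep_task.outputs["gcs_source_uris"]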

            Source https://stackoverflow.com/questions/70356856

            QUESTION

            Partial credentials found in env, missing: AWS_SECRET_ACCESS_KEY using Bitbucket pipeline
            Asked 2021-Dec-15 at 13:44

            I am getting a "Partial credentials found in env" error while running the below command.

            aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name build-session --web-identity-token $BITBUCKET_STEP_OIDC_TOKEN --duration-seconds 1000

            I am using the below AWS CLI and Python versions:

            ...

            ANSWER

            Answered 2021-Dec-15 at 13:44

            Ugh... I was struggling for two days, and right after posting it on Stack Overflow I finally thought of clearing the env variables, and it worked. Somehow the AWS keys were being stored in the environment, not sure how. I just cleared them with the below command and it worked :D

            Source https://stackoverflow.com/questions/70364363

            QUESTION

            how to concatenate the OutputPathPlaceholder with a string with Kubeflow pipelines?
            Asked 2021-Nov-18 at 19:26

            I am using Kubeflow pipelines (KFP) with GCP Vertex AI pipelines. I am using kfp==1.8.5 (kfp SDK) and google-cloud-pipeline-components==0.1.7. Not sure if I can find which version of Kubeflow is used on GCP.

            I am building a component (YAML) using Python, inspired by this GitHub issue. I am defining an output like:

            ...

            ANSWER

            Answered 2021-Nov-18 at 19:26

            I didn't realize at first that ConcatPlaceholder accepts both an Artifact and a string. This is exactly what I wanted to achieve:
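            A minimal sketch, assuming a KFP v1-style component YAML loaded from Python; the component name, image, and paths are illustrative:

            from kfp.components import load_component_from_text

            write_file_op = load_component_from_text("""
            name: write-file
            outputs:
            - {name: model_dir}
            implementation:
              container:
                image: python:3.9
                command:
                - sh
                - -c
                - mkdir -p "$(dirname "$0")" && echo hello > "$0"
                # concat joins the output-path placeholder with a literal string suffix
                - concat: [{outputPath: model_dir}, /model.txt]
            """)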

            Source https://stackoverflow.com/questions/69681031

            QUESTION

            How to resolve this issue ErrImageNeverPull pod creation status kubernetes
            Asked 2020-Nov-08 at 17:01

            I am creating a pod from an image that resides on the master node. When I create a pod on the master node to be scheduled on the worker node, the pod status is ErrImageNeverPull.

            ...

            ANSWER

            Answered 2020-Nov-08 at 17:01

            When Kubernetes creates containers, it first looks at local images and then tries a registry (the Docker registry by default).

            You are getting this error because:

            • your image can't be found locally on your node;

            • you specified imagePullPolicy: Never, so it will never try to download the image from a registry.

            You have a few ways of resolving this, but all of them generally instruct you to get the image locally and tag it properly.

            To get the image onto your node you can:

            Once you get the image, tag it and specify it in the deployment.

            Source https://stackoverflow.com/questions/64729735

            Community Discussions and Code Snippets include sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install cloud-pipeline

            Cloud Pipeline prebuilt binaries are available from the GitHub Releases page.

            Support

            Detailed documentation on the Cloud Pipeline platform is available via:
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/epam/cloud-pipeline.git

          • CLI

            gh repo clone epam/cloud-pipeline

          • sshUrl

            git@github.com:epam/cloud-pipeline.git
