cloudml | CloudML : Transparent deployment of cloud applications | Continuous Deployment library

by SINTEF-9012 | Java | Version: root-2.0-rc0 | License: LGPL-3.0

kandi X-RAY | cloudml Summary

cloudml is a Java library typically used in DevOps and Continuous Deployment applications. cloudml has no reported bugs or vulnerabilities, has a build file available, carries a Weak Copyleft license, and has low support. You can download it from GitHub.

Transparent provisioning of cloud resources and deployment of cloud applications. For more details on how to use CloudML, please have a look at our Wiki page. License: Licensed under the GNU LESSER GENERAL PUBLIC LICENSE.

            kandi-support Support

              cloudml has a low active ecosystem.
              It has 28 star(s) with 8 fork(s). There are 17 watchers for this library.
              It had no major release in the last 12 months.
              There are 12 open issues and 39 have been closed. On average issues are closed in 43 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of cloudml is root-2.0-rc0

            kandi-Quality Quality

              cloudml has 0 bugs and 0 code smells.

            kandi-Security Security

              cloudml has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              cloudml code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              cloudml is licensed under the LGPL-3.0 License. This license is Weak Copyleft.
              Weak Copyleft licenses have some restrictions, but you can use them in commercial projects.

            kandi-Reuse Reuse

              cloudml releases are available to install and integrate.
              Build file is available. You can build the component from source.
              cloudml saves you 15909 person hours of effort in developing the same functionality from scratch.
              It has 31688 lines of code, 2670 functions and 427 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed cloudml and discovered the below as its top functions. This is intended to give you an instant insight into cloudml implemented functionality, and help decide if they suit your requirements.
            • Create a new instance
            • Create an OSVHardDisk
            • Create role list
            • Create configuration set
            • Execute the switch
            • Set a property on an object
            • Convert a value to an object
            • Create a VM instance
            • Find a product offer
            • Adds a security group
            • Entry point for a daemon
            • Gets the environment ids of an environment
            • Fire a CloudM command
            • Opens a DB instance
            • Create an image of the specified VM instance
            • Handle a CloudM command
            • Execute the query
            • Execute the crossref expression
            • Executes the crossref query
            • Handle a mouse released event
            • Create a runtime instance from a VM instance
            • Execute the VM
            • Create a VM instance
            • Create an environment
            • Handle the deployment
            • Execute remove

            cloudml Key Features

            No Key Features are available at this moment for cloudml.

            cloudml Examples and Code Snippets

            No Code Snippets are available at this moment for cloudml.

            Community Discussions

            QUESTION

            After training in AI Platform, where can I find model.bst or other model file?
            Asked 2021-May-28 at 05:48

            I trained a XGBoost model using AI Platform as here.

            Now I have the choice in the Console to download the model, as follows (but not Deploy it, since "Only models trained with built-in algorithms can be deployed from this page"). So, I click to download.

            However, in the bucket the only file I see is a tar, as follows.

            That tar (directory tree follows) holds only some training code, and not a model.bst, model.pkl, or model.joblib, or other such model file.

            Where do I find model.bst or the like, which I can deploy?

            EDIT:

Following the answer below, we see that the "Download model" button is misleading, as it sends us to the job directory, not the output directory (which is set arbitrarily in the code); the model is at census_data_20210527_215945/model.bst

            ...

            ANSWER

            Answered 2021-May-28 at 05:48

Only built-in algorithms automatically store the model in Google Cloud Storage.

            In your case, you have a custom training application. You have to take care of saving the model on your own.

Referring to your example, this is implemented as listed here.

            The model is uploaded to Google Cloud Storage using the cloud storage client.
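As a rough sketch of what that looks like in a custom training application (the bucket name, path convention, and helper names here are illustrative, not taken from the original code):

```python
# Sketch: persisting a model from a custom training application to GCS.
# The "<job name>_<timestamp>/model.bst" convention mirrors the census
# example in the question; bucket and file names are assumptions.

def model_destination(job_name: str, timestamp: str) -> str:
    """Object path the model ends up under, relative to the bucket root,
    e.g. census_data_20210527_215945/model.bst."""
    return f"{job_name}_{timestamp}/model.bst"

def upload_model(bucket_name: str, local_path: str, dest_path: str) -> None:
    """Upload a locally saved model file (e.g. model.bst) to Cloud Storage.
    Requires: pip install google-cloud-storage"""
    from google.cloud import storage  # lazy import so the sketch runs without GCP
    client = storage.Client()
    client.bucket(bucket_name).blob(dest_path).upload_from_filename(local_path)
```

After the trainer saves the model locally (e.g. `booster.save_model("model.bst")`), calling `upload_model("my-bucket", "model.bst", model_destination("census_data", "20210527_215945"))` would place the file where the question eventually found it.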

            Source https://stackoverflow.com/questions/67727022

            QUESTION

            Could not load dynamic library libcuda.so.1 error on Google AI Platform with custom container
            Asked 2021-Mar-11 at 01:46

            I'm trying to launch a training job on Google AI Platform with a custom container. As I want to use GPUs for the training, the base image I've used for my container is:

            ...

            ANSWER

            Answered 2021-Mar-11 at 01:05

            The suggested way to build the most reliable container is to use the officially maintained 'Deep Learning Containers'. I would suggest pulling 'gcr.io/deeplearning-platform-release/tf2-gpu.2-4'. This should already have CUDA, CUDNN, GPU Drivers, and TF 2.4 installed & tested. You'll just need to add your code into it.
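A minimal Dockerfile built on that image might look like the following (the trainer package layout and entry point are assumptions):

```dockerfile
# Sketch: custom training container on top of the maintained image,
# which ships CUDA, cuDNN, GPU drivers, and TF 2.4.
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-4

WORKDIR /app
COPY trainer/ /app/trainer/                       # assumption: your training package
RUN pip install --no-cache-dir -r /app/trainer/requirements.txt

ENTRYPOINT ["python", "-m", "trainer.task"]       # assumption: your entry module
```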

            Source https://stackoverflow.com/questions/66550195

            QUESTION

            GCP AI Platform: Error when creating a custom predictor model version ( trained model Pytorch model + torchvision.transform)
            Asked 2021-Jan-30 at 17:52

I am currently trying to deploy a custom model to AI Platform by following https://cloud.google.com/ai-platform/prediction/docs/deploying-models#gcloud_1. The model is based on a combination of a pre-trained PyTorch model and 'torchvision.transform'. Currently, I keep getting the error below, which happens to be related to the 500MB constraint on custom prediction.

            ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to experience errors, please contact support.

            Setup.py

            ...

            ANSWER

            Answered 2021-Jan-30 at 17:52

Got this fixed by a combination of a few things. I stuck to the 4 GB CPU MLS1 machine and a custom predictor routine (<500 MB).

• Install the libraries using the setup.py parameters, but instead of passing just the package name and its version, add the correct torch wheel (ideally <100 MB).
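A sketch of what such a setup.py can look like; the wheel URL, package name, and versions are assumptions, chosen to illustrate pinning a small CPU-only wheel:

```python
# Sketch of a setup.py for a custom prediction routine that pins a small
# CPU-only torch wheel instead of the full PyPI package.
from setuptools import find_packages

TORCH_WHEEL = (
    "https://download.pytorch.org/whl/cpu/"
    "torch-1.7.1%2Bcpu-cp37-cp37m-linux_x86_64.whl"  # assumption: match your Python/ABI
)

REQUIRED_PACKAGES = [
    f"torch @ {TORCH_WHEEL}",     # direct-URL requirement instead of "torch==1.7.1"
    "torchvision==0.8.2",
]

SETUP_KWARGS = dict(
    name="my_custom_predictor",   # assumption
    version="0.1",
    packages=find_packages(),
    install_requires=REQUIRED_PACKAGES,
)
# In the real setup.py you would end with:
#   from setuptools import setup
#   setup(**SETUP_KWARGS)
```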

            Source https://stackoverflow.com/questions/65795374

            QUESTION

            Triggering a training task on cloud ml when file arrives to cloud storage
            Asked 2020-Jun-16 at 12:44

I am trying to build an app where the user is able to upload a file to cloud storage. This would then trigger a model training process (and predicting later on). Initially I thought I could do this with Cloud Functions/Pub/Sub and cloudml, but it seems that Cloud Functions are not able to run gsutil commands, which are needed for cloudml.

Is my only option to enable Cloud Composer, attach GPUs to a Kubernetes node, and create a Cloud Function that triggers a DAG to boot up a pod on the GPU node and mount the bucket with the data? It seems a bit excessive, but I can't think of another way currently.

            ...

            ANSWER

            Answered 2020-Jun-16 at 12:44

You're correct. As of now, there is no way to execute a gsutil command from a Google Cloud Function:

            Cloud Functions can be written in Node.js, Python, Go, and Java, and are executed in language-specific runtimes.

I really like your second approach with triggering the DAG. Another idea that comes to my mind is to interact with GCP Virtual Machines within Cloud Composer through the Python operator, using the Compute Engine Python API. You can find more information on automating infrastructure, and a deep technical dive into the core features of Cloud Composer, here.
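Since Cloud Functions can run Python, one further option worth sketching is to submit the training job through the AI Platform Training REST API instead of gsutil. In the sketch below, the project ID, trainer package URI, module name, region, and versions are all placeholders:

```python
# Sketch: a GCS-triggered Cloud Function that submits an AI Platform training
# job via the projects.jobs.create REST method rather than shelling out to gsutil.
import time

PROJECT_ID = "my-project"                                        # assumption
TRAINER_PACKAGE = "gs://my-bucket/packages/trainer-0.1.tar.gz"   # assumption

def build_training_request(event: dict) -> dict:
    """Build the request body for projects.jobs.create from the upload event."""
    return {
        "jobId": f"train_{int(time.time())}",
        "trainingInput": {
            "scaleTier": "BASIC_GPU",
            "packageUris": [TRAINER_PACKAGE],
            "pythonModule": "trainer.task",
            "region": "us-central1",
            "runtimeVersion": "2.1",
            "pythonVersion": "3.7",
            # Pass the uploaded file to the trainer as an argument.
            "args": [f"--input-file=gs://{event['bucket']}/{event['name']}"],
        },
    }

def on_upload(event, context):
    """Entry point for a google.storage.object.finalize trigger.
    Requires: pip install google-api-python-client"""
    from googleapiclient import discovery  # lazy import: not needed for the sketch
    ml = discovery.build("ml", "v1")
    ml.projects().jobs().create(
        parent=f"projects/{PROJECT_ID}",
        body=build_training_request(event),
    ).execute()
```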

Another solution you can think of is Kubeflow, which aims to make running ML workloads on Kubernetes simple. Kubeflow adds some resources to your cluster to assist with a variety of tasks, including training and serving models and running Jupyter Notebooks. Please have a look at the Codelabs tutorial.

            I hope you find the above pieces of information useful.

            Source https://stackoverflow.com/questions/62392971

            QUESTION

            Submit a Keras training job to Google cloud
            Asked 2020-Jan-25 at 22:20

            I am trying to follow this tutorial: https://medium.com/@natu.neeraj/training-a-keras-model-on-google-cloud-ml-cb831341c196

            to upload and train a Keras model on Google Cloud Platform, but I can't get it to work.

            Right now I have downloaded the package from GitHub, and I have created a cloud environment with AI-Platform and a bucket for storage.

            I am uploading the files (with the suggested folder structure) to my Cloud Storage bucket (basically to the root of my storage), and then trying the following command in the cloud terminal:

            ...

            ANSWER

            Answered 2020-Jan-21 at 15:40

I got it to work halfway now by not uploading the files, but instead running the cloud upload commands from my local terminal; however, there was an error while it ran, ending in "job failed".

It seems it was trying to import something from the TensorFlow backend with "from tensorflow.python.eager import context", but there was an ImportError: No module named eager.

I have tried "pip install tf-nightly", which was suggested elsewhere, but it says I don't have permission, or I am losing the connection to Cloud Shell (exactly when I try to run the command).

I have also tried making a virtual environment locally to match the one on gcloud, using Conda with Python=3.5, TensorFlow=1.14.0 and Keras=2.2.5, which should be supported for gcloud.

            The python program works fine in this environment locally, but I still get the (ImportError: No module named eager) when trying to run the job on gcloud.

I am passing the flag --python-version 3.5 when submitting the job, but when I run "python -V" in the Google Cloud Shell, it says Python=2.7. Could this be the issue? I have not found a way to update the Python version from the Cloud Shell prompt, but Google Cloud should support Python 3.5. If this is indeed the issue, any suggestions on how to upgrade the Python version on Google Cloud?

It is also possible to manually create a new job in the Google Cloud web interface. Doing this, I get a different error message: ERROR: Could not find a version that satisfies the requirement cnn_with_keras.py (from versions: none) and No matching distribution found for cnn_with_keras.py. Here cnn_with_keras.py is my Python code from the tutorial, which runs fine locally.

            Really don't know what to do next. Any suggestions or tips would be very helpful!
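For reference, a submission carrying the version flags discussed above might look like this (the job name, bucket, and paths are placeholders):

```shell
# Illustrative gcloud submission; all names and paths are placeholders.
gcloud ai-platform jobs submit training keras_job_001 \
  --staging-bucket gs://my-bucket \
  --package-path ./trainer \
  --module-name trainer.cnn_with_keras \
  --region us-central1 \
  --runtime-version 1.14 \
  --python-version 3.5
```

Note that --python-version applies to the training job's runtime, independently of the Python version in Cloud Shell, and that --module-name expects a module path (trainer.cnn_with_keras), not a file name, which is likely why cnn_with_keras.py was treated as a pip requirement in the web interface.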

            Source https://stackoverflow.com/questions/59840427

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install cloudml

            You can download it from GitHub.
You can use cloudml like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the cloudml component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
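For example, assuming the checkout contains a Maven build file (and that Git and Maven are installed), a source build along these lines should produce the jars to put on your classpath:

```shell
# Clone and build cloudml from source, installing the artifacts
# into the local Maven repository.
git clone https://github.com/SINTEF-9012/cloudml.git
cd cloudml
mvn clean install -DskipTests
```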

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for and ask questions on the community page, Stack Overflow.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/SINTEF-9012/cloudml.git

          • CLI

            gh repo clone SINTEF-9012/cloudml

          • sshUrl

            git@github.com:SINTEF-9012/cloudml.git
