create-deployment | Create GitHub deployments from GitHub actions | Continuous Deployment library

 by avakar | JavaScript | Version: v1.0.2 | License: MIT

kandi X-RAY | create-deployment Summary

create-deployment is a JavaScript library typically used in Devops, Continuous Deployment applications. create-deployment has no bugs, it has no vulnerabilities, it has a Permissive License and it has low support. You can download it from GitHub.

Create GitHub deployments from GitHub actions

            kandi-support Support

              create-deployment has a low active ecosystem.
              It has 10 star(s) with 3 fork(s). There are 2 watchers for this library.
              It had no major release in the last 12 months.
              There is 1 open issue and 1 has been closed. On average, issues are closed in 21 days. There are 4 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of create-deployment is v1.0.2.

            kandi-Quality Quality

              create-deployment has no bugs reported.

            kandi-Security Security

              create-deployment has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              create-deployment is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              create-deployment releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.


            create-deployment Key Features

            No Key Features are available at this moment for create-deployment.

            create-deployment Examples and Code Snippets

            No Code Snippets are available at this moment for create-deployment.

            Community Discussions

            QUESTION

            put-rest-api aws cli not updating endpoint description tags
            Asked 2021-Apr-07 at 22:43

             I am trying to update my REST API via the AWS CLI, but I don't get the results I desire. I am running the commands:

            aws apigateway put-rest-api --rest-api-id XXXXXXXXXX --mode merge --body 'file://api.yaml'

             aws apigateway create-deployment --rest-api-id XXXXXXXXXX --stage-name latest

             However, I notice that even though the endpoint was added, documentation-specific things such as tags and descriptions are not being set, so when we fetch the Swagger definition from AWS, these keys are omitted.

             I put the YAML file I am using into https://editor.swagger.io/ and found no problems there either.

            I don't get any errors when running the above commands. I don't understand why the "merge" process is not finding the swagger keys and applying them.

            ...

            ANSWER

            Answered 2021-Jan-22 at 22:03

             I figured out that not only do I need to run an update via the put-rest-api command, but I also need to publish the documentation (I did this via the AWS Console UI and it worked). I haven't found the best command to do so via the AWS CLI. Will make an edit when I do.

            EDIT

             I have learned that the AWS CLI command put-rest-api is a precursor for updating the documentation and definition of the REST API, and that the two are deployed via different commands. So I was missing the step:

             aws apigateway create-documentation-version --rest-api-id XXXXXXXXX --documentation-version test_version --stage-name dev

             As you may or may not know, you can only deploy a documentation version once, so use

             aws apigateway update-stage --stage-name dev --rest-api-id tu2ye61vyg --patch-operations "op=replace,path=/documentationVersion,value=test_version"

             to deploy an existing version to another stage.
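Taken together, the two-step flow can be sketched by assembling the CLI invocations in Python (the API ID, version name, and stage names are placeholders; each list would be run with `subprocess.run` where the AWS CLI and credentials are available):

```python
# Sketch only: publish a documentation version, then point a stage at
# it via a patch operation. Nothing here calls AWS; we just build the
# argument lists.
rest_api_id = "XXXXXXXXXX"  # placeholder

publish_docs = [
    "aws", "apigateway", "create-documentation-version",
    "--rest-api-id", rest_api_id,
    "--documentation-version", "test_version",
    "--stage-name", "dev",
]

promote_docs = [
    "aws", "apigateway", "update-stage",
    "--rest-api-id", rest_api_id,
    "--stage-name", "latest",
    "--patch-operations",
    "op=replace,path=/documentationVersion,value=test_version",
]
```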

            Source https://stackoverflow.com/questions/65814447

            QUESTION

            Google Cloud Platform API for Python and AWS Lambda Incompatibility: Cannot import name 'cygrpc'
            Asked 2020-Apr-27 at 14:30

            I am trying to use Google Cloud Platform (specifically, the Vision API) for Python with AWS Lambda. Thus, I have to create a deployment package for my dependencies. However, when I try to create this deployment package, I get several compilation errors, regardless of the version of Python (3.6 or 2.7). Considering the version 3.6, I get the issue "Cannot import name 'cygrpc'". For 2.7, I get some unknown error with the .path file. I am following the AWS Lambda Deployment Package instructions here. They recommend two options, and both do not work / result in the same issue. Is GCP just not compatible with AWS Lambda for some reason? What's the deal?

            Neither Python 3.6 nor 2.7 work for me.

            NOTE: I am posting this question here to answer it myself because it took me quite a while to find a solution, and I would like to share my solution.

            ...

            ANSWER

            Answered 2020-Mar-21 at 23:08

             TL;DR: You cannot compile the deployment package on your Mac or whatever PC you use. You have to do it using a specific OS/"setup", the same one that AWS Lambda uses to run your code. To do this, you have to use EC2.

             I will provide here an answer on how to get Google Cloud Vision working on AWS Lambda for Python 2.7. This answer is potentially extendable to other APIs and other programming languages on AWS Lambda.

             So my journey to a solution began with this initial posting on GitHub with others who have the same issue. One solution someone posted was:

            I had the same issue " cannot import name 'cygrpc' " while running the lambda. Solved it with pip install google-cloud-vision in the AMI amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 instance and exported the lib/python3.6/site-packages to aws lambda Thank you @tseaver

            This is partially correct, unless I read it wrong, but regardless it led me on the right path. You will have to use EC2. Here are the steps I took:

            1. Set up an EC2 instance by going to EC2 on Amazon. Do a quick read about AWS EC2 if you have not already. Set one up for amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2 or something along those lines (i.e. the most updated one).
             2. Get your EC2 .pem file. Go to your Terminal and cd into the folder where your .pem file is, then ssh into your instance using

              ssh -i "your-file-name-here.pem" ec2-user@ec2-ip-address-here.compute-1.amazonaws.com

            3. Create the following folders on your instance using mkdir: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.

            4. On your EC2 instance, cd into google-cloud-vision. Run the command:

              pip install google-cloud-vision -t .

             Note: If you get "bash: pip: command not found", then run sudo easy_install pip.

             5. Repeat step 4 with the following packages, while cd'ing into the respective folder: protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.

             6. Copy each folder to your computer. You can do this using the scp command. Again, in your Terminal (not your EC2 instance, and not the Terminal window you used to access your EC2 instance), run the command (below is an example for your "google-cloud-vision" folder, but repeat this with every folder):

              sudo scp -r -i your-pem-file-name.pem ec2-user@ec2-ip-address-here.compute-1.amazonaws.com:~/google-cloud-vision ~/Documents/your-local-directory/

             7. Stop your EC2 instance from the AWS console so you don't get overcharged.

             8. For your deployment package, you will need a single folder containing all your modules and your Python scripts. To begin combining all of the modules, create an empty folder titled "modules". Copy and paste all of the contents of the "google-cloud-vision" folder into the "modules" folder. Now place only the folder titled "protobuf" from the "protobuf" main folder in the "Google" folder of the "modules" folder. Also from the "protobuf" main folder, paste the protobuf .pth file and the -info folder into the "Google" folder.

             9. For each module after protobuf, copy and paste into the "modules" folder the folder titled with the module name, the .pth file, and the "-info" folder.

             10. You now have all of your modules properly combined (almost). To finish the combination, remove these two files from your "modules" folder: googleapis_common_protos-1.5.3-nspkg.pth and google_cloud_vision-0.34.0-py3.6-nspkg.pth. Copy and paste everything in the "modules" folder into your deployment package folder. Also, if you're using GCP, paste in your .json file for your credentials as well.

             11. Finally, put your Python scripts in this folder, zip the contents (not the folder), upload to S3, and paste the link in your AWS Lambda function and get going!
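The final step, zipping the contents of the folder rather than the folder itself, is the part that most often goes wrong. A minimal sketch with Python's zipfile module (folder and file names are placeholders):

```python
# Zip everything *inside* `folder` so the handler and modules sit at
# the root of the archive, which is where Lambda looks for them.
import os
import zipfile

def zip_contents(folder, archive_path):
    """Zip the contents of `folder`, keeping paths relative to it."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                # relpath strips the leading folder: entries become
                # "handler.py", not "package/handler.py"
                zf.write(full, arcname=os.path.relpath(full, folder))
```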

            If something here doesn't work as described, please forgive me and either message me or feel free to edit my answer. Hope this helps.

            Source https://stackoverflow.com/questions/52913257

            QUESTION

            How to install external modules in a Python Lambda Function created by AWS CDK?
            Asked 2020-Apr-16 at 10:27

            I'm using the Python AWS CDK in Cloud9 and I'm deploying a simple Lambda function that is supposed to send an API request to Atlassian's API when an Object is uploaded to an S3 Bucket (also created by the CDK). Here is my code for CDK Stack:

            ...

            ANSWER

            Answered 2020-Apr-02 at 23:48

             You should install the dependencies of your Lambda function locally before deploying it via CDK. CDK has no idea how to install the dependencies or which libraries should be installed.

             In your case, you should install the requests dependency and any other libraries before executing cdk deploy.

            For example,
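A minimal sketch, assuming the handler code lives in a local `lambda/` asset directory (the directory name and the package list are placeholders): vendor the dependencies next to the handler with pip's `--target` flag, then run `cdk deploy`.

```python
# Build the pip invocation that installs dependencies into the CDK
# asset directory. Nothing runs here; execute the command with
# subprocess.run(cmd, check=True) before `cdk deploy`.
import sys

def pip_install_cmd(packages, target_dir):
    """pip invocation that vendors `packages` into `target_dir`."""
    return [sys.executable, "-m", "pip", "install",
            *packages, "--target", target_dir]

cmd = pip_install_cmd(["requests"], "lambda")
```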

            Source https://stackoverflow.com/questions/58855739

            QUESTION

            Pip install Python package within AWS Lambda?
            Asked 2020-Feb-20 at 23:01

            I'm trying to pip install a package in an AWS Lambda function.

            The method recommended by Amazon is to create a zipped deployment package that includes the dependencies and python function all together (as described in AWS Lambda Deployment Package in Python). However, this results in not being able to edit the Lambda function using inline code editing within the AWS Lambda GUI.

            So instead, I would like to pip install the package within the AWS Lambda function itself. In AWS Lambda, the filesystem is read-only apart from the /tmp/ directory, so I am trying to pip install to the /tmp/ directory. The function is only called once-daily, so I don't mind about the few extra seconds required to re-pip install the package every time the function is run.

            My attempt

            ...

            ANSWER

            Answered 2020-Feb-20 at 23:01

            I solved this with a one-line adjustment to the original attempt. You just need to add /tmp/ to sys.path so that Python knows to search /tmp/ for the module. All you need to do is add the line sys.path.insert(1, '/tmp/').
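The mechanism can be demonstrated without Lambda. In this sketch a temporary directory stands in for /tmp/ and a hand-written file stands in for a pip-installed module; the key line mirrors `sys.path.insert(1, '/tmp/')` from the answer:

```python
# A directory becomes importable once it is inserted into sys.path,
# which is exactly what sys.path.insert(1, '/tmp/') does in the
# Lambda handler.
import importlib
import os
import sys
import tempfile

target = tempfile.mkdtemp()  # stands in for "/tmp/" inside Lambda
with open(os.path.join(target, "vendored_example.py"), "w") as f:
    f.write("VALUE = 42\n")  # stands in for a pip-installed module

sys.path.insert(1, target)
mod = importlib.import_module("vendored_example")
```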

            Solution

            Source https://stackoverflow.com/questions/60311148

            QUESTION

            install python package at current directory
            Asked 2020-Feb-05 at 01:43

             I am a Mac user and used to run pip install with --user, but recently, after a brew update, I found some strange things happening, maybe related.

             Whatever I try, the packages are always installed to ~/Library/Python/2.7/lib/python/site-packages.

            Here are the commands I run.

            ...

            ANSWER

            Answered 2018-Oct-08 at 05:17

             You can use the --target (-t) flag of pip install to specify a target location for installation.

            In use:

            Source https://stackoverflow.com/questions/52695451

            QUESTION

            AWS EC2 Bitbucket Pipeline is not executing the latest code deployed
            Asked 2020-Jan-20 at 15:15

             I've followed all the steps for implementing the Bitbucket pipeline in order to have continuous deployment to AWS EC2. I've used the CodeDeploy application together with all the configuration that needs to be done in AWS. I'm using EC2 with Ubuntu, and I'm trying to deploy a MEAN app.

            As per bitbucket, I've added variables under "Repository variables" including:

            • S3_BUCKET
            • DEPLOYMENT_GROUP_NAME
            • DEPLOYMENT_CONFIG
            • AWS_DEFAULT_REGION
            • AWS_ACCESS_KEY_ID
            • AWS_SECRET_ACCESS_KEY

            and also I've added three required files:

            codedeploy_deploy.py - that I've got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default

            appspec.yml -

            ...

            ANSWER

            Answered 2020-Jan-20 at 15:15

             The codedeploy_deploy.py script is no longer supported. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html

            Source https://stackoverflow.com/questions/59806048

            QUESTION

            Specify AWS CodeDeploy Target instances in AWS CodePipeline for Blue/Green deployment
            Asked 2019-Dec-27 at 05:32

            I'm trying to create a CodePipeline to deploy an application to EC2 instances using Blue/Green Deployment.

            My Deployment Group looks like this:

            ...

            ANSWER

            Answered 2019-Dec-27 at 05:32

             You didn't mention which error you get when you invoke the pipeline. Are you getting this error:

            "The deployment failed because no instances were found in your green fleet"

             Taking this assumption: since you are using manual tagging in your CodeDeploy configuration, Blue/Green deployment with manual tags is not going to work, because CodeDeploy expects to see a tagSet to find the "Green" instances, and there is no way to provide this information via CodePipeline.

             To work around this, please use the 'Copy AutoScaling' option for implementing Blue/Green deployments in CodeDeploy using CodePipeline. See step 10 here [1].

             Another workaround is to create a Lambda function that is invoked as an action in your CodePipeline. This Lambda function can trigger the CodeDeploy deployment, specifying the target instances with the value of the green AutoScalingGroup. You will then need to make describe calls at frequent intervals to the CodeDeploy API to get the status of the deployment. Once the deployment has completed, your Lambda function will need to signal back to CodePipeline based on the status of the deployment.
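As a hedged sketch of that polling step (names are illustrative, and the status source is injected so it runs without AWS; a real function would wrap the CodeDeploy get-deployment API and then signal CodePipeline):

```python
# Poll a deployment's status until it reaches a terminal state.
import time

def wait_for_deployment(get_status, poll_seconds=15, max_polls=60):
    """Poll `get_status()` until the deployment finishes."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("Succeeded", "Failed", "Stopped"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("deployment did not finish in time")

# Fake status source for illustration: succeeds on the third poll.
_statuses = iter(["Created", "InProgress", "Succeeded"])
result = wait_for_deployment(lambda: next(_statuses), poll_seconds=0)
```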

            Here is an example which walks through how to invoke an AWS lambda function in a pipeline in CodePipeline [2].

            Ref:

            [1] https://docs.aws.amazon.com/codedeploy/latest/userguide/applications-create-blue-green.html

            [2] https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html

            Source https://stackoverflow.com/questions/59494656

            QUESTION

            How do you manage multiple AWS Lambda functions in Visual Studio?
            Asked 2019-Sep-19 at 02:31

            In the AWS Lambda Visual Studio walkthrough to create a Lambda function: http://docs.aws.amazon.com/lambda/latest/dg/lambda-dotnet-create-deployment-package-toolkit.html you create a single AWS Lambda function in the Visual Studio project.

            Does that mean that you can only create one function per project? What do you do if your serverless app has many functions? Is the function to VS project ratio 1:1, or am I missing something?

            ...

            ANSWER

            Answered 2017-May-07 at 20:41

             If you use the AWS Lambda Project (.NET Core) template, you can only write one function per project. You can see that the aws-lambda-tools-defaults.json file only contains configuration for one function.

             However, if you use the AWS Serverless Application (.NET Core) template, you can manage multiple Lambda functions in one project and respond to different API calls using API Gateway. This is achieved through CloudFormation.

             Check out this AWS re:Invent video: https://www.youtube.com/watch?v=Ymn6WGCSjE4&t=24s Jump to 31:08 to see how an AWS Serverless Application with multiple Lambda functions works.

            Source https://stackoverflow.com/questions/43827396

            QUESTION

            What does the 9 mean, in -r option: zip -r9 ${OLDPWD}/package .?
            Asked 2019-Sep-18 at 17:16

            Context: AWS documentation on how to create zip files for python code with dependencies, see: https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html

             I understand that -r is the recursion flag, but I'm unclear on what the "9" in -r9 achieves.

            ...

            ANSWER

            Answered 2019-Sep-18 at 17:16

            -r9 is a combination of the -r and -9 switches.

            The switch -9 means strongest compression, on a scale from 0 to 9.

            Type zip for a list of options.
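The effect of the scale can be verified with Python's zipfile module, which exposes the same 0-9 deflate levels via its compresslevel parameter (file name and sample data are arbitrary):

```python
# Compare level 0 (no compression) with level 9 (strongest) on
# repetitive data; level 9 produces a much smaller archive.
import io
import zipfile

def zipped_size(level):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED,
                         compresslevel=level) as zf:
        zf.writestr("payload.txt", b"create-deployment " * 5000)
    return len(buf.getvalue())
```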

            Source https://stackoverflow.com/questions/57965208

            QUESTION

            git on Azure don't ask me for password, push fails
            Asked 2019-Sep-05 at 09:12

             I am learning Azure, and what I have observed is that a huge part of the official Microsoft documentation is either obsolete, doesn't correspond to reality, or just does not work as expected because of errors.

            One example is the article I study, Create deployment slots (on Azure).

            Here I am stuck with the following:

             The first question is: what the hell is the "Cloud Shell"? I didn't find such a thing in the Azure portal, so I supposed they meant the Console from the Development Tools section... OK, let's go:

            ...

            ANSWER

            Answered 2019-Sep-05 at 09:12

            To answer your question about Cloud Shell in Azure Portal, please see this link: https://docs.microsoft.com/en-us/azure/cloud-shell/quickstart.

             Regarding your comments about the quality of the content on Microsoft Learn, you can provide feedback by navigating to the bottom of the page (https://docs.microsoft.com/en-us/learn/modules/stage-deploy-app-service-deployment-slots/3-exercise-create-deployment-slots) and then clicking on "Report an issue".

            You can provide detailed feedback there. In my experience, Microsoft folks are quite responsive to the feedback.

            Source https://stackoverflow.com/questions/57800839

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install create-deployment

            You can download it from GitHub.

            Support

             For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the community page Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/avakar/create-deployment.git

          • CLI

            gh repo clone avakar/create-deployment

          • sshUrl

            git@github.com:avakar/create-deployment.git
