Continuous-Delivery | blog series about continuous delivery | Continuous Integration library

by kvarv | C# | Version: Current | License: No License

kandi X-RAY | Continuous-Delivery Summary

Continuous-Delivery is a C# library typically used in DevOps and Continuous Integration applications. Continuous-Delivery has no bugs, no reported vulnerabilities, and low support. You can download it from GitHub.

This project will support a blog series about continuous delivery with Psake and TeamCity.

Support

Continuous-Delivery has a low active ecosystem.
It has 5 stars, 2 forks, and 2 watchers.
It has had no major release in the last 6 months.
Continuous-Delivery has no issues reported, and there are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of Continuous-Delivery is current.

Quality

              Continuous-Delivery has 0 bugs and 0 code smells.

Security

              Continuous-Delivery has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Continuous-Delivery code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              Continuous-Delivery does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              Continuous-Delivery releases are not available. You will need to build from source code and install.
              It has 10783 lines of code, 0 functions and 13 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

Top functions reviewed by kandi - BETA

kandi's functional review helps you automatically verify the functionalities of libraries and avoid rework. It currently covers the most popular Java, JavaScript, and Python libraries, so no reviewed functions are available for this C# library.

            Continuous-Delivery Key Features

            No Key Features are available at this moment for Continuous-Delivery.

            Continuous-Delivery Examples and Code Snippets

            No Code Snippets are available at this moment for Continuous-Delivery.

            Community Discussions

            QUESTION

            Azure Load Balancing Solutions. Direct Traffic to Specific VMs
            Asked 2021-Dec-14 at 05:34

We are having difficulty choosing a load balancing solution (Load Balancer, Application Gateway, Traffic Manager, Front Door) for IIS websites on Azure VMs. The simple use case, where there are two identical sites, is covered well: just use Azure Load Balancer or Application Gateway. However, when we want to update the websites and test those updates, we run into limitations of these load balancing solutions.

            For example, if we would like to update IIS websites on VM1 and test those updates, the strategy would be:

• Point the load balancer to VM2.
• Update the IIS website on VM1.
• Test the changes.
• If all tests pass, point the load balancer to VM1 only, while we update VM2.
• Point the load balancer to both VMs.

We would like to know the best solution for directing traffic to only one VM. So far, we only see one option: removing a VM from the backend address pool, then adding it back, and repeating the process for the other VMs. Surely there must be a better way to direct 100% of traffic to only one VM (or to specific VMs), right?

            Update:

We ended up blocking the connection between the VMs and the Load Balancer by creating a Network Security Group rule with a Deny action on the Load Balancer service tag. Once we want that particular VM to be accessible again, we switch the NSG rule from Deny to Allow.

The downside of this approach is that it takes 1-3 minutes for the changes to take effect (see: Continuous Delivery with Azure Load Balancer).

            If anybody can think of a faster (or instantaneous) solution for this, please let me know.

            ...

            ANSWER

            Answered 2021-Nov-02 at 21:22

Without any Azure specifics, the usual pattern is to point the load balancer at a /status endpoint of your process and to design the endpoint's behavior according to your needs, e.g.:

• When a service is first deployed, its status is 'pending'
• When you deem it healthy, e.g. all tests pass, do a POST /status to update it
• The service then returns status 'ok'

            Meanwhile the load balancer polls the /status endpoint every minute and knows to mark down / exclude forwarding for any servers not in the 'ok' state.

            Some load balancers / gateways may work best with HTTP status codes whereas others may be able to read response text from the status endpoint. Pretty much all of them will support this general behavior though - you should not need an expensive solution.
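As a rough sketch of that pattern (TypeScript on Node's built-in http module; the /status route, the 'pending'/'ok' states, and port 8081 follow the discussion above, while the rest is illustrative and not from the original answer):

import * as http from 'http';

// Health state starts as 'pending'; a deployment/test job flips it to 'ok'.
let status: 'pending' | 'ok' = 'pending';

const server = http.createServer((req, res) => {
  if (req.url === '/status' && req.method === 'GET') {
    // Probes that key off HTTP codes see 503 until the node is marked healthy.
    res.writeHead(status === 'ok' ? 200 : 503, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status }));
  } else if (req.url === '/status' && req.method === 'POST') {
    // Called once post-deployment tests pass, putting the node back in rotation.
    status = 'ok';
    res.writeHead(204);
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8081);

A probe reading HTTP status codes would then treat 503 as unhealthy and 200 as healthy, so only nodes whose tests have passed receive traffic.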

            Source https://stackoverflow.com/questions/69688874

            QUESTION

            Azure DevOps pipeline for Pulumi when using Azure blob storage as backend
            Asked 2021-Oct-26 at 15:56

Instead of using the Pulumi service (managed) backend, I am using an Azure blob container as the stack state backend. According to the documentation, the Pulumi CLI expects an AZURE_STORAGE_KEY (or AZURE_STORAGE_SAS_TOKEN) environment variable on the pipeline agent.

When the account key is provided as a pipeline variable, it works. But when the account key is stored in Key Vault as a secret, it does not.

            What I did:

            ...

            ANSWER

            Answered 2021-Oct-21 at 16:24

You need to set the variable using a separate PowerShell task.

            Source https://stackoverflow.com/questions/69665170

            QUESTION

            (@aws-cdk/pipelines) Build stage for application source code
            Asked 2021-Oct-12 at 06:16

            I worked through the CDK Pipelines: Continuous delivery for AWS CDK applications tutorial, which gave an overview of creating a self-mutating CDK pipeline with the new CodePipeline API.

The tutorial creates a CodePipeline with the CDK source code automatically retrieved from a GitHub repo every time a change is pushed to the master branch. The CDK code defines a Lambda function with a TypeScript handler defined alongside the CDK code.

            For my use case, I would like to define a self-mutating CodePipeline that is also triggered whenever I push to a second repository containing my application source code. The second repository will also contain a buildspec that generates a Docker image with my application and uploads the image to ECR. The new image will then be deployed to Fargate clusters in the application stages of my pipeline.

I've created an ApplicationBuild stage after the PublishAssets stage, which includes a CodeBuild project. The CodeBuild project reads from my repository and builds and uploads the image to ECR; however, I need a way to link this CodeBuild project to the deployment of the pipeline. It's not clear to me how to do this with the new CDK CodePipeline API.

            ...

            ANSWER

            Answered 2021-Oct-12 at 06:16

            In case anyone has the same problem, I was able to hack together a solution using the legacy CdkPipeline API following the archived version of the tutorial I mentioned in my question.

            Here is a minimum viable pipeline stack that includes...

            1. a CDK pipeline source action (in "Source" stage)
            2. an application source action (in "Source" stage)
            3. a CDK build action (in "Build" stage) + self-mutating pipeline ("UpdatePipeline" stage)
            4. an application build action (in "Build" stage)

            lib/cdkpipelines-demo-pipeline-stack.ts
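The stack file itself is not reproduced in this excerpt; as a rough illustration only, a stack of that shape could look like the sketch below against the legacy @aws-cdk/pipelines (CDK v1) TypeScript API. The repository names, secret name, and build project settings are placeholders, not the answerer's actual code; the numbered comments map to the list above.

import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';
import { Construct, SecretValue, Stack, StackProps } from '@aws-cdk/core';
import { CdkPipeline, SimpleSynthAction } from '@aws-cdk/pipelines';

export class CdkpipelinesDemoPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const cdkSource = new codepipeline.Artifact();
    const appSource = new codepipeline.Artifact();
    const cloudAssembly = new codepipeline.Artifact();

    // 1 + 3: CDK source action plus the synth/build action of a self-mutating pipeline.
    const pipeline = new CdkPipeline(this, 'Pipeline', {
      cloudAssemblyArtifact: cloudAssembly,
      sourceAction: new actions.GitHubSourceAction({
        actionName: 'CdkSource',
        owner: 'my-org', repo: 'cdk-repo',                 // placeholder repository
        oauthToken: SecretValue.secretsManager('github-token'),
        output: cdkSource,
      }),
      synthAction: SimpleSynthAction.standardNpmSynth({
        sourceArtifact: cdkSource,
        cloudAssemblyArtifact: cloudAssembly,
        buildCommand: 'npm run build',
      }),
    });

    // 2: second source action for the application repository, added to the same "Source" stage.
    pipeline.stage('Source').addAction(new actions.GitHubSourceAction({
      actionName: 'AppSource',
      owner: 'my-org', repo: 'app-repo',                   // placeholder repository
      oauthToken: SecretValue.secretsManager('github-token'),
      output: appSource,
    }));

    // 4: application build action in the "Build" stage.
    pipeline.stage('Build').addAction(new actions.CodeBuildAction({
      actionName: 'AppBuild',
      input: appSource,
      project: new codebuild.PipelineProject(this, 'AppBuildProject', {
        environment: { privileged: true },                 // needed for docker build
      }),
    }));
  }
}

The application build project has no inline buildspec, so CodeBuild picks up the buildspec committed in the application repository, which builds the Docker image and pushes it to ECR.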

            Source https://stackoverflow.com/questions/69519916

            QUESTION

            GitOps: Roll out changes to k8s resources across multiple namespaces
            Asked 2021-Jun-18 at 12:06

We have a multi-tenant Kubernetes cluster that hosts various customer environments. Kubernetes namespaces are used to isolate these customer environments. Each of these namespaces has a similar set of k8s resources (Deployments, ConfigMaps, etc.) configured. Some of these resources are identical across namespaces, while some are not.

From a continuous delivery perspective, I am exploring options to roll out changes to these identical components (across namespaces) in a seamless manner. GitOps / pull-based continuous delivery seems to be a good approach that would enable us to seamlessly manage hundreds of namespaces across various clusters.

I explored a couple of GitOps tools such as Argo CD and Flux v2, but couldn't figure out whether these tools let you roll out changes to multiple namespaces simultaneously (or within a predictable time window). It would be helpful if you could guide/advise me on choosing the right tool/approach for rolling out to multiple namespaces. It would also be good to understand whether these GitOps tools can be customised to handle such scenarios.

An illustration of how namespaces are set up in our k8s cluster.

            ...

            ANSWER

            Answered 2021-Jun-17 at 22:18

            We use Flux to deploy resources to our K8s cluster.

            these tools would let you roll out changes to multiple namespaces simultaneously (or within a predictable time window)

            Yeah, Flux basically monitors your git repo for any changes and applies them to the cluster. By default Flux checks for changes every 5 minutes but that can be reduced (though I remember reading that anything lower than a minute can be problematic).

If the changes affect more than one namespace, it will apply all of them in one go. I have never had to check whether changes to multiple namespaces are applied simultaneously, as they are generally applied pretty quickly. However, I must say we haven't tested it across "100s of namespaces" as you describe, but it works just fine across a handful of them.

            Source https://stackoverflow.com/questions/68026272

            QUESTION

            Can we use GitLab as host for source code with AWS CDK pipeline?
            Asked 2021-May-27 at 17:01

I have a repository in GitLab (Lambda functions) and want to work with the AWS CDK pipeline (https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications) to develop a CI/CD pipeline. The AWS CDK docs make no mention of GitLab. I would really appreciate it if someone could confirm this.

            ...

            ANSWER

            Answered 2021-May-27 at 17:01

Only Bitbucket, GitHub, and GitHub Enterprise Server source actions are natively supported by CodePipeline at the moment. You will need a custom Lambda for GitLab; https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/ can help you with that.

You can use S3 source actions after implementing the Git-to-S3 logic.
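As a hedged sketch of that approach (CDK v1 TypeScript; the helper name, bucket name, and object key are placeholders, and the sketch assumes the git-to-s3 webhook from the quick start above mirrors the GitLab repository into S3 on every push):

import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';
import * as s3 from '@aws-cdk/aws-s3';
import { Construct } from '@aws-cdk/core';

// Hypothetical helper: returns an S3 source action that stands in for a GitLab source.
export function gitlabSourceAction(scope: Construct, output: codepipeline.Artifact) {
  return new actions.S3SourceAction({
    actionName: 'GitLabViaS3',
    bucket: s3.Bucket.fromBucketName(scope, 'MirrorBucket', 'my-gitlab-mirror-bucket'),
    bucketKey: 'my-repo/main.zip',   // placeholder key written by the webhook Lambda
    output,
  });
}

The returned action can then be passed wherever the pipeline expects its source action.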

            Source https://stackoverflow.com/questions/67725696

            QUESTION

            Why is the patch version not auto incrementing when using GitVersion?
            Asked 2021-Mar-02 at 08:52

            I'm using Azure pipelines to set a version number using GitVersion. Here is the pipeline:

            ...

            ANSWER

            Answered 2021-Mar-02 at 08:52

            It seems the FullSemVer is affected: 0.8.0+11

When I use the ContinuousDelivery mode, I get the same result.

You could try using the Mainline mode in the yml file.

Here is my example: you could also try removing the next-version parameter.

            Source https://stackoverflow.com/questions/66422109

            QUESTION

            How to package dependencies using @aws-cdk/pipelines module construct? Getting error TS2307
            Asked 2020-Oct-30 at 13:14

            I've been experimenting with using the new AWS CDK pipelines construct (https://docs.aws.amazon.com/cdk/api/latest/docs/pipelines-readme.html), and have successfully setup a couple of projects, including following the blog post announcement here: https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/.

            However, as soon as I try to add a dependency to my lambdas the build fails with a

            lib/lambda/handler.ts(2,24): error TS2307: Cannot find module 'stripe' or its corresponding type declarations.

I have added a package.json file and node_modules in the directory with the lambdas, tried zipping the lambdas and node_modules, tried uploading the zip file with the console, and tried getting the 'buildCommand' in the 'synthAction' step to install the dependencies. Nothing works.

            The asset appears to get created in the cdk.out directory, and code changes are being uploaded, but node_modules themselves never get packaged along with the lambda functions.

            I am using the 'SimpleSynthAction.standardNpmSynth' action, along with an 'npm run build' command in the 'buildCommand' step.

            I've looked at Lambda can't find modules from outer folders when deployed with CDK, How to install dependencies of lambda functions upon cdk build with AWS CDK and https://github.com/aws-samples/aws-cdk-examples/issues/110#issuecomment-550506116 without luck.

            What am I missing?

            ...

            ANSWER

            Answered 2020-Oct-30 at 13:14

It looks like the issue has to do with bundling. I abandoned the all-in-on-CDK approach and went back to using a SAM/CDK hybrid, but this new blog post suggests bundling is the answer.

            https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/

            Specifically, it references this construct, which likely would have helped in my situation: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-nodejs-readme.html

Something to try in the future.
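For reference, a minimal sketch of that construct (CDK v1 TypeScript): the entry path matches the handler from the question, while the stack name, handler export, and runtime are assumptions. NodejsFunction bundles the handler together with its third-party imports (such as 'stripe') at synth time, so node_modules no longer have to be packaged by hand.

import * as lambda from '@aws-cdk/aws-lambda';
import * as lambdaNodejs from '@aws-cdk/aws-lambda-nodejs';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bundles handler.ts and its dependencies into a self-contained Lambda asset.
    new lambdaNodejs.NodejsFunction(this, 'StripeHandler', {
      entry: 'lib/lambda/handler.ts',    // path from the question
      handler: 'handler',                // exported function name (assumption)
      runtime: lambda.Runtime.NODEJS_12_X,
    });
  }
}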

            Source https://stackoverflow.com/questions/63419064

            QUESTION

            AWS Fargate cannot access dockerized spring boot application over Load balancer/ public IP
            Asked 2020-Aug-12 at 05:17

I have deployed the Docker image of my Spring Boot application to AWS ECR, after creating an AWS Fargate cluster. Please find attached screenshots of the task, security, service, and cluster configuration. I can ping my public IP successfully, but I can't access my application over either the load balancer or the public IP. The URLs I tried to access the application with were

            public_ip:8081/my_rest_end_point

            and

            load_balancer_public_dns:8081/my_reset_end_point

I have tested running my Docker image on localhost using port 8081, and the same port 8081 is configured for port mapping in my Fargate container configuration (reference: service definition). How can I access my application? I have followed almost all the articles on Medium and AWS.

            Tutorials followed: https://medium.com/underscoretec/deploy-your-own-custom-docker-image-on-amazon-ecs-b1584e62484

            https://medium.com/faun/aws-fargate-services-deployment-with-continuous-delivery-pipeline-b67d33407c88

            ...

            ANSWER

            Answered 2020-Aug-12 at 05:17

            Based on the comments and chat discussion.

The tutorial followed creates a custom VPC with a CIDR range of 10.0.0.0/16 and two subnets. There is no internet gateway (IGW); consequently, the Fargate tasks launched in the VPC have no internet access, nor can they be accessed from the internet.

            There are two solutions to this problem:

• use the default VPC, which correctly provides internet access
• create a custom VPC (or modify the existing one) so that it is set up with an IGW and the corresponding route tables; an example of a custom VPC with internet access is in the AWS docs (a minimal sketch follows below)
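As one hedged illustration of the second option, expressed with the CDK in TypeScript rather than the console steps the tutorial uses: declaring a public subnet configuration makes the VPC construct create the internet gateway and route tables automatically, so Fargate tasks with public IPs become reachable. Names are placeholders.

import * as ec2 from '@aws-cdk/aws-ec2';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class NetworkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A PUBLIC subnet configuration attaches an internet gateway and wires up the
    // route tables; the CIDR matches the tutorial's 10.0.0.0/16.
    new ec2.Vpc(this, 'AppVpc', {
      cidr: '10.0.0.0/16',
      maxAzs: 2,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
      ],
    });
  }
}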

            Source https://stackoverflow.com/questions/63352705

            QUESTION

            AWS CodeCommit Permissions Errors in CodePipeline
            Asked 2020-Jul-16 at 12:08

            I am attempting to follow this example of setting up an AWS Pipeline for use across multiple accounts. I have the four different accounts set up. I've followed through on each step of the process successfully. No commands are generating any errors. The pipeline completes successfully. I can then connect to the pipeline and commit my code changes. In short, every single step up to the final one works as written in the documentation.

            However, I'm then presented with an error on the initial trigger of the code commit:

            Insufficient permissions The service role or action role doesn’t have the permissions required to access the AWS CodeCommit repository named dbmigration. Update the IAM role permissions, and then try again. Error: User: arn:aws:sts::12345678912:assumed-role/my-pipeline-CodePipelineRole-1UPXOXOXO1WD0H/987654321 is not authorized to perform: codecommit:UploadArchive on resource: arn:aws:codecommit:us-east-2:123456789:dbmigration

            The AWS Account I used to create the pipeline is not the root account, but an IAM Administrator login with admin privileges across the account. I've tried adding AWSCodeCommitFullAccess and AWSCodePipelineFullAccess, which I would have thought would have been part of Administration anyway. However, that didn't change anything.

            My assumption is I've done something horribly wrong, but I'm not able to identify what that is. Any suggestions for better troubleshooting, let alone suggestions on how to fix it would be most welcome.

The code used to create the pipeline, again run using the IAM Administrator login from a fourth AWS account, is as follows:

            ...

            ANSWER

            Answered 2020-Jul-16 at 12:08

            Based on the comments.

The error message indicated that the role my-pipeline-CodePipelineRole-1UPXOXOXO1WD0H/987654321 was missing the permission codecommit:UploadArchive, which:

Grants permission to the service role for AWS CodePipeline to upload repository changes into a pipeline

The solution was to add codecommit:UploadArchive to the role as an inline policy.
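A hedged sketch of that inline policy, expressed with the CDK's iam module in TypeScript (the role variable and the repository ARN are placeholders; the equivalent JSON statement can just as well be attached to the role in the IAM console):

import * as iam from '@aws-cdk/aws-iam';

// Stand-in for the CodePipeline action/service role named in the error message.
declare const pipelineRole: iam.Role;

pipelineRole.addToPolicy(new iam.PolicyStatement({
  actions: ['codecommit:UploadArchive'],
  // Placeholder ARN; scope it to the actual repository (e.g. the dbmigration repo).
  resources: ['arn:aws:codecommit:us-east-2:123456789012:dbmigration'],
}));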

            Source https://stackoverflow.com/questions/62932876

            QUESTION

            Patch version does not increase when commits are made to feature branch in MainLine mode of GitVersion
            Asked 2020-Mar-10 at 14:02

I am trying to figure out the simple Mainline development mode.

My observations are the following.

1. Each direct commit on the master branch increments the patch version.
2. When I create a feature branch and then make commits on that feature branch, the patch version does not increase, and I am wondering why. Only the PreReleaseTag increases, for example featureBranchName.1, featureBranchName.2, and so on for each commit. The patch does not change.

Why doesn't the patch version increase when commits are made to a feature branch? Does a feature branch always imply a prerelease?

Is there a way to increase the patch version on commits to a feature branch?

            My GitVersion.yml file is below. The -diag output follows, if that helps.

            ...

            ANSWER

            Answered 2020-Mar-10 at 14:02

            You didn't set a mode for your feature branch configuration, so it is inheriting the global configuration of Mainline. In Mainline delivery mode, versioning is calculated based on merges to the branch (usually, Mainline is only used for the master branch).

            Try setting mode: ContinuousDeployment to achieve the desired result.

            Source https://stackoverflow.com/questions/59920495

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install Continuous-Delivery

            You can download it from GitHub.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/kvarv/Continuous-Delivery.git

          • CLI

            gh repo clone kvarv/Continuous-Delivery

          • sshUrl

            git@github.com:kvarv/Continuous-Delivery.git



Consider Popular Continuous Integration Libraries

chinese-poetry by chinese-poetry
act by nektos
volkswagen by auchenberg
phpdotenv by vlucas
watchman by facebook

            Try Top Libraries by kvarv

LightBus by kvarv (C#)
RestClient by kvarv (C#)
PowerShellScripts by kvarv (Shell)
Fagkveld_PowerShell by kvarv (PowerShell)