Continuous-Delivery | blog series about continuous delivery | Continuous Integration library
kandi X-RAY | Continuous-Delivery Summary
This project will support a blog series about continuous delivery with Psake and TeamCity.
Community Discussions
Trending Discussions on Continuous-Delivery
QUESTION
We are having difficulty choosing a load-balancing solution (Load Balancer, Application Gateway, Traffic Manager, Front Door) for IIS websites on Azure VMs. The simple use case with two identical sites is covered well: just use Azure Load Balancer or Application Gateway. However, when we want to update the websites and test those updates, we run into the limitations of these load-balancing solutions.
For example, if we would like to update IIS websites on VM1 and test those updates, the strategy would be:
- Point a load balancer to VM2.
- Update the IIS website on VM1.
- Test the changes.
- If all tests pass, point the load balancer to VM1 only, while we update VM2.
- Point the load balancer to both VMs.
We would like to know the best way to direct traffic to only one VM. So far we see only one option: removing a VM from the backend address pool, then returning it and repeating the process for the other VMs. Surely there must be a better way to direct 100% of traffic to a single VM (or to specific VMs), right?
Update:
We ended up blocking the connection between the VMs and the Load Balancer by creating a Network Security Group rule with a Deny action on the AzureLoadBalancer service tag. Once we want that particular VM to be accessible again, we switch the NSG rule from Deny back to Allow.
The downside of this approach is that it takes 1-3 minutes for the change to take effect (see: Continuous Delivery with Azure Load Balancer).
If anybody can think of a faster (or instantaneous) solution, please let me know.
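As an aside, the Deny/Allow toggle described above can be scripted. Below is a hedged sketch using the Azure SDK for JavaScript; this is an assumption about tooling (the switch could equally be flipped in the portal or CLI), all resource names are placeholders, and the method shape should be verified against the @azure/arm-network package.

```typescript
// Hedged sketch: upsert an NSG rule that denies or allows traffic from
// the load balancer, mirroring the Deny/Allow toggle described above.
// Resource group, NSG, rule name, and subscription id are placeholders.
import { DefaultAzureCredential } from '@azure/identity';
import { NetworkManagementClient } from '@azure/arm-network';

async function setLoadBalancerRule(access: 'Allow' | 'Deny'): Promise<void> {
  const client = new NetworkManagementClient(
    new DefaultAzureCredential(), '<subscription-id>'); // placeholder

  await client.securityRules.beginCreateOrUpdateAndWait(
    'my-resource-group', 'vm1-nsg', 'BlockAzureLoadBalancer', {
      access,                                   // 'Deny' to drain, 'Allow' to restore
      direction: 'Inbound',
      priority: 100,
      protocol: '*',
      sourceAddressPrefix: 'AzureLoadBalancer', // service tag from the question
      sourcePortRange: '*',
      destinationAddressPrefix: '*',
      destinationPortRange: '*',
    });
}

// Example: drain VM1 before updating it.
setLoadBalancerRule('Deny').catch(console.error);
```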
...ANSWER
Answered 2021-Nov-02 at 21:22
Without any Azure specifics, the usual pattern is to point the load balancer at a /status endpoint of your process and to design the endpoint's behavior according to your needs, e.g.:
- When a service is first deployed, its status is 'pending'.
- When you deem it healthy, e.g. all tests pass, do a POST /status to update it.
- The service then returns status 'ok'.
Meanwhile, the load balancer polls the /status endpoint every minute and knows to mark down (exclude forwarding to) any servers not in the 'ok' state.
Some load balancers/gateways may work best with HTTP status codes, whereas others may be able to read the response text from the status endpoint. Pretty much all of them will support this general behavior, though; you should not need an expensive solution.
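As a hedged illustration of this pattern (assuming a Node.js service with Express; the answer does not prescribe a framework, port, or status vocabulary):

```typescript
// Minimal /status endpoint sketch: starts 'pending', flipped to 'ok'
// by a POST once tests pass. The load balancer's probe targets GET /status.
import express from 'express';

type Health = 'pending' | 'ok';
let status: Health = 'pending'; // new deployments start unhealthy

const app = express();

// Probed by the load balancer; returns 200 only once we opted in.
app.get('/status', (_req, res) => {
  res.status(status === 'ok' ? 200 : 503).send(status);
});

// Called by the test stage (or manually) to mark this server healthy.
app.post('/status', (_req, res) => {
  status = 'ok';
  res.sendStatus(204);
});

app.listen(8080);
```

The probe interval and whether the balancer keys off the status code or the response text depend on the particular load balancer in use.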
QUESTION
Instead of using the managed Pulumi Service backend, I am using an Azure blob container as the stack state backend. According to the documentation, the Pulumi CLI expects an AZURE_STORAGE_KEY (or AZURE_STORAGE_SAS_TOKEN) environment variable on the pipeline agent.
When the account key is provided as a pipeline variable, it works. But when the account key is stored in Key Vault as a secret, it does not.
What I did:
...ANSWER
Answered 2021-Oct-21 at 16:24
You need to set the variable using a different PowerShell task. Secrets pulled from Key Vault become secret pipeline variables, and secret variables are not automatically exported as environment variables on the agent, so they must be mapped into the task's environment explicitly:
QUESTION
I worked through the CDK Pipelines: Continuous delivery for AWS CDK applications tutorial, which gave an overview of creating a self-mutating CDK pipeline with the new CodePipeline API.
The tutorial creates a CodePipeline with the CDK source code automatically retrieved from a GitHub repo every time a change is pushed to the master branch. The CDK code defines a lambda with a typescript handler defined alongside the CDK.
For my use case, I would like to define a self-mutating CodePipeline that is also triggered whenever I push to a second repository containing my application source code. The second repository will also contain a buildspec that generates a Docker image with my application and uploads the image to ECR. The new image will then be deployed to Fargate clusters in the application stages of my pipeline.
I've created an ApplicationBuild stage after the PublishAssets stage, which includes a CodeBuild project. The CodeBuild project reads from my repository and builds/uploads the image to ECR; however, I need a way to link this CodeBuild project to the deployment of the pipeline, and it's not clear to me how to do this with the new CDK CodePipeline API.
ANSWER
Answered 2021-Oct-12 at 06:16
In case anyone has the same problem, I was able to hack together a solution using the legacy CdkPipeline API, following the archived version of the tutorial I mentioned in my question.
Here is a minimum viable pipeline stack that includes...
- a CDK pipeline source action (in "Source" stage)
- an application source action (in "Source" stage)
- a CDK build action (in "Build" stage) + self-mutating pipeline ("UpdatePipeline" stage)
- an application build action (in "Build" stage)
lib/cdkpipelines-demo-pipeline-stack.ts
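The answer's full stack is not reproduced here; the following is a hedged sketch of how the legacy CdkPipeline API (CDK v1) can be assembled to that shape. The repository names, GitHub owner, and the 'github-token' Secrets Manager secret are placeholders, not values from the answer.

```typescript
// Hedged sketch (CDK v1 legacy API): a second source action appended to
// the pipeline's "Source" stage and an application Docker build appended
// to its "Build" stage. Self-mutation ("UpdatePipeline") comes for free.
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import { Construct, SecretValue, Stack, StackProps } from '@aws-cdk/core';
import { CdkPipeline, SimpleSynthAction } from '@aws-cdk/pipelines';

export class CdkpipelinesDemoPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const cdkSource = new codepipeline.Artifact();
    const appSource = new codepipeline.Artifact();
    const cloudAssemblyArtifact = new codepipeline.Artifact();

    const pipeline = new CdkPipeline(this, 'Pipeline', {
      cloudAssemblyArtifact,
      // CDK pipeline source action ("Source" stage).
      sourceAction: new codepipeline_actions.GitHubSourceAction({
        actionName: 'CdkSource',
        output: cdkSource,
        oauthToken: SecretValue.secretsManager('github-token'), // placeholder
        owner: 'OWNER',   // placeholder
        repo: 'cdk-repo', // placeholder
      }),
      // CDK build action ("Build" stage).
      synthAction: SimpleSynthAction.standardNpmSynth({
        sourceArtifact: cdkSource,
        cloudAssemblyArtifact,
        buildCommand: 'npm run build',
      }),
    });

    // Application source action, appended to the same "Source" stage.
    pipeline.codePipeline.stage('Source').addAction(
      new codepipeline_actions.GitHubSourceAction({
        actionName: 'AppSource',
        output: appSource,
        oauthToken: SecretValue.secretsManager('github-token'), // placeholder
        owner: 'OWNER',   // placeholder
        repo: 'app-repo', // placeholder
      }),
    );

    // Application build action, appended to the "Build" stage. The project
    // reads the buildspec from the app repo, which builds and pushes the
    // Docker image to ECR.
    pipeline.codePipeline.stage('Build').addAction(
      new codepipeline_actions.CodeBuildAction({
        actionName: 'AppBuild',
        input: appSource,
        project: new codebuild.PipelineProject(this, 'AppBuild', {
          environment: { privileged: true }, // required for docker builds
        }),
      }),
    );
  }
}
```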
QUESTION
We have a multi-tenant Kubernetes cluster that hosts various customer environments, with Kubernetes namespaces used to isolate them. Each of these namespaces has a similar set of k8s resources (Deployments, ConfigMaps, etc.) configured. Some of these resources are identical across namespaces, while others are not.
From a continuous-delivery perspective, I am exploring options to roll out changes to these identical components (across namespaces) in a seamless manner. GitOps / pull-based continuous delivery seems to be a good approach, which would enable us to seamlessly manage hundreds of namespaces across various clusters.
I explored a couple of GitOps tools such as Argo CD and Flux v2, but couldn't figure out whether these tools let you roll out changes to multiple namespaces simultaneously (or within a predictable time window). It would be helpful if you could advise me on choosing the right tool/approach for rolling out to multiple namespaces, and whether these GitOps tools can be customised to handle such scenarios.
An illustration of how namespaces are set up in our k8s cluster:
...ANSWER
Answered 2021-Jun-17 at 22:18
We use Flux to deploy resources to our K8s cluster.
"these tools would let you roll out changes to multiple namespaces simultaneously (or within a predictable time window)"
Yeah, Flux basically monitors your git repo for any changes and applies them to the cluster. By default Flux checks for changes every 5 minutes, but that interval can be reduced (though I remember reading that anything lower than a minute can be problematic).
If the changes span more than one namespace, it applies all of them in one go. I have never needed to check whether changes to multiple namespaces are applied strictly simultaneously, as they are generally applied quickly. I must say, though, that we haven't tested it across "100s of namespaces" as you describe, but it works just fine across a handful of them.
QUESTION
I have a repository in GitLab (Lambda functions) and want to use CDK Pipelines (https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications) to build a CI/CD pipeline. The AWS CDK docs make no mention of GitLab anywhere. I would really appreciate it if someone could confirm whether it is supported.
...ANSWER
Answered 2021-May-27 at 17:01
Only Bitbucket, GitHub, and GitHub Enterprise Server source actions are supported natively by CodePipeline at the moment. You will need a custom Lambda for GitLab; https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/ can help you with that.
Once the git-to-S3 logic is in place, you can use an S3 source action.
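As a hedged illustration (assuming CDK v1 in TypeScript; the bucket name and object key are placeholders for whatever the git-to-s3 webhook writes), an S3 source action can look like this:

```typescript
// Hedged sketch: a pipeline sourced from the S3 object that a git-to-s3
// webhook (e.g. the AWS Quick Start above) uploads on each push. A minimal
// build stage is included because CodePipeline requires at least two stages.
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as s3 from '@aws-cdk/aws-s3';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class GitLabSourceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket the webhook mirrors the GitLab repo into (placeholder name).
    const mirrorBucket = s3.Bucket.fromBucketName(
      this, 'MirrorBucket', 'my-gitlab-mirror');

    const sourceOutput = new codepipeline.Artifact();

    new codepipeline.Pipeline(this, 'Pipeline', {
      stages: [
        {
          stageName: 'Source',
          actions: [new codepipeline_actions.S3SourceAction({
            actionName: 'GitLabMirror',
            bucket: mirrorBucket,
            bucketKey: 'my-repo/main.zip', // placeholder key the webhook writes
            output: sourceOutput,
          })],
        },
        {
          stageName: 'Build',
          actions: [new codepipeline_actions.CodeBuildAction({
            actionName: 'Build',
            input: sourceOutput,
            project: new codebuild.PipelineProject(this, 'BuildProject'),
          })],
        },
      ],
    });
  }
}
```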
QUESTION
I'm using Azure Pipelines to set a version number with GitVersion. Here is the pipeline:
...ANSWER
Answered 2021-Mar-02 at 08:52
It seems the FullSemVer is affected: 0.8.0+11. When I use the ContinuousDelivery mode, I get the same result. You could try using the Mainline mode in the yml file. Here is my example:
You could also try removing the next-version parameter.
QUESTION
I've been experimenting with using the new AWS CDK pipelines construct (https://docs.aws.amazon.com/cdk/api/latest/docs/pipelines-readme.html), and have successfully setup a couple of projects, including following the blog post announcement here: https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/.
However, as soon as I try to add a dependency to my lambdas, the build fails with:
lib/lambda/handler.ts(2,24): error TS2307: Cannot find module 'stripe' or its corresponding type declarations.
I have added a package.json and installed node_modules in the directory with the lambdas, tried zipping the lambdas and node_modules, tried uploading the zip file with the console, and tried installing the dependencies via the 'buildCommand' during the 'synthAction' step. Nothing works.
The asset appears to get created in the cdk.out directory, and code changes are being uploaded, but node_modules themselves never get packaged along with the lambda functions.
I am using the 'SimpleSynthAction.standardNpmSynth' action, along with an 'npm run build' command in the 'buildCommand' step.
I've looked at Lambda can't find modules from outer folders when deployed with CDK, How to install dependencies of lambda functions upon cdk build with AWS CDK and https://github.com/aws-samples/aws-cdk-examples/issues/110#issuecomment-550506116 without luck.
What am I missing?
...ANSWER
Answered 2020-Oct-30 at 13:14
It looks like the issue has to do with bundling. I abandoned the 'all-in on CDK' approach and went back to using a SAM/CDK hybrid, but this new blog post suggests bundling is the answer:
https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/
Specifically, it references this construct, which likely would have helped in my situation: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-nodejs-readme.html
Something to try in the future.
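For reference, here is a minimal hedged sketch (CDK v1, TypeScript) of the NodejsFunction construct linked above. It bundles the handler together with its dependencies, which is what avoids the missing-module error; the stack and construct names are illustrative, and the entry path follows the error message in the question.

```typescript
// Hedged sketch: NodejsFunction bundles the handler plus its node_modules
// (e.g. 'stripe') into the Lambda asset, instead of shipping bare source.
import * as lambdaNodejs from '@aws-cdk/aws-lambda-nodejs';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class HandlerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambdaNodejs.NodejsFunction(this, 'StripeHandler', {
      entry: 'lib/lambda/handler.ts', // the file that imports 'stripe'
      handler: 'handler',             // exported function name (assumption)
    });
  }
}
```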
QUESTION
I have deployed the Docker image of my Spring Boot application to AWS ECR and created an AWS Fargate cluster. (Screenshots of the task, security, service, and cluster definitions were attached to the original question.) I can ping my public IP successfully, but I can't access my application via either the load balancer or the public IP. The URLs I tried were
public_ip:8081/my_rest_end_point
and
load_balancer_public_dns:8081/my_reset_end_point
I have tested my Docker image on localhost using port 8081, and the same port 8081 is configured for port mapping in my Fargate container configuration (see the service definition). How can I access my application? I have followed almost all the relevant articles on Medium and AWS.
Tutorials followed: https://medium.com/underscoretec/deploy-your-own-custom-docker-image-on-amazon-ecs-b1584e62484
...ANSWER
Answered 2020-Aug-12 at 05:17
Based on the comments and chat discussion.
The tutorial followed creates a custom VPC with a CIDR range of 10.0.0.0/16 and two subnets. There is no internet gateway (IGW); consequently, the Fargate tasks launched in the VPC have no internet access, nor can they be accessed from the internet.
There are two solutions to this problem:
- use the default VPC which correctly provides internet access
- create a custom VPC (or modify the existing one) that is set up with an IGW and the corresponding route tables. An example of a custom VPC with internet access is in the AWS docs; see also the sketch after this list.
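For illustration only (the tutorial works in the console, not CDK), here is a hedged TypeScript CDK v1 sketch of the second option. ec2.Vpc provisions the internet gateway and public route tables automatically for PUBLIC subnets, so Fargate tasks placed there with a public IP become reachable.

```typescript
// Hedged sketch: a VPC whose public subnets get an IGW and routes wired
// up by the construct itself. The CIDR matches the tutorial's range.
import * as ec2 from '@aws-cdk/aws-ec2';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class NetworkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new ec2.Vpc(this, 'AppVpc', {
      cidr: '10.0.0.0/16',
      maxAzs: 2,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
      ],
    });
  }
}
```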
QUESTION
I am attempting to follow this example of setting up an AWS Pipeline for use across multiple accounts. I have the four different accounts set up. I've followed through on each step of the process successfully. No commands are generating any errors. The pipeline completes successfully. I can then connect to the pipeline and commit my code changes. In short, every single step up to the final one works as written in the documentation.
However, I'm then presented with an error on the initial trigger of the code commit:
Insufficient permissions The service role or action role doesn’t have the permissions required to access the AWS CodeCommit repository named dbmigration. Update the IAM role permissions, and then try again. Error: User: arn:aws:sts::12345678912:assumed-role/my-pipeline-CodePipelineRole-1UPXOXOXO1WD0H/987654321 is not authorized to perform: codecommit:UploadArchive on resource: arn:aws:codecommit:us-east-2:123456789:dbmigration
The AWS Account I used to create the pipeline is not the root account, but an IAM Administrator login with admin privileges across the account. I've tried adding AWSCodeCommitFullAccess and AWSCodePipelineFullAccess, which I would have thought would have been part of Administration anyway. However, that didn't change anything.
My assumption is I've done something horribly wrong, but I'm not able to identify what that is. Any suggestions for better troubleshooting, let alone suggestions on how to fix it would be most welcome.
The code used to create the pipeline, again, run using the IAM login, Administrator, from a fourth AWS account, is as follows:
...ANSWER
Answered 2020-Jul-16 at 12:08
Based on the comments.
The error message indicated that the role my-pipeline-CodePipelineRole-1UPXOXOXO1WD0H/987654321 was missing the permission codecommit:UploadArchive, which:
"Grants permission to the service role for AWS CodePipeline to upload repository changes into a pipeline"
The solution was to add codecommit:UploadArchive to the role as an inline policy; see the sketch below.
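As an illustration, here is a hedged CDK (v1, TypeScript) sketch of attaching that permission. The role ARN, account ID, and repository ARN are placeholders patterned after the partially redacted error message, not real values.

```typescript
// Hedged sketch: attach codecommit:UploadArchive to the pipeline's
// service role as an inline policy.
import * as iam from '@aws-cdk/aws-iam';
import { Construct, Stack, StackProps } from '@aws-cdk/core';

export class FixPermissionsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder ARN modeled on the role named in the error message.
    const pipelineRole = iam.Role.fromRoleArn(this, 'PipelineRole',
      'arn:aws:iam::123456789012:role/my-pipeline-CodePipelineRole-1UPXOXOXO1WD0H');

    pipelineRole.attachInlinePolicy(new iam.Policy(this, 'AllowUploadArchive', {
      statements: [new iam.PolicyStatement({
        actions: ['codecommit:UploadArchive'],
        resources: ['arn:aws:codecommit:us-east-2:123456789012:dbmigration'],
      })],
    }));
  }
}
```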
QUESTION
I am trying to figure out the simple Mainline development mode.
My observation is the following.
- Each direct commit on the master branch is incrementing the patch version.
- When I create a feature branch and then make commits on it, the patch version does not increase, and I am wondering why. Only the PreReleaseTag increases, e.g. featureBranchName.1, featureBranchName.2, and so on for each commit; the patch does not change.
Why does not the patch version increase when commits are made to feature branch? Does a feature branch always imply a prerelease?
Is there a way to increase the patch on commits to feature branch?
My GitVersion.yml file is below. The -diag output follows, if that helps.
...ANSWER
Answered 2020-Mar-10 at 14:02
You didn't set a mode for your feature branch configuration, so it inherits the global configuration of Mainline. In Mainline delivery mode, versioning is calculated based on merges to the branch (usually, Mainline is used only for the master branch).
Try setting mode: ContinuousDeployment on the feature branch configuration to achieve the desired result.
Community Discussions and Code Snippets include sources from the Stack Exchange Network.
Vulnerabilities: No vulnerabilities reported.