aws-ansible-autoscaling-and-code-deploy | AWS, Ansible, and CodeDeploy demo - Deployment Repo | Continuous Deployment library
kandi X-RAY | aws-ansible-autoscaling-and-code-deploy Summary
This repository demonstrates one way of building an autoscaling app on AWS using Ansible, CloudFormation, and CodeDeploy. A presentation explaining how this fits together is available in the docs/ directory. There's also a simple matching application available that shows how the CodeDeploy steps work.
Community Discussions
Trending Discussions on Continuous Deployment
QUESTION
You see a lot of articles on combining GitHub Actions with Terraform. It makes sense that any time someone wants to provision something different in their infrastructure, a CI/CD pipeline would add visibility and repeatability to an otherwise manual process.
But some articles make it sound as though Terraform is doing the deploying of any change. For example, this article says "anytime there is a push to the src directory it will kick off the action which will have Terraform deploy the changes made to your website."
But doesn't this only make sense if the change you are making is related to provisioning infrastructure? Why would you want any code push to trigger a Terraform job if most pushes to the codebase have nothing to do with provisioning new infrastructure? Aren't most code pushes things like changing some CSS on the website, or adding a function to a back-end Node script? These don't require provisioning new infrastructure, as the code is just placed onto existing infrastructure.
Or perhaps the article is suggesting the repo is dedicated only to Terraform.
...ANSWER
Answered 2022-Feb-15 at 09:04
In my case the changes come from Terraform-only repos. Any change to infra is triggered by those repos. For the rest of the actual app code, it is always Ansible-Jenkins. Deploying a Terraform infrastructure change every time there is a push to app code might bring down the uptime of the application. In the case of a containerized application, it is Helm-Kubernetes doing the application bit.
QUESTION
From our Tekton pipeline we want to use the ArgoCD CLI to do an argocd app create and an argocd app sync dynamically, based on the app that is built. We created a new user as described in the docs by adding an accounts.tekton: apiKey entry to the argocd-cm ConfigMap:
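For reference, a minimal sketch of that ConfigMap change applied with kubectl, assuming ArgoCD is installed in the argocd namespace (the account name tekton comes from the question):
kubectl patch configmap argocd-cm -n argocd \
  --type merge \
  -p '{"data":{"accounts.tekton":"apiKey"}}'   # enables an API-key-only local account named tekton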
ANSWER
Answered 2022-Feb-10 at 15:01
The problem is mentioned in ArgoCD's user accounts docs:
When you create local users, each of those users will need additional RBAC rules set up, otherwise they will fall back to the default policy specified by policy.default field of the argocd-rbac-cm ConfigMap.
But the simplest way to set up these additional RBAC rules is to use ArgoCD Projects. With such an AppProject you don't even need to create a user like tekton in the argocd-cm ConfigMap. ArgoCD Projects have the ability to define Project roles:
Projects include a feature called roles that enable automated access to a project's applications. These can be used to give a CI pipeline a restricted set of permissions. For example, a CI system may only be able to sync a single app (but not change its source or destination).
There are two ways to configure the AppProject, role & permissions incl. the role token:
- using the argocd CLI
- using a manifest YAML file
So let's get our hands dirty and use the argocd CLI to create an ArgoCD AppProject called apps2deploy, with a role, permissions, and a role token:
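A rough sketch of those CLI steps, assuming you are already logged in via argocd login and that the role is named deployer (the project name apps2deploy and the sync-only permission come from the answer; the destination and source repo values below are placeholders):
argocd proj create apps2deploy -d https://kubernetes.default.svc,default -s https://github.com/your-org/your-deploy-repo.git
argocd proj role create apps2deploy deployer
argocd proj role add-policy apps2deploy deployer --action sync --permission allow --object '*'
argocd proj role create-token apps2deploy deployer   # prints the token the Tekton pipeline can pass to the argocd CLI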
QUESTION
I am trying to deploy a Cloud Function to Artifact Registry instead of Container Registry using Terraform.
I have created an Artifact Registry repository in GCP and I am using the google-beta provider, but I am not able to understand where to specify the "docker-registry" path (the path for Artifact Registry).
The following is the Cloud Function resource in my main tf file: I have added a parameter called docker-repository (this doesn't exist in Terraform) based on https://cloud.google.com/functions/docs/building#image_registry_options, but it looks like this parameter doesn't exist in Terraform and is giving me errors.
...ANSWER
Answered 2022-Feb-07 at 21:21
At this time, you will need to use Terraform plus Cloud Build to specify the repository to use. You can then use gcloud --docker-repository in a Cloud Build step.
This document explains how to integrate Terraform with Cloud Build.
Managing infrastructure as code with Terraform, Cloud Build, and GitOps
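As a hedged illustration of that gcloud route (the function name, region, runtime, and repository path below are placeholders; check gcloud functions deploy --help for the exact flags available in your SDK version):
gcloud functions deploy my-function \
  --region=us-central1 \
  --runtime=python39 \
  --trigger-http \
  --docker-registry=artifact-registry \
  --docker-repository=projects/PROJECT_ID/locations/us-central1/repositories/my-repo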
QUESTION
We are thinking about migrating our infrastructure to Kubernetes. All our source code is in GitHub, and our Docker containers are in Docker Hub.
I would like to have a CI/CD pipeline for Kubernetes only using GitHub and Docker Hub. Is there a way?
If not, what tools (as few as possible) should we use?
...ANSWER
Answered 2022-Jan-05 at 18:43
You can do it as needed using GitHub Actions and Docker Hub only.
You should also check out Keel with GitHub: https://github.com/keel-hq/keel
Step: 1
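As a hedged sketch (not the original answer's steps), these are the kinds of commands a GitHub Actions job could run in its steps to build, push, and roll out an image; the image and deployment names are placeholders and the surrounding workflow YAML is omitted:
docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_TOKEN"                           # credentials stored as repository secrets
docker build -t your-org/your-app:"$GITHUB_SHA" .
docker push your-org/your-app:"$GITHUB_SHA"
kubectl set image deployment/your-app your-app=your-org/your-app:"$GITHUB_SHA"    # roll the new image out to the cluster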
QUESTION
I began learning to use Jenkins and wanted to make it run a Python script of mine automatically. I followed their tutorial and created a new project called Pipeline Test.
I've also added the GitHub repo of a Python script I wanted to test (https://github.com/mateasmario/spam-bot).
As you can see, I've created a Jenkinsfile in that repo. Because my script is called spam-bot.py, I want my Jenkinsfile to run that script every time I click "Build now" inside Jenkins. This is the content of my Jenkinsfile:
ANSWER
Answered 2021-Dec-23 at 12:01
Your Jenkinsfile contains invalid syntax on the first line, which is why the error is being thrown. Assuming you intended that first line to be a comment, you can modify the pipeline code to be:
QUESTION
We currently have an AWS Kinesis Data Analytics app that requires a .jar file to run.
We have automated the deployment for our .jar file that resides in an S3 bucket.
Our issue is that whenever the .jar file is updated, we are forced to restart the Kinesis app to pick up the new build, which causes downtime.
Does anyone have a workaround or another way of deploying the app without causing downtime?
...ANSWER
Answered 2021-Dec-14 at 09:41
Flink itself does not support zero-downtime deployments. While a few users have built their own solutions for this, it requires implementing application-specific deployment automation and tooling. See for examples.
QUESTION
I want to use the app-of-apps practice with ArgoCD. So I created a simple folder structure like the one below. Then I created a project called dev and an app that will look inside the folder apps, so that when new Application manifests are included, it will automatically create new applications. This last part works: every time I add a new Application manifest, a new app is created as a child of apps. However, the actual app that will monitor the respective folder and create the service and deployment is not created, and I can't figure out what I am doing wrong. I have followed different tutorials that use Helm and Kustomize and all have given the same end result.
Can someone spot what I am missing here?
- Folder structure
ANSWER
Answered 2021-Dec-08 at 13:55
It turns out that, at the moment, ArgoCD can only recognize application declarations made in the ArgoCD namespace, but @everspader was doing it in the default namespace. For more info, please refer to the GitHub issue.
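A quick way to confirm where your Application resources actually landed, assuming kubectl access to the cluster (Applications created outside the argocd namespace are ignored by ArgoCD):
kubectl get applications.argoproj.io --all-namespaces   # list ArgoCD Application resources and the namespaces they live in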
QUESTION
I'm trying to understand CI/CD strategy.
Many CI/CD articles mention that it is the automation of the build, test, and deploy phases.
I would like to know: does the CI/CD concept have any prerequisite step(s)?
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual - can this be considered CI/CD?
...ANSWER
Answered 2021-Nov-30 at 19:58
There's a minor point of minutiae that should be mentioned first: the "D" in "CI/CD" can mean either "Delivery" or "Deployment". For the sake of this question, we'll accept the two terms as relatively interchangeable -- but be aware that others may apply a more narrow definition, which may be slightly different depending on which "D" you mean, specifically. For additional context, see: Continuous Integration vs. Continuous Delivery vs. Continuous Deployment
For example, if I make a simple tool that automatically builds and deploys, but the test step is manual - can this be considered CI/CD?
Let's break this down. Beforehand, let's establish what can be considered "CI/CD". Easy enough: if your (automated) process is practicing both CI (continuous integration) and CD (continuous deployment), then we can consider the solution as being some form of "CI/CD".
We'll need some definitions for CI and CD (see above link), which may vary by opinion. But if the question is whether this can be considered CI/CD, we can proceed on the lowest common denominator / bare minimum of popular/accepted definitions and apply those definitions liberally as they relate to the principles of CI/CD.
With that context, let's proceed to determine whether the constituent components are present.
Is Continuous Integration being practiced?
Yes. Continuous integration is being practiced in this scenario. Continuous integration, in its most basic sense, is making sure that your ongoing work is regularly (continually) integrated (tested).
The whole idea is to combat the consequences of integrating (testing) too infrequently. If you make many changes and never try to build/test the software, any of those changes may very well have broken the build, but you won't know until the point in time where integration (testing) occurs.
You are regularly integrating your changes and making sure the software still builds. This is unequivocally CI in practice.
But there are no automated tests?!
One may make an objection to the effect of "if you're not running what is traditionally thought of as tests (unit|integration|smoke|etc) as part of your automated process, it's not CI" -- this is a demonstrably false statement.
Even though in this case you mention that your "test" steps would be manual, it's still fair to say that simply building your application is sufficient to meet the basic definition of a "test" in the sense of continuous integration. Successfully building (e.g. compiling) your code is, in itself, a test. You are effectively testing "can it build?". If your code change breaks the compile/build process, your CI process will tell you so right after you commit your code -- that's CI in action.
Just like code changes may break a unit test, they can also break the compilation process -- automating your build verifies that your changes did not break the build and is, therefore, a kind of continuous integration, without question.
Sure, your product can be broken by your changes even if it compiles successfully. It may even be the case that those software defects would have been caught by sufficient unit testing. But the same could be said of projects with proper unit tests, even projects with "100% code coverage". We certainly don't consider projects with test gaps as not practicing CI. The size of the test gap doesn't make the distinction between CI and non-CI; it's irrelevant to the definition.
Bottom line: building your software exercises (integrates/tests) your code changes, if even only in a minimally significant degree. Doing this on a continuous basis is a form of continuous integration.
Is Continuous Deployment/Delivery being practiced?
Yes. It is plain to see in this scenario that, if you are deploying/delivering your software to whatever its 'production environment' is in an automated fashion, then you have the "CD" component of CI/CD, at least in some minimal degree. The fact that your tests may be manual is not consequential.
Similar to the above, reasonable people could disagree on the effectiveness of the implementation depending on the details, but one would not be able to make the case that this practice is non-CD, by definition.
Conclusion: can this practice be considered "CI/CD"?
Yes. Both elements of CI and CD are present in at least a minimal degree. The practices used probably can't reasonably be called non-CI or non-CD. Therefore, it should be concluded that this described practice can be considered "CI/CD".
I think it goes without saying that the described CI/CD process has gaps and could benefit from improvement and, with the lack of automated tests and other features, doesn't reap all the possible benefits a robust CI/CD process could offer. However, this doesn't render the process non-CI/CD by any means. It's certainly CI/CD in practice; whether it's a particularly good or robust CI/CD practice is a subject of opinion.
Does the CI/CD concept have any prerequisite step(s)?
No, there are no specific prerequisites (like writing automated software tests, for example) to applying CI/CD concepts. You can apply both CI and CD independently of one another without any prerequisites.
To further illustrate, let's think of an even more minimal project with "CI/CD"...
CD could be as simple as committing to the main branch of a GitHub Pages repository. If that same Pages repo, for example, uses Jekyll, then you have CI, too, as GitHub will build your project automatically in addition to deploying it, and will inform you of build errors when they occur.
In this basic example, the only thing that was needed to implement "CI/CD" was to commit the Jekyll project code to a GitHub Pages repository. No prerequisites.
There are even cases where you can accurately consider a project as having a CI process where the CI process might not build any software at all! CI could, for example, consist solely of code style checks or other trivial checks like checking for newlines at the end of files. When projects include only these kinds of checks, we would still call that check process "CI" and it wouldn't be an inaccurate description of the process.
QUESTION
I'm trying to implement a continuous deployment system to build my app and deploy it to Google Play using Codemagic. Doing a build works fine locally but fails remotely on Codemagic.
Error summary:
...ANSWER
Answered 2021-Nov-09 at 10:54
To fix this you need to upgrade the Gradle version in android/gradle/wrapper/gradle-wrapper.properties to 6.7.1, or commit the Gradle wrapper to your repository if you don't have this file.
In addition, you might also need to upgrade the Android Gradle plugin in android/build.gradle.
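A minimal sketch of that upgrade, assuming the Gradle wrapper scripts are already committed in the android/ directory of the project:
cd android
./gradlew wrapper --gradle-version 6.7.1                          # rewrites gradle/wrapper/gradle-wrapper.properties
grep distributionUrl gradle/wrapper/gradle-wrapper.properties     # verify the wrapper now points at gradle-6.7.1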
QUESTION
In Azure Pipelines YAML, you can specify an environment for a job to run in.
...ANSWER
Answered 2021-Oct-13 at 07:32
Why does Azure Pipelines say "The environment does not exist or has not been authorized for use"?
First, you need to make sure you are a Creator in the Security settings of the environment.
Second, make sure you change/create the environment name from the YAML editor, not from the repo.
If the above does not help you, may I know what your role in the project is (e.g. Project Reader)?
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install aws-ansible-autoscaling-and-code-deploy
Click on the "Cloudformation" service
Click "Create Stack"
Enter "blog-dev-s3-buckets" as the stack name (you will have multiple stacks eventually: one for dev, one for staging, etc) The exact name doesn't really matter.
Click "Choose File" and select the s3-buckets.json file you created. Then click Next.
Set System Parameters:
AppName: Choose 'blog' (lower-case) for this example.
BucketPrefix: Set this to something unique to you, so that you don't conflict with other users of this repo. Use something like your-name or yourname or your-organisation-name (keep it lower-case and use hyphens, not underscores).
Click "Next", and "Next" again to go through the advanced options.
Click "Create"
Wait for the stack to build. If everything is successful, it should change to state 'CREATE_COMPLETE'. (An equivalent AWS CLI invocation is sketched just below, if you prefer the command line.)
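For those who prefer the command line, a hedged AWS CLI equivalent of the console steps above (the template file and parameter names come from this repo's instructions; the BucketPrefix value is a placeholder):
aws cloudformation create-stack \
  --stack-name blog-dev-s3-buckets \
  --template-body file://s3-buckets.json \
  --parameters ParameterKey=AppName,ParameterValue=blog \
               ParameterKey=BucketPrefix,ParameterValue=your-name
aws cloudformation wait stack-create-complete --stack-name blog-dev-s3-buckets   # blocks until CREATE_COMPLETE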
AWS CloudFormation configs are limited to around 50kb - and whitespace counts against you. So you need to strip out the whitespace before doing the upload. The simplest way to do this is to use jq:
Make sure you have the 'jq' app installed (brew install jq on a Mac with Homebrew).
Compact the CloudFormation config on the command line with the command jq -c . < stack.json > stack-compact.json.
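A small shell sketch of that compaction step, assuming your template is named stack.json (the size check against the roughly 51,200-byte template-body limit is our addition):
brew install jq                                # macOS with Homebrew; use your distro's package manager on Linux
jq -c . < stack.json > stack-compact.json      # -c emits compact, single-line JSON, stripping the whitespace
wc -c stack-compact.json                       # sanity-check the result against the ~50kb limit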
Click on the "Cloudformation" service
Click "Create Stack"
Enter "blog-dev" as the stack name (you will have multiple stacks eventually: one for dev, one for staging, etc) The exact name doesn't really matter.
Click "Choose File" and select the stack-compact.json file you created. Then click Next.
Set System Parameters:
AppName: Choose 'blog' (lower-case) for this example.
BucketPrefix: Set this to something unique to you, so that you don't conflict with other users of this repo. Use something like your-name or yourname or your-organisation-name (keep it lower-case and use hyphens, not underscores). This must match the value you previously set for the S3 stack.
DBAllocatedStorage: How many gigabytes of DB space to add to the server. Note that due to AWS pricing rules, you will have very poor performance if you create a DB of less than 100GB. You will hit your burst performance limit quickly. See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html for more info. Obviously the higher the number, the bigger the cost.
DBInstanceType: Set this to whatever DB instance type you want. Note that small RDS instances can take a very long time to create, and don't support full encryption.
DBMaxConnections: How many DB connections you want. This reduces memory on your database, but allows you to set your DB pool size higher than the RDS defaults. https://wiki.postgresql.org/wiki/Number_Of_Database_Connections has more info on memory vs connection count tradeoffs. You probably want your connection count to be in the region of NumberOfRubyServers * NumberOfWorkers * NumberOfThreads - and the default app expects up to 1 server, 4 workers, and up to 32 threads (128 connections per server).
DBMultiAZ: If you're just testing things, you might want to run a single availability zone DB. Set to true if you want any reliability. See http://aws.amazon.com/rds/details/multi-az/ for details and cost implications, since multi-AZ dbs are expensive.
DBName: Set the name of the db - this will automatically have the environment appended to it. For this example - set it to 'blog', so that the db name will be created as 'blog_dev'. The associated user will be 'blog_dev_user'
DBPassword: Set a long and complicated password that your DB will use. The app will be passed this through a DATABASE_URL environment variable. See http://12factor.net/config for details.
DBStorageEncrypted: Set to true if you want your DB to be encrypted at rest. Note that not all instance types (DBInstanceType param) support encryption. See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html for info.
DBStorageType: Set the storage type. You probably want SSD (gp2).
EnvironmentName: What environment is this? Choose Dev for testing purposes. This is used for naming S3 buckets, and setting server instance permissions so that they can read from those buckets. Note that it is NOT used for setting the RAILS_ENV environment parameter. You should set that in the facts/*/app.fact file, in the app_environment section.
SecretKeyBase: This is critical for the security of your app. You will likely want to run bundle exec rake secret in a Rails app directory to generate this. This is passed into the application as an environment variable, so you can use it in secrets.yml in your Rails app.
ServerAMIID: Which AWS/Ubuntu AMI to run. See http://cloud-images.ubuntu.com/trusty/current/ You must choose a 'hvm-ssd' style kernel. To reduce the time spent applying security upgrades, and to avoid requiring a reboot when the kernel upgrades, you should keep this current with the latest daily Ubuntu AMI build. You will need to choose an AMI related to your chosen region. By default this is eu-west-1
SSHKeyName: Select your SSH key from the dropdown. For your first trial run you will use this key to ssh in - but in the Next Steps section of this document we will move to individual accounts.
SSHLocation: The IP address range that you want SSH to be allow in from. This configures AWS security groups to limit incoming SSH access. Leave it as 0.0.0.0/0 for a global allow for this test.
SSLCertificateIdArn: The ARN of the SSL certificate you generated earlier. You can find instructions for locating it above if need be.
WebServerCountDesired / WebServerCountMax / WebServerCountMin: The number of entries in the WebServer auto scaling group. See http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-maintain-instance-levels.html for details. Leave this as 1/1/1 for this demo.
WebServerHealthCheckType: Set this to EC2 for now - we'll cover it in more detail below.
WebserverInstanceType: Which EC2 server to use for webservers. We've had success with the c4.xlarge instance type - but you can use any SSD-HVM supporting instance here if you want. Note that the smaller you go, the longer the systems take to start and build.
Click "Next", and "Next" again to go through the advanced options.
Acknowledge the warning about the creation of IAM resources. These Roles and permissions are detailed in the stack.json file, along with related URLs in the comment section if you are concerned.
Click "Create"
Wait for the stack to build. If everything is successful, it should change to state 'CREATE_COMPLETE'. (As with the S3 stack, an equivalent AWS CLI invocation is sketched just below.)
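A hedged AWS CLI equivalent for this stack as well, showing only a handful of the parameters listed above (the parameter names come from this document, the values are placeholders, and the exact parameter set should be checked against stack.json; --capabilities CAPABILITY_IAM corresponds to acknowledging the IAM warning in the console):
aws cloudformation create-stack \
  --stack-name blog-dev \
  --template-body file://stack-compact.json \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=AppName,ParameterValue=blog \
               ParameterKey=BucketPrefix,ParameterValue=your-name \
               ParameterKey=EnvironmentName,ParameterValue=Dev \
               ParameterKey=DBPassword,ParameterValue='choose-a-long-random-password' \
               ParameterKey=SSHKeyName,ParameterValue=your-ssh-key-name
aws cloudformation wait stack-create-complete --stack-name blog-dev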