gitops | Managing Fleets of Kubernetes Clusters
kandi X-RAY | gitops Summary
Managing Fleets of Kubernetes Clusters w/GitOps
Trending Discussions on gitops
QUESTION
I have a ton of Helm files, with the structure aligned to comply with Helm 2, i.e. a separate requirements.yaml file and no type: application in Chart.yaml.

Is there an option in the helm-2to3 plugin that automagically places the requirements.yaml contents under Chart.yaml, or do I have to write myself a script to do this? My charts are checked in to GitHub, btw (not using a Helm repo but operating them via GitOps).

edit: After confirmation in the answers below that helm-2to3 does not provide that functionality, I ended up using a draft script (warning: do not use it in production :) ); you can then proceed with a simple find/xargs one-liner to remove all requirements.yaml files (or give them an extension of .bak to keep them around for some time). The script should of course be run from the root directory of the project where your Helm files are kept.
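A minimal sketch of such a script, not the author's elided original, could look like the following. It assumes the standard Helm 2 layout (apiVersion: v1 in Chart.yaml, dependencies in requirements.yaml) and GNU sed; it is illustrative only, not production-ready.

```shell
# Hypothetical sketch: fold a chart's requirements.yaml into Chart.yaml
# (the Helm 3 layout) and keep the old file around as .bak.
merge_requirements() {
  dir=$1
  req="$dir/requirements.yaml"
  chart="$dir/Chart.yaml"
  [ -f "$req" ] || return 0
  # Helm 3 charts declare apiVersion: v2 and list dependencies inline.
  sed -i 's/^apiVersion: v1$/apiVersion: v2/' "$chart"
  cat "$req" >> "$chart"
  mv "$req" "$req.bak"
}

# Demo on a throwaway chart directory (all names are made up):
demo=$(mktemp -d)
printf 'apiVersion: v1\nname: demo\nversion: 0.1.0\n' > "$demo/Chart.yaml"
printf 'dependencies:\n- name: redis\n  version: 10.0.0\n' > "$demo/requirements.yaml"
merge_requirements "$demo"
```

A real migration would also loop over every chart with find and set type: application where appropriate; this only sketches the mechanical move.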
ANSWER
Answered 2021-Jan-13 at 09:27
There is a very good plugin that does what you're looking for: https://github.com/helm/helm-2to3

There is also information on migrating a GitOps setup on the Flux CD page (assuming you're using Flux and the Helm Operator for your GitOps): https://docs.fluxcd.io/projects/helm-operator/en/stable/helmrelease-guide/release-configuration/#migrating-from-helm-v2-to-v3

Even more information here: https://helm.sh/docs/topics/v2_v3_migration/
QUESTION
We'd like to allow our developers to automatically deploy their changes to a Kubernetes cluster by merging their code and k8s resources in a git repo which is watched by ArgoCD. The release management team would be responsible for managing the ArgoCD config and setting up new apps, as well as for the creation of namespaces, roles and role bindings on the cluster, while the devs should be able to deploy their applications through GitOps without the need to interact with the cluster directly. Devs might have read access on the cluster for debugging purposes.

Now the question: in theory it would be possible for a dev to create a new YAML file specifying a rolebinding resource which binds his/her account to a cluster-admin role. As ArgoCD has cluster-admin rights, this would be a way for the dev (or an attacker impersonating a developer) to escalate privileges. Is there a way to restrict which k8s resources are allowed to be created through ArgoCD?

EDIT: According to the docs, this is possible per project using clusterResourceWhitelist. Is it possible to do that globally?
ANSWER
Answered 2020-Oct-13 at 04:54
You are right about the Argo CD project. The project CRD supports allowing/denying K8s resources using the clusterResourceWhitelist, clusterResourceBlacklist etc. fields. A sample project definition is also available in the Argo CD documentation.

In order to restrict the list of managed resources globally, you can specify the resource.exclusions/resource.inclusions fields in the argocd-cm ConfigMap. An example is available in the documentation.
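As a sketch of the two mechanisms (project and resource names here are illustrative, not from the question):

```yaml
# Per-project restriction: only Namespace objects may be created
# cluster-wide through this AppProject (illustrative example).
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: dev-team
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: ""
    kind: Namespace
---
# Global restriction via the argocd-cm ConfigMap: exclude RBAC
# resources everywhere (illustrative example).
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - rbac.authorization.k8s.io
      kinds:
      - ClusterRole
      - ClusterRoleBinding
      - RoleBinding
      clusters:
      - "*"
```

With the exclusion in place, Argo CD simply ignores those resource kinds in any repo it syncs, which addresses the rolebinding escalation described in the question.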
QUESTION
I've already created a service account and added a JSON key with the owner role, then downloaded it from Chrome. I'm trying to create a Google cluster with terraform apply, but I'm getting this error:

2020/09/26 01:46:14 [ERROR] eval: *terraform.EvalApplyPost, err: googleapi: Error 403: Required "container.clusters.create" permission(s) for "projects/gitops-webinar"., forbidden

Extended logs: https://pastebin.com/05btUi9f
Terraform main.tf file
ANSWER
Answered 2020-Sep-29 at 08:24
I think I found the issue. You used the project name and not the project ID. Try this
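For instance (the values below are hypothetical, not from the asker's main.tf), the provider block should reference the project ID rather than the display name:

```hcl
# Use the GCP project *ID* (the unique identifier, often with a numeric
# suffix), not the human-readable project name shown in the console.
provider "google" {
  project = "gitops-webinar-123456"  # project ID, hypothetical value
  region  = "europe-west1"
}
```

The project ID is shown on the GCP console dashboard and by gcloud projects list; passing the name instead typically produces exactly this kind of 403 on a valid-looking "projects/<name>" path.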
QUESTION
I've been looking into Argo as a GitOps-style CD system. It looks really neat. That said, I am not understanding how to use Argo across multiple GCP projects. Specifically, the plan is to have environment-dependent projects (i.e. prod, stage, dev). It seems like Argo is not designed to orchestrate deployment across environment-dependent clusters, or is it?
ANSWER
Answered 2020-Aug-16 at 19:16
Your question is mainly about security management. You have several possibilities and several points of view/levels of security.
1. Project segregation
The simplest and most secure way is to have Argo running in each project, with no relation/bridge between the environments. There is no risk in security or of deploying to the wrong project. The default project segregation (VPC and IAM roles) is sufficient.

But it implies deploying and maintaining the same app on several clusters, and paying for several clusters (dev, staging and prod CD aren't used at the same frequency).

In terms of security, you can use the Compute Engine default service account for authorization, or you can rely on Workload Identity (the preferred way).
2. Namespace segregation
The other way is to have only one project with a cluster deployed in it and a Kubernetes namespace per delivery project. By the way, you can reuse the same cluster for all the projects in your company.

You still have to update and maintain Argo in each namespace, but cluster administration is easier because the nodes are the same.

In terms of security, you can use Workload Identity per namespace (and thus have 1 service account per namespace authorized in the delivery project) and keep the permissions segregated.

Here, the trade-off is private IP access. If your deployment needs to access private IPs inside the delivery project (for testing purposes, or to access a private K8s master), you have to set up VPC peering (and you are limited to 25 peerings per project) or set up a Shared VPC.
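A sketch of the per-namespace Workload Identity binding the answer refers to (all names hypothetical):

```yaml
# Kubernetes service account in one team's namespace, annotated to
# impersonate a dedicated GCP service account via Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-deployer
  namespace: team-a
  annotations:
    iam.gke.io/gcp-service-account: team-a-deployer@delivery-project.iam.gserviceaccount.com
```

On the GCP side, the Google service account additionally needs a roles/iam.workloadIdentityUser binding for that namespace's Kubernetes service account; each namespace then gets its own, separately authorized identity in its delivery project.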
3. Service account segregation
The last solution isn't recommended, but it's the easiest to maintain. You have only one GKE cluster for all the environments, and only 1 namespace with Argo deployed in it. By configuration, you can tell Argo to use a specific service account to access the delivery project (with service account key files (not a recommended solution) stored in GKE secrets or in Secret Manager, or (better) by using service account impersonation).

Here also, you have 1 service account authorized per delivery project. And the peering issue is the same in case private IP access is required in the delivery project.
QUESTION
I have the following shell script:
ANSWER
Answered 2020-Aug-05 at 08:55
When you use $(...), a new sub-shell will be created, but the parent shell will wait for the sub-shell to complete, effectively making the code execute sequentially.

Side note: there is no need to use process substitution here. Consider the alternative:
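As an illustration of that blocking behaviour (this is not the snippet from the original answer, whose script is not shown here):

```shell
# The parent shell blocks until the $(...) sub-shell finishes, so
# execution stays strictly sequential.
start=$(date +%s)
out=$(sleep 1; echo done)   # the parent waits here for about 1 second
end=$(date +%s)
echo "$out after $((end - start))s"
```

If the goal had been to run the command in the background instead, cmd & followed by wait would be the idiom; $(...) always synchronizes.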
QUESTION
Firstly, this is in the context of a mono-repo application that runs inside Kubernetes.

In GitOps, my understanding is that everything is declarative and written in configuration files such as YAML. This allows a full change history within git. Branches represent environments. For example:

- BRANCH develop: could be a deployed QA or staging environment
- BRANCH feature/foo-bar: could be a deployed feature branch for assessment
- TAG v1.2.0: could be the latest version running in production

This makes sense to me; any and all branches can be deployed as a running version of the application.

QUESTION: I remember reading ... somewhere ... that configuration should live outside of the main repository, inside another "configuration repository". From memory, the idea is that an application should not know a particular configuration... only how to use a configuration. Is this true? Should I have an application repo and an application configuration repo? e.g.

- app repo: foo-organisation/bar-application
- config repo: foo-organisation/bar-application-config

Where the branching model of config for different environments lives inside that repository? Why, and what are the advantages? Otherwise, should it just live inside a directory of the app repo?
ANSWER
Answered 2020-Jul-31 at 03:38
There's no real hard rule, other than everything about your environment should be represented in a git repo (e.g. stop storing config in Jenkins env variables, Steve!). Where the config lives is more of an implementation detail.

If your app is managed as a mono-repo, then why not use the same setup for the deployment? It's usually best to fit in with your existing release processes rather than creating something new.

One catch is that a single version of the app will normally need to support multiple deployments, and the deployment config can often change after an app release. So there needs to be a 1-app-to-many-config relationship, whether that is files, directories, branches, or repos. They all work, as long as it's versioned/released.

Another release consideration is application builds. For small deployment updates you don't want to build and release a complete new application image. It's preferable to just apply the config and reuse the existing artefacts. This is often a driver for the separate deploy/config repo, so concerns are completely separated.

Security can play a part. You may have system information in the deployment config that not all developers need access to, or that you don't even want to have a chance of making it into a container image. This also drives the use of separate repos.

Size of infrastructure is another. If I am managing multiple apps, the generic system config will not be stored in an app's repo.
QUESTION
I am trying to set up CI using Azure DevOps and CD using GitOps for my AKS cluster. When CI completes, the image is pushed to Azure Container Registry. My issue is that the name of the image in my YAML file is :latest. When I push the image to the container registry, Flux CD is not able to determine whether there are any changes to the image, because the name of the image remains the same. I tried to look up the issue on GitHub and came up with the link below: https://github.com/GoogleCloudPlatform/cloud-builders/issues/22#issuecomment-316181326 But I don't know how to implement it. Can someone please help me?

ANSWER
Answered 2020-Jul-08 at 06:19
From the FluxCD docs here:

Note that Flux only works with immutable image tags (:latest is not supported). Every image tag must be unique; for this you can use the Git commit SHA or semver when tagging images.

Turn on automation based on timestamp:
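The automation referred to is configured with Flux v1 annotations on the workload; a sketch (image name and tag pattern are hypothetical):

```yaml
# Flux v1 annotations: automate releases and select the newest image
# whose tag matches a timestamped pattern instead of :latest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    fluxcd.io/automated: "true"     # let Flux update the image tag
    fluxcd.io/tag.app: glob:main-*  # match tags like main-20200708-1130
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app   # must match the "app" in the tag.<container> annotation
        image: myregistry.azurecr.io/my-app:main-20200708-1130
```

The CI pipeline then tags each build uniquely (e.g. with the build timestamp or commit SHA) and Flux picks up the newest matching tag.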
QUESTION
I recently installed FluxCD 1.19.0 on an Azure AKS k8s cluster using fluxctl install. We use a private git server (self-hosted Bitbucket) which Flux is able to reach and check out.

Now Flux is not applying anything, with the error message:

ANSWER
Answered 2020-Jun-15 at 17:21
The problem was with the version of kubectl used in the 1.19 Flux release, so I fixed it by using a prerelease: https://hub.docker.com/r/fluxcd/flux-prerelease/tags
QUESTION
I have created a docker image that has the following content:
ANSWER
Answered 2020-May-22 at 14:14
PATH is overridden in your Drone environment variables.

It doesn't look like your PATH variable is intended to hold executable paths (PATH: keycloak-svc). I'd recommend using a different name.

If, after changing that environment variable name, you find that /usr/local/bin/ isn't on Drone's PATH, you can add it in the commands section, as described in the Drone docs.

Note: I thought to look at PATH after finding the logs of someone else's failed build and then taking a peek at a change they committed around the same time.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network