Argo | Functional JSON parsing library for Swift | iOS library
kandi X-RAY | Argo Summary
Argo is maintained and funded by thoughtbot, inc. The names and logos for thoughtbot are trademarks of thoughtbot, inc. We love open source software! See our other projects or look at our product case studies and hire us to help build your iOS app.
Community Discussions
Trending Discussions on Argo
QUESTION
I've created a test Argo Workflow to help me understand how I can use a CI/CD approach to deploy an Ansible playbook. When I create the app in Argo CD, it syncs fine, but then it just gets stuck on Progressing and never gets out of that state.
I tried digging around to see if there was any indication in the logs, but I'm fairly new to Argo. It doesn't even get to the point where it's creating any pods to do any of the steps.
Thoughts?
Here is my workflow:
ANSWER
Answered 2022-Mar-29 at 19:13
I ended up solving this by adding a ServiceAccount and Role resource to the namespace that the Argo Workflow was trying to run within.
Here's the Role I added:
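What follows is a sketch rather than the original manifest: a plausible Role plus the accompanying ServiceAccount and RoleBinding, with illustrative names and namespace; the exact rules needed depend on your Argo Workflows version.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workflow            # illustrative name
  namespace: my-app         # illustrative namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role
  namespace: my-app
rules:
  # Workflow pods patch their own pod to report progress
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, watch, patch]
  # Newer Argo versions report step results via workflowtaskresults
  - apiGroups: [argoproj.io]
    resources: [workflowtaskresults]
    verbs: [create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-role-binding
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-role
subjects:
  - kind: ServiceAccount
    name: workflow
    namespace: my-app
```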
QUESTION
If I run git fetch origin and then git checkout on a series of consecutive commits, I get a relatively small repo directory.
But if I run git fetch origin and then git checkout FETCH_HEAD on the same series of commits, the directory is relatively bloated. Specifically, there seem to be a bunch of large packfiles.
The behavior appears the same whether the commits are all in place at the time of the first fetch or if they are committed immediately before each fetch.
The following examples use a public repo, so you can reproduce the behavior.
Why is the directory size of example 2 so much larger?
Example 1 (small):
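As a sketch of both patterns, with a placeholder repo URL and commit-1/commit-2 standing in for the question's actual commit ids:

```sh
# Example 1 (small): fetch everything once, then check out each commit
git init demo1 && cd demo1
git remote add origin https://github.com/rails/rails.git
git fetch origin
git checkout commit-1
git checkout commit-2
du -sh .git
cd ..

# Example 2 (bloated): one fetch per commit, checking out FETCH_HEAD
git init demo2 && cd demo2
git remote add origin https://github.com/rails/rails.git
git fetch origin commit-1
git checkout FETCH_HEAD
git fetch origin commit-2
git checkout FETCH_HEAD
du -sh .git
```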
ANSWER
Answered 2022-Mar-25 at 19:08
Because each fetch produces its own packfile, and one packfile is more efficient than multiple packfiles. A lot more efficient. How?
First, the checkouts are a red herring. They don't affect the size of the .git/ directory.
Second, in the first example only the first git fetch origin does anything. The rest will fetch nothing (unless something changed on origin).
Compression works by finding common long sequences within the data and reducing them to very short sequences. If "long block of legal mumbo jumbo" appears dozens of times, it can be replaced with a few bytes. But the original long string must still be stored. If there's a single packfile, it need only be stored once. If there are multiple packfiles, it must be stored multiple times. You are, effectively, storing the whole history of changes up to that point in each packfile.
We can see in the example below that the first packfile is 113M, the second is 161M, the third is 177M, and the final fetch is 209M. The size of the final packfile is roughly equal to the size of the single garbage compacted packfile.
Why do multiple fetches result in multiple packfiles?
git fetch is very efficient. It will only fetch objects you do not already have. Sending individual object files is inefficient, so a smart Git server will send them as a single packfile.
When you do a single git fetch on a fresh repository, Git asks the server for every object. The remote sends it a packfile of every object.
When you do git fetch ABC and then git fetch DEF, Git tells the server "I already have everything up to ABC, give me all the objects up to DEF", so the server makes a new packfile of everything from ABC to DEF and sends it.
Eventually your repository will do an automatic garbage collection and repack these into a single packfile.
We can reduce the examples. I'm going to use Rails to illustrate because it has clearly defined tags to fetch.
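A sketch of that reduced demonstration; the tags are illustrative stand-ins for whichever Rails tags the original answer fetched:

```sh
git init rails-packs && cd rails-packs
git remote add origin https://github.com/rails/rails.git

# One fetch per tag: each fetch lands in its own packfile
git fetch origin v6.0.0
git fetch origin v6.1.0
git fetch origin v7.0.0
ls -lh .git/objects/pack/*.pack   # three packfiles, each repeating history

# What automatic garbage collection eventually does for you
git gc
ls -lh .git/objects/pack/*.pack   # one smaller, consolidated packfile
```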
QUESTION
I'm trying to manage Argo CD projects with helm definitions using kustomize.
Unfortunately Argo manages helm values with string literals, which gives me headaches in conjunction with kustomize configuration.
I have this base/application.yml
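A sketch of the shape of such a base/application.yml; the chart, repo, and values are illustrative. The point is that spec.source.helm.values is a single string field:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com
    chart: my-chart
    targetRevision: 1.2.3
    helm:
      values: |          # one opaque string literal, awkward to patch with kustomize
        replicaCount: 2
        image:
          tag: latest
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
```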
ANSWER
Answered 2022-Mar-22 at 13:00
There's an open PR to add support for arbitrary YAML in the values field. If merged, I would expect it to be available in 2.4. Reviews/testing are appreciated if you have time!
One workaround is to use the parameters field and set each parameter individually. It's not ideal, but it may help until 2.4 is released.
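A sketch of that workaround against the illustrative Application above:

```yaml
spec:
  source:
    helm:
      parameters:          # one entry per value, instead of a string blob
        - name: replicaCount
          value: "2"
        - name: image.tag
          value: latest
```

Because each parameter is its own list entry, a kustomize patch can target a single value without rewriting the whole values string.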
QUESTION
I am new to using Argo Workflows. I have written a sample workflow for demo purposes; the workflow YAML is attached below. The last step, sayHello, is erroring out with exit status 2, and the logs show the error:
'/bin/sh: arithmetic syntax error'
ANSWER
Answered 2022-Mar-08 at 19:00
The result input to the sayHello template must be passed explicitly from the third step of the main template. steps.addTenToResult.outputs.result has no meaning in the sayHello template definition. Variables starting with steps. only have meaning in steps templates (i.e. templates with the steps field populated, like main).
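A sketch of the fix under that reading, reusing the step and template names from the question; the surrounding workflow fields are assumed:

```yaml
templates:
  - name: main
    steps:
      - - name: addTenToResult
          template: add-ten        # a script template whose stdout becomes outputs.result
      - - name: sayHello
          template: say-hello
          arguments:
            parameters:
              - name: result
                # valid here, because main is a steps template
                value: "{{steps.addTenToResult.outputs.result}}"
  - name: say-hello
    inputs:
      parameters:
        - name: result
    container:
      image: alpine:3.15
      command: [sh, -c]
      # inside this template, only the declared input is visible
      args: ["echo hello, the result is {{inputs.parameters.result}}"]
```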
QUESTION
I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs) but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs"
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error requests one. The error I am now receiving is:
ANSWER
Answered 2022-Mar-01 at 15:26
A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts
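A condensed sketch of the pattern that example demonstrates; treat the images, file names, and paths as illustrative:

```yaml
templates:
  - name: artifact-example
    steps:
      - - name: generate-artifact
          template: whalesay
      - - name: consume-artifact
          template: print-message
          arguments:
            artifacts:
              - name: message
                # wire the previous step's output artifact to this input
                from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["cowsay hello world | tee /tmp/hello_world.txt"]
    outputs:
      artifacts:
        - name: hello-art
          path: /tmp/hello_world.txt     # a path is required on both ends
  - name: print-message
    inputs:
      artifacts:
        - name: message
          path: /tmp/message             # where the artifact is placed
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]
```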
QUESTION
I have a CronWorkflow that sends the following metric:
ANSWER
Answered 2022-Feb-14 at 15:09
Argo Workflows automatically adds the name of the CronWorkflow as a label on the workflow. That label is accessible as a variable.
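A sketch of how that variable might be used: the label key workflows.argoproj.io/cron-workflow is the one Argo applies to workflows created by a CronWorkflow, while the metric itself is an illustrative assumption, not the asker's original:

```yaml
metrics:
  prometheus:
    - name: cron_wf_run_counter
      help: "Runs counted per owning CronWorkflow"
      labels:
        - key: cron_workflow
          # resolves to the owning CronWorkflow's name
          value: "{{workflow.labels.workflows.argoproj.io/cron-workflow}}"
      counter:
        value: "1"
```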
QUESTION
From our Tekton pipeline we want to use the Argo CD CLI to run argocd app create and argocd app sync dynamically, based on the app that is built. We created a new user as described in the docs by adding accounts.tekton: apiKey to the argocd-cm ConfigMap:
ANSWER
Answered 2022-Feb-10 at 15:01
The problem is mentioned in Argo CD's user accounts docs:
"When you create local users, each of those users will need additional RBAC rules set up, otherwise they will fall back to the default policy specified by the policy.default field of the argocd-rbac-cm ConfigMap."
But these additional RBAC rules can be set up most simply using Argo CD Projects. With such an AppProject you don't even need to create a user like tekton in the argocd-cm ConfigMap. Argo CD projects have the ability to define project roles:
"Projects include a feature called roles that enable automated access to a project's applications. These can be used to give a CI pipeline a restricted set of permissions. For example, a CI system may only be able to sync a single app (but not change its source or destination)."
There are two ways to configure the AppProject, its role & permissions incl. the role token:
- using the argocd CLI
- using a manifest YAML file

Using the argocd CLI to create the AppProject, role & permissions incl. role token
So let's get our hands dirty and create an Argo CD AppProject called apps2deploy using the argocd CLI:
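The commands below are a sketch: the destination namespace, source repo, role name, and policies are illustrative assumptions, not taken from the original answer.

```sh
# Create the project, restricted to one destination and one source repo
argocd proj create apps2deploy \
  --dest https://kubernetes.default.svc,my-namespace \
  --src https://github.com/my-org/my-deployments.git

# Add a project role for the pipeline to act as
argocd proj role create apps2deploy create-sync \
  --description "create and sync apps from the Tekton pipeline"

# Allow that role to create and sync apps in this project only
argocd proj role add-policy apps2deploy create-sync \
  --action create --permission allow --object "apps2deploy/*"
argocd proj role add-policy apps2deploy create-sync \
  --action sync --permission allow --object "apps2deploy/*"

# Mint a token the pipeline can pass via `argocd --auth-token`
argocd proj role create-token apps2deploy create-sync
```

The token is scoped to this project role, so the pipeline never needs a full local user in argocd-cm.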
QUESTION
I am trying to install Argo Workflows, and looking at the documentation I can see 3 different types of installation: https://argoproj.github.io/argo-workflows/installation/.
Can anybody give some clarity on the namespace install vs the managed namespace install? If it's a managed namespace install, how do I tell it which namespace to manage? Should I edit the Kubernetes deployment manifests? What benefit does it provide compared to a simple namespace install?
ANSWER
Answered 2022-Feb-09 at 15:13
A namespace install allows Workflows to run only in the namespace where Argo Workflows is installed.
A managed namespace install allows Workflows to run only in one namespace besides the one where Argo Workflows is installed.
Using a managed namespace install might make sense if you want some users/processes to be able to run Workflows without granting them any privileges in the namespace where Argo Workflows is installed.
For example, if I only run CI/CD-related Workflows that are maintained by the same team that manages the Argo Workflows installation, it's probably reasonable to use a namespace install. But if all the Workflows are run by a separate data science team, it probably makes sense to give them a data-science-workflows namespace and run a "managed namespace install" of Argo Workflows from another namespace.
To configure a managed namespace install, edit the workflow-controller and argo-server Deployments to pass the --managed-namespace argument.
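A sketch of the relevant Deployment fragment, reusing the hypothetical data-science-workflows namespace from above; append the flag to the container's existing args (the argo-server Deployment takes the same flag):

```yaml
spec:
  template:
    spec:
      containers:
        - name: workflow-controller
          args:
            - --managed-namespace
            - data-science-workflows
```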
You can currently only configure one managed namespace, but in the future it may be possible to manage more than one.
QUESTION
General question here, wondering if anyone has any ideas or experience trying to achieve what I'm attempting right now. I'm not entirely sure if it's even possible in the Argo workflow system...
I'm wondering if it is possible to continue a workflow regardless of whether a dynamic fan-out has finished. By dynamic fan-out I mean that B1/B2/B3 could potentially go up to B30.
I want to see if C1 can start when B1 has finished. The B stage creates a small file, and in the C stage I need to run an API request reporting that it has finished and upload said file. But in this scenario B2/B3 are still processing.
And finally, D1 would have to wait for all of C1/C2/C3...C# to finish before completing.
(Diagram of what I'm trying to achieve)
ANSWER
Answered 2022-Feb-03 at 14:15
Something like this should work:
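The original snippet is not reproduced here; as a sketch with illustrative names, one way is to wrap B and C in a single sub-template, so each item's C starts as soon as its own B is done, while D depends on the whole fan-out task:

```yaml
templates:
  - name: main
    dag:
      tasks:
        - name: A
          template: gen-items            # prints a JSON list for the fan-out
        - name: B-then-C
          depends: A
          template: process-and-upload   # runs B then C for one item
          withParam: "{{tasks.A.outputs.result}}"
          arguments:
            parameters:
              - name: item
                value: "{{item}}"
        - name: D
          depends: B-then-C              # waits for every item's B+C pair
          template: finalize
  - name: process-and-upload
    inputs:
      parameters:
        - name: item
    steps:
      - - name: B
          template: make-file
          arguments:
            parameters:
              - name: item
                value: "{{inputs.parameters.item}}"
      - - name: C
          template: upload-file
          arguments:
            parameters:
              - name: item
                value: "{{inputs.parameters.item}}"
```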
QUESTION
I am new to the Argo universe and was trying to set up Argo Workflows: https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows
I have installed the argo CLI from this page: https://github.com/argoproj/argo-workflows/releases/latest. I was trying it in my minikube setup, and my kubectl is already configured for the minikube cluster. After putting the binary in my local bin folder, I am able to run argo commands without any issues.
How does it work? What does the argo CLI connect to in order to operate?
ANSWER
Answered 2022-Feb-01 at 18:04
The argo CLI manages two API clients. The first client connects to the Argo Workflows API server. The second connects to the Kubernetes API. Depending on what you're doing, the CLI might connect to just one API or the other.
To connect to the Kubernetes API, the CLI just uses your kube config.
To connect to the Argo server, the CLI first checks for an ARGO_TOKEN environment variable. If it's not available, the CLI falls back to using the kube config.
ARGO_TOKEN is only necessary when the Argo Server is configured to require client auth, and then only if you're doing things which require access to the Argo API instead of just the Kubernetes API.
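A sketch of the two modes; the server address and token value are placeholders:

```sh
# Talk to the Argo Workflows API server directly:
export ARGO_SERVER=argo-server.example.com:2746
export ARGO_TOKEN="Bearer v2:eyJhbGci..."   # e.g. a service-account token
argo list -n argo

# Unset them and the CLI falls back to your kube config,
# talking straight to the Kubernetes API:
unset ARGO_SERVER ARGO_TOKEN
argo list -n argo
```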
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported