argo-workflows | Workflow engine for Kubernetes | BPM library
kandi X-RAY | argo-workflows Summary
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). Argo is a Cloud Native Computing Foundation (CNCF) hosted project.
Community Discussions
Trending Discussions on argo-workflows
QUESTION
I found Argo lint today. Thank you to the Argo team!!! This is a very useful tool and has saved me tons of time. The following YAML checks out with no errors, but when I try to run it, I get the following error. How can I track down what is happening?
...ANSWER
Answered 2022-Mar-18 at 11:45The complete fix is detailed here https://github.com/argoproj/argo-workflows/issues/8168#event-6261265751
For the purposes of this discussion, the output must be an explicit location (not a placeholder), e.g. /tmp/output.
I think the convention is that you do not put the .tgz suffix in the output location, but that is not yet confirmed, as there was another fix involved. Perhaps someone from the Argo team can confirm this.
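As an illustration of that fix (the artifact name is hypothetical), an output artifact with an explicit location looks roughly like:

```yaml
outputs:
  artifacts:
    - name: result
      # An explicit path, not a templated placeholder like {{...}}:
      path: /tmp/output
```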
QUESTION
I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs) but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs".
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error requests one. The error I am now receiving is:
...ANSWER
Answered 2022-Mar-01 at 15:26A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts
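The linked examples pass an artifact from one step to the next roughly like this (a condensed sketch; the image, message, and template names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: generate
            template: generate
        - - name: consume
            template: consume
            arguments:
              artifacts:
                - name: message
                  # Wire the previous step's output artifact to this input:
                  from: "{{steps.generate.outputs.artifacts.message}}"
    - name: generate
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo hello > /tmp/message"]
      outputs:
        artifacts:
          - name: message
            path: /tmp/message    # file produced by the container
    - name: consume
      inputs:
        artifacts:
          - name: message
            path: /tmp/message    # where the artifact is mounted for this step
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["cat /tmp/message"]
```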
QUESTION
I am trying to install Argo Workflows, and looking at the documentation I can see three different types of installation: https://argoproj.github.io/argo-workflows/installation/.
Can anybody give some clarity on the namespace install vs. the managed namespace install? If it's a managed namespace install, how can I tell it the managed namespace? Should I edit the Kubernetes manifest for the deployment? What benefit does it provide compared to a simple namespace install?
ANSWER
Answered 2022-Feb-09 at 15:13A namespace install allows Workflows to run only in the namespace where Argo Workflows is installed.
A managed namespace install allows Workflows to run only in one namespace besides the one where Argo Workflows is installed.
Using a managed namespace install might make sense if you want some users/processes to be able to run Workflows without granting them any privileges in the namespace where Argo Workflows is installed.
For example, if I only run CI/CD-related Workflows that are maintained by the same team that manages the Argo Workflows installation, it's probably reasonable to use a namespace install. But if all the Workflows are run by a separate data science team, it probably makes sense to give them a data-science-workflows namespace and run a "managed namespace install" of Argo Workflows from another namespace.
To configure a managed namespace install, edit the workflow-controller and argo-server Deployments to pass the --managed-namespace argument.
You can currently only configure one managed namespace, but in the future it may be possible to manage more than one.
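Concretely, that means adding the flag to the container args in each Deployment. A hedged sketch of the workflow-controller excerpt (the namespace name is illustrative; argo-server is analogous):

```yaml
# Excerpt of the workflow-controller Deployment spec.
spec:
  template:
    spec:
      containers:
        - name: workflow-controller
          args:
            - --configmap
            - workflow-controller-configmap
            - --managed-namespace
            - data-science-workflows   # the one namespace Workflows may run in
```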
QUESTION
General question here, wondering if anyone has any ideas or experience trying to achieve what I am right now. I'm not entirely sure if it's even possible in the Argo workflow system...
I'm wondering if it is possible to continue a workflow regardless of whether a dynamic fanout has finished. By dynamic fanout I mean that B1/B2/B3 could potentially grow to B30.
I want to see if C1 can start when B1 has finished. The B stage creates a small file, and then in the C stage I need to run an API request to report that it has finished and upload said file. But in this scenario B2/B3 are still processing.
And finally, D1 would have to wait for all of C1/C2/C3...C# to finish before completing.
(Diagram of what I'm trying to achieve)
...ANSWER
Answered 2022-Feb-03 at 14:15Something like this should work:
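The snippet from the original answer isn't reproduced above. One pattern that produces this behavior (all names here are illustrative) is to fan out over a two-step template that runs B then C for each item, so each C starts as soon as its own B finishes, while D waits for the whole fan-out:

```yaml
templates:
  - name: main
    dag:
      tasks:
        - name: a
          template: gen-items            # hypothetical: prints a JSON list, e.g. ["1","2","3"]
        - name: b-then-c
          dependencies: [a]
          template: b-then-c
          withParam: "{{tasks.a.outputs.result}}"   # one pair per item
          arguments:
            parameters:
              - name: item
                value: "{{item}}"
        - name: d
          dependencies: [b-then-c]       # waits for every B→C pair to finish
          template: d
  - name: b-then-c
    inputs:
      parameters:
        - name: item
    steps:
      - - name: b
          template: b
          arguments:
            parameters:
              - name: item
                value: "{{inputs.parameters.item}}"
      - - name: c                        # starts as soon as this item's B is done
          template: c
          arguments:
            parameters:
              - name: item
                value: "{{inputs.parameters.item}}"
```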
QUESTION
I am new to the Argo universe and was trying to set up Argo Workflows: https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows.
I have installed the argo CLI from this page: https://github.com/argoproj/argo-workflows/releases/latest. I was trying it in my minikube setup, and my kubectl is already configured for the minikube cluster. After putting the binary in my local bin folder, I am able to run argo commands without any issues.
How does it work? What does the argo CLI connect to in order to operate?
...ANSWER
Answered 2022-Feb-01 at 18:04The argo CLI manages two API clients. The first client connects to the Argo Workflows API server. The second connects to the Kubernetes API. Depending on what you're doing, the CLI might connect to just one API or the other.
To connect to the Kubernetes API, the CLI just uses your kube config.
To connect to the Argo server, the CLI first checks for an ARGO_TOKEN environment variable. If it's not available, the CLI falls back to using the kube config.
ARGO_TOKEN is only necessary when the Argo Server is configured to require client auth, and then only if you're doing things which require access to the Argo API instead of just the Kubernetes API.
QUESTION
Is there a way to provide an image name for the container template dynamically based on its input parameters?
We have more than 30 different tasks, each with its own image, that should be invoked identically in a workflow. The number may vary each run depending on the output of a previous task, so we don't want to, or even can't, just hardcode them inside the workflow YAML.
An easy solution would be to set the container's image field from an input parameter and use the same template for each of these tasks. But it looks like that's impossible. This workflow doesn't work:
...ANSWER
Answered 2022-Jan-14 at 11:29A possible workaround is to use when to run each task conditionally. We do need to list all possible tasks with their container images, though:
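A hedged sketch of that workaround (task names, images, and the parameter are all illustrative): one task per possible image, each gated by a when expression on the input parameter, so only the matching task runs.

```yaml
templates:
  - name: run-task
    inputs:
      parameters:
        - name: task-name
    dag:
      tasks:
        - name: task-a
          template: task-a
          when: "{{inputs.parameters.task-name}} == task-a"
        - name: task-b
          template: task-b
          when: "{{inputs.parameters.task-name}} == task-b"
  - name: task-a
    container:
      image: registry.example.com/task-a:latest   # hypothetical image
  - name: task-b
    container:
      image: registry.example.com/task-b:latest   # hypothetical image
```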
QUESTION
One of our Argo Workflow steps may hit a rate limit and I want to be able to tell argo how long it should wait until the next retry.
Is there a way to do it?
I've seen Retries in the documentation, but it only talks about retry counts and backoff strategies, and it doesn't look like the delay can be parameterized.
...ANSWER
Answered 2022-Jan-12 at 23:47As far as I know there's no built-in way to add a pause before the next retry.
However, you could build your own with Argo's exit handler feature.
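For context, the backoff that the Retries documentation describes is static; it does delay retries, but the delay can't be taken from a runtime parameter. A sketch (all values illustrative):

```yaml
retryStrategy:
  limit: "5"            # retry at most five times
  retryPolicy: Always
  backoff:
    duration: "30s"     # wait before the first retry
    factor: "2"         # multiply the wait on each subsequent retry
    maxDuration: "10m"  # overall cap on time spent retrying
```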
QUESTION
I have an Argo workflow with dynamic fan-out tasks that do some map operation (in the Map-Reduce sense). I want to create a reducer that aggregates their results. It's possible to do that when the outputs of each mapper are small and can be passed as an output parameter. See this SO question-answer for a description of how to do it.
But how can output artifacts be aggregated with Argo without writing custom logic in each mapper to save them to some storage, and reading from it in the reducer?
...ANSWER
Answered 2022-Jan-10 at 14:20Artifacts are more difficult to aggregate than parameters.
Parameters are always text and are generally small. This makes it easy for Argo Workflows to aggregate them into a single JSON object which can then be consumed by a "reduce" step.
Artifacts, on the other hand, may be any type or size. So Argo Workflows is limited in how much it can help with aggregation.
The main relevant feature it provides is declarative repository write/read operations. You can specify, for example, an S3 prefix to write each artifact to. Then, in the reduce step, you can load everything from that prefix and perform your aggregation logic.
Argo Workflows provides a generic map/reduce example. But besides artifact writing/reading, you pretty much have to do the aggregation logic yourself.
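In the spirit of that example, a hedged sketch (bucket layout, commands, and names are all illustrative, and an artifact repository is assumed to be configured): each mapper writes its output artifact under a shared S3 prefix, and the reducer loads the whole prefix.

```yaml
templates:
  - name: map
    inputs:
      parameters:
        - name: part
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["do-map > /tmp/part.json"]       # hypothetical mapper command
    outputs:
      artifacts:
        - name: part
          path: /tmp/part.json
          s3:
            # Per-mapper key under a shared prefix:
            key: "results/{{workflow.name}}/parts/{{inputs.parameters.part}}.json"
  - name: reduce
    inputs:
      artifacts:
        - name: parts
          path: /tmp/parts                    # everything under the prefix lands here
          s3:
            key: "results/{{workflow.name}}/parts"
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["do-reduce /tmp/parts"]          # hypothetical reducer command
```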
QUESTION
I am trying to delete (and recreate) the Argo namespace, but it won't fully delete because I tried launching an EventSource and an EventBus there. Now these will not delete.
I have tried to delete them via YAML and individually, with no success yet.
The frustrating result is that I cannot re-launch Argo.
...ANSWER
Answered 2021-Dec-12 at 15:27For anyone who stumbles onto this question, it is a permissions issue. Make certain your service account has permissions to work in both namespaces (argo and argo-events).
QUESTION
The main question is whether there is a way to finish a pod from the client-go SDK. I'm not trying to delete the pod; I just want to finish it with a phase/status of Completed.
In the code, I'm trying to update the pod phase, but it doesn't work: it does not return an error or panic, but the pod does not finish. My code:
...ANSWER
Answered 2021-Oct-29 at 12:01You cannot set the phase or anything else in the Pod status field; it is read-only. According to the Pod Lifecycle documentation, your pod will have a phase of Succeeded after "All containers in the Pod have terminated in success, and will not be restarted." So this will only happen if you can cause all of your pod's containers to exit with status code 0, and if the pod restartPolicy is set to OnFailure or Never. If it is set to Always (the default), then the containers will eventually restart and your pod will eventually return to the Running phase.
In summary, you cannot do what you are attempting to do via the Kube API directly. You must:
- Ensure your pod has a restartPolicy that can support the Succeeded phase.
- Cause your application to terminate, possibly by sending it SIGINT or SIGTERM, or possibly by commanding it via its own API.
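A minimal sketch of the first point (names are illustrative): a pod whose restartPolicy allows it to reach Succeeded once its container exits 0.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: finishes-cleanly
spec:
  restartPolicy: Never   # or OnFailure; Always keeps the pod Running
  containers:
    - name: main
      image: alpine:3.18
      command: [sh, -c, "echo done; exit 0"]
```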
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported