git-sync | Use git as a sync tool | Version Control System library
kandi X-RAY | git-sync Summary
Use git as a sync tool, without munging your source and sync VCS operations
Trending Discussions on git-sync
QUESTION
When I set up Airflow on my Kubernetes infrastructure, I ran into a problem. I referred to this blog, and changed some settings for my situation. I think everything works: the worker pod runs fine (I think), but whether I run a DAG manually or on a schedule, the web UI never updates the status; tasks just stay in running and queued... I want to know what is wrong...
here is my setting value.
Version info
...ANSWER
Answered 2022-Mar-15 at 04:01
The issue is with the Airflow Docker image you are using. The ENTRYPOINT I see is a custom .sh file you have written, and it decides whether to run a webserver or a scheduler.
The Airflow scheduler submits a pod for each task with args as follows:
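For orientation, the args the KubernetesExecutor sets on a task pod typically look along these lines; the exact list varies by Airflow version, and the DAG/task/run IDs below are placeholders, not from the question:

```yaml
# Illustrative sketch of a KubernetesExecutor task pod's args
args:
  - airflow
  - tasks
  - run
  - example_dag                           # hypothetical DAG id
  - example_task                          # hypothetical task id
  - manual__2022-03-15T00:00:00+00:00     # hypothetical run id
  - --local
  - --subdir
  - /opt/airflow/dags/example_dag.py
```

A custom ENTRYPOINT that only understands "webserver" or "scheduler" will not handle this invocation, which is why the task pods misbehave.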
QUESTION
I am using the official Helm chart for Airflow. Every pod works properly except the worker.
Even in that worker pod, two of the containers (git-sync and worker-log-groomer) work fine.
The error happens in the third container (worker), which goes into CrashLoopBackOff with exit code 137 (OOMKilled).
In my OpenShift cluster, memory usage shows about 70%.
Although this error usually indicates a memory leak, that doesn't seem to be the case here. Please help; I have been stuck on this for a week now.
Kubectl describe pod airflow-worker-0 ->
...ANSWER
Answered 2022-Mar-03 at 13:01
The issue occurs due to placing a limit under "resources" in the Helm chart's values.yaml for any of the pods.
By default it is -
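In the chart's values.yaml the default is an empty map, i.e. no requests or limits at all; removing (or raising) the worker's memory limit stops the 137/OOMKilled restarts. A sketch, assuming the stock chart layout (the example numbers are illustrative):

```yaml
# Chart default: no requests/limits on the workers.
workers:
  resources: {}

# If you do set limits, size them for your heaviest tasks, e.g.:
# workers:
#   resources:
#     requests:
#       memory: 1Gi
#     limits:
#       memory: 2Gi
```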
QUESTION
I have a pod with two containers:
- main, whose simple job is to display the content of a directory
- sidecar, whose responsibility is to synchronize the content of a blob storage into a predefined directory
In order for the synchronization to be atomic, sidecar downloads the blob storage content into a new temp directory and then switches a symlink in the target directory.
The target directory is shared between the two containers using an emptyDir volume.
main sees the symlink but cannot list the content behind it.
How can I access the latest synchronized data?
Additional information / Reason: I am trying to achieve what Apache Airflow does with Git-Sync, but instead of using Git, I need to synchronize files from an Azure Blob storage. This is necessary because (1) my content is mostly dynamic and (2) the azureFile volume type has some serious performance issues.
ANSWER
Answered 2021-Oct-27 at 11:14
A symlink is just a special kind of file that contains a filename; it doesn't actually contain the file content in any meaningful way, and it doesn't have to point to a file that exists. mktemp(1) by default creates directories in /tmp, which probably isn't in the shared volume.
Imagine putting a physical file folder in a physical file cabinet, writing "the third drawer at the very front" on a Post-It note, driving to another building, and handing the note to a colleague. The Post-It note (the symlink) still exists, but in the other building's (container filesystem's) context, the location it names isn't especially meaningful.
The easiest way around this is to ask mktemp to create the temp directory directly in the destination volume, and then create a relative-path symlink.
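A minimal sketch of that pattern; /tmp/shared-demo stands in for the emptyDir mount path (an assumption, substitute your own mount):

```shell
# The shared volume's mount path; illustrative, not from the question.
SHARED=/tmp/shared-demo
mkdir -p "$SHARED"

# 1. Download into a fresh temp dir created *inside* the shared volume,
#    not in /tmp, so both containers can reach it.
tmpdir=$(mktemp -d -p "$SHARED" sync.XXXXXX)
echo "hello" > "$tmpdir/data.txt"

# 2. Repoint the symlink with a *relative* target, so the link stays
#    valid regardless of where each container mounts the volume.
ln -sfn "$(basename "$tmpdir")" "$SHARED/current"

# The other container can now read through the link:
cat "$SHARED/current/data.txt"
```

Note that `ln -sfn` unlinks and relinks rather than swapping atomically; for a strictly atomic switch, create the link under a temporary name and `mv -T` it over the old one.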
QUESTION
I have the following code:
...ANSWER
Answered 2021-Sep-12 at 16:56
You are close:
QUESTION
I'm using the Kubernetes Helm WordPress installation, and I'm managing everything with Argo CD. How do I add a sidecar container to this deployment?
I already have the WordPress installation; what I need now is to sync wp-content with a git repository, because that's the only way to update files (the Bitnami WordPress image is a non-root container, so I can't set up SFTP).
...ANSWER
Answered 2021-Aug-03 at 13:03
In the ReplicaSet manifest, containers is a list. You can add sidecars by adding more containers to that list.
Since you're trying to sync some static assets, you might find that an init container is the better choice, since it'll complete before your main container starts.
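As a sketch of the sidecar approach (image tag, repo URL, and paths are illustrative, not taken from the question's chart):

```yaml
spec:
  containers:
    - name: wordpress
      image: bitnami/wordpress
      volumeMounts:
        - name: wp-content
          mountPath: /bitnami/wordpress/wp-content
    # The added sidecar: keeps the shared volume in sync with the repo.
    - name: git-sync
      image: registry.k8s.io/git-sync/git-sync:v4.2.3
      args:
        - --repo=https://example.com/me/wp-content.git   # hypothetical repo
        - --root=/content
        - --period=60s
      volumeMounts:
        - name: wp-content
          mountPath: /content
  volumes:
    - name: wp-content
      emptyDir: {}
```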
QUESTION
Is it possible, through either HTTPS or SSH, to clone from a private repo without creating a Secret with my git credentials? I don't see why this is recommended; anyone in the Kubernetes cluster can view my git credentials if they want to...
Both of the top two answers advocate this dangerously unsafe practice (see and also).
I've also been looking at git-sync, but it also wants to expose the git credentials to everyone in the cluster (see this answer).
Is it assumed that you'd have a service account for this? What if I don't have a service account? Am I just out of luck?
...ANSWER
Answered 2021-May-08 at 08:29
The credentials have to exist somewhere, and a Secret is the best place for them. You wouldn't give access to "anyone", though; you should use the Kubernetes RBAC policy system to limit access to Secret objects to only the places and people that need them. There are other solutions which read directly from some other database (Hashicorp Vault, AWS SSM, GCP SM, etc.), but they are generally the same in terms of access control, since the pod would be authenticating to that other system using its ServiceAccount token which ... is in a Secret. If you go all-out on this, I'm sure you can find some kind of HSM which supports GitHub, but unless you have hundreds of thousands of dollars to burn, that seems like overkill vs. just writing a better RBAC policy.
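The RBAC side of that answer can be sketched like this, assuming a Secret named git-creds and a ServiceAccount named git-sync in a sync namespace (all hypothetical names); only that ServiceAccount may read the Secret:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-git-creds
  namespace: sync
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["git-creds"]   # hypothetical Secret name
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-git-creds
  namespace: sync
subjects:
  - kind: ServiceAccount
    name: git-sync                 # hypothetical ServiceAccount
    namespace: sync
roleRef:
  kind: Role
  name: read-git-creds
  apiGroup: rbac.authorization.k8s.io
```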
QUESTION
I am using this Helm chart to deploy Airflow: https://github.com/apache/airflow/tree/master/chart
Apache Airflow version: 2.0.0
Kubernetes version: v1.19.4
What happened: I get this error when I try to execute tasks using Kubernetes:
...ANSWER
Answered 2021-Mar-11 at 17:51
I think I found the root cause of this issue. In the Airflow Helm chart I see this code:
QUESTION
When trying to run a DAG with KubernetesExecutor, I get an exception in the worker pod, which terminates immediately after start:
I have a question: why is the scheduler sending LocalExecutor as an env variable (it can be found in the pod describe result)?
Is this the right behavior?
Please find the all required files:
- airflow.cfg
- worker dag describe
- worker dag logs
- dag file
Worker pod describe result :
...ANSWER
Answered 2020-Aug-07 at 21:02
K8s runs Airflow as a Docker container. When you spin up the container you need to run it as the airflow user.
This can be achieved in your Dockerfile: you can instruct it to run as a particular user. Please let me know if you want to know more about this.
Also, for the issue above, please refer to this:
https://issues.apache.org/jira/browse/AIRFLOW-6754
Hope this answers your questions. Let me know.
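A minimal sketch of the Dockerfile change being described (the base image tag is illustrative):

```dockerfile
# Base image tag is illustrative, not from the question.
FROM apache/airflow:2.2.4
USER root
# ...any root-level setup here (apt packages, extra tooling, etc.)...
# Switch back so the container runs as the airflow user at startup.
USER airflow
```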
QUESTION
I have a pod created by a Deployment that uses the git-sync image and mounts the volume to a PVC.
...ANSWER
Answered 2020-Jan-30 at 11:05
A very basic example of how to share file content between pods using a PV/PVC:
First, create a PersistentVolume; refer to the yaml example below with a hostPath configuration.
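The referenced manifest presumably resembled something like the following hostPath PV plus a matching claim (names, size, and path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/shared          # illustrative host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Pods that mount shared-pvc will then see the same files under whatever mountPath they choose.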
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported