kubectl | Issue tracker and mirror of kubectl code | Command Line Interface library
kandi X-RAY | kubectl Summary
The k8s.io/kubectl repo is used to track issues for the kubectl CLI distributed with k8s.io/kubernetes. It also contains packages intended for use by client programs; for example, these packages are vendored into k8s.io/kubernetes for use in the kubectl CLI client. That client will eventually move here too.
Community Discussions
Trending Discussions on kubectl
QUESTION
I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.
Output from /etc/hosts:
ANSWER
Answered 2021-Oct-10 at 18:29

error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"

In Kubernetes v1.22 the extensions/v1beta1 API for Ingress was removed, so the manifest needs to use networking.k8s.io/v1 instead.
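For reference, a minimal sketch of an Ingress migrated to the networking.k8s.io/v1 API; the resource name, host, service, and port are illustrative assumptions, not taken from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress              # illustrative name
spec:
  rules:
    - host: example.local       # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # illustrative service
                port:
                  number: 80
```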
QUESTION
I was setting up my new Mac for my EKS environment. After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error mentioned below in the command block.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac as well.
I tried the latest version (1.23.0) of kubectl as well, still the same error. With aws-iam-authenticator (version 0.5.5) I was not able to download a lower version.
Can someone help me resolve it?
ANSWER
Answered 2022-Mar-28 at 09:41

I have the same problem.
You're using aws-iam-authenticator 0.5.5; AWS changed the way it behaves in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try changing the K8s context you're using to v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3 - you might need to build it from source if you're using the M1 architecture, as there's no darwin-arm64 binary built for it.
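A minimal sketch of the kubeconfig user entry updated to the v1beta1 exec API; the user and cluster names are illustrative assumptions:

```yaml
users:
  - name: eks-user                 # illustrative user name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator
        args:
          - token
          - -i
          - my-cluster             # illustrative cluster name
```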
QUESTION
I'm following a tutorial https://docs.openfaas.com/tutorials/first-python-function/; currently, I have the right image
ANSWER
Answered 2022-Mar-16 at 08:10

If your image has a latest tag, the Pod's imagePullPolicy will be automatically set to Always. Each time the pod is created, Kubernetes tries to pull the newest image.
Try not tagging the image as latest, or manually setting the Pod's imagePullPolicy to Never.
If you're using static manifest to create a Pod, the setting will be like the following:
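The original snippet was not preserved; a minimal sketch, assuming an illustrative pod and image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-function                          # illustrative name
spec:
  containers:
    - name: my-function
      image: my-registry/my-function:0.1.0   # avoid the latest tag
      imagePullPolicy: Never                 # use the image already present on the node
```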
QUESTION
I've been facing this problem since yesterday; there were no problems before.
My environment is
- Windows 11
- Docker Desktop 4.4.4
- minikube 1.25.1
- kubernetes-cli 1.23.3
ANSWER
Answered 2022-Mar-07 at 08:38

This seems to be a bug introduced with the 1.25.0 version of minikube: https://github.com/kubernetes/minikube/issues/13503. A PR to revert the changes that introduced the bug is already open: https://github.com/kubernetes/minikube/pull/13506
The fix is scheduled for minikube v1.26.
QUESTION
I run Prometheus locally at http://localhost:9090/targets with
ANSWER
Answered 2021-Dec-28 at 08:33

There are many agents capable of sending metrics collected in k8s to a remote Prometheus server outside the cluster: for example, Prometheus itself now supports an agent mode, the exporter from OpenTelemetry, or a managed Prometheus offering.
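A minimal sketch of Prometheus running in agent mode and forwarding metrics via remote_write; the remote endpoint URL is an illustrative assumption:

```yaml
# prometheus.yml - in agent mode Prometheus only scrapes and forwards,
# it does not serve queries or store long-term data locally
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
remote_write:
  - url: https://prometheus.example.com/api/v1/write   # illustrative endpoint
```

Started with `prometheus --enable-feature=agent --config.file=prometheus.yml`.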
QUESTION
I am trying to get a volume mounted as a non-root user in one of my containers. I'm trying an approach from this SO post using an initContainer to set the correct user, but when I try to start the configuration I get an "unbound immediate PersistentVolumeClaims" error. I suspect it's because the volume is mounted in both my initContainer and container, but I'm not sure why that would be the issue: I can see the initContainer taking the claim, but I would have thought that when it exited it would release it, letting the normal container take the claim. Any ideas or alternatives for getting the directory mounted as a non-root user? I did try using securityContext/fsGroup, but that seemed to have no effect. The /var/rdf4j directory below is the one that is being mounted as root.
Configuration:
ANSWER
Answered 2022-Jan-21 at 08:43

1 pod has unbound immediate PersistentVolumeClaims. - this error means the pod cannot bind to the PVC on the node where it has been scheduled to run. This can happen when the PVC is bound to a PV that refers to a location that is not valid on the node that the pod is scheduled to run on. It would be helpful if you could post the complete output of kubectl get nodes -o wide, kubectl describe pvc triplestore-data-storage, and kubectl describe pv triplestore-data-storage-dir to the question.
In the meantime, PVC/PV is optional when using hostPath. Can you try the following spec and see if the pod comes online:
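The original spec was not preserved; below is a minimal sketch under the answer's assumptions (hostPath used directly instead of a PVC, plus the initContainer approach from the question). The image names, UID/GID, and host path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rdf4j
spec:
  initContainers:
    - name: fix-permissions
      image: busybox:1.35                 # illustrative image
      command: ["sh", "-c", "chown -R 1000:1000 /var/rdf4j"]
      volumeMounts:
        - name: data
          mountPath: /var/rdf4j
  containers:
    - name: rdf4j
      image: my-registry/rdf4j:latest     # illustrative image
      securityContext:
        runAsUser: 1000                   # illustrative non-root UID
      volumeMounts:
        - name: data
          mountPath: /var/rdf4j
  volumes:
    - name: data
      hostPath:
        path: /data/rdf4j                 # illustrative host path
        type: DirectoryOrCreate
```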
QUESTION
Not sure if this is OS specific, but on my M1 Mac, I'm installing the Nginx controller and the resource example located in the official Quick Start guide for the controller, for Docker Desktop for Mac. The instructions are as follows:
ANSWER
Answered 2022-Jan-03 at 16:11

I replicated your issue and got similar behaviour on Ubuntu 20.04.3.
The problem is that the NGINX Ingress controller Local testing guide does not mention that the demo.localdev.me address points to 127.0.0.1 - that's why it works without editing the /etc/hosts or /etc/resolve.conf file. It probably works like the *.localtest.me addresses:

Here's how it works. The entire domain name localtest.me - and all wildcard entries - point to 127.0.0.1. So without any changes to your host file you can immediately start testing with a local URL.

There is also a good and detailed explanation in this topic.
So Docker Desktop / Kubernetes change nothing on your host.
The address demo2.localdev.me also points to 127.0.0.1, so it should work as well for you - and as I tested in my environment, the behaviour was exactly the same as for demo.localdev.me.
You may run the nslookup command to check which IP address a specific domain name points to, for example:
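A quick check (output abbreviated; the resolved address is what the prose above states):

```sh
$ nslookup demo.localdev.me
Name:    demo.localdev.me
Address: 127.0.0.1
```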
QUESTION
I would like to run a pod on one of my IoT devices.
Each one of those devices contains an environment variable I want this pod to use.
Is there any way to inject this env variable into the pod using the built-in templating of helm/kubectl?
I was trying the following in my deployment.yaml file:
ANSWER
Answered 2021-Dec-06 at 01:10

It's not possible to directly pass the host's env vars to the pods. I often do that by creating a ConfigMap instead.
Create a ConfigMap with the --from-literal option:
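A minimal sketch, assuming a device-side variable named DEVICE_ID; the ConfigMap and key names are illustrative:

```sh
# Run on the device: capture the host's env var into a ConfigMap
kubectl create configmap device-config --from-literal=DEVICE_ID="$DEVICE_ID"
```

The pod can then pick the value up via configMapKeyRef:

```yaml
# In the pod spec: expose the ConfigMap key as an env var
env:
  - name: DEVICE_ID
    valueFrom:
      configMapKeyRef:
        name: device-config
        key: DEVICE_ID
```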
QUESTION
Update
ANSWER
Answered 2021-Nov-22 at 08:59

By default, the integration only logs to standard output, so that logs can be gathered by Kubernetes.
You can provide advanced logging configuration using Quarkus properties.
E.g. create a logging.properties file with the following content:
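The original snippet was not preserved; as a stand-in, here is a minimal sketch assuming standard java.util.logging (JUL) property syntax - the handler, categories, and levels are illustrative assumptions, not the original answer's content:

```properties
# Illustrative JUL-style configuration - adjust categories and levels as needed
handlers=java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.ConsoleHandler.level=FINE
io.quarkus.level=DEBUG
```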
QUESTION
I have an API that recently started receiving more traffic, about 1.5x. That also led to a doubling in latency:
This surprised me since I had set up autoscaling of both nodes and pods as well as GKE internal load balancing.
My external API passes the request to an internal server which uses a lot of CPU. And looking at my VM instances it seems like all of the traffic got sent to one of my two VM instances (a.k.a. Kubernetes nodes):
With loadbalancing I would have expected the CPU usage to be more evenly divided between the nodes.
Looking at my deployment there is one pod on the first node:
And two pods on the second node:
My service config:
ANSWER
Answered 2021-Nov-05 at 14:01

Google Cloud provides health checks to determine whether backends respond to traffic. Health checks connect to backends on a configurable, periodic basis. Each connection attempt is called a probe. Google Cloud records the success or failure of each probe.
Based on a configurable number of sequential successful or failed probes, an overall health state is computed for each backend. Backends that respond successfully for the configured number of times are considered healthy.
Backends that fail to respond successfully for a separately configurable number of times are unhealthy.
The overall health state of each backend determines eligibility to receive new requests or connections. So one possibility for an instance not getting requests is that the instance is unhealthy. Refer to this documentation for creating health checks.
You can configure the criteria that define a successful probe. This is discussed in detail in the section How health checks work.
Edit1:
A Pod is evicted from the node due to lack of resources, or the node fails. If a node fails, Pods on the node are automatically scheduled for deletion.
To find the exact reason for pods getting evicted, run kubectl describe pod and look for the node name of this pod, followed by kubectl describe node, which will show what type of resource cap the node is hitting under the Conditions: section.
From my experience this happens when the host node runs out of disk space.
Also, after starting the pod you should run kubectl logs -f and watch the logs for more detailed information.
Refer to this documentation for more information on eviction.
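A sketch of the commands described above; the pod and node names are placeholders:

```sh
kubectl describe pod <pod-name>    # note the Node: field and the Events at the bottom
kubectl describe node <node-name>  # check the Conditions: section for resource pressure
kubectl logs -f <pod-name>         # follow the pod's logs for more detail
```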
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported