gcr | A node gitlab-ci-runner | Runtime Environment library
kandi X-RAY | gcr Summary
gcr v4.x only supports Node v4.x+. To use gcr with an older version of Node, please use gcr v3.x.
Community Discussions
Trending Discussions on gcr
QUESTION
Why does kubectl cluster-info report the control plane and not the master node? And why is the control plane running on a specific IP address (https://192.168.49.2:8443) and not on localhost or 127.0.0.1? I ran the following commands in a terminal:
- minikube start --driver=docker
😄 minikube v1.20.0 on Ubuntu 16.04
✨ Using the docker driver based on user configuration
🎉 minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
> gcr.io/k8s-minikube/kicbase...: 358.10 MiB / 358.10 MiB 100.00% 797.51 K
❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.22, but successfully downloaded kicbase/stable:v0.0.22 as a fallback image
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
- kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
KubeDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

...

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ANSWER
Answered 2021-Jun-15 at 12:59

The Kubernetes project is making an effort to move away from wording that can be considered offensive, with one concrete recommendation being renaming master to control-plane. In other words, control-plane and master mean essentially the same thing, and the goal is to switch the terminology to use control-plane exclusively going forward. (More info in this answer)

The kubectl command is a command-line interface that executes on a client (i.e. your computer) and interacts with the cluster through the control plane.

The IP address you are seeing through cluster-info is the IP address through which you reach the control plane.
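As a quick sanity check, you can confirm that the advertised address is simply the minikube node's IP; a minimal sketch (minikube is the default cluster name):

  # print the IP of the minikube node -- this should match the
  # https://192.168.49.2:8443 address shown by cluster-info
  minikube ip

  # the API server listens on that node IP inside the docker driver's
  # network, not on the host's 127.0.0.1
  kubectl cluster-info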
QUESTION
I'm trying to follow the instructions in this guide, but under Docker.
I set up a folder with:
...

ANSWER

Answered 2021-Jun-14 at 06:46

If you want to use Kubernetes inside a Docker container, my suggestion is to use k3d.

k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in Docker. k3d makes it very easy to create single- and multi-node k3s clusters in Docker, e.g. for local development on Kubernetes.

You can download, install and use it directly with Docker. For more information you can follow the official documentation at https://k3d.io/.

To get the list of pods you don't need to create a k8s cluster inside a Docker container. What you need is a config file for the k8s cluster:

├── Dockerfile
├── config
└── main.py

0 directories, 3 files

After that:
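For reference, a minimal sketch of the k3d route; the cluster name dev-cluster and the image tag pod-lister are hypothetical:

  # create a single-node k3s cluster running inside docker
  k3d cluster create dev-cluster

  # k3d can print the cluster's kubeconfig; save it next to the Dockerfile
  k3d kubeconfig get dev-cluster > config

  # build and run the container, mounting the kubeconfig it expects
  docker build -t pod-lister .
  docker run -v "$(pwd)/config:/root/.kube/config" pod-lister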
QUESTION
In my GCP project, I have a python API running in a docker container (using connexion). I want to expose the API (with an API key) using API Gateway.
When I deploy the Docker container with --ingress internal, I get "Access is forbidden." on API calls over the Gateway. So the API Gateway cannot access the Cloud Run container.

When I use --ingress all, everything works as expected, but then my internal API is accessible from the web, which is not what I want.
I created a service account for this:
...

ANSWER

Answered 2021-Jun-13 at 12:12

Ingress internal means "accept only the requests coming from the project's VPC or VPC SC perimeter".

When you use API Gateway, you aren't in your VPC; it's serverless, it lives in a Google-managed VPC. Therefore, your queries are forbidden.

And because API Gateway can't be plugged into a VPC connector (for now) and thus can't route requests through your VPC, you can't use this ingress=internal mode.

Thus, the solution is to set the ingress to all, which is not a concern if you authorize only the legitimate accounts to access it.

For that, check on the Cloud Run service whether allUsers is granted the roles/run.invoker role in your project.
- If yes, remove it
Then, create a service account and grant it the roles/run.invoker on the Cloud Run service.
Follow this documentation
- Step 4: update the x-google-backend in your OpenAPI spec file to add the correct authentication audience when you call your Cloud Run service (it's the base service URL)
- Step 5: create a gateway with a backend service account; set the service account that you created previously
In the end, only authenticated and authorized accounts will be able to reach your Cloud Run service.

All unauthorized access is filtered by the Google Front End and discarded before reaching your service. Therefore, your service isn't invoked for nothing, and you pay nothing!

Only API Gateway (and any other accounts that you allow on the Cloud Run service) can invoke the Cloud Run service.

So, OK, your URL is public, reachable from the wild internet, but protected by the Google Front End and IAM.
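A minimal sketch of those IAM steps with gcloud; the service name my-api, the region, and the service-account name gw-invoker are hypothetical:

  # 1. make sure the service is not publicly invokable
  gcloud run services remove-iam-policy-binding my-api \
    --member="allUsers" --role="roles/run.invoker" --region=us-central1

  # 2. create a dedicated service account for the gateway
  gcloud iam service-accounts create gw-invoker

  # 3. let only that account invoke the Cloud Run service
  gcloud run services add-iam-policy-binding my-api \
    --member="serviceAccount:gw-invoker@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/run.invoker" --region=us-central1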
QUESTION
I am trying to connect to Firestore from code running in a GKE container. A simple REST GET API is working fine, but when I read/write Firestore, I get "Missing or insufficient permissions."
...

ANSWER

Answered 2021-Jun-12 at 12:26

Looks like the key itself might not be correctly visible to the pod. I would start by getting into the pod with kubectl exec --stdin --tty <pod-name> -- /bin/bash and ensuring that /var/key.json (per your config) is accessible and has the correct credentials.
The following would be a good way to mount the secret:
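The answer's snippet isn't reproduced here; below is a minimal sketch of one way to mount a key as a Kubernetes secret. The secret name firestore-key, the deployment name my-app, and the image path are hypothetical:

  # create a secret from the local service-account key file
  kubectl create secret generic firestore-key --from-file=key.json=/path/to/key.json

  # mount just that file at /var/key.json via subPath
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    selector:
      matchLabels: { app: my-app }
    template:
      metadata:
        labels: { app: my-app }
      spec:
        containers:
        - name: my-app
          image: gcr.io/PROJECT_ID/my-app
          env:
          - name: GOOGLE_APPLICATION_CREDENTIALS
            value: /var/key.json
          volumeMounts:
          - name: firestore-key
            mountPath: /var/key.json
            subPath: key.json
            readOnly: true
        volumes:
        - name: firestore-key
          secret:
            secretName: firestore-key
  EOF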
QUESTION
Could somebody explain to me why Google Cloud Build creates intermediate containers to run commands?
...

ANSWER

Answered 2021-Jun-10 at 01:54

Cloud Build uses Docker to execute builds. To understand why Cloud Build creates intermediate containers, you first must understand the Docker build process.

For each build step, Cloud Build executes a Docker container as an instance of docker run. Each step is processed in an intermediate container.

"Those intermediate containers can succeed or fail. If they succeed, the intermediate container is merged with the image from the last successful build step, and then the intermediate container is deleted."

From a performance perspective, removing intermediate containers is part of the build process and helps reduce the size of your container image.
There are already some existing articles that further explains the Docker build process. Here are some interesting links:
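To make the step-per-container model concrete, here is a minimal sketch of a cloudbuild.yaml; each entry under steps runs in its own container spun up from the named builder image:

  # each step = one `docker run` of the builder image named in `name`
  cat > cloudbuild.yaml <<'EOF'
  steps:
  - name: 'gcr.io/cloud-builders/docker'   # step 0 runs in its own container
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
  - name: 'gcr.io/cloud-builders/docker'   # step 1 runs in a fresh container
    args: ['push', 'gcr.io/$PROJECT_ID/app']
  EOF
  gcloud builds submit --config cloudbuild.yaml .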
QUESTION
I am trying to run a beam job on dataflow using the python sdk.
My directory structure is :
...

ANSWER

Answered 2021-Jun-08 at 09:22

Probably the wrapper-runner script generated by Bazel (you can find the path to it by calling bazel build on a target) restricts the set of modules available to your script. The proper approach is to fetch the PyPI dependencies with Bazel; look at this example.
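A minimal sketch of declaring the PyPI dependencies in Bazel so the Beam modules end up on the wrapper script's path, assuming a recent rules_python setup; the repo and target names are hypothetical:

  # WORKSPACE: resolve PyPI packages from a requirements file
  cat >> WORKSPACE <<'EOF'
  load("@rules_python//python:pip.bzl", "pip_parse")
  pip_parse(
      name = "pypi",
      requirements_lock = "//:requirements.txt",
  )
  load("@pypi//:requirements.bzl", "install_deps")
  install_deps()
  EOF

  # BUILD: depend on the resolved packages from the py_binary target
  cat >> BUILD <<'EOF'
  load("@pypi//:requirements.bzl", "requirement")
  py_binary(
      name = "main",
      srcs = ["main.py"],
      deps = [requirement("apache-beam")],
  )
  EOF

  bazel build //:main   # prints the path to the generated wrapper-runner script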
QUESTION
Has anyone figured out how to pull from private GCR repos in the containrrr watchtower image in docker compose?
For context, I ran gcloud auth configure-docker on the host, and added these volumes to watchtower:
ANSWER
Answered 2021-Jun-04 at 15:20

I'm unfamiliar with Watchtower but familiar with GCR.

If you want to authenticate to GCR and then interact with it solely through clients of the Docker Registry API (i.e. docker [push|pull] etc.), then you may want to consider creating a suitably IAM'd service account and key, and mounting the key via a volume into Watchtower. Then you will be able to authenticate using docker login ... and avoid needing to install or use the Google Cloud SDK (gcloud).
See: https://cloud.google.com/container-registry/docs/advanced-authentication#json-key
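A minimal sketch of that JSON-key flow; the service-account name gcr-puller is hypothetical:

  # service account that can only read images
  gcloud iam service-accounts create gcr-puller
  gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:gcr-puller@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"   # GCR images live in a GCS bucket

  # create a key and log in with it -- no gcloud needed on the Watchtower host
  gcloud iam service-accounts keys create key.json \
    --iam-account="gcr-puller@PROJECT_ID.iam.gserviceaccount.com"
  cat key.json | docker login -u _json_key --password-stdin https://gcr.io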
QUESTION
I have a GitHub repo with 2 branches on it, develop and main. The first is the "test" environment and the other is the "production" environment. I am working with Google Kubernetes Engine and I have automated deployment from a push on GitHub to the deploy on GKE. So our workflow is:
- Pull develop
- Write code and test locally
- When everything is fine locally, push on develop (it will automatically deploy on the GKE workload app_name_develop)
- QA tests on app_name_develop
- If QA tests pass, we create a pull request to put develop into main
- Automatically deploy on the GKE workload app_name_production (from the main branch)
The deployment of the container is defined in the Dockerfile and the Kubernetes deployment is defined in kubernetes/app.yaml. Those two files are tracked with Git inside the repo.

The problem here is that when we create a pull request to put develop into main, it also takes the two files app.yaml and Dockerfile from develop to main. We end up with the settings from develop in main, and it messes up the whole thing.
I can't define env variables in those files because they could end up in the wrong branch. My question is: how can I exclude those files from the pull request? Or is there any way to manage multiple environments without having to manually modify the files after each pull request?

I don't know if it can help, but here is my Dockerfile:
...

ANSWER

Answered 2021-Jun-04 at 14:40

You can't selectively ignore some files from a pull request. But there are 2 simple workarounds for this:

First:
- Create a new branch from develop
- Replace the non-required files from main
- Create a pull request from this new branch

Second:
- Create a new branch from main
- Bring in the changes to the required files from develop
- Create a pull request from this new branch

Either of these methods will work. Which is easier depends on how many files are to be included/excluded.
Example :
Considering main as target and dev as source
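The answer's snippet isn't reproduced here; a minimal sketch of the second workaround with git, where the branch name merge-develop and the src/ path are hypothetical:

  # start from the target branch
  git checkout main
  git checkout -b merge-develop

  # bring over only the application code, not the environment files
  git checkout develop -- src/

  # main's Dockerfile and kubernetes/app.yaml stay untouched
  git commit -m "Bring develop changes into main without env files"
  git push origin merge-develop   # open the pull request from this branch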
QUESTION
My Firebase functions were running fine till last night, but they have stopped working now. When I run my functions I get this error in my Google Cloud Platform log:
"Step #5 - "exporter": [31;1mERROR: [0mfailed to export: failed to write image to the following tags: [us.gcr.io/tookforms/gcf/us-central1/77926137-2972-4613-947e-c66d12cfd46f:calc_version-59: GET https://storage.googleapis.com/us.artifacts.tookforms.appspot.com/containers/images/sha256:b18e538d0dbca11a254142f571dfce8058959925b5e8c2c25679211b8b1bf0c6?access_token=REDACTED: unexpected status code 404 Not Found:
NoSuchKey
The specified key does not exist.
No such object: us.artifacts.tookforms.appspot.com/containers/images/sha256:b18e538d0dbca11a254142f571dfce8058959925b5e8c2c25679211b8b1bf0c6
]" insertId: "3f132e37-fa6b-4f0a-8dc4-1244dca5a7a5-228"
It says it's trying to upload some image somewhere on Google Cloud Platform, but I don't have anything to do with any image in my function. I don't even understand exactly what "image" means here.
This is the second error I am getting just below the first error -
ERROR: build step 5 "us.gcr.io/fn-img/buildpacks/nodejs12/builder:nodejs12_20210310_12_21_0_RC00" failed: step exited with non-zero status: 246
I tried looking up what status code 246 means, but apparently it's something internal to Google; I am not sure.
Here's my function code -
...

ANSWER

Answered 2021-Jun-04 at 12:22

Something was wrong with Firebase Functions on Google's cloud servers. I just removed the faulty Firebase function from the functions dashboard and deployed my local function again, and it worked.
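If you'd rather do the same from the CLI, a minimal sketch; the function name calc is inferred from the calc_version-59 tag in the log and may differ:

  # delete the broken deployment, then redeploy from local source
  firebase functions:delete calc --region us-central1 --force
  firebase deploy --only functions:calc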
QUESTION
I am using Google Cloud Build to build my Maven projects and I use a JFrog Artifactory registry to store Maven artifacts, which Cloud Build needs. I tried with several documents [1], [2], but time after time it gave many errors. Is there a proper, up-to-date guide to integrating Cloud Build and JFrog Artifactory? A proper authentication method should be used rather than username/password; the API key method can be used.
[1]. https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/jfrog
EDIT 1
I set M2_HOME as MAVEN_HOME. Then that issue was fixed, but a new error appeared: Unsupported major.minor version 52.0. This is a common issue with a Java version mismatch.
Error message :
...

ANSWER

Answered 2021-Jun-04 at 06:13

I solved this issue using a Maven settings.xml file. I followed the steps below.

Create a Maven settings.xml in the root directory.
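A minimal sketch of such a settings.xml wired up for an Artifactory API key; the server id central and the ARTIFACTORY_API_KEY environment variable are hypothetical:

  # write the settings file that the Cloud Build mvn step will use
  cat > settings.xml <<'EOF'
  <settings>
    <servers>
      <server>
        <!-- id must match the repository id referenced in pom.xml -->
        <id>central</id>
        <username>ci-user</username>
        <!-- Artifactory accepts the API key in place of a password -->
        <password>${env.ARTIFACTORY_API_KEY}</password>
      </server>
    </servers>
  </settings>
  EOF

  # point Maven at it in the build step
  mvn -s settings.xml package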
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported