container-builder | Website code for container-builder.com | Continuous Deployment library
kandi X-RAY | container-builder Summary
Container Builder - Docker Compose skeletons for Projects.
Community Discussions
Trending Discussions on container-builder
QUESTION
This is similar to "Passing files from Google Cloud Container Builder to Docker build task", but I can't seem to figure out what the difference is.
I am attempting to build a simple Java program and package it into a container using Google Cloud Build. I am mostly following https://cloud.google.com/build/docs/building/build-java, but using my own repo, which is a fork of https://github.com/jchraibi/cloud-native-workshop
...ANSWER
Answered 2021-May-06 at 19:29
Thank you for your question! I cloned your repo, added a cloudbuild.yaml at the root, and added a Dockerfile in the inventory-quarkus/src/main/docker directory. I'm sure this isn't exactly the repo structure you're working with, but the concept should carry over.
Essentially, you want to use the dir field to set your working directory between the steps to more easily pass the data around. This cloudbuild.yaml worked for me:
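The answer's cloudbuild.yaml itself was not captured in this excerpt. A minimal sketch of the idea, assuming a Maven build inside the inventory-quarkus module followed by a Docker build from src/main/docker (the image names and tag are placeholders, not the answerer's exact file):

steps:
- name: 'maven:3-jdk-11'
  entrypoint: 'mvn'
  args: ['package', '-DskipTests']
  dir: 'inventory-quarkus'          # run Maven inside the module directory
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/inventory-quarkus', '-f', 'src/main/docker/Dockerfile', '.']
  dir: 'inventory-quarkus'          # same working dir, so the jar built above is visible
images: ['gcr.io/$PROJECT_ID/inventory-quarkus']

Because both steps set the same dir, the jar Maven writes to target/ is available to the Docker step without any copying between steps.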
QUESTION
I'm trying to build a build pipeline for my application and share a specific folder between steps using volumes.
The problem is that in my first step (unit tests) I have to install all the libraries in requirements.txt to be able to run my unit tests, and after that I have to build my application with my Dockerfile in another step. I don't want to reinstall all the requirements, so I thought I could copy the already-installed requirements into the Docker build step. Am I able to do that? I followed the thread below and tried to replicate it, but I still have problems.
Passing files from Google Cloud Container Builder to Docker build task
Here is a sample of what I've done:
My cloudbuild.yaml:
...ANSWER
Answered 2020-Oct-05 at 18:46
The Cloud Build VM persists /workspace across steps, so you may create e.g. /workspace/requirements and use requirements in subsequent steps.
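A rough sketch of what that can look like (the image tags, paths, and test command are assumptions, not the asker's actual pipeline; it assumes pytest is listed in requirements.txt):

steps:
- name: 'python:3.9-slim'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      pip install -r requirements.txt --target=/workspace/requirements
      PYTHONPATH=/workspace/requirements python -m pytest tests/
- name: 'gcr.io/cloud-builders/docker'
  # /workspace (the default working directory) persists between steps,
  # so the Dockerfile can COPY requirements/ instead of reinstalling them.
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']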
QUESTION
After the latest updates to gcloud and docker I'm unable to access images in my Google Container Registry. Locally, when I run gcloud auth configure-docker as per the instructions after updating gcloud, I get the following message:
ANSWER
Answered 2018-Apr-11 at 21:05
Never found a way to directly resolve the docker-credential-gcloud issue, but the following got me up and running again. WARNING: the following will delete all your existing docker images and install a bunch of gcloud utilities:
- gcloud components install docker-credential-gcr
- Restart the terminal completely
- docker-credential-gcr configure-docker
- screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
- umount /var/lib/docker/overlay2
- rm -rf /var/lib/docker
- Restart the terminal completely.
QUESTION
I am running my monolithic application in a Docker container on k8s (GKE).
The application has Python and Node dependencies, plus webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes most of the time generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce the time I tried using the Kaniko builder.
Issue:
Because Docker caches layers, this works perfectly for the Python code. But when there is any change in a JS or CSS file we need to generate a new bundle, and instead of generating a new bundle the build reuses the cached layer.
Is there any way to force building a new bundle (or using the cache) by passing some value to the Dockerfile?
Here is my Dockerfile:
...ANSWER
Answered 2019-Sep-11 at 07:42
I would suggest creating separate build pipelines for your Docker images, since the npm and pip requirements don't change that often. This will dramatically improve the speed by reducing the time spent on the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS). You store those builder images in your registry and use them whenever something in the project changes. Something like this:
First Docker builder // python-builder:YOUR_TAG (gitrev, date, etc.)
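The rest of the answer's example was not captured here. As a hedged illustration of the suggestion, not the answerer's exact setup, the dependency image could get its own rarely-run Cloud Build config (file names and image names below are placeholders):

# cloudbuild.deps.yaml - run only when requirements.txt / package.json change
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/python-builder:YOUR_TAG', '-f', 'Dockerfile.deps', '.']
images: ['gcr.io/$PROJECT_ID/python-builder:YOUR_TAG']

The application's own multi-stage Dockerfile then starts FROM gcr.io/$PROJECT_ID/python-builder:YOUR_TAG (with a matching node builder for the webpack stage), so the expensive pip/npm installs are already baked in and only the bundle and application code are rebuilt on each commit.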
QUESTION
I would like to use the Google Cloud SDK (including the Python extensions for App Engine) inside Compute Engine, so I can replicate my local development setup in a virtual machine, i.e. run the local dev_appserver.py and unit tests on the VM, or deploy new app versions to Google App Engine.
After creating a new VM instance from the default Ubuntu 16.04 image (machine type n1-standard-1), I noticed that gcloud is already pre-installed.
ANSWER
Answered 2019-Aug-06 at 07:42
Apt-get is the way to install the App Engine component on an Ubuntu/Debian system.
If this is a recurring installation across your VMs, you might want to write a startup script to do it, or save the finished installation as an image, depending on your boot-time requirements.
As for the shared processor, that will not affect your installation at all.
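For reference, a minimal sketch of the apt-get route on such a VM (the package name assumes Google's Cloud SDK apt repository, which default GCE Ubuntu images already have configured):

sudo apt-get update
sudo apt-get install google-cloud-sdk-app-engine-python

This adds the App Engine Python component, including the local dev_appserver.py, alongside the pre-installed gcloud.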
QUESTION
I'm trying the following tutorial.
Automatic serverless deployments with Cloud Source Repositories and Container Builder
But I got the error below.
...ANSWER
Answered 2018-Mar-15 at 10:53
While I don't know the reason, I found a workaround.
QUESTION
We're testing out Container Builder (CB), and part of our requirements is sending messages to Slack.
This tutorial works great, but it would be helpful if we could specify the source of the build, so we don't have to click into the message to see which repo/trigger failed or succeeded.
Is there a variable we can pass to the cloud function in the tutorial? I couldn't find helpful documentation.
Ideally, it would be great if CB had a Slack integration GUI that made these options configurable, but c'est la vie.
...ANSWER
Answered 2018-Jun-03 at 20:23
You can add source information to the Slack message by adding a new item to the fields list within the createSlackMessage function. You need to make sure title and value are strings.
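A hedged sketch of what that change could look like (the exact shape of the tutorial's createSlackMessage may differ; the repoName lookup assumes a trigger-based build whose source is a repoSource):

const createSlackMessage = (build) => {
  // Pull the repository name out of the build's source, if present.
  const repoName =
    (build.source && build.source.repoSource && build.source.repoSource.repoName) || 'unknown';
  return {
    text: `Build \`${build.id}\``,
    attachments: [{
      title: 'Build logs',
      title_link: build.logUrl,
      fields: [
        { title: 'Status', value: String(build.status) },     // title and value must be strings
        { title: 'Repository', value: String(repoName) },
      ],
    }],
  };
};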
QUESTION
I am using Container Builder to process huge JSON files and transform them. It's a nice non-standard use of the service, as described here.
Is it possible to trigger a Container Builder build and pass a parameter to it via Cloud Functions? This would allow acting on newly uploaded files in GCS and processing them via Container Builder automatically.
Currently I am trying to use the REST API to trigger it (I am new to Node.js), but I get a 404 on my URL. I am developing on a Cloud Shell instance with full API access.
The URL I am trying to hit, via a PUT request with a JSON body containing the JSON equivalent of a successfully run cloudbuild.yaml, is: https://cloudbuild.googleapis.com/v1/projects/[PROJECT_ID]/builds
I am using the requests library from Node.js:
...ANSWER
Answered 2018-May-10 at 15:03
The procedure you propose requires three different steps:
Google Cloud Storage → Cloud Functions → API call.
Given the requirements you describe, it could be better to use Container Builder's Build Triggers.
You upload the files to a Google Cloud Source Repository and create a trigger. Every time you push a change to the repository, Container Builder will run the build automatically. This way you avoid Cloud Functions, the API call, and Node.js.
This reduces the procedure to a single step, which lowers complexity and increases reliability.
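Triggers can be set up from the Cloud Console; with a reasonably recent gcloud, a trigger on a Cloud Source Repository can also be created from the command line. A hedged sketch (the repository name, branch pattern, and config path are placeholders, not from the original answer):

gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --branch-pattern='^master$' \
    --build-config=cloudbuild.yaml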
QUESTION
I'm a bit new to Google Cloud and am using a storage bucket to host a static website.
I've integrated automated builds via a build trigger when my master branch gets updated. I can see the changes when I push to GitHub, but when a pre-existing file such as index.html gets updated, the file loses the "Share publicly" permission.
I've followed the tutorial below, with the only difference being that object permissions are now handled at the individual file level on the platform rather than at the top level for the bucket.
https://cloud.google.com/community/tutorials/automated-publishing-container-builder
This is my cloudbuild.yaml file
...ANSWER
Answered 2018-Apr-18 at 09:59
If you don't configure the bucket so that all objects in it are publicly readable by default, you'll need to re-apply the permission to each newly uploaded file.
If you know all your updated files need to be publicly readable, you can use the -a option with your rsync command and the canned ACL named "public-read". Your cloudbuild.yaml file would look like this:
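The file from the answer was not captured here; a sketch along the lines of the tutorial's rsync step, with the source directory and bucket name as placeholders:

steps:
- name: 'gcr.io/cloud-builders/gsutil'
  # -a public-read applies the canned ACL to every object uploaded by the sync
  args: ['-m', 'rsync', '-r', '-c', '-d', '-a', 'public-read', './site', 'gs://your-static-site-bucket']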
QUESTION
In the Google Source Repositories docs, it asks you to use git config credential.helper gcloud.sh to allow Git to authenticate.
Recently, that's prevented me from using osxkeychain auth with GitHub - after adding that command, I get this error message when I attempt to pull from GitHub (on a repo whose only remotes are GitHub remotes):
...ANSWER
Answered 2018-Feb-16 at 22:53
The problem here is that the instructions overwrite any existing credential helper. To restrict the credential helper so it only applies to Google Source Repositories, run:
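The command itself wasn't captured in this excerpt; the documented way to scope the gcloud helper to Google Source Repositories only (so osxkeychain keeps handling GitHub) looks like this, applied globally or per-repository as needed:

git config --global credential.'https://source.developers.google.com'.helper gcloud.sh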
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported