ruby-docker | Ruby runtime for Google Cloud Platform | GCP library

by GoogleCloudPlatform | Ruby | Version: Current | License: Apache-2.0

kandi X-RAY | ruby-docker Summary

ruby-docker is a Ruby library typically used in Cloud and GCP applications. It has no reported bugs or vulnerabilities, carries a permissive (Apache-2.0) license, and has low support activity. You can download it from GitHub.
Ruby runtime for Google Cloud Platform

Support

  • ruby-docker has a low-activity ecosystem.
  • It has 124 stars and 51 forks. There are 31 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 10 open issues and 24 closed issues. On average, issues are closed in 82 days. There are 2 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of ruby-docker is current.

Quality

  • ruby-docker has 0 bugs and 0 code smells.

Security

  • Neither ruby-docker nor its dependent libraries have any reported vulnerabilities.
  • ruby-docker code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • ruby-docker is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • ruby-docker releases are not available. You will need to build from source code and install.
  • Installation instructions, examples and code snippets are available.
  • It has 2192 lines of code, 92 functions and 87 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.

ruby-docker Key Features

Ruby runtime for Google Cloud Platform

Local builds and tests

bundle install
bundle exec rake

Runtime images

./build-ruby-runtime-images.sh -i -p gcp-runtimes -s
./release-ruby-runtime-images.sh -p gcp-runtimes

Prebuilt binaries

./build-ruby-binary-images.sh -p gcp-runtimes -s -c <versions>
./release-ruby-binary-images.sh -p gcp-runtimes -c <versions>

Runtime pipeline

./build-ruby-runtime-pipeline.sh -b gcp-runtimes -p gcp-runtimes -s
./release-ruby-runtime-pipeline.sh -b gcp-runtimes -p gcp-runtimes

Exec Wrapper

./build-app-engine-exec-wrapper.sh -p google-appengine -s
./release-app-engine-exec-wrapper.sh -p google-appengine

Community Discussions

Trending Discussions on ruby-docker
  • Error while performing Cloud Build and connecting to Cloud SQL

QUESTION

Error while performing Cloud Build and connecting to Cloud SQL

Asked 2021-Jun-25 at 20:46

I am trying to automate the process of building a Django app on Google Cloud Build. This app has to communicate with a PostgreSQL database hosted on Cloud SQL, and there are three stages I would like to complete:

  1. Building the image with a Dockerfile
  2. Pushing the image to Artifact Registry
  3. Running Django migrations (python manage.py migrate) by connecting to the Cloud SQL PostgreSQL instance through the Cloud SQL Proxy

I have successfully made the first two stages work with these configuration files:

cloudbuild.yaml

steps:
  # Build the container image
  - id: "build image"
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "${_IMAGE_TAG}", "."]
  # Push the container image to Artifact Registry
  - id: "push image"
    name: "gcr.io/cloud-builders/docker"
    args: ["push", "${_IMAGE_TAG}"]
  # Apply Django migrations
  - id: "apply migrations"
    # https://github.com/GoogleCloudPlatform/ruby-docker/tree/master/app-engine-exec-wrapper
    name: "gcr.io/google-appengine/exec-wrapper"
    # Args: image tag, Cloud SQL instance, environment variables, command
    args:
      ["-i", "${_IMAGE_TAG}",
       "-s", "${PROJECT_ID}:${_DB_REGION}:${_DB_INSTANCE_NAME}=tcp:127.0.0.1:3306",
       "-e", "DJANGO_SECRET_ID=${_DJANGO_SECRET_ID}",
       "--", "python", "manage.py", "migrate"]

# Substitutions (more substitutions within the trigger on Google Cloud Build)
substitutions:
  _IMAGE_TAG: ${_REPOSITORY_REGION}-docker.pkg.dev/${PROJECT_ID}/${_REPOSITORY}/${_IMAGE_NAME}:${COMMIT_SHA}

# Display the image in the build results
# https://cloud.google.com/build/docs/building/build-containers#store-images
images:
  - '${_IMAGE_TAG}'

Dockerfile

FROM python:3.7-slim

# Add new /app directory to the base image
ENV APP_HOME /app
WORKDIR $APP_HOME

# Removes output stream buffering, allowing for more efficient logging
ENV PYTHONUNBUFFERED 1

# Copy requirements.txt to WORKDIR and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy local code to the container image.
COPY . .

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
# PORT is automatically added to the running container and shouldn't be set by us
# https://cloud.google.com/run/docs/reference/container-contract#env-vars
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 main_project.wsgi:application

settings.py

import os
import io

import environ
import google.auth
from google.cloud import secretmanager


# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Load environment variables
env = environ.Env(DEBUG=(bool, False))
env_file = os.path.join(BASE_DIR, ".env")

# ...from file
if os.path.exists(env_file):
    env.read_env(env_file)
# ...from Secret manager
else:
    # Get Google project ID
    _, os.environ["GOOGLE_CLOUD_PROJECT"] = google.auth.default()
    g_project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")

    # Pull secrets
    sm_client = secretmanager.SecretManagerServiceClient()
    django_secret_id = os.environ.get("DJANGO_SECRET_ID")
    name = f"projects/{g_project_id}/secrets/{django_secret_id}/versions/latest"
    payload = sm_client.access_secret_version(name=name).payload.data.decode("UTF-8")

    # Load secrets
    env.read_env(io.StringIO(payload))

...

DATABASES = {
    'default': env.db()
}

SecretManager / env.py

DATABASE_URL=postgres://username:user_password@127.0.0.1:3306/db_name
SECRET_KEY=50 characters
DEBUG=True

For some reason, however, I am having an issue with reaching my Cloud SQL instance through the Cloud SQL Proxy:

Starting Step #2 - "apply migrations"
Step #2 - "apply migrations": Pulling image: gcr.io/google-appengine/exec-wrapper
Step #2 - "apply migrations": Using default tag: latest
Step #2 - "apply migrations": latest: Pulling from google-appengine/exec-wrapper
Step #2 - "apply migrations": 75f546e73d8b: Already exists
Step #2 - "apply migrations": 0f3bb76fc390: Already exists
Step #2 - "apply migrations": 3c2cba919283: Already exists
Step #2 - "apply migrations": ca8b528f3beb: Pulling fs layer
Step #2 - "apply migrations": 9192e910d340: Pulling fs layer
Step #2 - "apply migrations": 8d727c8f3915: Pulling fs layer
Step #2 - "apply migrations": 8d727c8f3915: Download complete
Step #2 - "apply migrations": 9192e910d340: Verifying Checksum
Step #2 - "apply migrations": 9192e910d340: Download complete
Step #2 - "apply migrations": ca8b528f3beb: Verifying Checksum
Step #2 - "apply migrations": ca8b528f3beb: Download complete
Step #2 - "apply migrations": ca8b528f3beb: Pull complete
Step #2 - "apply migrations": 9192e910d340: Pull complete
Step #2 - "apply migrations": 8d727c8f3915: Pull complete
Step #2 - "apply migrations": Digest: sha256:2ed781e6546168ea45a0c7483b725d4a159b0d88770445ababb5420a8fb5b5b4
Step #2 - "apply migrations": Status: Downloaded newer image for gcr.io/google-appengine/exec-wrapper:latest
Step #2 - "apply migrations": gcr.io/google-appengine/exec-wrapper:latest
Step #2 - "apply migrations": 
Step #2 - "apply migrations": ---------- INSTALL IMAGE ----------
Step #2 - "apply migrations": f77d0fc9799de606907614381a65d904bf75c89d: Pulling from my-google-project-id/rep-backend-staging/image-backend-staging
Step #2 - "apply migrations": Digest: sha256:0be33db09badd30dcd22d7b9d1b711276e67a35bb5b19ae337ee2af63a480448
Step #2 - "apply migrations": Status: Image is up to date for europe-west1-docker.pkg.dev/my-google-project-id/rep-backend-staging/image-backend-staging:f77d0fc9799de606907614381a65d904bf75c89d
Step #2 - "apply migrations": europe-west1-docker.pkg.dev/my-google-project-id/rep-backend-staging/image-backend-staging:f77d0fc9799de606907614381a65d904bf75c89d
Step #2 - "apply migrations": 
Step #2 - "apply migrations": ---------- CONNECT CLOUDSQL ----------
Step #2 - "apply migrations": cloud_sql_proxy is running.
Step #2 - "apply migrations": Connections: my-google-project-id:europe-west1:my-cloudsql-instance=tcp:127.0.0.1:3306.
Step #2 - "apply migrations": 
Step #2 - "apply migrations": ---------- EXECUTE COMMAND ----------
Step #2 - "apply migrations": python manage.py migrate
Step #2 - "apply migrations": Traceback (most recent call last):
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
Step #2 - "apply migrations":     self.connect()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect
Step #2 - "apply migrations":     self.connection = self.get_new_connection(conn_params)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
Step #2 - "apply migrations":     connection = Database.connect(**conn_params)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 127, in connect
Step #2 - "apply migrations":     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
Step #2 - "apply migrations": psycopg2.OperationalError: could not connect to server: Connection refused
Step #2 - "apply migrations":   Is the server running on host "127.0.0.1" and accepting
Step #2 - "apply migrations":   TCP/IP connections on port 3306?
Step #2 - "apply migrations": 
Step #2 - "apply migrations": 
Step #2 - "apply migrations": The above exception was the direct cause of the following exception:
Step #2 - "apply migrations": 
Step #2 - "apply migrations": Traceback (most recent call last):
Step #2 - "apply migrations":   File "manage.py", line 20, in <module>
Step #2 - "apply migrations":     main()
Step #2 - "apply migrations":   File "manage.py", line 16, in main
Step #2 - "apply migrations":     execute_from_command_line(sys.argv)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
Step #2 - "apply migrations":     utility.execute()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
Step #2 - "apply migrations":     self.fetch_command(subcommand).run_from_argv(self.argv)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 328, in run_from_argv
Step #2 - "apply migrations":     self.execute(*args, **cmd_options)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 369, in execute
Step #2 - "apply migrations":     output = self.handle(*args, **options)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 83, in wrapped
Step #2 - "apply migrations":     res = handle_func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle
Step #2 - "apply migrations":     executor = MigrationExecutor(connection, self.migration_progress_callback)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/migrations/executor.py", line 18, in __init__
Step #2 - "apply migrations":     self.loader = MigrationLoader(self.connection)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/migrations/loader.py", line 49, in __init__
Step #2 - "apply migrations":     self.build_graph()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/migrations/loader.py", line 212, in build_graph
Step #2 - "apply migrations":     self.applied_migrations = recorder.applied_migrations()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/migrations/recorder.py", line 76, in applied_migrations
Step #2 - "apply migrations":     if self.has_table():
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/migrations/recorder.py", line 56, in has_table
Step #2 - "apply migrations":     return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 260, in cursor
Step #2 - "apply migrations":     return self._cursor()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 236, in _cursor
Step #2 - "apply migrations":     self.ensure_connection()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
Step #2 - "apply migrations":     self.connect()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 90, in __exit__
Step #2 - "apply migrations":     raise dj_exc_value.with_traceback(traceback) from exc_value
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
Step #2 - "apply migrations":     self.connect()
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect
Step #2 - "apply migrations":     self.connection = self.get_new_connection(conn_params)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "apply migrations":     return func(*args, **kwargs)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
Step #2 - "apply migrations":     connection = Database.connect(**conn_params)
Step #2 - "apply migrations":   File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 127, in connect
Step #2 - "apply migrations":     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
Step #2 - "apply migrations": django.db.utils.OperationalError: could not connect to server: Connection refused
Step #2 - "apply migrations":   Is the server running on host "127.0.0.1" and accepting
Step #2 - "apply migrations":   TCP/IP connections on port 3306?

This problem does not occur when I try to run the Django migrations locally. The connection is established perfectly with ./cloud_sql_proxy -instances=my-google-project-id:europe-west1:my-cloudsql-instance=tcp:127.0.0.1:3306

ANSWER

Answered 2021-Jun-25 at 20:46

According to the documentation, the exec-wrapper simulates an App Engine flexible environment.

Therefore, it expects only a Cloud SQL instance connection name and nothing more (without the =tcp:127.0.0.1:3306 suffix that you can append when running the Cloud SQL Proxy yourself). That also means it creates a unix socket, not a TCP port, to connect to the database.

I recommend reviewing your configuration, switching to a unix socket connection, and giving it another try.
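A minimal sketch of what that change can look like on the Django side (the instance connection name, database name, and credentials below mirror the question and are placeholders, not verified values): pass only my-google-project-id:europe-west1:my-cloudsql-instance to the wrapper's -s argument, and point Django at the socket directory the proxy creates instead of 127.0.0.1:3306.

settings.py (sketch)

# Connect through the unix socket that the exec wrapper's Cloud SQL Proxy
# creates under /cloudsql/<PROJECT>:<REGION>:<INSTANCE>. psycopg2 treats a
# HOST value starting with "/" as a unix socket directory.
# All names below are placeholders taken from the question.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "db_name",
        "USER": "username",
        "PASSWORD": "user_password",
        "HOST": "/cloudsql/my-google-project-id:europe-west1:my-cloudsql-instance",
        "PORT": "5432",
    }
}

If the settings keep django-environ's env.db(), the DATABASE_URL secret would instead need to encode the same /cloudsql/... socket path rather than 127.0.0.1:3306.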

Source https://stackoverflow.com/questions/68130507

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install ruby-docker

To build and release prebuilt binary images, use the build-ruby-binary-images.sh and release-ruby-binary-images.sh scripts. When building, either set the -c flag or provide a prebuilt-versions.txt file to specify which Ruby versions to build. You may also want to set the -s flag to mark the new images as staging.

Support

See CONTRIB.md
