sql-proxy | SQL Proxy for PlanetScale DB | SQL Database library

 by   planetscale Go Version: v0.12.0 License: Apache-2.0

kandi X-RAY | sql-proxy Summary

sql-proxy is a Go library typically used in Database, SQL Database, PostgreSQL, MariaDB, Oracle applications. sql-proxy has no bugs and no reported vulnerabilities, it has a permissive license, and it has low support. You can download it from GitHub.

SQL Proxy for PlanetScale DB

            kandi-support Support

              sql-proxy has a low-activity ecosystem.
              It has 39 stars, 3 forks, and 7 watchers.
              It has had no major release in the last 12 months.
              There are 5 open issues and 11 closed issues. On average, issues are closed in 10 days. There are 4 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of sql-proxy is v0.12.0.

            kandi-Quality Quality

              sql-proxy has 0 bugs and 8 code smells.

            kandi-Security Security

              sql-proxy has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              sql-proxy code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              sql-proxy is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              sql-proxy releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.
              It has 690 lines of code, 32 functions and 5 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed sql-proxy and discovered the below as its top functions. This is intended to give you an instant insight into sql-proxy implemented functionality, and help decide if they suit your requirements.
            • realMain is the main entry point.
            • copyThenClose copies data between connections, then closes both the local and remote ends.
            • NewClient returns a new client.
            • Shutdown shuts down the client.
            • myCopy reads from src into dst.
            • newRemoteCertSource returns a new remote certificate source.
            • newLocalCertSource returns a local certificate source.
            • printVersion prints the version.
            • logError logs an error.
            • main is the entry point for testing.

            sql-proxy Key Features

            No Key Features are available at this moment for sql-proxy.

            sql-proxy Examples and Code Snippets

            sql-proxy, Using the Docker container

              docker pull planetscale/pscale-proxy:latest

              $ docker run -p 127.0.0.1:3306:3306 planetscale/pscale-proxy \
                --host 0.0.0.0 \
                --org "$PLANETSCALE_ORG" \
                --database "$PLANETSCALE_DATABASE" \
                --branch "$PLANETSCALE_BRANCH" \
                --service-token "$

            sql-proxy, Usage

              pscale auth login

              sql-proxy-client --token "$(cat ~/.config/planetscale/access-token)" --org "org" --database "db" --branch "branch"

              mysql -u root -h 127.0.0.1 -P 3307

            sql-proxy, Installation

              brew install planetscale/tap/pscale-proxy

            Community Discussions

            QUESTION

            CloudSQL Proxy on GKE : Service vs Sidecar
            Asked 2022-Mar-16 at 15:38

            Does anyone know the pros and cons for installing the CloudSQL-Proxy (that allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service as opposed to making it a sidecar against the application container?

            I know that it is mostly used as a sidecar. I have used it as both (in non-production environments), but I never understood why sidecar is more preferable to service. Can someone enlighten me please?

            ...

            ANSWER

            Answered 2022-Mar-15 at 13:19

            The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.

            When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.

            A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. In Kubernetes, a pod is a group of one or more containers with shared storage and network, and a sidecar is a utility container in a pod that is loosely coupled to the main application container.
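            As a rough sketch of the sidecar pattern described above (the pod name, images, and instance string are illustrative assumptions, not taken from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp   # hypothetical application name
spec:
  containers:
    # The main application container reaches the database over localhost,
    # so traffic never leaves the pod.
    - name: app
      image: gcr.io/my-project/myapp:latest   # placeholder image
      env:
        - name: DB_HOST
          value: "127.0.0.1"
    # The Cloud SQL Auth proxy runs as a sidecar in the same pod.
    - name: cloud-sql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:latest
      command:
        - "/cloud_sql_proxy"
        - "-instances=my-project:region:instance=tcp:3306"   # placeholder instance
```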

            Sidecar pros: it scales as you increase the number of pods, it can be injected automatically, and it is already used by service meshes.

            Sidecar cons: it is a bit harder to adopt, since developers can't just deploy their app but must deploy a whole stack in each deployment. It also consumes more resources and is harder to secure; for example, with log collection, every pod must run its own aggregator to push logs to the database or queue.

            Refer to the documentation for more information.

            Source https://stackoverflow.com/questions/71480852

            QUESTION

            GKE Django MySQL is not accessible during rolling update
            Asked 2022-Feb-28 at 16:14

            I have Django application deployed in GKE. (Done with this tutorial)

            My configuration file: myapp.yaml

            ...

            ANSWER

            Answered 2022-Feb-28 at 16:14

            It looks like the sidecar for the proxy is terminating, and not letting you clean up before the application does.

            Consider using the -term_timeout flag to give yourself some time: https://github.com/GoogleCloudPlatform/cloudsql-proxy#-term_timeout30s
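            A minimal sketch of how the flag might be passed to the proxy sidecar in a Deployment's pod template (the instance string is a placeholder):

```yaml
# Fragment of a pod template; only the proxy sidecar is shown.
containers:
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:latest
    command:
      - "/cloud_sql_proxy"
      - "-instances=my-project:region:instance=tcp:3306"  # placeholder
      - "-term_timeout=30s"  # wait up to 30s for connections to drain on SIGTERM
```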

            Source https://stackoverflow.com/questions/71283772

            QUESTION

            How to configure GKE Autopilot w/Envoy & gRPC-Web
            Asked 2021-Dec-14 at 20:31

            I have an application running on my local machine that uses React -> gRPC-Web -> Envoy -> Go app and everything runs with no problems. I'm trying to deploy this using GKE Autopilot and I just haven't been able to get the configuration right. I'm new to all of GCP/GKE, so I'm looking for help to figure out where I'm going wrong.

            I was following this doc initially, even though I only have one gRPC service: https://cloud.google.com/architecture/exposing-grpc-services-on-gke-using-envoy-proxy

            From what I've read, GKE Autopilot mode requires using External HTTP(s) load balancing instead of Network Load Balancing as described in the above solution, so I've been trying to get that to work. After a variety of attempts, my current strategy has an Ingress, BackendConfig, Service, and Deployment. The deployment has three containers: my app, an Envoy sidecar to transform the gRPC-Web requests and responses, and a cloud SQL proxy sidecar. I eventually want to be using TLS, but for now, I left that out so it wouldn't complicate things even more.

            When I apply all of the configs, the backend service shows one backend in one zone and the health check fails. The health check is set for port 8080 and path /healthz which is what I think I've specified in the deployment config, but I'm suspicious because when I look at the details for the envoy-sidecar container, it shows the Readiness probe as: http-get HTTP://:0/healthz headers=x-envoy-livenessprobe:healthz. Does ":0" just mean it's using the default address and port for the container, or does indicate a config problem?

            I've been reading various docs and just haven't been able to piece it all together. Is there an example somewhere that shows how this can be done? I've been searching and haven't found one.

            My current configs are:

            ...

            ANSWER

            Answered 2021-Oct-14 at 22:35

            Here is some documentation about Setting up HTTP(S) Load Balancing with Ingress. This tutorial shows how to run a web application behind an external HTTP(S) load balancer by configuring the Ingress resource.

            Related to creating an HTTP load balancer on GKE using Ingress, I found two threads where the created instances are marked as unhealthy.

            In the first one, they mention the necessity of manually adding a firewall rule to allow the HTTP load balancer IP range to pass the health check.

            In the second one, they mention that the Pod’s spec must also include containerPort. Example:
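            The original example was elided; a hedged sketch of what a containerPort declaration looks like (container name, image, and port number are assumptions):

```yaml
# Pod template fragment: the container exposes the port the Service targets.
containers:
  - name: app
    image: gcr.io/my-project/myapp:latest  # placeholder
    ports:
      - containerPort: 8080  # must match the Service's targetPort
```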

            Source https://stackoverflow.com/questions/69560536

            QUESTION

            Unschedulable Kubernetes pods on GCP using Autoscaler
            Asked 2021-Nov-28 at 21:04

            I have a Kubernetes cluster with autoscalable pods using Autopilot. Suddenly they stopped autoscaling. I'm new to Kubernetes and I don't know exactly what to do or what I'm supposed to paste from the console when asking for help.

            The pods automatically become Unschedulable; inside the cluster their state is Pending instead of Running, and they don't let me enter or interact with them.

            Also, I can't delete or stop them in the GCP Console. There's no issue with memory or insufficient CPU, because there's not much running on the cluster.

            The cluster was working as expected before this issue.

            ...

            ANSWER

            Answered 2021-Nov-28 at 21:04

            Pods failed to schedule on any node because none of the nodes have CPU available.

            The cluster autoscaler tried to scale up but backed off after a failed scale-up attempt, which indicates possible issues with scaling up the managed instance groups that are part of the node pool.

            The cluster autoscaler tried to scale up, but since the quota limit was reached, no new nodes could be added.

            You can't see the Autopilot GKE VMs that are being counted against your quota.

            Try creating the Autopilot cluster in another region. If your needs are no longer fulfilled by an Autopilot cluster, then go for a Standard cluster.

            Source https://stackoverflow.com/questions/70139877

            QUESTION

            Is there a way to impersonate a service account with the cloudsql_proxy executable?
            Asked 2021-Oct-09 at 23:01

            https://github.com/GoogleCloudPlatform/cloudsql-proxy

            I have found this is possible by setting impersonation system-wide with this command: gcloud config set auth/impersonate_service_account .

            The proxy executable seems to read the gcloud config.

            But that is really clunky. I want to start the proxy and specify a specific service account to impersonate without having to change it system-wide. Also, I don't want to resort to generating non-expiring JSON keys; I want to use impersonation.

            Many Gcloud commands now support a specific switch for this, but the proxy exe does not. See this GitHub issue (with no response from google): https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/417

            Can I run gcloud auth print-access-token --impersonate-service-account= and set an env var the proxy exe will pick up or something?

            I can't find anything in the code except this mention of gcloud: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/eca37935e7cd54efcd612c170e46f45c1d8e3556/cmd/cloud_sql_proxy/cloud_sql_proxy.go#L160

            • When the gcloud command-line tool is installed on the local machine, the "active account" is used for authentication. Run 'gcloud auth list' to see which accounts are installed on your local machine and 'gcloud config list account' to view the active account.

            which is funny, because when auth/impersonate_service_account is set, gcloud config list account doesn't say anything about it.

            Is there a way to have Gcloud do impersonation on a per session basis?

            EDIT: just to follow up, per the answer the --token flag totally works, so now I can run the proxy with IAM auth while impersonating a GSA simultaneously:

            ...
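            The command itself was elided; it presumably resembled this sketch (the service-account address and instance string are placeholders):

```shell
# Mint a short-lived access token as the impersonated service account,
# then hand it to the proxy instead of relying on ambient credentials.
TOKEN=$(gcloud auth print-access-token \
  --impersonate-service-account=proxy-sa@my-project.iam.gserviceaccount.com)

cloud_sql_proxy \
  -token="$TOKEN" \
  -enable_iam_login \
  -instances=my-project:region:instance=tcp:5432
```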

            ANSWER

            Answered 2021-Sep-02 at 19:29

            I found this trick with the --token parameter

            Source https://stackoverflow.com/questions/69032860

            QUESTION

            Zonal network endpoint group unhealthy even though that container application working properly
            Asked 2021-Sep-22 at 15:18

            I've created a Kubernetes cluster on Google Cloud and even though the application is running properly (which I've checked running requests inside the cluster) it seems that the NEG health check is not working properly. Any ideas on the cause?

            I've tried changing the service from NodePort to LoadBalancer and different ways of adding annotations to the service. I was thinking that perhaps it might be related to the HTTPS requirement on the Django side.

            ...

            ANSWER

            Answered 2021-Sep-22 at 12:26

            I'm still not sure why, but I've managed to get it working by moving the service to port 80 while keeping the health check on port 5000.

            Service config:

            Source https://stackoverflow.com/questions/69277599

            QUESTION

            How to export specific routes over a peered vpc connection in google cloud?
            Asked 2021-Aug-18 at 19:16

            I have the following VPC connectivity in google cloud:

            VPC A <===== VPC Peering connection =====> VPC B (google managed VPC for cloudsql vi a private service connect)

            VPC A route table:

            Destination      Next hop
            10.2.4.0/24      VPN connection 1
            10.2.5.0/24      VPN connection 2

            I want to export specific custom routes (for example 10.2.4.0/24) from VPC A to VPC B, but the VPC peering options only show an option to export all custom routes. Is there a way to export specific routes? Google Cloud's Cloud SQL Auth proxy seems to be the way to go, but I wanted to hear about this from other folks.

            ...

            ANSWER

            Answered 2021-Aug-18 at 17:03

            Currently, the best solution is to set up a SOCKS5 proxy in the intermediary VPC between the client and your Cloud SQL instance. The Cloud SQL Auth proxy supports chaining through a SOCKS5 proxy, a protocol that forwards TCP packets to a destination IP address. This method allows the intermediate node to forward encrypted traffic from the Cloud SQL Auth proxy to the destination Cloud SQL instance.

            The SOCKS5 support can be configured by specifying a SOCKS url in an ALL_PROXY environment variable when invoking the Cloud SQL Auth proxy. Users can direct the Cloud SQL Auth proxy to use a SOCKS5 proxy with the following command:
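            The command was elided; it presumably resembled this sketch (the SOCKS proxy address and instance string are placeholders):

```shell
# Route the Cloud SQL Auth proxy's outbound traffic through a SOCKS5 proxy
# running on an intermediary host in the peered VPC.
ALL_PROXY=socks5://10.0.0.5:1080 \
  cloud_sql_proxy -instances=my-project:region:instance=tcp:3306
```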

            Source https://stackoverflow.com/questions/68828177

            QUESTION

            How do I get the client IP with Kubernetes?
            Asked 2021-Jul-04 at 15:25

            I'm trying to get the real client IP when using Kubernetes. Many people said I should put externalTrafficPolicy: Local in my Kubernetes settings; the question is, I don't even know where to put it, and I keep getting errors. Here is my YAML file

            ...

            ANSWER

            Answered 2021-Jul-04 at 15:25

            externalTrafficPolicy belongs under service spec:
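            A minimal sketch of where the field goes (the service name, selector, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp   # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```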

            Source https://stackoverflow.com/questions/68246119

            QUESTION

            Autoscaling Deployments with Cloud Monitoring metrics
            Asked 2021-May-21 at 09:35

            I am trying to auto-scale my pods based on CloudSQL instance response time. We are using cloudsql-proxy for secure connection. Deployed the Custom Metrics Adapter.

            https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml

            ...

            ANSWER

            Answered 2021-May-21 at 09:35
            1. Refer to the link below to deploy a HorizontalPodAutoscaler (HPA) resource to scale your application based on Cloud Monitoring metrics:

            https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric_4

            2. It looks like the custom metric name is different in the app and HPA deployment configuration files (YAML). The metric and application names should be the same in both the app and HPA deployment configuration files.

            3. In the HPA deployment YAML file:

              a. Replace custom-metric-stackdriver-adapter with custom-metric (or change the metric name to custom-metric-stackdriver-adapter in the app deployment YAML file).

              b. Add "namespace: default" next to the application name in metadata. Also ensure you add the namespace in the app deployment configuration file.

              c. Delete the duplicate lines 6 & 7 (minReplicas: 1, maxReplicas: 5).

              d. Go to Cloud Console -> Kubernetes Engine -> Workloads. Delete the workloads (application-name & custom-metrics-stackdriver-adapter) created by the app deployment YAML and adapter_new_resource_model.yaml files.

              e. Now apply the configurations for the resource model, app, and HPA (YAML files).
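            A hedged sketch of an HPA referencing a custom metric, consistent with the steps above (the deployment and metric names are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # must match the app Deployment's name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: custom-metric   # must match the metric name the app exports
        target:
          type: AverageValue
          averageValue: "20"
```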

            Source https://stackoverflow.com/questions/67261520

            QUESTION

            deployment throwing error for init container only when I add a second regular container to my deployment
            Asked 2021-May-05 at 07:54

            Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.

            This requires two containers (one for SonarQube and the other for the Cloud SQL proxy, in order to connect to the DB).

            The SonarQube container, however, also requires an init container to set some special memory requirements.

            When I create the deployment with just the SonarQube image and the init container, it works fine, but this won't be of any use, as I need the Cloud SQL proxy container to connect to my external DB. When I add that container, though, the deployment suddenly errors with the below:

            ...

            ANSWER

            Answered 2021-May-05 at 07:54

            Your YAML file is incorrect: you have two spec: blocks, and there should be only one. You need to combine them. Under the spec block should be an initContainers block, then containers, and finally a volumes block. Look at the correct YAML file below:
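            As a structural sketch of what the answer describes (the init command, images, and instance string are placeholders, not the asker's actual config):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:                      # a single spec: block for the pod
      initContainers:          # 1) init container first
        - name: init-sysctl
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:              # 2) then both regular containers
        - name: sonarqube
          image: sonarqube:7.8-community
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:latest
          command: ["/cloud_sql_proxy", "-instances=my-project:region:instance=tcp:5432"]
      volumes: []              # 3) volumes last (empty placeholder)
```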

            Source https://stackoverflow.com/questions/67382354

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install sql-proxy

            Install via the Homebrew tap (macOS only for now). Alternatively, download the .deb or .rpm from the releases page and install with dpkg -i or rpm -i respectively, or download the pre-compiled binaries from the releases page and copy them to the desired location.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/planetscale/sql-proxy.git

          • CLI

            gh repo clone planetscale/sql-proxy

          • sshUrl

            git@github.com:planetscale/sql-proxy.git
