pgbouncer | lightweight connection pooler for PostgreSQL | Database library

 by pgbouncer | C | Version: pgbouncer_1_19_1 | License: Non-SPDX

kandi X-RAY | pgbouncer Summary


pgbouncer is a C library typically used in Database and PostgreSQL applications. pgbouncer has no bugs, it has no vulnerabilities, and it has medium support. However, pgbouncer has a Non-SPDX license. You can download it from GitHub.

Lightweight connection pooler for PostgreSQL. Sources, bug tracking:

            kandi-support Support

              pgbouncer has a medium active ecosystem.
              It has 2059 star(s) with 374 fork(s). There are 61 watchers for this library.
              There was 1 major release in the last 12 months.
              There are 157 open issues and 402 have been closed. On average, issues are closed in 167 days. There are 54 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of pgbouncer is pgbouncer_1_19_1.

            kandi-Quality Quality

              pgbouncer has 0 bugs and 0 code smells.

            kandi-Security Security

              pgbouncer has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              pgbouncer code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              pgbouncer has a Non-SPDX License.
              A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

            kandi-Reuse Reuse

              pgbouncer releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.
              It has 142 lines of code, 8 functions and 3 files.
              It has high code complexity, which directly impacts the maintainability of the code.


            pgbouncer Key Features

            No Key Features are available at this moment for pgbouncer.

            pgbouncer Examples and Code Snippets

            No Code Snippets are available at this moment for pgbouncer.

            Community Discussions

            QUESTION

            The difference of pgBouncer pooling types
            Asked 2022-Mar-17 at 12:47

            I was reading about pgBouncer and couldn't completely understand how different types of pooling work:

            ...

            ANSWER

            Answered 2022-Mar-17 at 12:47

            With transaction pooling, the connection will go back into the pool after step 4, but it will not be "stopped". Step 5 could be executed through a different database connection.

            "Query" means "statement" in the description of statement pooling.

            In your last example, both transaction and statement pooling can run each statement on a different connection (remember that PostgreSQL uses autocommit, so each statement runs in its own transaction by default).
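The three modes described above map directly to the pool_mode setting in pgbouncer.ini. A minimal sketch (host, port, database name, and file paths are placeholders):

```ini
; pgbouncer.ini -- pool_mode controls when a server connection returns to the pool:
;   session     - released when the client disconnects
;   transaction - released at the end of each transaction
;   statement   - released after every statement (multi-statement transactions disallowed)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
```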

            Source https://stackoverflow.com/questions/71510984

            QUESTION

            What are the pros and cons of client-side connect pools vs external connection pools for PostgreSQL?
            Asked 2022-Mar-04 at 14:43

            Given a PostgreSQL database that is reasonably configured for its intended load, what factors would contribute to selecting an external/middleware connection pool (i.e. pgBouncer, pgPool) vs a client-side connection pool (HikariCP, c3p0)? Lastly, in what instances would you apply both client-side and external connection pooling?

            From my experience and understanding, the disadvantages of an external pool are:

            • additional failure point (including from a security standpoint)
            • additional latency
            • additional complexity in deployment
            • security complications w/ user credentials

            In researching the question, I have come across instances where both client-side and external pooling are used. What is the motivation for such a deployment? In my mind that is compounding the majority of disadvantages for a gain that I appear to be missing.

            ...

            ANSWER

            Answered 2022-Mar-04 at 14:43

            Usually, a connection pool on the application side is a good thing for the reasons you detail. An external connection pool only makes sense if

            • your application server does not have a connection pool

            • you have several (many) instances of the application server, so that you cannot effectively limit the number of database connections with a connection pool in the application server
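The second point is the usual motivation for combining both: each app instance keeps a small client-side pool, while an external PgBouncer caps what the database actually sees. A sketch of the relevant pgbouncer.ini knobs (values are illustrative, not recommendations):

```ini
; pgbouncer.ini -- illustrative values only
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000     ; many application-side connections are accepted...
default_pool_size = 20     ; ...but each database/user pair shares 20 server connections
max_db_connections = 50    ; hard per-database cap on connections the server ever sees
```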

            Source https://stackoverflow.com/questions/71352508

            QUESTION

            Why can't I see my NGINX logs when my app is deployed to Azure app services, but it works fine locally?
            Asked 2022-Jan-27 at 12:22

            I have a Dockerized Django application, which I'm orchestrating with Supervisor, which is not optimal but needed when hosting on Azure app services as their multi-app support with docker-compose is still in preview mode (aka. beta).

            Following best practices, I have configured each application within supervisord to emit its logs to STDOUT. It works fine when I create the Docker image locally, run it, and check the docker logs. However, when I have deployed it to Azure app services and check the logs, my web application (Gunicorn) is logging as expected, but the logs from NGINX don't appear at all.

            I have tried different configurations in my Dockerfile for linking the log files generated by NGINX (linking to both /dev/stdout and /dev/fd/1, for example), and I have also gone into the nginx.conf config and tried to log directly to /dev/stdout. But whatever I do, it works fine locally while on Azure the logs don't show any NGINX logs. I've pasted the relevant configuration files, where you can see the commented lines with the options I've tried. Hope someone can help me figure this one out.
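For reference, the symlink approach mentioned above is the one used by the official nginx image; a sketch, assuming the default nginx log locations:

```dockerfile
# Forward nginx logs to the container's stdout/stderr,
# as done in the official nginx image
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
```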

            EDIT: I've also tried logging the NGINX app to a log file in the system, which also works fine locally, but not in Azure app services. I tried deactivating the "user nginx" part in nginx.conf as I thought it could have something to do with permissions, but that didn't help either.

            EDIT 2: I also tried creating the log files in my home directory in the web app at Azure, thinking it might have had to do with not being able to create logs in other directories. Again, it works locally, but the logs in Azure are empty.

            Dockerfile

            ...

            ANSWER

            Answered 2022-Jan-25 at 11:27

            Solved it. The issue was that the Azure App service had the configuration setting WEBSITES_PORT=8000 set, which made traffic go straight to gunicorn, bypassing NGINX and thus not creating any logs. Simply removing the setting fixed the issue.

            Source https://stackoverflow.com/questions/70845825

            QUESTION

            Starting supervisor with Docker and seeing its logs in docker logs, but not finding the service with service supervisor status in the container
            Asked 2021-Dec-27 at 11:12

            I want to run supervisor to have multiple processes in the same container, as I can't use docker-compose in our current hosting environment. Things seem to work when I look at the docker logs, but I can't see the supervisor service inside the Linux system when I've attached my terminal to the container.

            When I check the logs for the container I get:

            ...

            ANSWER

            Answered 2021-Dec-22 at 09:50

            You are starting supervisord manually, so the service command won't report its status correctly.
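When supervisord is the container's main process, it is typically run in the foreground rather than via service. A minimal sketch of such a config (the program name and paths are hypothetical):

```ini
; supervisord.conf -- run supervisord in the foreground as PID 1
[supervisord]
nodaemon=true                 ; required when supervisord is the container entrypoint

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
stdout_logfile=/dev/stdout    ; forward child output so it shows up in docker logs
stdout_logfile_maxbytes=0     ; required when logging to a non-seekable file
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```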

            Source https://stackoverflow.com/questions/70446439

            QUESTION

            airflow 2.1.3 using pgbouncer for postgresql issue
            Asked 2021-Dec-25 at 10:28

            Background info: recently we upgraded airflow from 1.10.14 to 2.1.3. The pgbouncer was a customised container built from the Azure Microsoft image (mcr.microsoft.com/azure-oss-db-tools/pgbouncer-sidecar:latest).

            The customised pgbouncer stopped working; it now connects directly to the main postgresql server.

            So I am now trying to use the pgbouncer deployed by airflow 2.1.3 (helm chart 8.5.2) instead (https://artifacthub.io/packages/helm/airflow-helm/airflow/8.5.0#how-to-use-an-external-database), and I am having problems.

            Below is the key info

            in my values.yaml file, key info is like below

            ...

            ANSWER

            Answered 2021-Dec-25 at 10:28

            We had two options for this problem (note: our airflow chart is the community chart, version 8.5.2), and we chose the first. Looking back, option 2 would have been easier and would have required almost no change once the next release had fixed it properly.

            1. The built-in pgbouncer in community airflow chart version 8.5.2 defaults the auth type to a fixed value, which fails when pgbouncer connects to an Azure postgresql single server. One can therefore choose not to use the pgbouncer provided by the 8.5.2 chart (i.e. pgbouncer=false), deploy their own pgbouncer (using helm, kubectl, etc.), and point the externalDatabase host in the airflow values.yaml file to that pgbouncer service. We chose this approach:
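Under that approach, the relevant part of the airflow values.yaml might look roughly like this; the service name, port, and secret name are hypothetical, and the exact keys should be checked against the chart's documentation:

```yaml
# values.yaml (community airflow chart) -- disable the built-in
# pgbouncer and point the chart at a separately deployed one
pgbouncer:
  enabled: false

externalDatabase:
  type: postgres
  host: my-own-pgbouncer.airflow.svc.cluster.local   # hypothetical service name
  port: 6432
  database: airflow
  user: airflow
  passwordSecret: airflow-db-password                # hypothetical secret name
```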

            Source https://stackoverflow.com/questions/69740396

            QUESTION

            Set GUC parameter or use PGOPTIONS environment variable with PgBouncer
            Asked 2021-Oct-11 at 12:25

            Can I make PgBouncer preserve the PGOPTIONS environment variable in transaction pooling to configure GUC parameters? Or is there another way to configure these parameters in PgBouncer so that it applies to all connections?

            I specifically need to set some pg_trgm parameters

            ...

            ANSWER

            Answered 2021-Oct-11 at 12:25

            You can use the connect_query option in database definitions, like
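A sketch of what such a database definition might look like, using one of the pg_trgm parameters from the question (the database name and threshold value are illustrative):

```ini
; pgbouncer.ini -- connect_query runs once on each new server connection
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb connect_query='SET pg_trgm.similarity_threshold = 0.5'
```

Note that connect_query runs when the server connection is opened, not per client, so the setting persists for the lifetime of that pooled connection.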

            Source https://stackoverflow.com/questions/69417955

            QUESTION

            How configure Kubernetes with external servers?
            Asked 2021-Sep-16 at 08:51

            I have an external Postgres instance and I want to connect it to my Kubernetes cluster through pgbouncer. What do I need to configure in the Kubernetes cluster to use the external Postgres?

            ...

            ANSWER

            Answered 2021-Sep-16 at 08:51

            There seem to be many tutorials on how to achieve this; you can use this one, for example.
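For reference, a common pattern for exposing an external database server to the cluster is a selector-less Service paired with a manually managed Endpoints object; a minimal sketch (the IP address and names are placeholders):

```yaml
# A selector-less Service plus manual Endpoints lets in-cluster
# clients (e.g. pgbouncer) reach an external Postgres by a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: external-postgres
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres      # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.10       # placeholder external Postgres IP
    ports:
      - port: 5432
```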

            Source https://stackoverflow.com/questions/69196554

            QUESTION

            terraform create k8s secret from gcp secret
            Asked 2021-Aug-26 at 18:58

            I have managed to achieve the flow of creating sensitive resources in terraform without revealing what the sensitive details are at any point, so they won't be stored in plain text in our github repo. I have done this by letting TF create a service account and its associated SA key, and then creating a GCP secret that references the output from the SA key, for example.

            I now want to see if there's any way to do the same for some pre-defined database passwords. The flow will be slightly different:

            • Manually create the GCP secret (in secrets manager) which has a value of a list of plain text database passwords which our PGbouncer instance will use (more info later in the flow)
            • I import this using terraform import, so terraform state is now aware of this resource even though it was created outside of TF; but I've added the secret version as secret_data = "" (otherwise putting the plain text password details here would defeat the object!)
            • I now want to grab the secret_data from the google_secret_manager_version to add into the kubernetes_secret so it can be used within our GKE cluster.

            However, when I run terraform plan, it wants to change the value of my manually created GCP secret

            ...

            ANSWER

            Answered 2021-Aug-26 at 18:58

            If you just want to retrieve/READ the secret without actively managing it, then you can use the associated data source instead:
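Reading the secret with a data source rather than a managed resource might look like this; the secret name and the Kubernetes secret layout are hypothetical:

```hcl
# Read the manually created secret without managing it in Terraform state
data "google_secret_manager_secret_version" "db_passwords" {
  secret = "pgbouncer-db-passwords" # hypothetical secret name
}

# Pass the value through to a Kubernetes secret for the GKE cluster
resource "kubernetes_secret" "pgbouncer_userlist" {
  metadata {
    name = "pgbouncer-userlist"
  }
  data = {
    "userlist.txt" = data.google_secret_manager_secret_version.db_passwords.secret_data
  }
}
```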

            Source https://stackoverflow.com/questions/68941378

            QUESTION

            PDO throws connection error about "trust" authentication when using pgbouncer for one database, but not another. Settings are identical
            Asked 2021-Aug-22 at 23:20

            This problem is driving me batty. I have a PHP script which connects to a postgresql database, and I have pgbouncer running for connection pooling.

            I've tested it using two databases, and it works just fine for both when I connect directly. Here is my connection code:

            ...

            ANSWER

            Answered 2021-Aug-22 at 23:20

            I eventually solved this myself. Apparently it is also necessary to edit the /etc/pgbouncer/userlist.txt file:

            This file had a listing for db1, but not db2. Adding the second line (format is "username" "password"):
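For illustration, a userlist.txt covering both databases' users might look like the sketch below; the usernames and passwords are hypothetical, and pgbouncer also accepts md5/SCRAM hashes in the password field:

```
"db1user" "secretpassword1"
"db2user" "secretpassword2"
```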

            Source https://stackoverflow.com/questions/68761639

            QUESTION

            Why do some of my kubernetes nodes fail to connect to my postgres cluster while others succeed?
            Asked 2021-Aug-02 at 14:48

            So I am running a k8s cluster with a 3-pod postgres cluster fronted by a 3-pod pgbouncer cluster. Connecting to that is a batch job with multiple parallel workers which stream data into the database via pgbouncer. If I run 10 of these batch job pods, everything works smoothly. If I go up an order of magnitude to 100 job pods, a large portion of them fail to connect to the database with the error got error driver: bad connection. Multiple workers run on the same node (5 worker pods per node), so it's only ~26 pods in the k8s cluster.

            What's maddening is I'm not seeing any postgres or pgbouncer error/warning logs in Kibana, and their pods aren't failing. Also, Prometheus shows the connection count to be well under the max connections.

            Below are the postgres and pgbouncer configs along with the connection code of the workers.

            Relevant Connection Code From Worker:

            ...

            ANSWER

            Answered 2021-Aug-02 at 14:48

            This ended up being an issue of postgres not actually using the configmap I had set. The map was for 200 connections but the actual DB was still at the default of 100.

            Not much to learn here other than make sure to check that the configs you set actually propagate to the actual service.
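A quick way to verify that a setting actually reached the running server is to ask Postgres directly, for example via psql against the database itself (not the pgbouncer admin console):

```sql
SHOW max_connections;

-- or, including where the value came from:
SELECT name, setting, source, sourcefile
FROM pg_settings
WHERE name = 'max_connections';
```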

            Source https://stackoverflow.com/questions/68579735

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install pgbouncer

            You can download it from GitHub.

            Support

            PgBouncer does host name lookups at connect time instead of just once at configuration load time. This requires an asynchronous DNS implementation. The following table shows supported backends and their probing order:

            | backend                   | parallel | EDNS0 (1) | /etc/hosts | SOA lookup (2) | note                              |
            |---------------------------|----------|-----------|------------|----------------|-----------------------------------|
            | c-ares                    | yes      | yes       | yes        | yes            | IPv6+CNAME buggy in ≤1.10         |
            | udns                      | yes      | yes       | no         | yes            | IPv4 only                         |
            | evdns, libevent 2.x       | yes      | no        | yes        | no             | does not check /etc/hosts updates |
            | getaddrinfo_a, glibc 2.9+ | yes      | yes (3)   | yes        | no             | N/A on non-glibc                  |
            | getaddrinfo, libc         | no       | yes (3)   | yes        | no             | requires pthreads                 |

            c-ares is the most fully-featured implementation and is recommended for most uses and binary packaging (if a sufficiently new version is available). Libevent's built-in evdns is also suitable for many uses, with the listed restrictions. The other backends are mostly legacy options at this point and don't receive much testing anymore. By default, c-ares is used if it can be found. Its use can be forced with configure --with-cares or disabled with --without-cares. If c-ares is not used (not found or disabled), specify --with-udns to pick udns; otherwise Libevent is used. Specify --disable-evdns to disable the use of Libevent's evdns and fall back to a libc-based implementation.
            Find more information at:
