Explore all SQL Database open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in SQL Database

dbeaver: 22.0.3
sequelize: v7.0.0-alpha.11
knex: 1.0.7
mysql: Version 1.6.0
dolt: 0.39.1

Popular Libraries in SQL Database

dbeaver
by dbeaver · Java · 26064 stars · Apache-2.0
Free universal database tool and SQL client

sequelize
by sequelize · JavaScript · 26011 stars · MIT
An easy-to-use and promise-based multi SQL dialects ORM tool for Node.js | Postgres, MySQL, MariaDB, SQLite, MSSQL, Snowflake & DB2

flink
by apache · Java · 18609 stars · Apache-2.0
Apache Flink

knex
by knex · JavaScript · 15613 stars · MIT
A query builder for PostgreSQL, MySQL, CockroachDB, SQL Server, SQLite3 and Oracle, designed to be flexible, portable, and fun to use.

Dapper
by DapperLib · C# · 14058 stars · NOASSERTION
Dapper - a simple object mapper for .Net

Dapper
by StackExchange · C# · 13619 stars · NOASSERTION
Dapper - a simple object mapper for .Net

sqlx
by jmoiron · Go · 11733 stars · MIT
general purpose extensions to golang's database/sql

mysql
by go-sql-driver · Go · 11402 stars · MPL-2.0
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package

dolt
by dolthub · Go · 10500 stars · Apache-2.0
Dolt – It's Git for Data

Trending New libraries in SQL Database

sqlmodel
by tiangolo · Python · 7183 stars · MIT
SQL databases in Python, designed for simplicity, compatibility, and robustness.

ts-sql
by codemix · TypeScript · 2169 stars
A SQL database implemented purely in TypeScript type annotations.

arrow-datafusion
by apache · Rust · 1900 stars · Apache-2.0
Apache Arrow DataFusion and Ballista query engines

sqli
by x-ream · Java · 1642 stars · Apache-2.0
orm sql interface, Criteria, CriteriaBuilder, ResultMapBuilder

xgenecloud
by xgenecloud · JavaScript · 1561 stars · Apache-2.0
Instantly generate REST & GraphQL APIs on any database (supports MySQL, PostgreSQL, MsSQL, SQLite, MariaDB & Aurora)

steampipe
by turbot · Go · 1272 stars · AGPL-3.0
Use SQL to instantly query your cloud services (AWS, Azure, GCP and more). Open source CLI. No DB required.

roapi
by roapi · Rust · 1112 stars · Apache-2.0
Create full-fledged APIs for static datasets without writing a single line of code.

flink-cdc-connectors
by ververica · Java · 1064 stars · Apache-2.0
Change Data Capture (CDC) Connectors for Apache Flink

flink-streaming-platform-web
by zhp8341 · Java · 1022 stars · MIT
A web platform for real-time stream computing based on Flink SQL

Top Authors in SQL Database

1. microsoft · 38 Libraries · 16502 stars
2. MicrosoftDocs · 17 Libraries · 1083 stars
3. apache · 13 Libraries · 23980 stars
4. turbot · 12 Libraries · 1375 stars
5. aws-samples · 11 Libraries · 225 stars
6. Azure-Samples · 10 Libraries · 174 stars
7. couchbaselabs · 8 Libraries · 354 stars
8. kvesteri · 8 Libraries · 1912 stars
9. ErikEJ · 7 Libraries · 1968 stars
10. jackc · 7 Libraries · 681 stars


Trending Kits in SQL Database

Node.js SQLite ORM (Object-Relational Mapping) libraries simplify working with databases. By providing a higher level of abstraction, they reduce the amount of boilerplate code required. These libraries play a crucial role in building applications that interact with databases.

 

SQLite ORM libraries enable developers to write code that is independent of the underlying database: if you switch from SQLite to a different database engine, you can do so with minimal code changes. ORM libraries provide a uniform interface regardless of the underlying database, which makes it easier to work with different database systems.

 

By raising the level of abstraction, SQLite ORM libraries simplify database operations, improve productivity, enhance portability, and promote best practices; many also offer performance optimizations. For developers working with SQLite databases in Node.js applications, they make it possible to focus on building features rather than on low-level database operations.

 

Let's look at each library in detail. The links allow you to access package commands, installation notes, and code snippets. 

sequelize:  

  • It provides a high-level abstraction over underlying database systems such as SQLite.  
  • It promotes a model-driven development approach.  
  • It generates SQL queries based on the defined models and their relationships.  
  • It includes built-in validation mechanisms for ensuring data integrity.  
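
The SQL-generation idea behind an ORM like Sequelize can be sketched in a few lines of plain JavaScript. This is an illustrative sketch only, not Sequelize's real internals; `buildSelect` and the `User` model are hypothetical names:

```javascript
// Sketch: turn a model definition plus a filter object into a
// parameterized SQL statement, the way a model-driven ORM does.
function buildSelect(model, where = {}) {
  const cols = Object.keys(model.attributes).join(', ');
  const keys = Object.keys(where);
  const clause = keys.length
    ? ' WHERE ' + keys.map((k) => `${k} = ?`).join(' AND ')
    : '';
  return {
    sql: `SELECT ${cols} FROM ${model.table}${clause}`,
    params: keys.map((k) => where[k]),
  };
}

// A stand-in for what a sequelize-style model definition produces.
const User = { table: 'users', attributes: { id: 'INTEGER', name: 'TEXT' } };

const query = buildSelect(User, { name: 'Ada' });
console.log(query.sql);    // SELECT id, name FROM users WHERE name = ?
console.log(query.params); // [ 'Ada' ]
```

The real library layers connection pooling, dialect-specific SQL, validation, and associations on top of this basic mapping.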

typeorm:  

  • It simplifies working with databases by providing a high-level abstraction layer.  
  • It supports many database systems, including SQLite, MySQL, PostgreSQL, and more.  
  • It helps you define entities, which are JavaScript/TypeScript classes representing database tables.  
  • It provides a migration system that allows you to version and apply database schema changes using TypeScript or JavaScript files. 
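
The versioned-migration idea can be sketched without any library. This is a toy model, not TypeORM's actual API; `migrate` and the in-memory `schema` object are hypothetical:

```javascript
// Sketch: a migration system is an ordered list of versioned schema
// changes plus a record of which versions have already been applied.
const migrations = [
  { version: 1, up: (schema) => { schema.users = ['id', 'name']; } },
  { version: 2, up: (schema) => { schema.users.push('email'); } },
];

function migrate(schema, appliedVersion, targetVersion) {
  for (const m of migrations) {
    if (m.version > appliedVersion && m.version <= targetVersion) {
      m.up(schema);               // apply the schema change
      appliedVersion = m.version; // record the new version
    }
  }
  return appliedVersion;
}

const schema = {};
const version = migrate(schema, 0, 2);
console.log(version);      // 2
console.log(schema.users); // [ 'id', 'name', 'email' ]
```

A real migration tool persists the applied version in the database itself, so every environment can be brought to the same schema.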

waterline:  

  • It is the default ORM of the Sails.js framework and can also be used on its own.  
  • It is a database-agnostic ORM that supports SQLite and other databases.  
  • It provides a unified API for working with different databases. It offers features like query building, associations, and automatic schema management.  
  • It often provides query builders and optimization techniques to generate efficient SQL queries.  

objection.js:  

  • It is built on top of the knex query builder and encourages model-driven development.  
  • It includes data validation and modeling features. It is crucial for maintaining data integrity and consistency in your SQLite database.  
  • It also supports middleware and hooks. It allows you to add custom logic and behavior to the database operations.  
  • It helps define models that represent database tables. These models encapsulate the logic for interacting with the database. 
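
The hook/middleware idea can be illustrated with a minimal sketch in plain JavaScript. This is not objection.js's real API; `withHooks` is a hypothetical helper:

```javascript
// Sketch: wrap a database operation so custom logic runs before and
// after it, the way ORM hooks and middleware do.
function withHooks(operation, hooks = {}) {
  return (input) => {
    const prepared = hooks.before ? hooks.before(input) : input;
    const result = operation(prepared);
    return hooks.after ? hooks.after(result) : result;
  };
}

// A fake insert that echoes the row back, standing in for a real query.
const insertUser = withHooks((row) => ({ id: 1, ...row }), {
  before: (row) => ({ ...row, createdAt: '2024-01-01' }), // enrich the input
  after: (row) => row,                                    // post-process the result
});

const saved = insertUser({ name: 'Ada' });
console.log(saved); // { id: 1, name: 'Ada', createdAt: '2024-01-01' }
```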

massive.js:  

  • It is a JavaScript database library. It provides a convenient and efficient way to interact with databases, particularly PostgreSQL.  
  • It makes it easy to integrate your Node.js applications with PostgreSQL databases.  
  • It simplifies the mapping of data between JavaScript objects and database tables.  
  • It helps build scalable and maintainable applications by promoting the separation of concerns. 

node-orm2:  

  • It implements object-relational mapping, a technique for interacting with a relational database using object-oriented paradigms.  
  • It is an ORM library for Node.js designed to work with SQLite databases.  
  • It maps database tables to JavaScript objects, providing a more intuitive and object-oriented way to work with data.  
  • It supports database systems such as SQLite, MySQL, and PostgreSQL.  

bookshelf:  

  • It is a popular object-relational mapping (ORM) library for Node.js. It is designed to work with SQL databases such as SQLite.  
  • It simplifies interacting with an SQLite database by providing an intuitive API.  
  • It offers an ORM layer, which maps JavaScript objects to database tables and vice versa.  
  • It includes a powerful query builder. It simplifies the creation of complex database queries. 

FAQ:  

1. What are the benefits of using an SQLite ORM library with type-safe, modern JavaScript?  

Type-safe and modern JavaScript can bring several benefits. Here are some of them:  

  • Type Safety  
  • Productivity  
  • Simplicity  
  • Portability  
  • Maintainability  
  • Testing  


2. How does the query builder work in nodejs sqlite?  

In Node.js, an SQLite query builder is a library or module that provides a set of functions or methods for building SQL queries programmatically. It simplifies the construction of complex SQL statements by providing an intuitive API. Here's a general overview of how a query builder for SQLite in Node.js might work:  

  • Installation  
  • Database Connection  
  • Query Builder Initialization  
  • Table Selection  
  • Column Selection  
  • Conditions  
  • Sorting  
  • Limit and Offset  
  • Query Execution  
  • Handling Results  


3. What is Sequelize ORM, and how does it compare to other nodejs libraries?  

Sequelize is an Object-Relational Mapping (ORM) library for Node.js. It provides an interface for interacting with relational databases. It supports many database systems, including PostgreSQL, MySQL, SQLite, and MSSQL. It helps write database queries and manipulate data using JavaScript. You can do so instead of raw SQL statements.  

 

4. Is there any toolkit for managing database schema through an Object Relational Mapper?  

Several toolkits are available for managing database schema through an ORM. In the Python ecosystem, for example, popular options include:  

  • SQLAlchemy  
  • Peewee  
  • Django ORM  
  • Pony ORM  


5. How do I use Prisma Client with my nodejs application?  

To use Prisma Client with your Node.js application, you need to follow these steps:  

  • Install Prisma Client. 
  • Configure Prisma. 
  • Generate Prisma Client. 
  • Use Prisma Client in your application. 
  • Run your application. 

Java is a programming language used to create applications for various platforms, including Windows, macOS, UNIX, and Linux. SQL databases are the most common data storage systems for modern web applications and are the backbone of most of them. The Java Database Connectivity (JDBC) API provides a standard, vendor-neutral way for Java programs to interact with databases that support the SQL standard, supplying both drivers and a rich set of classes for building and executing SQL statements and for accessing data from relational database management systems. Several popular Java SQL Database open source libraries are available to developers: dbeaver - Free universal database tool and SQL client; questdb - open source SQL database designed to process time series data; zeppelin - Web-based notebook that enables data-driven, interactive data analytics.

JavaScript is an object-oriented programming language that can be used to create applications that run in web browsers, on servers, and in a variety of other environments. A JavaScript SQL database library lets you work with relational databases such as MySQL, PostgreSQL, or SQL Server from Node.js or the browser, often in the same way you work with ordinary JavaScript objects. These libraries are free to use and open source. Popular JavaScript SQL Database open source libraries include: lovefield - relational database for web apps; alasql - AlaSQL.js - JavaScript SQL database for browser and Node.js that handles traditional relational tables; cordova-sqlite-storage - PhoneGap plugin to open and use sqlite databases.

C++ is a general-purpose programming language that supports procedural, object-oriented, and generic programming. It was developed by Bjarne Stroustrup starting in 1979 at Bell Labs as an enhancement to the C programming language, and it remains one of the most popular languages in use today, primarily for its efficiency and flexibility. A database management system (DBMS) is software that stores, organizes, and provides access to data, offering data definition, data manipulation, and data control services. Some of the most widely used open source C++ SQL Database libraries among developers include: mysql-server - MySQL Server, the world's most popular open source database; oceanbase - OceanBase is an enterprise distributed relational database with high availability and high performance; mysql-5.6 - Facebook's branch of the Oracle MySQL v5.6 database.

Ruby remains one of the most popular programming languages for web development. It was designed and developed in the mid-1990s by Yukihiro “Matz” Matsumoto in Japan. Ruby is a dynamic, multi-paradigm, open source programming language with a focus on simplicity and productivity, and its elegant syntax is natural to read and easy to write. It is used by many large companies and websites, including Airbnb, GitHub, Hulu, and Twitter, and also serves as a scripting language in software development. Ruby SQL Database libraries are widely used to connect applications to databases. Popular Ruby SQL Database open source libraries among developers include: scenic - Versioned database views for Rails; activerecord-sqlserver-adapter - SQL Server Adapter For Rails; sqlite3-ruby - Ruby bindings for the SQLite3 embedded database.

Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code. The language provides constructs intended to enable clear programs on both a small and large scale. Python supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural styles, and it features a dynamic type system, automatic memory management, and a large, comprehensive standard library. A database is an organized collection of data stored on a computer so that it can be accessed easily; databases are often used to store large amounts of information and retrieve it quickly when needed. Many developers depend on the following Python SQL Database open source libraries: sqlmap - Automatic SQL injection and database takeover tool; q - Run SQL directly on CSV or TSV files; sqlmodel - SQL databases in Python, designed for simplicity, compatibility, and robustness.

C# is a general-purpose programming language developed by Microsoft, designed to be simple, object-oriented, and type-safe. It is widely used for enterprise applications and has many other uses, including video games, desktop applications, and embedded systems. C# SQL Database libraries are important to any development project: they provide a clean and simple API for connecting to a database, creating tables, and storing and retrieving data. Some of the most popular C# SQL Database open source libraries among developers are: querybuilder - SQL query builder; DbUp - .NET library that helps you to deploy changes; chinook-database - Sample database for SQL Server, Oracle, MySQL, PostgreSQL, SQLite, DB2.

The Go programming language has been around for a number of years and has gained popularity recently. It is an open source, general-purpose language that makes it easy to build simple, reliable, and efficient software. Go's standard database/sql package, together with third-party drivers, supports popular databases such as MySQL, MariaDB, PostgreSQL, SQLite, and MSSQL. Go is a compiled language that produces native machine code, and its core features include concurrency without data races, a rich standard library, fast compilation, clean syntax, and fast execution. The language is simple, yet powerful enough to run a huge amount of existing production code and to build new applications that are lightweight, flexible, and scalable. Some of the most popular Go SQL Database open source libraries among developers are: tidb - open source distributed HTAP database compatible with MySQL; cockroach - open source, cloud-native distributed SQL database; sqlx - general purpose extensions to golang's database/sql.

PHP is a powerful, flexible programming language for web development; it's easy to learn and use, making it one of the most popular languages on the web. PHP has a number of built-in functions for working with SQL databases, and the database is a major part of any web application. PHP allows developers to work with databases either directly or through an abstraction layer called PDO (PHP Data Objects), and database libraries are packages that build on PDO to make it easier to work with relational databases like MySQL or PostgreSQL. Some of the most widely used open source PHP SQL Database libraries among developers include: php-crud-api - Single file PHP script that adds a REST API to a SQL database; countries-states-cities-database - World countries; php-my-sql-pdo-database-class - A database class for PHP MySQL which utilizes PDO.

Trending Discussions on SQL Database

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

How to obtain MongoDB version using Golang library?

Issue while trying to set enum data type in MySQL database

Unable to resolve service for type Microsoft.EntityFrameworkCore.Diagnostics.IDiagnosticsLogger

Should I close an RDS Proxy connection inside an AWS Lambda function?

PostgreSQL conditional select throwing error

How to programmatically detect auto failover on AWS mysql aurora?

System.NotSupportedException: Character set 'utf8mb3' is not supported by .Net Framework

Debugging a Google Dataflow Streaming Job that does not work expected

move Odoo large database (1.2TB)

QUESTION

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

Asked 2022-Apr-04 at 15:46

Not really sure what caused this, but it was most likely exiting the terminal while my Rails server was still connected to the PostgreSQL database (not a good practice, I know, but lesson learned!)

I've already tried the following:

  1. Rebooting my machine (using MBA M1 2020)
  2. Restarting PostgreSQL using Homebrew: brew services restart postgresql
  3. Re-installing PostgreSQL using Homebrew
  4. Updating PostgreSQL using Homebrew
  5. I also tried following this link, but when I run cd Library/Application\ Support/Postgres the terminal tells me the Postgres folder doesn't exist, so I'm kind of lost. Although I have a feeling that deleting postmaster.pid would really fix my issue. Any help would be appreciated!

ANSWER

Answered 2022-Jan-13 at 15:19
Resetting PostgreSQL

My original answer only included the troubleshooting steps below, and a workaround. I now decided to properly fix it via brute force by removing all clusters and reinstalling, since I didn't have any data there to keep. It was something along these lines, on my Ubuntu 21.04 system:

sudo pg_dropcluster --stop 12 main
sudo pg_dropcluster --stop 14 main
sudo apt remove postgresql-14
sudo apt purge postgresql*
sudo apt install postgresql-14

Now I have:

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

And sudo -u postgres psql works fine. The service was started automatically but it can be done manually with sudo systemctl start postgresql.

Incidentally, I can recommend the PostgreSQL docker image, which eliminates the need to bother with a local installation.

Troubleshooting

Although I cannot provide an answer to your specific problem, I thought I'd share my troubleshooting steps, hoping that it might be of some help. It seems that you are on Mac, whereas I am running Ubuntu 21.04, so expect things to be different.

This is a client connection problem, as noted by section 19.3.2 in the docs.

The directory in my error message is different:

$ sudo su postgres -c "psql"
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
        Is the server running locally and accepting connections on that socket?

I checked what unix sockets I had in that directory:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

Makes sense, there is a socket for 5433 not 5432. I confirmed this by running:

$ pg_lsclusters
Ver Cluster Port Status                Owner    Data directory              Log file
12  main    5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14  main    5433 online                postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

This explains how it got into this mess on my system. The default port is 5432, but after I upgraded from version 12 to 14, the server was setup to listen to 5433, presumably because it considered 5432 as already taken. Two alternatives here, get the server to listen on 5432 which is the client's default, or get the client to use 5433.

Let's try it by changing the client's parameters:

$ sudo su postgres -c "psql --port=5433"
psql (14.0 (Ubuntu 14.0-1.pgdg21.04+1))
Type "help" for help.

postgres=#

It worked! Now, to make it permanent I'm supposed to put this setting on a psqlrc or ~/.psqlrc file. The thin documentation on this (under "Files") was not helpful to me as I was not sure on the syntax and my attempts did not change the client's default, so I moved on.

To change the server I looked for the postgresql.conf mentioned in the documentation but could not find the file. I did however see /var/lib/postgresql/14/main/postgresql.auto.conf so I created it on the same directory with the content:

port = 5432

Restarted the server: sudo systemctl restart postgresql

But the error persisted because, as the logs confirmed, the port did not change:

$ tail /var/log/postgresql/postgresql-14-main.log
...
2021-10-29 16:36:12.195 UTC [25236] LOG:  listening on IPv4 address "127.0.0.1", port 5433
2021-10-29 16:36:12.198 UTC [25236] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5433"
2021-10-29 16:36:12.204 UTC [25237] LOG:  database system was shut down at 2021-10-29 16:36:12 UTC
2021-10-29 16:36:12.210 UTC [25236] LOG:  database system is ready to accept connections

After other attempts did not succeed, I eventually decided to use a workaround: to redirect the client's requests on 5432 to 5433:

ln -s /var/run/postgresql/.s.PGSQL.5433 /var/run/postgresql/.s.PGSQL.5432

This is what I have now:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
lrwxrwxrwx  1 postgres postgres   33 Oct 29 16:40 .s.PGSQL.5432 -> /var/run/postgresql/.s.PGSQL.5433
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

This means I can now run psql without having to explicitly set the port to 5433. To be clear, this is a hack and I would not recommend it, but on my development system I am happy with it for now because I don't have more time to spend on this. That is why I shared the steps and the links, so that you can find a proper solution for your case.
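A cleaner fix than the socket symlink, assuming the Debian/Ubuntu cluster layout shown above, is to point the 14/main cluster back at port 5432 in its configuration and restart it (a sketch; back up the file first):

```
# /etc/postgresql/14/main/postgresql.conf
port = 5432
```

Then restart with `sudo pg_ctlcluster 14 main restart`, and the server should create /var/run/postgresql/.s.PGSQL.5432 itself.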

Source https://stackoverflow.com/questions/69754628

QUESTION

How to obtain MongoDB version using Golang library?

Asked 2022-Mar-28 at 05:54

I am using Go's MongoDB driver (https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.8.0/mongo#section-documentation) and want to obtain the version of the MongoDB server deployed.

For instance, if it were a MySQL database, I could do something like this:

db, err := sql.Open("mysql", DbUser+":"+DbPwd+"@tcp("+Host+")/"+DbName)
if err != nil {
    log.Printf("Error while connecting to DB: %v", err)
}
defer db.Close()

var dbVersion string
if err := db.QueryRow("SELECT VERSION()").Scan(&dbVersion); err != nil {
    dbVersion = "NA"
    log.Printf("Couldn't obtain db version: %v", err) // note: %w is only valid in fmt.Errorf
}
fmt.Println("DB Version: ", dbVersion)

I went through the documentation but am not able to find a clue.

I also need to fetch other metadata, like the size of a particular database, etc.

Any help would be appreciated. Thanks!

ANSWER

Answered 2022-Mar-26 at 08:04

The MongoDB version can be acquired by running a command, specifically the buildInfo command.

Using the shell, this is how you could do it:

db.runCommand({buildInfo: 1})

The result is a document whose version property holds the server version, e.g.:

{
    "version" : "5.0.6",
    ...
}

To run commands using the official driver, use the Database.RunCommand() method.

For example:

// Requires these imports: "context", "log",
// "go.mongodb.org/mongo-driver/bson", "go.mongodb.org/mongo-driver/mongo",
// "go.mongodb.org/mongo-driver/mongo/options".

// Connect to MongoDB and acquire a Database:

ctx := context.Background()
opts := options.Client().ApplyURI("mongodb://localhost")
client, err := mongo.Connect(ctx, opts)
if err != nil {
    log.Fatalf("Failed to connect to db: %v", err)
}
defer client.Disconnect(ctx)

db := client.Database("your-db-name")

// And now run the buildInfo command:

buildInfoCmd := bson.D{bson.E{Key: "buildInfo", Value: 1}}
var buildInfoDoc bson.M
if err := db.RunCommand(ctx, buildInfoCmd).Decode(&buildInfoDoc); err != nil {
    log.Printf("Failed to run buildInfo command: %v", err)
    return
}
log.Println("Database version:", buildInfoDoc["version"])
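The version value decodes as a string such as "5.0.6". If you need to compare versions numerically, buildInfo also returns a versionArray you can use directly, or you can split the string yourself. The helper below (parseVersion is a hypothetical name, stdlib only, not part of the driver) is one way to do that:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a server version string such as "5.0.6" into numeric
// major/minor/patch parts. A pre-release suffix like "-rc0" on the last
// component is ignored.
func parseVersion(v string) (major, minor, patch int) {
	out := [3]int{}
	for i, part := range strings.SplitN(v, ".", 3) {
		if dash := strings.IndexByte(part, '-'); dash != -1 {
			part = part[:dash] // drop "-rc0" etc.
		}
		n, err := strconv.Atoi(part)
		if err != nil {
			break
		}
		out[i] = n
	}
	return out[0], out[1], out[2]
}

func main() {
	maj, min, pat := parseVersion("5.0.6-rc0")
	fmt.Printf("%d.%d.%d\n", maj, min, pat) // 5.0.6
}
```

This makes checks like "is the server at least 5.0?" straightforward without string comparison pitfalls.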

Source https://stackoverflow.com/questions/71616694

QUESTION

Issue while trying to set enum data type in MySQL database

Asked 2022-Mar-22 at 07:40

What am I trying to do?

Django does not support the ENUM data type on a MySQL database out of the box, so I tried to set it using the code below.

Error Details

_mysql.connection.query(self, query) django.db.utils.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'NOT NULL, created_at datetime(6) NOT NULL, user_id bigint NOT NULL)' at line 1")

Am I missing anything?

Enumeration class with all choices

class enumTokenTypes(models.TextChoices):
    Registration = "Registration"
    ForgotPassword = "Forgot Password"

User Token class in model

class tblusertokens(models.Model):
    token_id = AutoField(primary_key=True)
    token_type = EnumField(max_length=20, choices=enumTokenTypes.choices)
    created_at = DateTimeField(auto_now_add=True, blank=True)
    user = ForeignKey(tblusers, on_delete=models.CASCADE)

User token create model in migration

class EnumField(CharField):
    def db_type(self, connection):
        return "enum"


migrations.CreateModel(
    name='tblusertokens',
    fields=[
        ('token_id', models.AutoField(primary_key=True, serialize=False)),
        ('token_type', clientauth.models.EnumField(choices=[('Registration', 'Registration'), ('Forgot Password', 'Forgotpassword')], max_length=20)),
        ('created_at', models.DateTimeField(auto_now_add=True)),
        ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clientauth.tblusers')),
    ],
)

Bounty Question

Set two parameters on the function to pass comma-separated values and a default value.

ANSWER

Answered 2021-Sep-29 at 19:39

You can print out the SQL for that migration to see specifically what's wrong, but defining db_type to return "enum" is definitely not the right way to approach it.

('token_type', CharField(choices=enumTokenTypes.choices, max_length=22)),

Does the recommended syntax from the docs on Enumeration types not work for you for some reason?
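For reference, the documented pattern keeps the column a plain VARCHAR and enforces the allowed values in Python rather than in MySQL. A sketch using the question's names (the default shown is one way to address the bounty's "default value" requirement; this is illustrative, not the asker's exact code):

```python
from django.db import models

class enumTokenTypes(models.TextChoices):
    Registration = "Registration"
    ForgotPassword = "Forgot Password"

class tblusertokens(models.Model):
    token_id = models.AutoField(primary_key=True)
    token_type = models.CharField(
        max_length=20,
        choices=enumTokenTypes.choices,
        default=enumTokenTypes.Registration,  # default value for new rows
    )
    created_at = models.DateTimeField(auto_now_add=True, blank=True)
    user = models.ForeignKey('tblusers', on_delete=models.CASCADE)
```

Validation against the choices then happens in Django forms and full_clean(), with no ENUM column needed.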

Source https://stackoverflow.com/questions/69365678

QUESTION

Unable to resolve service for type Microsoft.EntityFrameworkCore.Diagnostics.IDiagnosticsLogger

Asked 2022-Mar-18 at 09:52

I am having difficulty scaffolding an existing MySQL database using EF Core. I have added the required dependencies as mentioned in the Oracle docs:

<PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="6.0.0" />

and then I ran this command in the Package Manager Console:

Scaffold-DbContext "server=the.server.ip.address;user=user_name;database=db_name;password=db_password;port=3306" MySql.EntityFrameworkCore -o Data -v

It shows this error:

Unable to resolve service for type 'Microsoft.EntityFrameworkCore.Diagnostics.IDiagnosticsLogger`1[Microsoft.EntityFrameworkCore.DbLoggerCategory+Scaffolding]' while attempting to activate 'MySql.EntityFrameworkCore.Scaffolding.Internal.MySQLDatabaseModelFactory'

Here are the relevant logs in the output window:

Finding design-time services referenced by assembly 'Test2'...
Finding design-time services referenced by assembly 'Test2'...
No referenced design-time services were found.
Finding design-time services for provider 'MySql.EntityFrameworkCore'...
Using design-time services from provider 'MySql.EntityFrameworkCore'.
Finding IDesignTimeServices implementations in assembly 'Test2'...
No design-time services were found.

I don't know how to implement the design-time classes, nor did I find any useful links on the web.

Note that I can access the database and run queries on it using MySQL Workbench.

ANSWER

Answered 2021-Dec-12 at 10:11

I came across the same issue trying to scaffold an existing MySQL database. It looks like the latest version of MySql.EntityFrameworkCore (6.0.0-preview3.1) still uses the EF Core 5.0 libraries and has not been updated to EF Core 6.0.

It also seems Microsoft.EntityFrameworkCore.Diagnostics was last implemented in EFCore 5 and removed in 6.

When I downgraded all the packages to version 5, I was able to run the scaffold command without that error.
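As an illustration, a 5.x-level package set in the .csproj might look like the following (the exact version numbers are assumptions, not from the original answer; use the latest 5.x release of each package):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="5.0.13" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="5.0.13" />
  <PackageReference Include="MySql.EntityFrameworkCore" Version="5.0.8" />
</ItemGroup>
```

After changing the versions, restore packages and re-run the Scaffold-DbContext command.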

Source https://stackoverflow.com/questions/70224907

QUESTION

Should I close an RDS Proxy connection inside an AWS Lambda function?

Asked 2022-Mar-16 at 16:03

I'm using Lambda with RDS Proxy to be able to reuse DB connections to a MySQL database.

Should I close the connection after executing my queries or leave it open for the RDS Proxy to handle?

And if I should close the connection, then what's the point of using an RDS Proxy in the first place?

Here's an example of my lambda function:

const mysql = require("mysql2/promise")

exports.handler = async (event) => {
  let connection

  try {
    connection = await mysql.createConnection({
      host: process.env.RDS_HOST, // RDS Proxy endpoint here
      user: process.env.RDS_USER,
      database: process.env.RDS_DATABASE,
      password: process.env.RDS_PASSWORD,
      ssl: "Amazon RDS"
    })
    console.log(`Connected to db. ConnectionId: ${connection.threadId}`)

    // Do some queries
  } catch (err) {
    return handleError(err)
  } finally {
    if (connection) await connection.end() // Should I close the connection here?
  }

  return response(200, "Success")
}

EDIT: Beware that initializing the connection outside the handler (in global scope) makes the Lambda function retain the value of the connection variable between invocations in the same execution environment. Calling the function multiple times in a short period then fails with Can't add new command when connection is in closed state, because the connection was already closed during the first invocation. I therefore suggest defining the connection inside the handler, not outside.
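This pitfall can be reproduced without any database: module-scope state survives warm invocations. A minimal sketch with a fake connection object (fakeCreateConnection, handler and simulateTwoWarmInvocations are illustrative names, not AWS or mysql2 APIs):

```javascript
// Module scope: this variable survives between invocations that land on the
// same warm execution environment, like in the Lambda example above.
let connection = null;

// Stand-in for mysql.createConnection(); tracks only open/closed state.
function fakeCreateConnection() {
  return { open: true, end() { this.open = false; } };
}

async function handler() {
  if (!connection) connection = fakeCreateConnection(); // reused when warm
  const wasOpen = connection.open; // a later warm call sees a closed object
  await connection.end();          // closing here poisons later invocations
  return wasOpen;
}

// Two warm invocations: the first gets a live connection, the second a dead one.
async function simulateTwoWarmInvocations() {
  return [await handler(), await handler()]; // → [true, false]
}
```

Creating the connection inside the handler (or re-creating it whenever the retained object is closed) avoids handing a dead connection to the next invocation.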

ANSWER

Answered 2021-Dec-11 at 18:10
TL;DR: always close database connections

The RDS proxy sits between your application and the database & should not result in any application change other than using the proxy endpoint.


Should I close the connection after executing my queries or leave it open for the RDS Proxy to handle?

You should not leave database connections open regardless of if you use or don't use a database proxy.

Connections are a limited and relatively expensive resource.

The rule of thumb is to open connections as late as possible and close them as soon as possible. Connections that are not explicitly closed might not be added or returned to the pool. Closing database connections promptly is part of being a good database client.

Keep DB resources tied up with many open connections & you'll find yourself needing more vCPUs for your DB instance which then results in a higher RDS proxy price tag.


And if I should close the connection, then what's the point of using an RDS Proxy in the first place?

The point is that your Amazon RDS Proxy instance maintains a pool of established connections to your RDS database instances for you - it sits between your application and your RDS database.

The proxy is not responsible for closing the local connections that you make, nor should it be.

It helps by automatically managing connection multiplexing, pooling & sharing for applications that need it.

An example of an application that needs it is clearly mentioned in the AWS docs:

Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources.


To remove any doubt, also feel free to check out an AWS-provided example that closes connections (linked from the docs), or another one in the AWS Compute Blog.

Source https://stackoverflow.com/questions/70317250

QUESTION

PostgreSQL conditional select throwing error

Asked 2022-Feb-13 at 22:21

I have a PostgreSQL database hosted on Heroku which is throwing me this error that I can't wrap my head around.

1CREATE TABLE people (id INTEGER PRIMARY KEY AUTOINCREMENT, age SMALLINT, right_handed BOOLEAN)
2SELECT 1 id FROM people WHERE age IN (1) AND right_handed IN (1)
3

It gives me the error:

Error in query: ERROR: syntax error at or near "IN"

Don't know how to proceed, any help would be greatly appreciated.

ANSWER

Answered 2022-Feb-13 at 22:21

AUTOINCREMENT is not a valid option for CREATE TABLE in Postgres

You can use SERIAL or BIGSERIAL:

1CREATE TABLE people (id INTEGER PRIMARY KEY AUTOINCREMENT, age SMALLINT, right_handed BOOLEAN)
2SELECT 1 id FROM people WHERE age IN (1) AND right_handed IN (1)
3ALTER TABLE myTable ADD COLUMN myColumn BIGSERIAL PRIMARY KEY;
4

Source https://stackoverflow.com/questions/71105172

QUESTION

How to programmatically detect auto failover on AWS mysql aurora?

Asked 2022-Feb-04 at 12:22

Our stack is Node.js with MySQL, using connection pooling, and our database is managed by AWS Aurora. On auto failover the master DB changes while the hostname stays the same, but the connections inside the pool stay connected to the wrong DB. The only way we have found to reset the connections is to roll our servers.

Below is a demonstration of a solution that I think could work, but I would prefer a solution without the setInterval:

const mysql = require('mysql');


class MysqlAdapter {
    constructor() {
        this.connectionType = 'MASTER';
        this.waitingForAutoFailoverSwitch = false;
        this.poolCluster = mysql.createPoolCluster();
        this.poolCluster.add(this.connectionType, {
            host: 'localhost',
            user: 'root',
            password: 'root',
            database: 'app'
        });

        this.intervalID = setInterval(() => {
            if (this.waitingForAutoFailoverSwitch) return;
            this.execute('SHOW VARIABLES LIKE \'read_only\';').then(res => {
                // If MASTER is read-only, a failover has occurred and we need to
                // switch all connections in the pool to the new primary database.
                if (res[0].Value === 'ON') {
                    this.waitingForAutoFailoverSwitch = true;
                    this.poolCluster.end(() => {
                        this.waitingForAutoFailoverSwitch = false;
                    });
                }
            });
        }, 5000);

    }
    async execute(query) {
        // Delay all incoming requests until the pool has killed all
        // connections to the read-only database.
        if (this.waitingForAutoFailoverSwitch) {
            return new Promise((resolve, reject) => {
                setTimeout(() => {
                    this.execute(query).then(res => {
                        resolve(res);
                    });
                }, 1000);
            });
        }
        return new Promise((resolve, reject) => {

            this.poolCluster.getConnection(this.connectionType, (err, connection) => {
                if (err) {
                    return reject(err);
                }
                connection.query(query, (err, rows) => {
                    connection.release();
                    if (err) {
                        return reject(err);
                    }
                    resolve(rows);
                });
            });
        });
    }
}


const adapter = new MysqlAdapter();

Is there any other programmatic way to reset the connections inside the pool?

Is there any notification we can listen to in case of auto-failover?

ANSWER

Answered 2022-Feb-04 at 12:22

Instead of manually monitoring the DB health, as you have also hinted, ideally we subscribe to failover events published by AWS RDS Aurora.

There are multiple failover events listed here for the DB cluster: Amazon RDS event categories and event messages

You can test which of them is the most reliable trigger for poolCluster.end() in your use case.

Source https://stackoverflow.com/questions/70861875

QUESTION

System.NotSupportedException: Character set 'utf8mb3' is not supported by .Net Framework

Asked 2022-Jan-27 at 00:12

I am trying to run a server with a MySQL Database, however I keep getting this huge error and I am not sure why.

[21:15:49,107] Server Properties Lookup: Error While Initialization
DOL.Database.DatabaseException: Table DOL.Database.ServerProperty is not registered for Database Connection...
   at DOL.Database.ObjectDatabase.SelectAllObjects[TObject]()
   at DOL.GS.ServerProperties.Properties.get_AllDomainProperties()
   at DOL.GS.ServerProperties.Properties.InitProperties()
   at DOL.GS.GameServer.InitComponent(Action componentInitMethod, String text)

I also get this error:

[21:15:35,991] ExecuteSelectImpl: UnHandled Exception for Select Query "DESCRIBE `Specialization`"
System.NotSupportedException: Character set 'utf8mb3' is not supported by .Net Framework.
   at MySql.Data.MySqlClient.CharSetMap.GetCharacterSet(DBVersion version, String charSetName)
   at MySql.Data.MySqlClient.MySqlField.SetFieldEncoding()
   at MySql.Data.MySqlClient.NativeDriver.GetColumnData(MySqlField field)
   at MySql.Data.MySqlClient.NativeDriver.GetColumnsData(MySqlField[] columns)
   at MySql.Data.MySqlClient.Driver.GetColumns(Int32 count)
   at MySql.Data.MySqlClient.ResultSet.LoadColumns(Int32 numCols)
   at MySql.Data.MySqlClient.ResultSet..ctor(Driver d, Int32 statementId, Int32 numCols)
   at MySql.Data.MySqlClient.Driver.NextResult(Int32 statementId, Boolean force)
   at MySql.Data.MySqlClient.MySqlDataReader.NextResult()
   at MySql.Data.MySqlClient.MySqlDataReader.Close()
   at MySql.Data.MySqlClient.MySqlCommand.ResetReader()
   at MySql.Data.MySqlClient.MySqlCommand.ExecuteReader(CommandBehavior behavior)
   at MySql.Data.MySqlClient.MySqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
   at System.Data.Common.DbCommand.ExecuteReader()
   at DOL.Database.SQLObjectDatabase.ExecuteSelectImpl(String SQLCommand, IEnumerable`1 parameters, Action`1 Reader)

ANSWER

Answered 2021-Aug-11 at 14:38

This may be a solution. Source: https://dba.stackexchange.com/questions/8239/how-to-easily-convert-utf8-tables-to-utf8mb4-in-mysql-5-5

Change your CHARACTER SET and COLLATE to utf8mb4.

For each database:

ALTER DATABASE
    database_name
    CHARACTER SET = utf8mb4
    COLLATE = utf8mb4_unicode_ci;

For each table:

ALTER TABLE
    table_name
    CONVERT TO CHARACTER SET utf8mb4
    COLLATE utf8mb4_unicode_ci;

For each column:

ALTER TABLE
    table_name
    CHANGE column_name column_name
    VARCHAR(191)
    CHARACTER SET utf8mb4
    COLLATE utf8mb4_unicode_ci;

This worked for me with PowerShell and MariaDB 10.6. Hope it helps ;)
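With many tables, the per-table statements can be generated from information_schema instead of written by hand (a sketch; substitute database_name, then run the emitted statements):

```sql
SELECT CONCAT(
    'ALTER TABLE `', table_name,
    '` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'
) AS stmt
FROM information_schema.tables
WHERE table_schema = 'database_name'
  AND table_type = 'BASE TABLE';
```

The same idea works for the per-column statements by querying information_schema.columns.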

Source https://stackoverflow.com/questions/68645324

QUESTION

Debugging a Google Dataflow Streaming Job that does not work expected

Asked 2022-Jan-26 at 19:14

I am following this tutorial on migrating data from an Oracle database to a Cloud SQL PostgreSQL instance.

I am using the Google Provided Streaming Template Datastream to PostgreSQL

At a high level this is what is expected:

  1. Datastream exports in Avro format backfill and changed data into the specified Cloud Bucket location from the source Oracle database
  2. This triggers the Dataflow job to pickup the Avro files from this cloud storage location and insert into PostgreSQL instance.

When the Avro files are uploaded into the Cloud Storage location, the job is indeed triggered but when I check the target PostgreSQL database the required data has not been populated.

When I check the job logs and worker logs, there are no error logs. When the job is triggered, these are the only logs emitted:

StartBundle: 4
Matched 1 files for pattern gs://BUCKETNAME/ora2pg/DEMOAPP_DEMOTABLE/2022/01/11/20/03/7e13ac05aa3921875434e51c0c0c63aaabced31a_oracle-backfill_336860711_1_0.avro
FinishBundle: 5

Does anyone know what the issue is? Is it a configuration issue? If needed I will post the required configurations.

If not, could someone advise me on how to properly debug this particular Dataflow job? Thanks

EDIT 1:

When checking the step info for the steps in the pipeline, I found the following:

Below are all the steps in the pipeline (screenshot omitted).

The first step (DatastreamIO) seems to work as expected, with the correct element count (2) in its "Output collection" (screenshot omitted).

However, in the second step these 2 elements are not found in the output collection. On further inspection, the elements seem to be dropped in the following step (Format to Postgres DML > Format to Postgres DML > Map) (screenshot omitted).

EDIT 2:

The Cloud Worker logs for the above step are shown in a screenshot (omitted here).

EDIT 3:

I built and deployed the template from source in order to debug this issue, and found that the code works up to the following line in DatabaseMigrationUtils.java:

return KV.of(jsonString, dmlInfo);

Here the jsonString variable contains the dataset read from the .avro file.

But the code does not progress beyond this point and seems to stop abruptly without any errors being thrown.

ANSWER

Answered 2022-Jan-26 at 19:14

This answer is accurate as of 19th January 2022.

Upon manually debugging this Dataflow job, I found that the issue is that the job looks for a schema with the exact same name as the value passed for the databaseName parameter, and there is no other input parameter through which a schema name could be passed. Therefore, for this job to work, the tables have to be created in (or imported into) a schema with the same name as the database.
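In practical terms, if the migrated tables landed in the default public schema, one workaround is to create a schema named after the database and move the tables into it. A minimal sketch, using hypothetical names (demodb as the value passed for databaseName, demotable as one migrated table):

```shell
# Hedged sketch with hypothetical names: the Cloud SQL database (and the
# value passed as the template's databaseName parameter) is "demodb".
# Create a schema with the same name and move the target table into it,
# so the DML generated by the Dataflow job can find it.
psql -h CLOUD_SQL_HOST -U postgres -d demodb \
  -c 'CREATE SCHEMA IF NOT EXISTS demodb;'
psql -h CLOUD_SQL_HOST -U postgres -d demodb \
  -c 'ALTER TABLE public.demotable SET SCHEMA demodb;'
```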

However, as @Iñigo González pointed out, this Dataflow template is currently in beta and seems to have some bugs: I ran into another issue as soon as this one was resolved, which required me to change the source code of the Dataflow template job itself and build a custom Docker image for it.

Source https://stackoverflow.com/questions/70703277

QUESTION

Move a large Odoo database (1.2 TB)

Asked 2022-Jan-14 at 16:59

I have to move a large Odoo (v13) database, almost 1.2 TB (database + filestore). I can't use the UI for that (it keeps loading for 10+ hours without a result), and I don't want to move only the PostgreSQL database; I need the filestore too. What should I do? Dump the database and copy the filestore folder? Thanks a lot.

ANSWER

Answered 2022-Jan-14 at 16:59

You can move the database and the filestore separately. Move your Odoo PostgreSQL database with a normal Postgres backup/restore cycle (not the Odoo UI backup/restore); this will copy the database to your new server. Then move your Odoo filestore to the new location as a filesystem-level copy. This is enough to get the new environment running.

I assume you mean moving to a new server, not just moving the data to a new location on the same filesystem on the same server.
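The backup/restore cycle above can be sketched as follows, assuming a database named odoo_db, the default Odoo filestore location under ~/.local/share/Odoo/filestore, and SSH access to the new server (all of these names and paths are assumptions; adjust to your setup). For a ~1.2 TB database, the directory dump format (-Fd) is worth using because it allows parallel dump and restore (-j):

```shell
# On the old server: parallel dump in directory format (-Fd enables -j).
pg_dump -U odoo -Fd -j 4 -f /backup/odoo_db.dump odoo_db

# Copy the dump and the filestore to the new server
# (paths shown are the Odoo defaults).
rsync -a /backup/odoo_db.dump newserver:/backup/
rsync -a ~/.local/share/Odoo/filestore/odoo_db \
      newserver:~/.local/share/Odoo/filestore/

# On the new server: recreate the database and restore in parallel.
createdb -U odoo odoo_db
pg_restore -U odoo -d odoo_db -j 4 /backup/odoo_db.dump
```

Note that Odoo locates the filestore by database name, so if you restore under a different database name, rename the filestore folder to match.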

Source https://stackoverflow.com/questions/70713796

Community Discussions contain sources that include Stack Exchange Network

