Explore all Database open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Database

redis

7.0-rc3

tidb

tidb-server v6.0.0

rethinkdb

2.4.1 - Night Of The Living Dead

ClickHouse

Release v22.4.2.1-stable

rocksdb

RocksDB 7.1.2

Popular Libraries in Database

redis

by redis · C · 54360 stars · BSD-3-Clause

Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

tidb

by pingcap · Go · 31064 stars · Apache-2.0

TiDB is an open source distributed HTAP database compatible with the MySQL protocol

rethinkdb

by rethinkdb · C++ · 24920 stars · Apache-2.0

The open-source database for the realtime web.

cockroach

by cockroachdb · Go · 24311 stars · NOASSERTION

CockroachDB - the open source, cloud-native distributed SQL database.

ClickHouse

by ClickHouse · C++ · 23286 stars · Apache-2.0

ClickHouse® is a free analytics DBMS for big data

rocksdb

by facebook · C++ · 22331 stars · NOASSERTION

A library that provides an embeddable, persistent key-value store for fast storage.

prisma

by prisma · TypeScript · 22118 stars · Apache-2.0

Next-generation ORM for Node.js & TypeScript | PostgreSQL, MySQL, MariaDB, SQL Server, SQLite, MongoDB and CockroachDB (Preview)

mongo

by mongodb · C++ · 21509 stars · NOASSERTION

The MongoDB Database

TDengine

by taosdata · C · 18119 stars · AGPL-3.0

An open-source time-series database with high performance, scalability and SQL support. It can be widely used in IoT, Connected Vehicles, DevOps, Energy, Finance and other fields.

Trending New libraries in Database

litestream

by benbjohnson · Go · 4979 stars · Apache-2.0

Streaming replication for SQLite.

oceanbase

by oceanbase · C++ · 4231 stars · NOASSERTION

OceanBase is an enterprise distributed relational database with high availability, high performance, horizontal scalability, and compatibility with SQL standards.

databend

by datafuselabs · Rust · 3758 stars · Apache-2.0

A modern cloud data warehouse built for elasticity and performance; it activates your object storage for real-time analytics.

absurd-sql

by jlongster · JavaScript · 2382 stars · MIT

sqlite3 in ur indexeddb (hopefully a better backend soon)

spicedb

by authzed · Go · 2079 stars · Apache-2.0

Open source database system for managing security-critical application permissions inspired by Google's Zanzibar paper.

datafuse

by datafuselabs · Rust · 2005 stars · Apache-2.0

An elastic and scalable cloud data warehouse that offers blazing-fast queries and combines the elasticity, simplicity and low cost of the cloud, built to make the Data Cloud easy.

sea-orm

by SeaQL · Rust · 1769 stars · NOASSERTION

🐚 An async & dynamic ORM for Rust

vscode-database-client

by cweijan · TypeScript · 1656 stars · MIT

Database Client For Visual Studio Code

cloudbeaver

by dbeaver · TypeScript · 1563 stars · Apache-2.0

Cloud Database Manager

Top Authors in Database

1. apache · 29 Libraries · 26875 stars
2. simonw · 25 Libraries · 7864 stars
3. microsoft · 21 Libraries · 4526 stars
4. oracle · 18 Libraries · 2554 stars
5. ip2location · 16 Libraries · 388 stars
6. PacktPublishing · 15 Libraries · 221 stars
7. ropensci · 15 Libraries · 358 stars
8. PerfectlySoft · 14 Libraries · 386 stars
9. coleifer · 14 Libraries · 11378 stars
10. arangodb · 14 Libraries · 13930 stars


Trending Kits in Database

We can use Java cloud database libraries to store data in the cloud and access it from anywhere, which saves time and money: you configure your database in the cloud and get your application started faster. Libraries such as tx-lcn, galaxysql and cloudgraph are used more and more to store and retrieve data from the cloud; they are designed for cloud computing environments, which reduces development time and makes it easier to integrate applications with services such as AWS or Azure. tx-lcn is a distributed transaction framework that coordinates transactions across Java microservices and their databases. galaxysql is the SQL compute engine of the PolarDB-X distributed database, a transactional engine built for high performance and scalable multi-tenant deployments. cloudgraph aims to provide a graph-oriented data-access layer over cloud and big-data stores. Developers tend to use some of the following open source Java Cloud Database libraries

JavaScript cloud database libraries are a great way to store data in the cloud over ordinary TCP/IP connections and retrieve it later, and the popular ones support engines such as SQL Server, MySQL and PostgreSQL. ToolJet is a low-code platform for building internal tools on top of your cloud databases and APIs, and it lets you use SQL as well as JavaScript and Python snippets inside apps. sqlpad is a web-based SQL editor for writing, running and sharing queries against many different databases. node-gcm is a Node.js library for sending notifications through Google's cloud messaging service without external dependencies or server-side code. There are several popular open source JavaScript Cloud Database libraries available for developers
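
Where a concrete example helps, here is a minimal sketch of querying a cloud-hosted PostgreSQL database from Node.js. It uses the widely used pg client, which is an assumption for illustration and not one of the libraries named above, and it assumes your provider supplies a DATABASE_URL connection string.

// Hypothetical illustration: query a cloud-hosted PostgreSQL instance with the 'pg' client (npm i pg).
const { Client } = require('pg');

async function main() {
  const client = new Client({
    connectionString: process.env.DATABASE_URL, // e.g. supplied by your cloud provider
    ssl: { rejectUnauthorized: false },         // many hosted databases require TLS
  });
  await client.connect();
  const result = await client.query('SELECT NOW() AS server_time');
  console.log(result.rows[0].server_time);
  await client.end();
}

main().catch(console.error);

The same connect, query, close pattern carries over to the MySQL and SQL Server clients mentioned above.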

Ruby cloud database libraries are a dime a dozen, and there are many tools for developing applications against them; the easiest way to get started is usually an official client library for the cloud you are already using, which lets you create a database and store your data with little setup. Fog is an open source multi-cloud library for Ruby that provides one API for provisioning storage, compute and database resources across many providers. BOSH is a tool for packaging, deploying and managing distributed systems, including database clusters, across clouds such as AWS, Azure, Google Cloud Platform and more. The Google Cloud Platform Ruby SDKs let you develop applications against Google services such as App Engine, Compute Engine and Cloud SQL. Many developers depend on the following open source Ruby Cloud database libraries

Python cloud data libraries like jina, gnes and hsds are very useful for building applications whose data lives in the cloud: they let you reach your data from anywhere in the world while offering a range of security options, and they sit on top of common storage and database backends. jina is a framework for building neural search applications over text, images, audio, video and other data. gnes (Generic Neural Elastic Search) is an earlier cloud-native semantic search system from the same authors. hsds (the Highly Scalable Data Service) exposes HDF5 data kept in cloud object storage through a simple REST interface, which makes it easy to store and share large datasets such as images and sensor readings in the cloud. Popular open source Python Cloud Database libraries include

Cloud database libraries are an essential part of C# development: they make database operations faster and easier to write, and libraries such as squadron, dackup and DarkSoulsCloudSave can be used from your own code or through the .NET Framework classes. Squadron is a test framework for containerized and cloud services, so you can spin up throwaway database and service instances for integration tests without worrying about server-side setup. Dackup is an open source .NET Core tool that backs up databases and files and ships the archives to cloud storage. DarkSoulsCloudSave is an online game-save manager that keeps savegames in cloud storage so players can restore them from anywhere at any time. Popular open source Cloud Database libraries among developers include

C++ libraries are widely used in cloud application development to save, retrieve and process data, because they provide high-level abstractions while keeping native performance, so you can write code once and run it wherever your application needs to go. The projects commonly listed here, polyscope, cilantro and pptk, are really tools for storage-heavy processing, analytics and visualization of large point-cloud datasets rather than hosted database engines. Polyscope is a C++ (and Python) viewer and user interface for 3D data such as meshes and point clouds. Cilantro is a lean, open source C++ library for 3D point cloud processing that aims for low overhead and good scalability. Pptk, the Point Processing Toolkit, provides a fast viewer and utilities for visualizing datasets with hundreds of millions of points. A full list of the best open source C++ cloud database libraries is below

PHP libraries like phpbu, centreon, cloudsuite and gocdb are very useful for working with databases that live in the cloud: they expose a simple API, often with an ORM-style abstraction layer between your application and the underlying database. phpbu, the PHP Backup Utility, creates, checks and syncs backups for MySQL, MariaDB, Percona Server, SQLite and other databases, including to cloud storage. Centreon is an open source IT monitoring platform that keeps an eye on databases and the rest of your infrastructure. CloudSuite is a benchmark suite for cloud services, and GOCDB is a PHP application that stores and serves topology information about grid and cloud resources through a web interface and API. Some of the most widely used open source PHP Cloud Database libraries among developers include

Go cloud database projects such as cockroach, weaviate, radon and steampipe are great because they are fast, built to scale horizontally, easy to configure, work alongside your existing databases and tools, and are inexpensive to run compared with operating your own servers. Weaviate is an open source vector database that stores data objects together with their vector embeddings and serves semantic search over GraphQL and REST APIs. Cockroach (CockroachDB) is a distributed SQL database built on a replicated key-value layer, with atomic transactions, automatic partitioning and rebalancing, and many other features that are crucial for distributed applications. Radon is a MySQL-compatible middleware that shards data across multiple MySQL instances, and Steampipe lets you query cloud APIs and services with SQL, which fits cloud-native tooling nicely. Some of the most popular Go Cloud Database libraries among developers are

Java database libraries are the standard way to access a database from an application: they provide a set of classes for interacting with databases, and with so many available it can be difficult to choose the right one for your app. Their main objective is to offer a common API across different database systems, so the same application code can be reused against several engines. Druid (Alibaba's druid project) is an open source JDBC connection pool with built-in SQL parsing and monitoring, commonly placed in front of relational databases. Realm-Java is an open source mobile database for Android: it gives apps an object-oriented alternative to raw SQLite with transactions, change notifications and offline support, plus many more features. Popular open source Java database libraries include

JavaScript database libraries like lowdb, pouchdb and nedb are used in front-end and Node.js development to store data locally, for example in HTML5 local storage or IndexedDB, and a good library should be cross-browser compatible and easy to drop into any project as a lightweight replacement for a traditional database. LowDB is a small local JSON database whose data you query and update with plain JavaScript (its classic API is powered by Lodash); it is simple to use and has good documentation and community support. PouchDB is a JavaScript database inspired by CouchDB: it stores data in IndexedDB in the browser, indexes it for fast queries, and can synchronize with a remote CouchDB server. NeDB is a pure JavaScript embedded datastore with a MongoDB-like API that works in Node.js and in the browser without external dependencies. Many developers depend on the following open source JavaScript Database libraries
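
To make the embedded-store style concrete, here is a hedged sketch using NeDB's MongoDB-like API; the file name and document fields are made up for the example.

// Hypothetical illustration of nedb (npm i nedb): a pure JavaScript embedded datastore
// that persists documents to a local file and queries them with MongoDB-style filters.
const Datastore = require('nedb');

// autoload reads the file into memory when the datastore is created
const db = new Datastore({ filename: 'users.db', autoload: true });

db.insert({ name: 'Ada', language: 'JavaScript' }, (insertErr) => {
  if (insertErr) throw insertErr;
  db.find({ language: 'JavaScript' }, (findErr, docs) => {
    if (findErr) throw findErr;
    console.log(docs); // [{ name: 'Ada', language: 'JavaScript', _id: '...' }]
  });
});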

The Ruby programming language is a powerful tool for building database applications, with libraries that extend the standard library to cover SQL and NoSQL databases and many different backends. Sequel is a flexible, easy-to-use database toolkit and ORM that lets you define models and compose queries in a SQL-like way. pgsync is a tool for syncing data from one PostgreSQL database to another, for example pulling a sanitized copy of production data into development. ActiveGraph (formerly Neo4j.rb) is an object-graph mapper that gives Ruby on Rails applications an ActiveRecord-style interface to the Neo4j graph database. A full list of the best open source Ruby database libraries is given below

Python database libraries are a set of packages that abstract away the complexity of database access, so you can focus on what your application requires. sqlmap is an open source penetration-testing tool that automates detecting and exploiting SQL injection flaws in web applications, up to taking over the underlying database server, so it is useful for finding issues in your database layer before someone else does. On the application side, edgedb provides the tooling and client bindings for the EdgeDB graph-relational database, and sqlmodel combines SQLAlchemy and Pydantic so you can define tables and validated models with plain Python type hints. Popular open source Python database libraries among developers include

C# database libraries combine data access and data manipulation: they provide classes for creating, retrieving, inserting, updating and deleting data, plus functions for managing connections, and they are not limited to SQL databases, since document and key-value stores such as MongoDB, Couchbase and Redis have their own .NET clients. EF Core (efcore) is Microsoft's cross-platform ORM for .NET and the most common way to work with SQL Server, PostgreSQL, SQLite and other relational databases from C#, while RavenDB is an open source NoSQL document database written in C# with a fully transactional .NET client. Chinook is a sample database, shipped as SQL scripts for several engines, that is widely used for demos and for exercising data-access code written with libraries like EF Core. Some of the most popular C# Database libraries among developers are

C++ database projects like mongo, oceanbase and soci are widely used because of their performance, reliability and good support for large data sets. MongoDB is a popular open source document database written in C++: it offers high performance and low latency, stores flexible JSON-like documents, and can hold data coming from mobile devices, desktops and servers alike, which makes it a common choice behind web applications. OceanBase is an enterprise-grade distributed relational database with high availability, horizontal scalability and MySQL-compatible SQL, built to handle many concurrent users without performance issues or downtime. SOCI is a C++ database access library that offers one natural C++ interface over backends such as MySQL, PostgreSQL, Oracle and SQLite. Some of the most widely used open source C++ Database libraries among developers include

The first thing to understand is that using a PHP database library like hasbids, monolog or medoo is not only a way to cut development time; it also lets you load data from different sources, use caching and more. These libraries are mature and well tested, having been used by thousands of applications already, so you can feel safe in your choice. Monolog is the de facto standard logging library for PHP: it can log HTTP requests, responses and exceptions from your application and send those records to files, databases and many other handlers. hasbids is a smaller project for interacting with the eBay API to create and update auctions. Medoo is a lightweight PHP database framework built on PDO that gives you a simple query API for MySQL, SQLite, PostgreSQL and other engines. Popular open source PHP database libraries for developers include

Trending Discussions on Database

Javascript dynamically inserted later on: how to make it run?

Unknown host CPU architecture: arm64 , Android NDK SiliconM1 Apple MacBook Pro

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

AngularFireModule and AngularFireDatabaseModule not being found in @angular/fire

ASP.NET Core 6 how to access Configuration during startup

How to fix: "@angular/fire"' has no exported member 'AngularFireModule'.ts(2305) ionic, firebase, angular

pymongo [SSL: CERTIFICATE_VERIFY_FAILED]: certificate has expired on Mongo Atlas

java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): ,

How to solve FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore problem?

How do I get details of a veracode vulnerability report?

QUESTION

Javascript dynamically inserted later on: how to make it run?

Asked 2022-Apr-17 at 14:12

I have scripts in my React app that are inserted dynamically later on. The scripts don't load.

In my database there is a field called content, which contains data that includes html and javascript. There are many records and each record can include multiple scripts in the content field. So it's not really an option to statically specify each of the script-urls in my React app. The field for a record could for example look like:

<p>Some text and html</p>
<div id="xxx_hype_container">
    <script type="text/javascript" charset="utf-8" src="https://example.com/uploads/hype_generated_script.js?499892"></script>
</div>
<div style="display: none;" aria-hidden="true">
<div>Some text.</div>
Etc…

I call on this field in my React app using dangerouslySetInnerHTML:

render() {
    return (
        <div data-page="clarifies">
            <div className="container">
                <div dangerouslySetInnerHTML={{ __html: post.content }} />
                ... some other data
            </div>
        </div>
    );
}

It correctly loads the data from the database and displays the html from that data. However, the Javascript does not get executed. I think the script doesn't work because it is dynamically inserted later on. How can I make these scripts work/run?

This post suggests a solution for dynamically inserted scripts, but I don't think I can apply this solution because in my case the script/code is inserted from a database (so how would I then use nodeScriptReplace on the code...?). Any suggestions on how I might make my scripts work?


Update in response to @lissettdm's answer:

constructor(props) {
    this.ref = React.createRef();
}

componentDidUpdate(prevProps, prevState) {
    if (prevProps.postData !== this.props.postData) {
        this.setState({
            loading: false,
            post: this.props.postData.data,
            //etc
        });
        setTimeout(() => parseElements());

        console.log(this.props.postData.data.content);
        // returns html string like: `<div id="hype_container" style="margin: auto; etc.`
        const node = document.createRange().createContextualFragment(this.props.postData.data.content);
        console.log(JSON.stringify(this.ref));
        // returns {"current":null}
        console.log(node);
        // returns [object DocumentFragment]
        this.ref.current.appendChild(node);
        // produces error "Cannot read properties of null"
    }
}

render() {
    const { history } = this.props;
    /etc.
    return (
        {loading ? (
            some code
        ) : (
            <div data-page="clarifies">
                <div className="container">
                    <div ref={this.ref}></div>
                    ... some other data
                </div>
            </div>
        );
    );
}

The this.ref.current.appendChild(node); line produces the error:

TypeError: Cannot read properties of null (reading 'appendChild')

ANSWER

Answered 2022-Apr-14 at 19:05

Rendering raw HTML without using React's recommended method is not good practice. React's recommended method for rendering raw HTML is dangerouslySetInnerHTML.

Source https://stackoverflow.com/questions/71876427
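
For the script-execution part of the question: browsers do not run <script> tags that arrive via innerHTML (which is what dangerouslySetInnerHTML uses), so the injected scripts have to be re-created as real script elements. The following framework-agnostic sketch is not part of the original answer, and the helper name runScripts is made up for illustration; it applies the same script-replacement idea the question links to.

// Re-create every <script> inside `container` so the browser fetches and executes it.
// Scripts inserted via innerHTML are parsed but never run; scripts created with
// document.createElement and inserted into the DOM are.
function runScripts(container) {
  container.querySelectorAll('script').forEach((oldScript) => {
    const newScript = document.createElement('script');
    // copy attributes such as src, type and charset onto the new element
    Array.from(oldScript.attributes).forEach((attr) => {
      newScript.setAttribute(attr.name, attr.value);
    });
    // copy any inline code as well
    newScript.textContent = oldScript.textContent;
    // swapping the nodes triggers the load/execution
    oldScript.parentNode.replaceChild(newScript, oldScript);
  });
}

Calling runScripts on the element that received the dangerouslySetInnerHTML content (for example from componentDidMount or componentDidUpdate via a ref) should let the hype_generated_script.js tag load and execute.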

QUESTION

Unknown host CPU architecture: arm64 , Android NDK SiliconM1 Apple MacBook Pro

Asked 2022-Apr-04 at 18:41

I've got a project that works fine on Windows, but when I switched laptops and opened the existing Android project on a MacBook Pro M1, I was unable to run it. First I was getting

Execution failed for task ':app:kaptDevDebugKotlin'. > A failure occurred while executing org.jetbrains.kotlin.gradle.internal.KaptExecution > java.lang.reflect.InvocationTargetException (no error message)

This error was due to the Room database. I applied a fix, which was adding the library below before the Room dependencies, and I also changed my JDK location in the project structure from JRE to JDK.

kapt "org.xerial:sqlite-jdbc:3.34.0"

//Room components
kapt "org.xerial:sqlite-jdbc:3.34.0"
implementation "androidx.room:room-ktx:$rootProject.roomVersion"
kapt "androidx.room:room-compiler:$rootProject.roomVersion"
androidTestImplementation "androidx.room:room-testing:$rootProject.roomVersion"

After that, I'm now getting an issue: Unknown host CPU architecture: arm64

There is an SDK in my project that uses the configuration below.

android {
    externalNativeBuild {
        ndkBuild {
           path 'Android.mk'
        }
    }
    ndkVersion '21.4.7075529'
}

App Gradle

externalNativeBuild {
    cmake {
        path "src/main/cpp/CMakeLists.txt"
        version "3.18.1"
        //version "3.10.2"
    }
}

[CXX1405] error when building with ndkBuild using /Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk: Build command failed. Error while executing process /Users/mac/Library/Android/sdk/ndk/21.4.7075529/ndk-build with arguments {NDK_PROJECT_PATH=null APP_BUILD_SCRIPT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk APP_ABI=arm64-v8a NDK_ALL_ABIS=arm64-v8a NDK_DEBUG=1 APP_PLATFORM=android-21 NDK_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/obj NDK_LIBS_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/lib APP_SHORT_COMMANDS=false LOCAL_SHORT_COMMANDS=false -B -n} ERROR: Unknown host CPU architecture: arm64

This is what is causing the issue, and whenever I comment out the line

path 'Android.mk'

it starts working fine. Is there any workaround that will let me run this project with this piece of code without getting this NDK issue?

Update: it seems that Room got fixed in the latest updates, so you may consider updating Room to the latest version (2.3.0-alpha01 / 2.4.0-alpha03 or above).

GitHub Issue Tracker

ANSWER

Answered 2022-Apr-04 at 18:41

To solve this on an Apple Silicon M1 I found three options.

A

Use NDK 24

android {
    ndkVersion "24.0.8215888"
    ...
}

You can install it with

echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install 'ndk;24.0.8215888'

or

echo "y" | sudo ${ANDROID_HOME}/sdk/cmdline-tools/latest/bin/sdkmanager --install 'ndk;24.0.8215888'

Use whichever matches where your sdkmanager is located.

B

Change your ndk-build to use Rosetta x86. Search for your installed ndk with

find ~ -name ndk-build 2>/dev/null

e.g.

vi ~/Library/Android/sdk/ndk/22.1.7171670/ndk-build

and change

DIR="$(cd "$(dirname "$0")" && pwd)"
$DIR/build/ndk-build "$@"

to

DIR="$(cd "$(dirname "$0")" && pwd)"
arch -x86_64 /bin/bash $DIR/build/ndk-build "$@"


C

Convert your ndk-build into a CMake build.

Source https://stackoverflow.com/questions/69541831

QUESTION

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

Asked 2022-Apr-04 at 15:46

Not really sure what caused this, but it was most likely exiting the terminal while my Rails server, which was connected to the PostgreSQL database, was still open (not a good practice, I know, but lesson learned!).

I've already tried the following:

  1. Rebooting my machine (using MBA M1 2020)
  2. Restarting PostgreSQL using homebrew brew services restart postgresql
  3. Re-installing PostgreSQL using Homebrew
  4. Updating PostgreSQL using Homebrew
  5. I also tried following this link, but when I run cd Library/Application\ Support/Postgres the terminal tells me the Postgres folder doesn't exist, so I'm kind of lost already. Although I have a feeling that deleting postmaster.pid would really fix my issue. Any help would be appreciated!

ANSWER

Answered 2022-Jan-13 at 15:19
Resetting PostgreSQL

My original answer only included the troubleshooting steps below, and a workaround. I now decided to properly fix it via brute force by removing all clusters and reinstalling, since I didn't have any data there to keep. It was something along these lines, on my Ubuntu 21.04 system:

sudo pg_dropcluster --stop 12 main
sudo pg_dropcluster --stop 14 main
sudo apt remove postgresql-14
sudo apt purge postgresql*
sudo apt install postgresql-14

Now I have:

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

And sudo -u postgres psql works fine. The service was started automatically but it can be done manually with sudo systemctl start postgresql.

Incidentally, I can recommend the PostgreSQL docker image, which eliminates the need to bother with a local installation.

Troubleshooting

Although I cannot provide an answer to your specific problem, I thought I'd share my troubleshooting steps, hoping that it might be of some help. It seems that you are on Mac, whereas I am running Ubuntu 21.04, so expect things to be different.

This is a client connection problem, as noted by section 19.3.2 in the docs.

The directory in my error message is different:

$ sudo su postgres -c "psql"
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
        Is the server running locally and accepting connections on that socket?

I checked what unix sockets I had in that directory:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

Makes sense, there is a socket for 5433 not 5432. I confirmed this by running:

$ pg_lsclusters
Ver Cluster Port Status                Owner    Data directory              Log file
12  main    5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14  main    5433 online                postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

This explains how it got into this mess on my system. The default port is 5432, but after I upgraded from version 12 to 14, the server was setup to listen to 5433, presumably because it considered 5432 as already taken. Two alternatives here, get the server to listen on 5432 which is the client's default, or get the client to use 5433.

Let's try it by changing the client's parameters:

$ sudo su postgres -c "psql --port=5433"
psql (14.0 (Ubuntu 14.0-1.pgdg21.04+1))
Type "help" for help.

postgres=#

It worked! Now, to make it permanent I'm supposed to put this setting on a psqlrc or ~/.psqlrc file. The thin documentation on this (under "Files") was not helpful to me as I was not sure on the syntax and my attempts did not change the client's default, so I moved on.

To change the server I looked for the postgresql.conf mentioned in the documentation but could not find the file. I did however see /var/lib/postgresql/14/main/postgresql.auto.conf so I created it on the same directory with the content:

port = 5432

Restarted the server: sudo systemctl restart postgresql

But the error persisted because, as the logs confirmed, the port did not change:

$ tail /var/log/postgresql/postgresql-14-main.log
...
2021-10-29 16:36:12.195 UTC [25236] LOG:  listening on IPv4 address "127.0.0.1", port 5433
2021-10-29 16:36:12.198 UTC [25236] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5433"
2021-10-29 16:36:12.204 UTC [25237] LOG:  database system was shut down at 2021-10-29 16:36:12 UTC
2021-10-29 16:36:12.210 UTC [25236] LOG:  database system is ready to accept connections

After other attempts did not succeed, I eventually decided to use a workaround: to redirect the client's requests on 5432 to 5433:

ln -s /var/run/postgresql/.s.PGSQL.5433 /var/run/postgresql/.s.PGSQL.5432

This is what I have now:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
lrwxrwxrwx  1 postgres postgres   33 Oct 29 16:40 .s.PGSQL.5432 -> /var/run/postgresql/.s.PGSQL.5433
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

This means I can now just run psql without having to explicitly set the port to 5433. Now, this is a hack and I would not recommend it. But in my development system I am happy with it for now, because I don't have more time to spend on this. This is why I shared the steps and the links so that you can find a proper solution for your case.

Source https://stackoverflow.com/questions/69754628

QUESTION

AngularFireModule and AngularFireDatabaseModule not being found in @angular/fire

Asked 2022-Apr-01 at 12:56

I am trying to implement Firebase Realtime Database in an Angular project and I'm getting stuck at one of the very first steps: importing AngularFireModule and AngularFireDatabaseModule. It gives me the following errors:

Module '"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305)
Module '"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.

And here is how I am importing them:

import {AngularFireModule } from '@angular/fire';
import {AngularFireDatabaseModule} from '@angular/fire/database'

Am I missing something here? I have installed @angular/fire via the command

npm i firebase @angular/fire

and have also installed firebase tools. Here is a list of the Angular packages I currently have installed and their versions:

Angular CLI: 12.2.2
Node: 14.17.4
Package Manager: npm 6.14.14
OS: win32 x64

Angular: 12.2.3
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router

Package                         Version
---------------------------------------------------------
@angular-devkit/architect       0.1202.2
@angular-devkit/build-angular   12.2.2
@angular-devkit/core            12.2.2
@angular-devkit/schematics      12.2.2
@angular/cli                    12.2.2
@angular/fire                   7.0.0
@schematics/angular             12.2.2
rxjs                            6.6.7
typescript                      4.3.5

I do apologise if this is all excessive information, but I am completely stuck as to what the issue is. Any help would be GREATLY appreciated. Right now my suspicion is that it's a compatibility issue, or perhaps a feature that doesn't exist anymore in the latest versions, but I really don't know.

ANSWER

Answered 2021-Aug-26 at 13:20

AngularFire 7.0.0 was launched yesterday with a new API that has a lot of bundle size reduction benefits.

Instead of top level classes like AngularFireDatabase, you can now import smaller independent functions.

1Module '"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305)
2Module '"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.
3import {AngularFireModule } from '@angular/fire';
4import {AngularFireDatabaseModule} from '@angular/fire/database'
5npm i firebase @angular/fire
6Angular CLI: 12.2.2
7Node: 14.17.4
8Package Manager: npm 6.14.14
9OS: win32 x64
10
11Angular: 12.2.3
12... animations, common, compiler, compiler-cli, core, forms
13... platform-browser, platform-browser-dynamic, router
14
15Package                         Version
16---------------------------------------------------------
17@angular-devkit/architect       0.1202.2
18@angular-devkit/build-angular   12.2.2
19@angular-devkit/core            12.2.2
20@angular-devkit/schematics      12.2.2
21@angular/cli                    12.2.2
22@angular/fire                   7.0.0
23@schematics/angular             12.2.2
24rxjs                            6.6.7
25typescript                      4.3.5
26import { list } from '@angular/fire/database';
27
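For example, here is a minimal sketch (not from the original answer) of reading a list with the new tree-shakable functions. It assumes provideDatabase(() => getDatabase()) has been registered in the NgModule, and the component name and the 'items' path are made up for illustration:

import { Component } from '@angular/core';
import { Database, ref, list } from '@angular/fire/database';
import { Observable } from 'rxjs';

@Component({ selector: 'app-items', template: '' })
export class ItemsComponent {
  items$: Observable<unknown>;

  constructor(db: Database) {
    // 'items' is an assumed database path, used only for illustration
    this.items$ = list(ref(db, 'items'));
  }
}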

The initialization process is a bit different too as it has a more flexible API for specifying configurations.

1Module '"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305)
2Module '"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.
3import {AngularFireModule } from '@angular/fire';
4import {AngularFireDatabaseModule} from '@angular/fire/database'
5npm i firebase @angular/fire
6Angular CLI: 12.2.2
7Node: 14.17.4
8Package Manager: npm 6.14.14
9OS: win32 x64
10
11Angular: 12.2.3
12... animations, common, compiler, compiler-cli, core, forms
13... platform-browser, platform-browser-dynamic, router
14
15Package                         Version
16---------------------------------------------------------
17@angular-devkit/architect       0.1202.2
18@angular-devkit/build-angular   12.2.2
19@angular-devkit/core            12.2.2
20@angular-devkit/schematics      12.2.2
21@angular/cli                    12.2.2
22@angular/fire                   7.0.0
23@schematics/angular             12.2.2
24rxjs                            6.6.7
25typescript                      4.3.5
26import { list } from '@angular/fire/database';
27@NgModule({
28    imports: [
29        provideFirebaseApp(() => initializeApp(config)),
30        provideFirestore(() => {
31            const firestore = getFirestore();
32            connectEmulator(firestore, 'localhost', 8080);
33            enableIndexedDbPersistence(firestore);
34            return firestore;
35        }),
36        provideStorage(() => getStorage()),
37    ],
38})
39
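Once those providers are registered, the services are consumed by injecting them and calling the modular functions. Here is a hypothetical sketch (the service and the 'users' collection name are illustrative, not part of the original answer):

import { Injectable } from '@angular/core';
import { Firestore, collection, collectionData } from '@angular/fire/firestore';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class UsersService {
  users$: Observable<unknown[]>;

  constructor(firestore: Firestore) {
    // 'users' is an assumed collection name, for illustration only
    this.users$ = collectionData(collection(firestore, 'users'));
  }
}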

If you want to proceed with the older API there's a compatibility layer.

1Module '"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305)
2Module '"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.
3import {AngularFireModule } from '@angular/fire';
4import {AngularFireDatabaseModule} from '@angular/fire/database'
5npm i firebase @angular/fire
6Angular CLI: 12.2.2
7Node: 14.17.4
8Package Manager: npm 6.14.14
9OS: win32 x64
10
11Angular: 12.2.3
12... animations, common, compiler, compiler-cli, core, forms
13... platform-browser, platform-browser-dynamic, router
14
15Package                         Version
16---------------------------------------------------------
17@angular-devkit/architect       0.1202.2
18@angular-devkit/build-angular   12.2.2
19@angular-devkit/core            12.2.2
20@angular-devkit/schematics      12.2.2
21@angular/cli                    12.2.2
22@angular/fire                   7.0.0
23@schematics/angular             12.2.2
24rxjs                            6.6.7
25typescript                      4.3.5
26import { list } from '@angular/fire/database';
27@NgModule({
28    imports: [
29        provideFirebaseApp(() => initializeApp(config)),
30        provideFirestore(() => {
31            const firestore = getFirestore();
32            connectEmulator(firestore, 'localhost', 8080);
33            enableIndexedDbPersistence(firestore);
34            return firestore;
35        }),
36        provideStorage(() => getStorage()),
37    ],
38})
39import { AngularFireModule} from '@angular/fire/compat'
40import { AngularFireDatabaseModule } from '@angular/fire/compat/database';
41
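For reference, a minimal sketch (not from the original answer) of wiring the compat modules into an NgModule; it assumes environment.firebaseConfig holds your Firebase config object, as in a typical AngularFire setup:

import { NgModule } from '@angular/core';
import { AngularFireModule } from '@angular/fire/compat';
import { AngularFireDatabaseModule } from '@angular/fire/compat/database';
import { environment } from '../environments/environment';

@NgModule({
  imports: [
    // Same initializeApp call as in pre-v7 AngularFire, just from the compat entry point
    AngularFireModule.initializeApp(environment.firebaseConfig),
    AngularFireDatabaseModule,
  ],
})
export class AppModule {}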

Source https://stackoverflow.com/questions/68939014

QUESTION

ASP.NET Core 6 how to access Configuration during startup

Asked 2022-Mar-08 at 11:45

In earlier versions, we had the Startup.cs class, and we got the configuration object in the Startup file as follows:

public class Startup 
{
    private readonly IHostEnvironment environment;
    private readonly IConfiguration config;

    public Startup(IConfiguration configuration, IHostEnvironment environment) 
    {
        this.config = configuration;
        this.environment = environment;
    }

    public void ConfigureServices(IServiceCollection services) 
    {
        // Add Services
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env) 
    {
        // Add Middlewares
    }

}

Now in .NET 6 (with Visual Studio 2022), we don't see the Startup.cs class. Looks like its days are numbered. So how do we get objects like the configuration (IConfiguration) and hosting environment (IHostEnvironment)?

How do we get these objects to, say, read the configuration from appsettings.json? Currently the Program.cs file looks like this:

using Festify.Database;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorPages();

builder.Services.AddDbContext<FestifyContext>();


////////////////////////////////////////////////
// The following is Giving me error as Configuration 
// object is not avaible, I dont know how to inject this here.
////////////////////////////////////////////////


builder.Services.AddDbContext<FestifyContext>(opt =>
        opt.UseSqlServer(
            Configuration.GetConnectionString("Festify")));


var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();

app.UseRouting();

app.UseAuthorization();

app.MapRazorPages();

app.Run();

I want to know how to read the configuration from appsettings.json.

ANSWER

Answered 2021-Oct-26 at 12:26

WebApplicationBuilder returned by WebApplication.CreateBuilder(args) exposes Configuration and Environment properties:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
...
ConfigurationManager configuration = builder.Configuration;
IWebHostEnvironment environment = builder.Environment;

WebApplication returned by WebApplicationBuilder.Build() also exposes Configuration and Environment:

var app = builder.Build();
IConfiguration configuration = app.Configuration;
IWebHostEnvironment environment = app.Environment;

Also check the migration guide and code samples.

Source https://stackoverflow.com/questions/69722872

QUESTION

How to fix: "@angular/fire"' has no exported member 'AngularFireModule'.ts(2305) ionic, firebase, angular

Asked 2022-Feb-11 at 07:31

I'm trying to connect my app to a Firebase database, but I receive 4 error messages in app.module.ts:

1'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
2'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
3'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
4'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)
5

here is my package.json file:

1'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
2'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
3'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
4'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)
5{
6  "name": "gescable",
7  "version": "0.0.1",
8  "author": "Ionic Framework",
9  "homepage": "https://ionicframework.com/",
10  "scripts": {
11    "ng": "ng",
12    "start": "ng serve",
13    "build": "ng build",
14    "test": "ng test",
15    "lint": "ng lint",
16    "e2e": "ng e2e"
17  },
18  "private": true,
19  "dependencies": {
20    "@angular-devkit/architect": "^0.1202.5",
21    "@angular-devkit/architect-cli": "^0.1202.5",
22    "@angular/common": "~12.1.1",
23    "@angular/core": "~12.1.1",
24    "@angular/fire": "^7.0.4",
25    "@angular/forms": "~12.1.1",
26    "@angular/platform-browser": "~12.1.1",
27    "@angular/platform-browser-dynamic": "~12.1.1",
28    "@angular/router": "~12.1.1",
29    "@ionic/angular": "^5.5.2",
30    "ajv": "^8.6.2",
31    "angularfire2": "^5.4.2",
32    "firebase": "^7.24.0",
33    "rxfire": "^6.0.0",
34    "rxjs": "~6.6.0",
35    "tslib": "^2.2.0",
36    "zone.js": "~0.11.4"
37  },
38  "devDependencies": {
39    "@angular-devkit/build-angular": "~12.1.1",
40    "@angular-eslint/builder": "~12.0.0",
41    "@angular-eslint/eslint-plugin": "~12.0.0",
42    "@angular-eslint/eslint-plugin-template": "~12.0.0",
43    "@angular-eslint/template-parser": "~12.0.0",
44    "@angular/cli": "~12.1.1",
45    "@angular/compiler": "~12.1.1",
46    "@angular/compiler-cli": "~12.1.1",
47    "@angular/language-service": "~12.0.1",
48    "@ionic/angular-toolkit": "^4.0.0",
49    "@types/jasmine": "~3.6.0",
50    "@types/jasminewd2": "~2.0.3",
51    "@types/node": "^12.11.1",
52    "@typescript-eslint/eslint-plugin": "4.16.1",
53    "@typescript-eslint/parser": "4.16.1",
54    "eslint": "^7.6.0",
55    "eslint-plugin-import": "2.22.1",
56    "eslint-plugin-jsdoc": "30.7.6",
57    "eslint-plugin-prefer-arrow": "1.2.2",
58    "jasmine-core": "~3.8.0",
59    "jasmine-spec-reporter": "~5.0.0",
60    "karma": "~6.3.2",
61    "karma-chrome-launcher": "~3.1.0",
62    "karma-coverage": "~2.0.3",
63    "karma-coverage-istanbul-reporter": "~3.0.2",
64    "karma-jasmine": "~4.0.0",
65    "karma-jasmine-html-reporter": "^1.5.0",
66    "protractor": "~7.0.0",
67    "ts-node": "~8.3.0",
68    "typescript": "~4.2.4",
69    "@angular-devkit/architect": "^0.1200.0",
70    "firebase-tools": "^9.0.0",
71    "fuzzy": "^0.1.3",
72    "inquirer": "^6.2.2",
73    "inquirer-autocomplete-prompt": "^1.0.1",
74    "open": "^7.0.3",
75    "jsonc-parser": "^3.0.0"
76  },
77  "description": "An Ionic project"
78}
79

And here is my app.module.ts:

1'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
2'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
3'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
4'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)
5{
6  "name": "gescable",
7  "version": "0.0.1",
8  "author": "Ionic Framework",
9  "homepage": "https://ionicframework.com/",
10  "scripts": {
11    "ng": "ng",
12    "start": "ng serve",
13    "build": "ng build",
14    "test": "ng test",
15    "lint": "ng lint",
16    "e2e": "ng e2e"
17  },
18  "private": true,
19  "dependencies": {
20    "@angular-devkit/architect": "^0.1202.5",
21    "@angular-devkit/architect-cli": "^0.1202.5",
22    "@angular/common": "~12.1.1",
23    "@angular/core": "~12.1.1",
24    "@angular/fire": "^7.0.4",
25    "@angular/forms": "~12.1.1",
26    "@angular/platform-browser": "~12.1.1",
27    "@angular/platform-browser-dynamic": "~12.1.1",
28    "@angular/router": "~12.1.1",
29    "@ionic/angular": "^5.5.2",
30    "ajv": "^8.6.2",
31    "angularfire2": "^5.4.2",
32    "firebase": "^7.24.0",
33    "rxfire": "^6.0.0",
34    "rxjs": "~6.6.0",
35    "tslib": "^2.2.0",
36    "zone.js": "~0.11.4"
37  },
38  "devDependencies": {
39    "@angular-devkit/build-angular": "~12.1.1",
40    "@angular-eslint/builder": "~12.0.0",
41    "@angular-eslint/eslint-plugin": "~12.0.0",
42    "@angular-eslint/eslint-plugin-template": "~12.0.0",
43    "@angular-eslint/template-parser": "~12.0.0",
44    "@angular/cli": "~12.1.1",
45    "@angular/compiler": "~12.1.1",
46    "@angular/compiler-cli": "~12.1.1",
47    "@angular/language-service": "~12.0.1",
48    "@ionic/angular-toolkit": "^4.0.0",
49    "@types/jasmine": "~3.6.0",
50    "@types/jasminewd2": "~2.0.3",
51    "@types/node": "^12.11.1",
52    "@typescript-eslint/eslint-plugin": "4.16.1",
53    "@typescript-eslint/parser": "4.16.1",
54    "eslint": "^7.6.0",
55    "eslint-plugin-import": "2.22.1",
56    "eslint-plugin-jsdoc": "30.7.6",
57    "eslint-plugin-prefer-arrow": "1.2.2",
58    "jasmine-core": "~3.8.0",
59    "jasmine-spec-reporter": "~5.0.0",
60    "karma": "~6.3.2",
61    "karma-chrome-launcher": "~3.1.0",
62    "karma-coverage": "~2.0.3",
63    "karma-coverage-istanbul-reporter": "~3.0.2",
64    "karma-jasmine": "~4.0.0",
65    "karma-jasmine-html-reporter": "^1.5.0",
66    "protractor": "~7.0.0",
67    "ts-node": "~8.3.0",
68    "typescript": "~4.2.4",
69    "@angular-devkit/architect": "^0.1200.0",
70    "firebase-tools": "^9.0.0",
71    "fuzzy": "^0.1.3",
72    "inquirer": "^6.2.2",
73    "inquirer-autocomplete-prompt": "^1.0.1",
74    "open": "^7.0.3",
75    "jsonc-parser": "^3.0.0"
76  },
77  "description": "An Ionic project"
78}
79import { NgModule } from '@angular/core';
80import { BrowserModule } from '@angular/platform-browser';
81import { RouteReuseStrategy } from '@angular/router';
82import { IonicModule, IonicRouteStrategy } from '@ionic/angular';
83import { AppRoutingModule } from './app-routing.module';
84import { AppComponent } from './app.component';
85import { ClientPageModule } from './client/client.module';
86import { environment } from '../environments/environment';
87import { AngularFireModule } from '@angular/fire';
88import { AngularFireAuthModule } from '@angular/fire/auth';
89import { AngularFireStorageModule } from '@angular/fire/storage';
90import { AngularFireDatabaseModule } from '@angular/fire/database';
91
92@NgModule({
93  declarations: [AppComponent],
94  entryComponents: [],
95  imports: [
96    BrowserModule,
97    IonicModule.forRoot(),
98    AppRoutingModule,
99    ClientPageModule,
100    AngularFireModule.initializeApp(environment.firebaseConfig),
101    AngularFireAuthModule,
102    AngularFireStorageModule,
103    AngularFireDatabaseModule
104  ],
105  providers: [{ provide: RouteReuseStrategy, useClass: IonicRouteStrategy }],
106  bootstrap: [AppComponent],
107})
108export class AppModule {}
109

Here is my tsconfig file:

1'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
2'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
3'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
4'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)
5{
6  "name": "gescable",
7  "version": "0.0.1",
8  "author": "Ionic Framework",
9  "homepage": "https://ionicframework.com/",
10  "scripts": {
11    "ng": "ng",
12    "start": "ng serve",
13    "build": "ng build",
14    "test": "ng test",
15    "lint": "ng lint",
16    "e2e": "ng e2e"
17  },
18  "private": true,
19  "dependencies": {
20    "@angular-devkit/architect": "^0.1202.5",
21    "@angular-devkit/architect-cli": "^0.1202.5",
22    "@angular/common": "~12.1.1",
23    "@angular/core": "~12.1.1",
24    "@angular/fire": "^7.0.4",
25    "@angular/forms": "~12.1.1",
26    "@angular/platform-browser": "~12.1.1",
27    "@angular/platform-browser-dynamic": "~12.1.1",
28    "@angular/router": "~12.1.1",
29    "@ionic/angular": "^5.5.2",
30    "ajv": "^8.6.2",
31    "angularfire2": "^5.4.2",
32    "firebase": "^7.24.0",
33    "rxfire": "^6.0.0",
34    "rxjs": "~6.6.0",
35    "tslib": "^2.2.0",
36    "zone.js": "~0.11.4"
37  },
38  "devDependencies": {
39    "@angular-devkit/build-angular": "~12.1.1",
40    "@angular-eslint/builder": "~12.0.0",
41    "@angular-eslint/eslint-plugin": "~12.0.0",
42    "@angular-eslint/eslint-plugin-template": "~12.0.0",
43    "@angular-eslint/template-parser": "~12.0.0",
44    "@angular/cli": "~12.1.1",
45    "@angular/compiler": "~12.1.1",
46    "@angular/compiler-cli": "~12.1.1",
47    "@angular/language-service": "~12.0.1",
48    "@ionic/angular-toolkit": "^4.0.0",
49    "@types/jasmine": "~3.6.0",
50    "@types/jasminewd2": "~2.0.3",
51    "@types/node": "^12.11.1",
52    "@typescript-eslint/eslint-plugin": "4.16.1",
53    "@typescript-eslint/parser": "4.16.1",
54    "eslint": "^7.6.0",
55    "eslint-plugin-import": "2.22.1",
56    "eslint-plugin-jsdoc": "30.7.6",
57    "eslint-plugin-prefer-arrow": "1.2.2",
58    "jasmine-core": "~3.8.0",
59    "jasmine-spec-reporter": "~5.0.0",
60    "karma": "~6.3.2",
61    "karma-chrome-launcher": "~3.1.0",
62    "karma-coverage": "~2.0.3",
63    "karma-coverage-istanbul-reporter": "~3.0.2",
64    "karma-jasmine": "~4.0.0",
65    "karma-jasmine-html-reporter": "^1.5.0",
66    "protractor": "~7.0.0",
67    "ts-node": "~8.3.0",
68    "typescript": "~4.2.4",
69    "@angular-devkit/architect": "^0.1200.0",
70    "firebase-tools": "^9.0.0",
71    "fuzzy": "^0.1.3",
72    "inquirer": "^6.2.2",
73    "inquirer-autocomplete-prompt": "^1.0.1",
74    "open": "^7.0.3",
75    "jsonc-parser": "^3.0.0"
76  },
77  "description": "An Ionic project"
78}
79import { NgModule } from '@angular/core';
80import { BrowserModule } from '@angular/platform-browser';
81import { RouteReuseStrategy } from '@angular/router';
82import { IonicModule, IonicRouteStrategy } from '@ionic/angular';
83import { AppRoutingModule } from './app-routing.module';
84import { AppComponent } from './app.component';
85import { ClientPageModule } from './client/client.module';
86import { environment } from '../environments/environment';
87import { AngularFireModule } from '@angular/fire';
88import { AngularFireAuthModule } from '@angular/fire/auth';
89import { AngularFireStorageModule } from '@angular/fire/storage';
90import { AngularFireDatabaseModule } from '@angular/fire/database';
91
92@NgModule({
93  declarations: [AppComponent],
94  entryComponents: [],
95  imports: [
96    BrowserModule,
97    IonicModule.forRoot(),
98    AppRoutingModule,
99    ClientPageModule,
100    AngularFireModule.initializeApp(environment.firebaseConfig),
101    AngularFireAuthModule,
102    AngularFireStorageModule,
103    AngularFireDatabaseModule
104  ],
105  providers: [{ provide: RouteReuseStrategy, useClass: IonicRouteStrategy }],
106  bootstrap: [AppComponent],
107})
108export class AppModule {}
109  "compileOnSave": false,
110  "compilerOptions": {
111    "baseUrl": "./",
112    "outDir": "./dist/out-tsc",
113    "sourceMap": true,
114    "declaration": false,
115    "downlevelIteration": true,
116    "experimentalDecorators": true,
117    "moduleResolution": "node",
118    "importHelpers": true,
119    "target": "es2015",
120    "module": "es2020",
121    "lib": ["es2018", "dom"]
122  },
123  "angularCompilerOptions": {
124    "enableI18nLegacyMessageIdFormat": false,
125    "strictInjectionParameters": true,
126    "strictInputAccessModifiers": true,
127    "strictTemplates": true,
128    "skipLibCheck": true 
129  }
130}
131

ANSWER

Answered 2021-Sep-10 at 12:47

You need to import from the "compat" paths, like this:

1'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
2'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
3'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
4'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)
5{
6  "name": "gescable",
7  "version": "0.0.1",
8  "author": "Ionic Framework",
9  "homepage": "https://ionicframework.com/",
10  "scripts": {
11    "ng": "ng",
12    "start": "ng serve",
13    "build": "ng build",
14    "test": "ng test",
15    "lint": "ng lint",
16    "e2e": "ng e2e"
17  },
18  "private": true,
19  "dependencies": {
20    "@angular-devkit/architect": "^0.1202.5",
21    "@angular-devkit/architect-cli": "^0.1202.5",
22    "@angular/common": "~12.1.1",
23    "@angular/core": "~12.1.1",
24    "@angular/fire": "^7.0.4",
25    "@angular/forms": "~12.1.1",
26    "@angular/platform-browser": "~12.1.1",
27    "@angular/platform-browser-dynamic": "~12.1.1",
28    "@angular/router": "~12.1.1",
29    "@ionic/angular": "^5.5.2",
30    "ajv": "^8.6.2",
31    "angularfire2": "^5.4.2",
32    "firebase": "^7.24.0",
33    "rxfire": "^6.0.0",
34    "rxjs": "~6.6.0",
35    "tslib": "^2.2.0",
36    "zone.js": "~0.11.4"
37  },
38  "devDependencies": {
39    "@angular-devkit/build-angular": "~12.1.1",
40    "@angular-eslint/builder": "~12.0.0",
41    "@angular-eslint/eslint-plugin": "~12.0.0",
42    "@angular-eslint/eslint-plugin-template": "~12.0.0",
43    "@angular-eslint/template-parser": "~12.0.0",
44    "@angular/cli": "~12.1.1",
45    "@angular/compiler": "~12.1.1",
46    "@angular/compiler-cli": "~12.1.1",
47    "@angular/language-service": "~12.0.1",
48    "@ionic/angular-toolkit": "^4.0.0",
49    "@types/jasmine": "~3.6.0",
50    "@types/jasminewd2": "~2.0.3",
51    "@types/node": "^12.11.1",
52    "@typescript-eslint/eslint-plugin": "4.16.1",
53    "@typescript-eslint/parser": "4.16.1",
54    "eslint": "^7.6.0",
55    "eslint-plugin-import": "2.22.1",
56    "eslint-plugin-jsdoc": "30.7.6",
57    "eslint-plugin-prefer-arrow": "1.2.2",
58    "jasmine-core": "~3.8.0",
59    "jasmine-spec-reporter": "~5.0.0",
60    "karma": "~6.3.2",
61    "karma-chrome-launcher": "~3.1.0",
62    "karma-coverage": "~2.0.3",
63    "karma-coverage-istanbul-reporter": "~3.0.2",
64    "karma-jasmine": "~4.0.0",
65    "karma-jasmine-html-reporter": "^1.5.0",
66    "protractor": "~7.0.0",
67    "ts-node": "~8.3.0",
68    "typescript": "~4.2.4",
69    "@angular-devkit/architect": "^0.1200.0",
70    "firebase-tools": "^9.0.0",
71    "fuzzy": "^0.1.3",
72    "inquirer": "^6.2.2",
73    "inquirer-autocomplete-prompt": "^1.0.1",
74    "open": "^7.0.3",
75    "jsonc-parser": "^3.0.0"
76  },
77  "description": "An Ionic project"
78}
79import { NgModule } from '@angular/core';
80import { BrowserModule } from '@angular/platform-browser';
81import { RouteReuseStrategy } from '@angular/router';
82import { IonicModule, IonicRouteStrategy } from '@ionic/angular';
83import { AppRoutingModule } from './app-routing.module';
84import { AppComponent } from './app.component';
85import { ClientPageModule } from './client/client.module';
86import { environment } from '../environments/environment';
87import { AngularFireModule } from '@angular/fire';
88import { AngularFireAuthModule } from '@angular/fire/auth';
89import { AngularFireStorageModule } from '@angular/fire/storage';
90import { AngularFireDatabaseModule } from '@angular/fire/database';
91
92@NgModule({
93  declarations: [AppComponent],
94  entryComponents: [],
95  imports: [
96    BrowserModule,
97    IonicModule.forRoot(),
98    AppRoutingModule,
99    ClientPageModule,
100    AngularFireModule.initializeApp(environment.firebaseConfig),
101    AngularFireAuthModule,
102    AngularFireStorageModule,
103    AngularFireDatabaseModule
104  ],
105  providers: [{ provide: RouteReuseStrategy, useClass: IonicRouteStrategy }],
106  bootstrap: [AppComponent],
107})
108export class AppModule {}
109  "compileOnSave": false,
110  "compilerOptions": {
111    "baseUrl": "./",
112    "outDir": "./dist/out-tsc",
113    "sourceMap": true,
114    "declaration": false,
115    "downlevelIteration": true,
116    "experimentalDecorators": true,
117    "moduleResolution": "node",
118    "importHelpers": true,
119    "target": "es2015",
120    "module": "es2020",
121    "lib": ["es2018", "dom"]
122  },
123  "angularCompilerOptions": {
124    "enableI18nLegacyMessageIdFormat": false,
125    "strictInjectionParameters": true,
126    "strictInputAccessModifiers": true,
127    "strictTemplates": true,
128    "skipLibCheck": true 
129  }
130}
131import { AngularFireModule } from "@angular/fire/compat";
132import { AngularFireAuthModule } from "@angular/fire/compat/auth";
133import { AngularFireStorageModule } from '@angular/fire/compat/storage';
134import { AngularFirestoreModule } from '@angular/fire/compat/firestore';
135import { AngularFireDatabaseModule } from '@angular/fire/compat/database';
136

Source https://stackoverflow.com/questions/69128608

QUESTION

pymongo [SSL: CERTIFICATE_VERIFY_FAILED]: certificate has expired on Mongo Atlas

Asked 2022-Jan-29 at 22:03

I am using MongoDB (Mongo Atlas) in my Django app. All was working fine till yesterday, but today, when I ran the server, it showed me the following error on the console:

1Exception in thread django-main-thread:
2Traceback (most recent call last):
3  File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 973, in _bootstrap_inner
4    self.run()
5  File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 910, in run
6    self._target(*self._args, **self._kwargs)
7  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
8    fn(*args, **kwargs)
9  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run
10    self.check_migrations()
11  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\base.py", line 486, in check_migrations
12    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
13  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\executor.py", line 18, in __init__
14    self.loader = MigrationLoader(self.connection)
15  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 53, in __init__
16    self.build_graph()
17  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 220, in build_graph
18    self.applied_migrations = recorder.applied_migrations()
19  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 77, in applied_migrations
20    if self.has_table():
21  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 56, in has_table
22    tables = self.connection.introspection.table_names(cursor)
23  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 52, in table_names
24    return get_names(cursor)
25  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 47, in get_names
26    return sorted(ti.name for ti in self.get_table_list(cursor)
27  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\djongo\introspection.py", line 47, in get_table_list
28    for c in cursor.db_conn.list_collection_names()
29  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 880, in list_collection_names
30    for result in self.list_collections(session=session, **kwargs)]
31  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 842, in list_collections
32    return self.__client._retryable_read(
33  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1514, in _retryable_read
34    server = self._select_server(
35  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1346, in _select_server
36    server = topology.select_server(server_selector)
37  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 244, in select_server
38    return random.choice(self.select_servers(selector,
39  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 202, in select_servers
40    server_descriptions = self._select_servers_loop(
41  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 218, in _select_servers_loop
42    raise ServerSelectionTimeoutError(
43pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 6155f0c9148b07ff5851a1b3, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-01.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-02.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>]>
44

I am using djongo as the database engine

DATABASES = {
    'default': {
            'ENGINE': 'djongo',
            'NAME': 'DbName',
            'ENFORCE_SCHEMA': False,
            'CLIENT': {
                'host': 'mongodb+srv://username:password@cluster0.mny7y.mongodb.net/DbName?retryWrites=true&w=majority'
            }  
    }
}

And the following dependencies are being used in the app:

dj-database-url==0.5.0
Django==3.2.5
djangorestframework==3.12.4
django-cors-headers==3.7.0
gunicorn==20.1.0
psycopg2==2.9.1
pytz==2021.1
whitenoise==5.3.0
djongo==1.3.6
dnspython==2.1.0

What should be done in order to resolve this error?

ANSWER

Answered 2021-Oct-03 at 05:57

This is because a root CA that Let's Encrypt uses (and Mongo Atlas uses Let's Encrypt) expired on 2021-09-30, namely the "IdenTrust DST Root CA X3" one.

The fix is to manually install the "ISRG Root X1" and "ISRG Root X2" root certificates, and the "Let's Encrypt R3" intermediate one, into the Windows certificate store; they are linked from the official site: https://letsencrypt.org/certificates/

Copied from the comments: download the .der file from the first category, double-click it and follow the wizard to install it.

Source https://stackoverflow.com/questions/69397039

QUESTION

java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): ,

Asked 2022-Jan-18 at 08:15

I'm having a problem publishing my app on the Play Store after October 2021; the error says that the table media_store_extension doesn't exist. The thing is, I don't use SQLite in the project, so I have no idea what may be causing this exception.

The target SDK is 30, and the minimum is 26.

The full error:

1FATAL EXCEPTION: latency_sensitive_executor-thread-1
2Process: com.google.android.apps.photos, PID: 29478
3java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): , while compiling: SELECT id FROM media_store_extension ORDER BY id DESC LIMIT 100 OFFSET 0
4    at nqo.a(PG:3)
5    at aleu.run(PG:6)
6    at krv.a(PG:17)
7    at krw.run(Unknown Source:6)
8    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
9    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
10    at java.lang.Thread.run(Thread.java:764)
11    at ksa.run(PG:5)
12Caused by: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): , while compiling: SELECT id FROM media_store_extension ORDER BY id DESC LIMIT 100 OFFSET 0
13    at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method)
14    at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:890)
15    at android.database.sqlite.SQLiteConnection.prepare(SQLiteConnection.java:501)
16    at android.database.sqlite.SQLiteSession.prepare(SQLiteSession.java:588)
17    at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:58)
18    at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:37)
19    at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:46)
20    at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1392)
21    at android.database.sqlite.SQLiteDatabase.queryWithFactory(SQLiteDatabase.java:1239)
22    at android.database.sqlite.SQLiteDatabase.query(SQLiteDatabase.java:1110)
23    at agcm.a(PG:8)
24    at nnw.run(PG:17)
25    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:457)
26    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
27    ... 4 more
28

ANSWER

Answered 2021-Nov-18 at 11:41

This error is reported not only by Flutter developers, but also by Unity developers (https://forum.unity.com/threads/getting-an-odd-error-in-internal-android-build-after-updating-iap.1104352/ and https://forum.unity.com/threads/error-when-submitting-app-to-google-play.1098139/) and, in my case, for a native Android app.

We first got this error 6 months ago and applied the fix that was suggested by the unity guys:

aaptOptions {
    noCompress 'db'
    ...
}

However, yesterday we received the same error again, so the "fix" did not work for us.

The error occurs:

  1. (so far) only during internal testing
  2. only on Xiaomi Redmi 6A.
  3. from time to time (it is not reproduced each time)
  4. always in process com.google.android.apps.photos

The most reasonable explanation that I have seen so far is that the exception occurs when the testing bot attempts to take a screenshot.

This explains why the process is Google Photos', why the error is not reproduced each time and why it is "fixed" by just resubmitting a new build.

This also means that just ignoring the error should be OK.

Source https://stackoverflow.com/questions/69919198

QUESTION

How to solve FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore problem?

Asked 2022-Jan-11 at 15:08

I am trying to set up Firebase with next.js. I am getting this error in the console.

FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore

This is one of my custom hooks:

1import { onAuthStateChanged, User } from '@firebase/auth'
2import { doc, onSnapshot, Unsubscribe } from 'firebase/firestore'
3import { useEffect, useState } from 'react'
4import { auth, fireStore } from './firebase'
5
6export const useUserData = () => {
7  const [username, setUsername] = useState<string | null>(null)
8
9  const [currentUser, setCurrentUser] = useState<User | null>(null)
10
11  useEffect(() => {
12    let unsubscribe: void | Unsubscribe
13
14    onAuthStateChanged(auth, (user) => {
15      if (user) {
16        setCurrentUser(user)
17        // The Problem is inside this try blog
18        try {
19          // the onsnapshot function is causing the problem
20          console.log('firestore: ', fireStore)
21          unsubscribe = onSnapshot(doc(fireStore, 'users', user.uid), (doc) => {
22            setUsername(doc.data()?.username)
23          })
24        } catch (e) {
25          console.log(e.message)
26        }
27      } else {
28        setCurrentUser(null)
29        setUsername(null)
30      }
31    })
32
33    return unsubscribe
34  }, [currentUser])
35
36  return { currentUser, username }
37}
38

I also have this firebase.ts file where I initialized my firebase app

import { FirebaseApp, getApps, initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'
import { getFirestore } from 'firebase/firestore/lite'
import { getStorage } from 'firebase/storage'

const firebaseConfig = {
  apiKey: 'some-api',
  authDomain: 'some-auth-domain',
  projectId: 'some-project-id',
  storageBucket: 'some-storage-bucket',
  messagingSenderId: 'some-id',
  appId: 'some-app-id',
  measurementId: 'some-measurement-id',
}

let firebaseApp: FirebaseApp

if (!getApps.length) {
  firebaseApp = initializeApp(firebaseConfig)
}

const fireStore = getFirestore(firebaseApp)
const auth = getAuth(firebaseApp)
const storage = getStorage(firebaseApp)

export { fireStore, auth, storage }

I don't know whether the problem is in the project initialization. I am pretty sure the error is generated from my custom hook file. I also found out that there must be something wrong with the onSnapshot function. Am I passing the docRef incorrectly or something? What am I doing wrong here?

The console.log(firestore) log:

type: "firestore-lite"
_app: FirebaseAppImpl
_automaticDataCollectionEnabled: false
_config: {name: "[DEFAULT]", automaticDataCollectionEnabled: false}
_container: ComponentContainer {name: "[DEFAULT]", providers: Map(15)}
_isDeleted: false
_name: "[DEFAULT]"
_options:
apiKey: 'some-api'
authDomain: 'some-auth-domain'
projectId: 'some-project-id'
storageBucket: 'some-storage-bucket'
messagingSenderId: 'some-id'
appId: 'some-app-id'
measurementId: 'some-measurement-id'
[[Prototype]]: Object
automaticDataCollectionEnabled: (...)
config: (...)
container: (...)
isDeleted: (...)
name: (...)
options: (...)
[[Prototype]]: Object
_credentials: Q {auth: AuthInterop}
_databaseId: H {projectId: "next-firebase-fireship", database: "(default)"}
_persistenceKey: "(lite)"
_settings: ee {host: "firestore.googleapis.com", ssl: true, credentials: undefined, ignoreUndefinedProperties: false, cacheSizeBytes: 41943040, …}
_settingsFrozen: false
app: (...)
_initialized: (...)
_terminated: (...)

ANSWER

Answered 2022-Jan-07 at 19:07

Using getFirestore from the lite library will not work with onSnapshot. You are importing getFirestore from the lite version:

import { getFirestore } from 'firebase/firestore/lite'

Change the import to:

import { getFirestore } from 'firebase/firestore'

From the documentation,

The onSnapshot method and DocumentChange, SnapshotListenerOptions, SnapshotMetadata, SnapshotOptions and Unsubscribe objects are not included in the lite version.
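For completeness, here is a minimal sketch of what the initialization file could look like once it targets the full SDK (the config values are the question's placeholders). As an aside, getApps is a function, so the guard presumably should call getApps().length rather than read .length off the function itself:

import { FirebaseApp, getApp, getApps, initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'
import { getFirestore } from 'firebase/firestore' // full SDK, not 'firebase/firestore/lite'
import { getStorage } from 'firebase/storage'

const firebaseConfig = {
  apiKey: 'some-api',
  authDomain: 'some-auth-domain',
  projectId: 'some-project-id',
  storageBucket: 'some-storage-bucket',
  messagingSenderId: 'some-id',
  appId: 'some-app-id',
  measurementId: 'some-measurement-id',
}

// Reuse an already-initialized app (e.g. after a Next.js hot reload) instead of
// initializing a second time; note that getApps() is called as a function here
const firebaseApp: FirebaseApp = getApps().length ? getApp() : initializeApp(firebaseConfig)

const fireStore = getFirestore(firebaseApp)
const auth = getAuth(firebaseApp)
const storage = getStorage(firebaseApp)

export { fireStore, auth, storage }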


Another reason for this error to show up could be passing an invalid first argument to the collection() or doc() functions. They both take a Firestore instance as the first argument.

import { collection, getFirestore } from 'firebase/firestore'

// Ensure that "db" is defined and initialized
const db = getFirestore();
// console.log(db);

const colRef = collection(db, "collection_name");
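The same rule applies to doc(). As a rough usage sketch mirroring the question's hook (the 'users' path and the document id are placeholders), doc() takes the Firestore instance first, and onSnapshot returns an Unsubscribe function that should be called when the listener is no longer needed:

import { doc, getFirestore, onSnapshot } from 'firebase/firestore'

const db = getFirestore()

// doc() takes the Firestore instance first, then the collection path and the document id
const userRef = doc(db, 'users', 'some-user-id')

// onSnapshot returns an Unsubscribe function; keep it around for cleanup
// (for example, return it from a React useEffect)
const unsubscribe = onSnapshot(userRef, (snapshot) => {
  console.log(snapshot.data()?.username)
})

// ...later, when the component unmounts or the user signs out:
// unsubscribe()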

Source https://stackoverflow.com/questions/69047904

QUESTION

How do I get details of a veracode vulnerability report?

Asked 2022-Jan-07 at 21:46

How do I get details of a veracode vulnerability report?

I'm a maintainer of a popular JS library, Ramda, and we've recently received a report that the library is subject to a prototype pollution vulnerability. This has been tracked back to a veracode report that says:

ramda is vulnerable to prototype pollution. An attacker can inject properties into existing construct prototypes via the _curry2 function and modify attributes such as __proto__, constructor, and prototype.

I understand what they're talking about for Prototype Pollution. A good explanation is at snyk's writeup for lodash.merge. Ramda's design is different, and the obvious analogous Ramda code is not subject to this sort of vulnerability. That does not mean that no part of Ramda is subject to it. But the report contains no details, no code snippet, and no means to challenge their findings.
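For anyone who hasn't seen the pattern, here is a generic, illustrative sketch (this is not Ramda code, just the classic shape described in the lodash.merge writeup): a naive recursive merge that walks attacker-controlled keys can be steered onto Object.prototype through a __proto__ key.

// Illustration only -- not Ramda code
function naiveMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    const value = source[key]
    if (typeof value === 'object' && value !== null) {
      // For a "__proto__" key, target[key] resolves to Object.prototype,
      // so the recursion starts writing onto the shared prototype
      target[key] = naiveMerge(target[key] ?? {}, value)
    } else {
      target[key] = value
    }
  }
  return target
}

const payload = JSON.parse('{"__proto__": {"polluted": true}}')
naiveMerge({}, payload)
console.log(({} as any).polluted) // true -- every plain object now appears to have it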

The details of their description are clearly wrong. _curry2 could not possibly be subject to this problem. But as that function is used as a wrapper to many other functions, it's possible that there is a real vulnerability hidden by the reporter's misunderstanding.

Is there a way to get details of this error report? A snippet of code that demonstrates the problem? Anything? I have filled out their contact form. An answer may still be coming, as it was only 24 hours ago, but I'm not holding my breath -- it seems to be mostly a sales form. All the searching I've done leads to information about how to use their security tool and pretty much nothing about how their custom reports are created. And I can't find this in CVE databases.

ANSWER

Answered 2022-Jan-07 at 21:46

Ok, so to answer my own question, here's how to get the details on a Veracode vulnerability report in less than four weeks and in only fifty-five easy steps.


Pre-work Day 1
  • Receive a comment on the issue that says that the user has received

    a VULN ticket to fix this Prototype Pollution vulnerability found in ramda.

  • Carry on a discussion regarding this comment to learn that there is a report that claims that

    ramda is vulnerable to prototype pollution. An attacker can inject properties into existing construct prototypes via the _curry2 function and modify attributes such as __proto__, constructor, and prototype.

    and eventually learn that this is due to a report from the software security company Veracode.

Days 2 & 3
  • Examine that report to find that it has no details, no explanation of how to trigger the vulnerability, and no suggested fix.

  • Examine the report and other parts of the Veracode site to find there is no public mechanism to challenge such a report.

Day 4
  • Report back to the library's issue that the report must be wrong, as the function mentioned could not possibly generate the behavior described.

  • Post an actual example of the vulnerability under discussion and a parallel snippet from the library to demonstrate that it doesn't share the problem.

  • Find Veracode's online support form, and submit a request for help. Keep your expectations low, as this is probably for the sales department.

  • Post a StackOverflow Question2 asking how to find details of a Veracode vulnerability report, using enough details that if the community has the knowledge, it should be easy to answer.

Days 5 & 6
  • Try to enjoy your Friday and Saturday. Don't obsessively check your email to see if Veracode has responded. Don't visit the StackOverflow question every hour to see if anyone has posted a solution. Really, don't do these things; they don't help.
Day 7
  • Add a 250-reputation point bounty to the StackOverflow question, trying to get additional attention from the smart people who must have dealt with this before.
Day 8
  • Find direct email support addresses on the Veracode site, and send an email asking for details of the supposed vulnerability, a snippet that demonstrates the issue, and procedures to challenge their findings.
Day 9
  • Receive a response from a Veracode Support email address that says, in part,

    Are you saying our vuln db is not correct per your github source? If so, I can send it to our research team to ensure it looks good and if not, to update it.

    As for snips of code, we do not provide that.

  • Reply, explaining that you find the report missing the details necessary to challenge it, but that yes, you expect it is incorrect.

  • Receive a response that this has been "shot up the chain" and that you will be hearing from them soon.

Days 10 - 11
  • Again, don't obsessively check your email or the StackOverflow question. But if you do happen to glance at StackOverflow, notice that while there are still no answers to it, there are enough upvotes to cover over half the cost of the bounty. Clearly you're not alone in wanting to know how to do this.
Day 12
  • Receive an email from Veracode:

    Thank you for your interest in Application Security and Veracode.

    Do you have time next week to connect?

    Also, to make sure you are aligned with the right rep, where is your company headquartered?

  • Respond that you're not a potential customer and explain again what you're looking for.

  • Add a comment to the StackOverflow question explaining where the process has gotten to and expressing your frustration.

Days 13 - 14
  • Watch another weekend go by without any way to address this concern.

  • Get involved in a somewhat interesting discussion about prototype pollution in the comments to the StackOverflow post.

Day 15
  • Receive an actually helpful email from Veracode, sent by someone new, whose signature says he's a sales manager. The email will look like this:

    Hi Scott, I asked my team to help out with your question, here was their response:

    We have based this artifact from the information available in https://github.com/ramda/ramda/pull/3192. In the Pull Request, there is a POC (https://jsfiddle.net/3pomzw5g/2/) clearly demonstrating the prototype pollution vulnerability in the mapObjIndexed function. In the demo, the user object is modified via the __proto__​ property and is
    considered a violation to the Integrity of the CIA triad. This has been reflected in our CVSS scoring for this vulnerability in our vuln db.

    There is also an unmerged fix for the vulnerability which has also been
    included in our artifact (https://github.com/ramda/ramda/pull/3192/commits/774f767a10f37d1f844168cb7e6412ea6660112d )

    Please let me know if there is a dispute against the POC, and we can look further into this.

  • Try to avoid banging your head against the wall for too long when you realize that the issue you thought might have been raised by someone who'd seen the Veracode report was instead the source of that report.

  • Respond to this helpful person that yes you will have a dispute for this, and ask if you can be put directly in touch with the relevant Veracode people so there doesn't have to be a middleman.

  • Receive an email from this helpful person -- who needs a name, let's call him "Kevin" -- receive an email from Kevin adding to the email chain the research team. (I told you he was helpful!)

  • Respond to Kevin and the team with a brief note that you will spend some time to write up a response and get back to them soon.

  • Look again at the Veracode Report and note that the description has been changed to

    ramda is vulnerable to prototype pollution. An attacker is able to inject and modify attributes of an object through the mapObjIndexed function via the proto property.

    but note also that it still contains no details, no snippets, no dispute process.

  • Receive a bounced-email notification because that research team's email is for internal Veracode use only.

  • Laugh because the only other option is to cry.

  • Tell Kevin what happened and make sure he's willing to remain as an intermediary. Again he's helpful and will agree right away.

  • Spend several hours writing up a detailed response, explaining what prototype pollution is and how the examples do not display this behavior. Post it ahead of time on the issue. (Remember the issue? This is a story about the issue.3) Ask those reading for suggestions before you send the email... mostly as a way to ensure you're not sending this in anger.

  • Go ahead and email it right away anyway; if you said something too angry you probably don't want to be talked out of it now, anyhow.

  • Note that the nonrefundable StackOverflow bounty has expired without a single answer being offered.

Days 16 - 21
  • Twiddle your thumbs for a week, but meanwhile...

  • Receive a marketing email from Veracode, who has never sent you one before.

  • Note that Veracode has again updated the description to say

    ramda allows object prototype manipulation. An attacker is able to inject and modify attributes of an object through the mapObjIndexed function via the proto property. However, due to ramda's design where object immutability is the default, the impact of this vulnerability is limited to the scope of the object instead of the underlying object prototype. Nonetheless, the possibility of object prototype manipulation as demonstrated in the proof-of-concept under References can potentially cause unexpected behaviors in the application. There are currently no known exploits.

    If that's not clear, a translation would be, "Hey, we reported this, and we don't want to back down, so we're going to say that even though the behavior we noted didn't actually happen, the behavior that's there is still, umm, err, somehow wrong."

  • Note that a fan of the library whose employer has a Veracode account has been able to glean more information from their reports. It turns out that their details are restricted to logged-in users, leaving it entirely unclear how they think such vulnerabilities should be fixed.

Day 22
  • Send a follow-up email to Kevin4 saying

    I'm wondering if there is any response to this.

    I see that the vulnerability report has been updated but not removed.
    I still dispute the altered version of it. If this behavior is a true vulnerability, could you point me to the equivalent report on JavaScript's Object.assign, which, as demonstrated earlier, has the exact same issue as the function in question.

    My immediate goal is to see this report retracted. But I also want to point out the pain involved in this process, pain that I think Veracode could fix:

    I am not a customer, but your customers are coming to me as Ramda's maintainer to fix a problem you've reported. That report really should have enough information in it to allow me to confirm the vulnerability reported. I've learned that such information is available to a logged-in customer. That doesn't help me or others in my position to find the information. Resorting to email and filtering it through your sales department, is a pretty horrible process. Could you alter your public reports to contain or point to a proof of concept of the vulnerability?
    And could you further offer in the report some hint at a dispute process?

Day 23
  • Receive an email from the still-helpful Kevin, which says

    Thanks for the follow up [ ... ], I will continue to manage the communication with my team, at this time they are looking into the matter and it has been raised up to the highest levels.

    Please reach back out to me if you don’t have a response within 72 hrs.

    Thank you for your patience as we investigate the issue, this is a new process for me as well.

  • Laugh out loud at the notion that he thinks you're being patient.

  • Respond, apologizing to Kevin that he's caught in the middle, and read his good-natured reply.

Day 25
  • Hear back from Kevin that your main objective has been met:

    Hi Scott, I wanted to provide an update, my engineering team got back
    to me with the following:

    “updating our DB to remove the report is the final outcome”

    I have also asked for them to let me know about your question regarding the ability to contend findings and will relay that back once feedback is received.

    Otherwise, I hope this satisfies your request and please let me know if any further action is needed from us at this time.

  • Respond gratefully to Kevin and note that you would still like to hear about how they're changing their processes.

  • Reply to your own email to apologize to Kevin for all the misspelling that happened when you try to type anything more than a short text on your mobile device.

  • Check with that helpful Ramda user with Veracode log-in abilities whether the site seems to be updated properly.

  • Reach out to that same user on Twitter when he hasn't responded in five minutes. It's not that you're anxious and want to put this behind you. Really it's not. You're not that kind of person.

  • Read that user's detailed response explaining that all is well.

  • Receive a follow-up from the Veracode Support email address telling you that

    After much consideration we have decided to update our db to remove this report.

    and that they're closing the issue.

  • Laugh about the fact that they are sending this after what seems likely to be the close of business for the week (7:00 PM your time on a Friday).

  • Respond politely to say that you're grateful for the result, but that you would still like to see their dispute process modernized.

Day 27
  • Write a 2257-word answer5 to your own Stack Overflow question explaining in great detail the process you went through to resolve this issue.

And that's all it takes. So the next time you run into this, you can solve it too!




Update

(because you knew it couldn't be that easy!)

Day 61
  • Receive an email from a new Veracode account executive which says

    Thanks for your interest! Introducing myself as your point of contact at Veracode.

    I'd welcome the chance to answer any questions you may have around Veracode's services and approach to the space.

    Do you have a few minutes free to touch base? Please let me know a convenient time for you and I'll follow up accordingly.

  • Politely respond to that email suggesting a talk with Kevin and including a link to this list of steps.



1 This is standard behavior with Ramda issues, but it might be the main reason Veracode chose to report this.

2 Be careful not to get into an infinite loop. This recursion does not have a base case.

3 Hey, this was taking place around Thanksgiving. There had to be an Alice's Restaurant reference!

4 If you haven't yet found a Kevin, now would be a good time to insist that Veracode supply you with one.

5 Including footnotes.

Source https://stackoverflow.com/questions/69936667

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Database

Tutorials and Learning Resources are not available at this moment for Database
