Explore all Database open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Database

elasticsearch

Elasticsearch 8.1.3

redis

7.0-rc3

prometheus

2.35.0-rc1 / 2022-04-14

etcd

v3.5.0

tidb

tidb-server v6.0.0

Popular Libraries in Database

elasticsearch

by elastic (Java)

59266 stars | License: NOASSERTION

Free and Open, Distributed, RESTful Search Engine

redis

by redis (C)

54360 stars | License: BSD-3-Clause

Redis is an in-memory database that persists on disk. The data model is key-value, but many different kind of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

prometheus

by prometheus (Go)

42027 stars | License: Apache-2.0

The Prometheus monitoring system and time series database.

etcd

by etcd-io (Go)

37139 stars | License: Apache-2.0

Distributed reliable key-value store for the most critical data of a distributed system

tidb

by pingcap (Go)

31064 stars | License: Apache-2.0

TiDB is an open source distributed HTAP database compatible with the MySQL protocol

leveldb

by google (C++)

28077 stars | License: BSD-3-Clause

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

dbeaver

by dbeaver (Java)

26064 stars | License: Apache-2.0

Free universal database tool and SQL client

sequelize

by sequelize (JavaScript)

26011 stars | License: MIT

An easy-to-use and promise-based multi SQL dialects ORM tool for Node.js | Postgres, MySQL, MariaDB, SQLite, MSSQL, Snowflake & DB2

rethinkdb

by rethinkdb (C++)

24920 stars | License: Apache-2.0

The open-source database for the realtime web.

Trending New libraries in Database

sqlmodel

by tiangolo (Python)

7183 stars | License: MIT

SQL databases in Python, designed for simplicity, compatibility, and robustness.

litestream

by benbjohnson (Go)

4979 stars | License: Apache-2.0

Streaming replication for SQLite.

whoogle-search

by benbusby (Python)

4865 stars | License: MIT

A self-hosted, ad-free, privacy-respecting metasearch engine

OpenSearch

by opensearch-project (Java)

4846 stars | License: Apache-2.0

🔎 Open source distributed and RESTful search engine.

oceanbase

by oceanbase (C++)

4231 stars | License: NOASSERTION

OceanBase is an enterprise distributed relational database with high availability, high performance, horizontal scalability, and compatibility with SQL standards.

databend

by datafuselabs (Rust)

3758 stars | License: Apache-2.0

A modern Elasticity and Performance cloud data warehouse, activate your object storage for real-time analytics.

Kats

by facebookresearch (Python)

3638 stars | License: MIT

Kats, a kit to analyze time series data, a lightweight, easy-to-use, generalizable, and extendable framework to perform time series analysis, from understanding the key statistics and characteristics, detecting change points and anomalies, to forecasting future trends.

pml-book

by probml (Jupyter Notebook)

3037 stars | License: MIT

"Probabilistic Machine Learning" - a book series by Kevin Murphy

absurd-sql

by jlongster (JavaScript)

2382 stars | License: MIT

sqlite3 in ur indexeddb (hopefully a better backend soon)

Top Authors in Database

1

microsoft

69 Libraries

26370 stars

2

apache

55 Libraries

62850 stars

3

oracle

27 Libraries

6702 stars

4

simonw

25 Libraries

7864 stars

5

PacktPublishing

24 Libraries

529 stars

6

maxdemarzi

23 Libraries

741 stars

7

aws-samples

22 Libraries

440 stars

8

cloudflare

22 Libraries

5616 stars

9

elastic

21 Libraries

59840 stars

10

ankane

20 Libraries

6042 stars


Trending Kits in Database

Python has quickly gone up the ranks to become the most sought-after language for statistics and data science. It is a high-level, object-oriented language.

We also have a thriving open-source Python community that keeps developing various unique libraries for maths, data analysis, mining, exploration, and visualization.


Keeping that in mind, here are some of the best Python libraries for working with statistical data.

Pandas is a high-performance Python package with easy-to-grasp, expressive data structures. It is designed for rapid data manipulation and visualization and is one of the best tools for data munging or wrangling; this 30k-star-plus GitHub repository also ships time series-specific functionality.

Seaborn is essentially an extension of the Matplotlib plotting library with various advanced features and shorter syntax. With Seaborn, you can examine relationships between variables, observe and compute aggregate statistics, and plot high-level, multi-plot grids.

We also have Prophet, a forecasting procedure developed in Python and R. It is quick and offers automated forecasts for time series data for use by analysts.

pandas:  

  • Pandas offers robust structures like DataFrames for easy storage and manipulation of data.  
  • Efficient tools for aligning and managing data, simplifying data cleaning and preparation.  
  • Provides diverse functions for flexible data manipulation and analysis.  


prophet:  

  • Specialized in predicting future values in time series data.  
  • Can handle missing data and outliers effectively for reliable forecasting.  
  • Captures recurring patterns in data, especially those tied to seasons or cycles.  

seaborn:  

  • Simplifies the creation of statistical graphics for a better understanding of data.  
  • Seamlessly works with Pandas DataFrames for easy data visualization.  
  • Allows users to tailor plots for a visually appealing presentation.  

statsmodels:  

  • Offers a variety of statistical models and hypothesis tests.  
  • Well-suited for economic and financial data analysis.  
  • Provides tools to visualize and summarize statistical information.

altair:  

  • Enables concise and declarative creation of interactive visualizations.  
  • Leverages a powerful JSON specification for describing visualizations.  
  • Emphasizes simplicity and minimal code for creating sophisticated visualizations.  

pymc3:  

  • Allows expressing complex statistical models using a probabilistic programming approach.  
  • Focuses on Bayesian statistical methods for uncertainty estimation.  
  • Integrates with Aesara for efficient symbolic mathematical expressions.  

imbalanced-learn:  

  • Tools for addressing imbalances in class distribution within machine learning datasets.  
  • Integrates smoothly with Pandas DataFrames for preprocessing imbalanced data.  
  • Offers flexibility through customizable algorithms for imbalanced data handling.  

sktime:  

  • Specializes in analyzing and forecasting time series data.  
  • Provides a modular framework for easy extension and customization.  
  • Seamlessly integrates with other machine learning and deep learning libraries.  

httpstat:  

  • Visualizes statistics related to HTTP requests made with the curl tool.  
  • Implemented as a compact Python script for simplicity.  
  • Works seamlessly with Python 3 for compatibility with the latest Python environments. 

darts:  

  • Tools for manipulating time series data facilitating data preprocessing.  
  • Specialized in making predictions on time series data.  
  • Integrates with deep learning frameworks for advanced forecasting using neural networks.  

gluon-ts:  

  • Focuses on modeling uncertainty in time series predictions.  
  • Integrates with Apache MXNet for efficient deep learning capabilities.  
  • Allows users to experiment with various modeling approaches and customize their models. 

selfspy:  

  • Monitors and logs personal data continuously for self-analysis.  
  • Compatible with various platforms for versatility in data tracking.  
  • Aids in tracking and analyzing personal habits and activities for self-improvement.

stumpy:  

  • Implements algorithms for efficient time series analysis using matrix profiles.  
  • Identifies recurring patterns or motifs in time series data.  
  • Utilizes parallel computing for faster and more efficient computations.  

gitinspector:  

  • Analyzes and provides insights into Git repositories.  
  • Features an interactive command-line interface for user-friendly exploration.  
  • Allows users to customize analysis output format.  

Mycodo:  

  • Logs data from sensors for environmental monitoring.  
  • Provides a user-friendly interface accessible through a web browser.  
  • Enables automation and control of devices based on collected sensor data.  

pyFlux:  

  • Implements models for probabilistic time series analysis.  
  • Scales efficiently for large datasets and complex models.  
  • Provides tools for diagnosing and evaluating the performance of statistical models.  

sweetviz:  

  • Automates the process of exploring and analyzing datasets.  
  • Allows for easy comparison of two datasets to identify differences.  
  • Provides flexibility in generating and customizing analysis reports.  

vectorbt:  

  • Enables efficient backtesting of trading strategies using vectorized operations.  
  • Provides tools for analyzing and visualizing trading strategy performance.  
  • Allows for flexible management of investment portfolios. 

gitStats:  

  • Analyzes and presents historical metrics related to code development.  
  • Generates visual representations of code-related metrics.  
  • Includes metrics related to code contributor diversity.  

pmdarima:  

  • Automatically selects suitable ARIMA models for time series data.  
  • Decomposes time series data into seasonal components for analysis.  
  • Integrates with the scikit-learn library for seamless machine learning workflows.

covid-19:  

  • Provides up-to-date information on the COVID-19 pandemic.  
  • Offers data at both global and country-specific levels.  
  • Presents COVID-19 data in a visual format for better understanding.  

spacy-models:  

  • Includes pre-trained natural language processing models for various tasks.  
  • Supports multiple languages for broader applicability.  
  • Allows users to customize and fine-tune models for specific tasks.

nba_py:  

  • Retrieves data related to the National Basketball Association (NBA).  
  • Integrates seamlessly with NBA APIs for data access.  
  • Provides tools for analyzing and interpreting statistical aspects of NBA data.  

pingouin:  

  • Offers a library for conducting various statistical analyses.  
  • Includes tools for analysis of variance (ANOVA) and regression analysis.  
  • Provides measures for quantifying the magnitude of observed effects in statistical tests.  

FAQ

1. What makes Pandas a valuable tool for data manipulation and visualization?  

Pandas is a high-performance Python package with expressive data structures. It carries out rapid data manipulation and visualization. Its design and specialized time series functions make it ideal for data munging.  

   

2. How does Seaborn extend the functionality of the Matplotlib plotting library?  

Seaborn is an extension of Matplotlib, offering advanced features and shorter syntax. It enables users to determine relationships between variables, observe aggregate statistics, and plot high-level, multi-plot grids. This provides a more streamlined approach to data visualization.  

   

3. What unique features does Seaborn bring to data visualization?  

Seaborn provides advanced features for statistical data visualization. This includes 

  • the ability to determine relationships between variables, 
  • observe aggregate statistics, and 
  • easily create high-level and multi-plot grids. 

Its syntax is designed for simplicity and efficiency in plotting.  

   

4. What is the role of Prophet in time series forecasting, and why is it notable?  

Prophet is a forecasting procedure developed in Python and R. It offers quick and automated forecasts for time series data. It is user-friendly for analysts and generates accurate forecasts. It does not require extensive manual intervention.  

   

5. How can the Python community contribute to developing and improving these libraries?  

The Python community can contribute to library development by participating in open-source projects, submitting bug reports, and engaging in discussions. Contributing code, documentation, or insights in forums continuously enhances these libraries. 

Around the globe, people seek the help of doctors when they are ill, but doctors are not always available. The Cancer Disease Prediction application provides end-user support and online consultation. It allows users to share their health-related metrics for cancer prediction and processes patient details to check for conditions that could be associated with them. Intelligent data mining techniques help estimate the most likely state of a patient from those details. The steps involved in this kit are: 1. Data Preparation, 2. Data Mining, 3. Pattern Recognition, and 4. Knowledge Representation. Based on the result, the system automatically suggests doctors relevant to the diagnosis for further treatment and allows users to view the doctors' details. Listed below are the best libraries that can help with data mining and building a cancer prediction system.


Python time series analysis libraries offer features for working with time series data. They provide functions to import, export, and manipulate time series, support operations like aggregation, resampling, and cleaning, and can create various visualizations such as scatter plots, histograms, heat maps, and time series plots. 


These libraries cover everything from data cleaning and visualization to statistical forecasting and analysis. They offer statistical functions like ARIMA modeling, seasonal decomposition, and regression analysis, along with seasonality modeling, outlier detection, and trend estimation, and they support time series classification, regression, and clustering. They can be integrated with other Python packages to offer more functionality, have extensive documentation, and are supported by active communities of users.


Here are the 23 best Python Time Series Analysis Libraries for helping developers:

sktime: 

  • Is a Python machine learning library for time series analysis.  
  • Offers various models and algorithms, preprocessing, and feature engineering of time series data.  
  • Includes various algorithms and models like time series classification, regression, forecasting, and clustering.  
  • Includes preprocessing and feature engineering tools. 
  • Includes tools like aggregating, scaling, and transforming time series data. 

darts: 

  • Is a Python library for time series modeling and forecasting. 
  • Offers various models and methods like classical statistical and modern machine learning models. 
  • Includes various models and algorithms like ARIMA, exponential smoothing, Prophet, LSTM, and more. 
  • Includes tools like functions for scaling, rolling windows, and differencing.  

autogluon:

  • Is an open source Python library for automated machine learning (AutoML).
  • Is designed to offer an accessible interface to train and deploy machine learning models.
  • Includes automated model selection and hyperparameter tuning using gradient-based and Bayesian optimization.
  • Is a powerful tool for automating machine learning tasks.

gluonts: 

  • Is an open source Python library for time series forecasting. 
  • Offers various algorithms for time series analysis and deep learning models. 
  • Includes various deep learning models like LSTNet, Transformer, and deepAR. 
  • Includes visualization tools like probability distribution plots, time series plots, and more. 

Informer2020:

  • Is an open source Python library for time series forecasting.
  • Includes a deep learning model.
  • Helps handle long time series with complex seasonality and patterns. 
  • Includes support for both multivariate and univariate time series data.
  • Is a powerful and specialized library for time series forecasting with deep learning. 

Merlion:

  • Is an open source Python library for time series forecasting and analysis.
  • Is designed to offer an extensive and modular framework.
  • Helps build, evaluate, and deploy time series models.
  • Includes various models like machine learning, deep learning, and traditional statistical models.
  • Includes built-in support for change point detection, anomaly detection, and other tasks.

neural_prophet: 

  • Is an open source Python library for time series forecasting. 
  • Includes various neural network models like recurrent, convolutional, and feedforward networks. 
  • Offers built-in uncertainty estimation support, allowing users to generate probabilistic forecasts.   
  • Includes various performance metrics like mean absolute error, mean squared error and accuracy. 

vectorbt: 

  • Is an open source Python library for analyzing algorithmic trading strategies and backtesting. 
  • Offers various tools for analyzing financial time series data. 
  • Includes tools for backtesting trading methods like calculating performance metrics. 
  • Includes tools for generating visualizations, and simulating trades. 
  • Supports various financial instruments like futures, stocks, cryptocurrencies, and options. 

forecasting: 

  • Offers various machine learning and traditional models for time series analysis. 
  • Includes various traditional models like Exponential smoothing, seasonal decomposition, and ARIMA. 
  • Offers various models like Gradient boosting, neural networks and random forests. 
  • Includes built-in preprocessing and data cleaning support. 
  • Supports functions like filtering outliers, handling missing data, and more. 

statsforecast: 

  • Offers various statistical models for time series forecastings like SARIMA, ARIMA, and VAR. 
  • Includes methods for model evaluation and selection like Akaike and Bayesian Information Criterion. 
  • Offers tools for handling missing data and performing seasonal decomposition. 

tslearn: 

  • Is a Python library for machine learning tasks and time series analysis on time series data. 
  • Offers various tools for feature extraction, model selection and evaluation, and data preprocessing. 
  • Includes algorithms for time series classification, clustering, regression, and forecasting. 

Mycodo: 

  • Is a platform for automating and monitoring aquaponic and hydroponic systems. 
  • Allows users to create complex automation workflows using control devices and visual interfaces. 
  • Includes a web interface for controlling and monitoring your system. 
  • Offers various features like graphing, alerting, and data logging.  

pyflux: 

  • Is a Python library for time series forecasting and analysis. 
  • Offers various statistical models like state space, dynamic regression, and ARIMA models. 
  • Includes tools for model evaluation, selection, and visualization. 
  • Offers various other models like Dynamic Linear Regression, GARCH, and Local Level models. 

hypertools:

  • Is a Python library for visualizing high-dimensional data.
  • Offers various tools for analyzing and exploring high-dimensional datasets.
  • Offers tools for dimensionality reduction, clustering, and visualization.
  • Offers several dimensionality reduction methods like t-SNE and UMAP.
  • Includes clustering algorithms like Spectral Clustering and K-means for grouping similar data points.

alibi-detect:

  • Is an open source Python library for outlier and anomaly detection. 
  • Offers algorithms for detecting anomalies and outliers.
  • Offers algorithms like statistical, rule-based, deep, and shallow learning methods. 
  • Includes various algorithms like Local Outlier Factor, One-Class SVM, and Isolation Forest. 
  • Offers several explainability methods like Counterfactual explanations algorithms and Anchors algorithms.

orbit: 

  • Is a Python library for probabilistic time series forecasting. 
  • Offers various statistical models like deep learning and Bayesian models.  
  • Includes models like Deep State Space, Gaussian Process Regression, and Bayesian Structural Model.  
  • Allows users to incorporate uncertainty for long-term or forecasting in volatile environments. 

carbon:

  • Is a Python library for working with times and dates.
  • Offers a simple and intuitive API for manipulating times and dates.
  • Support for localization and time zones. 
  • Includes features like interval calculations, time zone conversion, and human-readable date formatting. 
  • Includes several features like generating a range of dates.
  • Includes working with Unix timestamps and computing the difference between two dates.

pyts:

  • Is a Python library for time series classification and analysis.
  • Uses machine learning methods.
  • Offers various tools for transforming, analyzing, and preprocessing time series data.
  • Includes several methods for transforming data into a format.
  • Includes algorithms like Continuous Wavelet Transform, Symbolic, and Piecewise Aggregate Approximation methods. 

flow-forecast:

  • Helps with time series anomaly detection and forecasting in water distribution systems.
  • Offers various tools for modeling, preprocessing, and visualizing time series data.
  • Includes functions like time series normalization, outlier removal, and aggregation.
  • Can be used for making predictions based on historical time series data.

pmdarima:

  • Is a Python library for fitting and selecting ARIMA models to time series data.
  • Offers an interface for fitting ARIMA models.
  • Offers tools for selecting the optimal model parameters.
  • Offers methods like Bayesian and Akaike Information Criterion.

neuralforecast: 

  • Is a Python library for time series forecasting using neural networks. 
  • Offers an interface for building and training models for time series data. 
  • Provides tools for visualizing and evaluating the model’s performance. 
  • Includes various models like Convolutional Neural Networks, Long Short-Term Memory, and Multi-Layer Perceptron.  

whisper:

  • Is a database library for storing time series data.
  • Is designed for handling large volumes of data with high write and read throughput.
  • Uses a fixed-size database format where the data is stored in archives that can cover different time ranges.
  • Includes various tools for querying and manipulating data.
  • Includes tools like calculating the sum and average over a specific time range.

arch: 

  • Is a Python library for econometric time series modeling, forecasting, and analysis. 
  • Offers various methods and models like ARCH and Generalized Autoregressive Conditional Heteroscedasticity (GARCH). 
  • Includes several functions for modeling time data. 
  • Includes functions for estimating the parameters of the models. 
  • Includes tools for simulating data from the models and forecasting future values.   

Go database libraries offer a generic SQL database interface that supports databases like MySQL, SQLite, PostgreSQL, and many more. These libraries make it effortless to connect to various databases and manage those connections, and they support executing SQL queries and prepared statements for easy, secure database access.  


Go database libraries also offer a simple, intuitive way to manage database transactions, allowing several SQL statements to execute as a single unit of work, and they provide detailed error handling so developers get more information about what went wrong. Support for prepared statements helps prevent SQL injection attacks and makes interacting with databases more secure. Because the interface is generic and supports multiple database drivers, it is a flexible and versatile choice for building database-driven applications.  


Here is the list of the 13 best Go Database libraries that are handpicked to help developers: 

prometheus: 

  • Is designed for collecting metrics from various sources, storing, and querying them to provide insights into a system's health and performance.  
  • Offers a set of tools to build applications that expose metrics in a format that Prometheus can understand.  
  • Supports several metrics like gauges, summaries, counters, and histograms.  

tidb: 

  • Is an open source, MySQL-compatible, distributed RDBMS built for scalability and performance. 
  • Offers a way for executing SQL queries against TiDB databases. 
  • Offers a way to create, modify, and delete database schemas, indexes, and tables.  

cockroach: 

  • Is an open source, distributed SQL DBMS designed to provide reliable, scalable, and universally available transactional data storage.  
  • Offers a way for executing SQL queries against CockroachDB databases.  
  • Offers a simple and intuitive way to manage transactions in CockroachDB databases, allowing you to execute multiple SQL statements as a single unit of work. 

influxdb: 

  • Offers an easy-to-use interface to perform common database operations like creating and deleting databases, reading, and writing data, and managing users.  
  • Is available in various programming languages like Java, JavaScript, Go, and Python.  
  • Every library implementation offers a set of classes and functions which can be used for interacting with InfluxDB. 

dgraph: 

  • Is a distributed graph database that supports a flexible query and schema and is optimized for handling large-scale graph data.  
  • Offers an easy-to-use interface to perform common database operations like creating and deleting edges and nodes, reading, and writing data, and managing indexes. 
  • Includes writing data to the database, querying data from the database, and managing schema and indexes. 

milvus: 

  • Is an open source vector database designed for storing and searching large-scale vector data like videos, images, and audio.  
  • Supports similarity search, allowing users to search for similar vectors in a dataset.  
  • Is a software library that will allow developers to interact with Milvus from within their programs.  

sqlx: 

  • Is an open source library that offers an enhanced interface for working with SQL databases in Rust.  
  • Offers a simple and ergonomic API for interacting with databases, making it easier to write robust and efficient database code and reducing boilerplate code.  
  • Supports Rust’s async/await syntax, allowing non-blocking I/O operations and efficient use of system resources. 

teleport: 

  • Is an open source library that offers secure access to computing resources like containers, Kubernetes clusters, and servers.  
  • Offers a unified access layer that can be used for authorized and authenticated users, managing access to resources, and auditing user activity. 
  • Can be integrated into existing applications and supports different authentication like MFA and SSO.  

rqlite: 

  • Is an open source distributed SQL database designed for use in high-throughput and low-latency environments.  
  • Offers consistent performance, fault tolerance, and scalability by distributing SQL queries and data across different nodes in a cluster.  
  • Can make it easier for developers to build applications that use rqlite as their data store.  

immudb: 

  • Is an open source, transactional key-value database with built-in cryptographic verification.  
  • Offers tamper-evident storage, is immutable, and supports ACID transactions.  
  • Includes executing transactions that modify the database, configuring the cryptographic verification settings, and monitoring database health and performance.  

db: 

  • Is a productive data access layer for the Go programming language, which offers agnostic tools to work with various data sources.  
  • Provides tools for common operations with databases and stays out of the way with advanced cases.  
  • Offers a common interface for developers to work with different NoSQL and SQL database engines.  

cayley: 

  • Allows developers to store and query graph data using various query languages like SPARQL, Gremlin, and GraphQL.  
  • Is a software library that will allow developers to interact with Cayley from within their programs.  
  • Can make it easier for developers to build applications that use Cayley as their data store.  

vitess: 

  • Offers horizontal resharding, scaling, and failover capabilities for MySQL databases. 
  • Is a software library that will allow developers to interact with Vitess from within their programs.  
  • Developers can focus on building their application's logic and let the library handle the low-level details of communicating with the Vitess database cluster.  

Object Relational Mappers (ORMs) enable developers to interact with a relational database through an abstraction layer between the database engine and the application's data model. They let developers work with the database through objects instead of raw SQL commands, offer convenience and flexibility when performing database operations, and make development easier by reducing the code needed to interact with the database.  


A NodeJS ORM library allows developers to interact with databases in an organized way. It simplifies creating, reading, updating, and deleting data in a MySQL database and provides an object-oriented interface for interacting with that data, along with features like validation, associations, and model definition, giving developers freedom in how they design their applications. Using a NodeJS MySQL ORM library reduces the time and complexity of dealing with a MySQL database.  


In NodeJS ORM Libraries, these tools are all for managing a NodeJS database. Prisma Migrate allows developers to apply changes to databases in a repeatable manner. Prisma Studio provides an intuitive UI to view and manage data. The Prisma Toolkit provides a unified interface for managing databases. The Prisma Schema helps define data models and write queries. The Prisma Client provides a convenient API for querying and mutating data. And the Prisma Documentation provides tutorials and API references. With these tools, developers can manage their NodeJS MySQL databases.  


Data Mapper and Active Record are the two patterns at the heart of NodeJS ORM libraries. A Data Mapper is a layer that maps data between the application and the database, mediating between the two, while Active Record is an architectural pattern in which the object-oriented domain model itself maps to the relational database. NodeJS MySQL ORM libraries built on these patterns provide an easy way to perform database operations, including creating and updating records, running queries, and managing transactions, and offer a convenient way to work with MySQL data in a Node.js environment. Developers can use these libraries to create database-backed applications with minimal overhead.  


Different types of nodes that we can use in NodeJS MySQL orm libraries are:  

Server Nodes:  

  • Database Server: This node runs the database server. It provides access to the database. We can host on a remote server.  
  • Database Client: This node connects to the database server and sends queries. The database client can be a web application, a command line interface (CLI), or an API.  
  • ORM Server: This node handles running the ORM library. It provides access to the database through the ORM library.  

Client Nodes:  

  • Database Client: This node connects to the database server and sends queries. The database client can be a web application, a command line interface (CLI), or an API.  
  • ORM Client: This node connects to the ORM server and sends queries. The ORM client can be a web application, a command line interface (CLI), or an API.  


Different modules that we can include in a NodeJS MySQL ORM library are listed below (a short sketch showing these modules in action follows the list):  

  • Connection Module: Allows you to establish connections to a MySQL database.  
  • Query Module: Allows you to perform queries on the database.  
  • Model Module: Provides an object-oriented interface for representing database tables and records.  
  • Schema Module: Allows you to define and apply structure to the database.  
  • Migration Module: Allows you to automate creating and modifying database tables.  
  • Validation Module: It validates data before we write it to the database.  
  • Transaction Module: Allows you to manage transactions across many database operations.  
  • Debug Module: This provides debugging tools for tracking down errors in your code.  
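
As a rough illustration of how these modules fit together, here is a minimal TypeScript sketch using Sequelize, one of the ORMs listed later in this kit. The database name, credentials, and the User model are placeholder assumptions for the example, not part of any specific library's documentation.

```ts
// Minimal sketch (assumptions: sequelize + mysql2 installed, a local MySQL
// database named "app_db", and a hypothetical User model).
import { Sequelize, DataTypes } from "sequelize";

// Connection module: one Sequelize instance manages the connection pool to MySQL.
const sequelize = new Sequelize("app_db", "root", "password", {
  host: "localhost",
  dialect: "mysql",
});

// Model module: a "User" model mapped to the "Users" table.
const User = sequelize.define("User", {
  name: { type: DataTypes.STRING, allowNull: false },
  email: { type: DataTypes.STRING, unique: true },
});

async function main() {
  // Schema/migration stand-in: create the table if it does not exist.
  await sequelize.sync();

  // Query module: insert and read rows through the model.
  await User.create({ name: "Ada", email: "ada@example.com" });
  const users = await User.findAll({ where: { name: "Ada" } });
  console.log(users.map((u) => u.toJSON()));

  await sequelize.close();
}

main().catch(console.error);
```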


Different connection types that we can make between a NodeJS MySQL ORM library and a database are listed below (a pooled-connection sketch follows the list):  

  • Single Connection:  

A single connection is the most basic type of connection. We can connect between a NodeJS MySQL orm library and a database. This type of connection will provide a single persistent connection to the database. It allows the application to execute queries without re-establishing the connection each time.  

  • Pooled Connection:  

A pooled connection is a connection drawn from a pool of connections created and maintained by the NodeJS MySQL ORM library. Reusing connections from the same pool improves performance and scalability.  

  • Connection Pooling:  

Connection pooling establishes many connections to the database and maintains them for use across the application. This approach is beneficial for applications that need a high level of scalability and performance, since it lets them use many connections to the database concurrently.  
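
For example, a pooled connection with the node-mysql2 driver (listed below) might look like the following sketch; the credentials, database name, and users table are assumptions made for the example.

```ts
// Minimal sketch of a pooled connection (assumptions: the mysql2 package, a
// local MySQL database named "app_db", and a "users" table).
import mysql from "mysql2/promise";

// The pool keeps a set of persistent connections and hands one out per query.
const pool = mysql.createPool({
  host: "localhost",
  user: "root",
  password: "password",
  database: "app_db",
  connectionLimit: 10, // maximum simultaneous connections in the pool
});

async function findUser(id: number) {
  // Placeholder parameters are escaped by the driver, which helps prevent SQL injection.
  const [rows] = await pool.query("SELECT id, name FROM users WHERE id = ?", [id]);
  return rows;
}

findUser(1)
  .then(console.log)
  .finally(() => pool.end()); // release all pooled connections when done
```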


To troubleshoot issues with a library, one should remember some key points:  

  • Ensure that compatible versions of NodeJS and the MySQL ORM library are installed.  
  • Check the connection string for any typos or incorrect information.  
  • Ensure the database is running and accessible.  
  • Check that the username and password used in the connection string are correct.  
  • If the database is hosted remotely, ensure the server is reachable and the appropriate ports are open.  
  • Ensure that we install all the required packages.  
  • Check the log files for any errors related to the connection.  
  • Try restarting the database server.  
  • Try restarting the NodeJS server.  
  • Try resetting the connection string and then re-establishing the connection.  

Conclusion: 

NodeJS MySQL ORM library is a powerful tool to interact with databases. It provides an intuitive API for working with complex data models. It helps with performing CRUD operations. The library supports database concepts like schemas, tables, and relational databases. It also supports basic operations such as creating, reading, updating, and deleting records. NodeJS MySQL ORM library helps to interact with a database. It reduces the code necessary for database operations.  


Some of the best NodeJS ORM Libraries are:  

typeorm:

  • It supports Active Record and Data Mapper patterns. It helps you select the one that best fits your application.  
  • Written in TypeScript and supports TypeScript for type-safe database interactions.  
  • It supports data validation with built-in validators. It supports custom validators.  

sequelize:  

  • Powerful query and data manipulation APIs.  
  • Built-in connection pooling.  
  • Easy-to-use CLI to automate database migrations.  

knex: 

  • It allows you to observe the exact SQL queries generated from the queries constructed with the query builder API.  
  • It supports both callbacks and promises for API calls. It allows easy integration into existing applications.  
  • Allows you to construct SQL queries of varying complexity.  

bookshelf:  

  • It manages relationships between related models.  
  • It supports both promise-based and traditional callback-based async APIs.  
  • Integrates with Knex.js, which allows you to use a wide range of database dialects.  

waterline:  

  • Built-in support for models and associations to create and manage relationships between data.  
  • Built-in migration support to keep track of changes to database structure over time.  
  • Built-in support for advanced aggregation functions to perform complex data analysis.  

node-mysql2:

  • Support for many databases and versions of MySQL.  
  • A streaming API for high-performance data retrieval.  
  • A rich set of features and plugins, including query building, debugging, and more.  

node-orm2:

  • It provides an ORM that makes mapping models to tables and objects to rows easy.  
  • It provides a powerful query builder. It allows you to construct complex queries without writing raw SQL.  
  • Provides transaction support, allowing you to group operations into a single transaction.  

node-sql:

  • Highly extensible and customizable, allowing developers to extend its functionality with custom plugins.  
  • It has a simple, intuitive interface to get up and running.  
  • It supports both callbacks and promises. It allows developers to choose the style that best suits their development needs. 

FAQ:  

What is Knex, and how does it compare to other NodeJS MySQL libraries?  

Knex.js is a "batteries included" SQL query builder for Node.js. It provides a powerful and versatile tool for creating and running database queries, is written in JavaScript, and is used in Node.js applications. It works with relational databases such as MySQL, Postgres, and SQLite.  


Knex is comparable to other popular Node.js data-access libraries like Sequelize and Objection.js, but it has a few advantages: 

  1. It supports a wide range of databases. It gives the flexibility to work with the database of their choice. 
  2. It provides a straightforward API for writing SQL queries to get up and running. 
  3. Knex offers an interface for writing advanced queries like joins and subqueries.  
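
For illustration, here is a minimal, hedged Knex sketch; the sqlite3 backend, the dev.sqlite3 file, and the users table are assumptions made for the example.

```ts
// Minimal sketch (assumptions: knex + sqlite3 packages and a "users" table).
import knex from "knex";

const db = knex({
  client: "sqlite3",
  connection: { filename: "./dev.sqlite3" },
  useNullAsDefault: true,
});

const query = db("users").where({ active: true }).select("id", "name");
console.log(query.toString()); // inspect the exact SQL Knex will run

// Awaiting (or calling .then on) the builder executes the query.
query.then(console.log).finally(() => db.destroy());
```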

 

What are the pros and drawbacks of the Sequelize ORM library for NodeJS MySQL development?  

Pros:  

  • Sequelize is a well-documented ORM library, making it easy to learn and use.  
  • It supports multiple databases, letting you switch from one database to another with minimal effort.  
  • It simplifies queries so you can focus on the application's business logic instead of the database.  
  • It supports various features, such as validations, associations, and migrations.  
  • It is open-source and well-maintained, with a large community of active users.  

Cons:  

  • Debugging complex queries can be difficult because of the abstraction layer.  
  • It needs configuration and setup before the application can use it.  
  • There may be better choices for applications that require low latency or complex queries.  
  • It is less well-suited for real-time applications than other Node.js ORMs.  

What are Object Relational Mappers? Why do they matter for database concepts in NodeJS MySQL?  

Object Relational Mappers help developers interact with a database using objects. An ORM simplifies querying and manipulating the database when building applications by offering a layer of abstraction between the database and the application. Developers do not need a deep understanding of the underlying database structure and can write code in the language they already use instead of SQL queries, which helps them build applications without having to learn a new language.  

 

How is using a Node ORM different than using SQL?  

A Node ORM provides an abstraction layer over SQL. It allows interaction with database objects and functions instead of SQL queries. This can make the code easier to maintain, and it can also help to prevent common SQL injection attacks. Additionally, ORMs provide features to automate tasks like creating tables and querying data.  
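
To make the contrast concrete, here is a small, hedged sketch showing the same lookup written once through an ORM (Sequelize) and once as raw SQL; the in-memory SQLite database and the User model are assumptions chosen so the snippet is self-contained.

```ts
// Illustrative contrast (assumptions: sequelize + sqlite3 packages; an
// in-memory SQLite database keeps the snippet self-contained).
import { Sequelize, DataTypes, QueryTypes } from "sequelize";

const sequelize = new Sequelize("sqlite::memory:");
const User = sequelize.define("User", { email: DataTypes.STRING });

async function main() {
  await sequelize.sync();
  await User.create({ email: "ada@example.com" });

  // ORM style: the query is built from objects and values are escaped for you.
  const viaOrm = await User.findAll({ where: { email: "ada@example.com" } });

  // Raw SQL style: you write the statement and bind parameters yourself.
  const viaSql = await sequelize.query(
    "SELECT * FROM Users WHERE email = :email",
    { replacements: { email: "ada@example.com" }, type: QueryTypes.SELECT }
  );

  console.log(viaOrm.length, viaSql.length);
}

main().catch(console.error);
```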

 

Can you explain what a Data Mapper is and how it works in an ORM library?  

A Data Mapper is an object-oriented programming pattern that separates the domain/business layer from the data layer. It is a software layer that maps data between objects and a database while keeping the two independent. Object-relational mappers map objects to database tables and perform CRUD operations; an ORM uses the Data Mapper pattern to move data between the object and relational worlds. ORMs provide a high-level abstraction of the data layer, allowing developers to interact with it without writing SQL queries.  
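
A minimal sketch of the idea in plain TypeScript, with no real database; the UserRow shape, User class, and in-memory rows array are all illustrative assumptions standing in for a table.

```ts
// Toy Data Mapper: the mapper is the only code that knows how database rows
// and domain objects correspond, keeping the two layers independent.
interface UserRow { id: number; name: string }                         // data-layer shape
class User { constructor(public id: number, public name: string) {} }  // domain object

class UserMapper {
  constructor(private rows: UserRow[]) {}                    // stand-in for a table

  findById(id: number): User | undefined {
    const row = this.rows.find((r) => r.id === id);
    return row ? new User(row.id, row.name) : undefined;     // row -> object
  }

  insert(user: User): void {
    this.rows.push({ id: user.id, name: user.name });        // object -> row
  }
}

const mapper = new UserMapper([]);
mapper.insert(new User(1, "Ada"));
console.log(mapper.findById(1)); // User { id: 1, name: 'Ada' }
```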

 

Can we consider new ORM libraries over more established ones?  

There are several reasons to consider new ORM libraries over more established ones. Newer ORM libraries are built with newer technologies, which can make them faster and more efficient. They also often have better support for newer databases, such as NoSQL databases. Additionally, they may have more flexible or intuitive APIs, making them easier to use. Finally, newer libraries may have better documentation and more active development communities, which helps with bug fixes and feature updates.  

 

Are there particular database clients that work better with libraries than others?  

Yes, different ORM libraries are compatible with varying clients of databases. For example, Sequelize is a popular ORM library for Node.js that works best with MySQL and PostgreSQL. Mongoose is a popular ORM library that works best with MongoDB databases.  

LocalStorage is a browser-native API that stores data in the user's browser and is a widely supported part of the Web Storage API specification. In Node.js, "localStorage" modules emulate the browser's native localStorage API so the same interface can be used on the server.  

 

LocalStorage enables client-side content storage, persisting data even after the browser window closes. It offers a simple API with methods for storing, reading, and removing data items, and dot-property syntax allows easy access and modification from JavaScript.  

 

When working with LocalStorage, remember that it is exposed as a read-only property of the window object, although the storage it returns can be read and written. Client-side code can only access and modify data stored by its own origin. Moreover, implement error handlers for any access or modification issues.  

 

In Node.js, LocalStorage stores user data and application data for specific functionalities. It's useful for temporary data during asynchronous tasks or operations.  

 

Using LocalStorage in Node.js applications can enhance performance. Caching retrieved data from a web server improves application responsiveness, reducing HTTP requests. It benefits frequent data fetching, code editors, and file manipulation.  

 

A web server can cache data using LocalStorage, improving load and response times. The LocalStorage stores authentication tokens or session info for user authentication.  
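
As a concrete example, here is a minimal sketch using the node-localstorage package, one common implementation of this API for Node.js; the ./scratch directory and the cached profile value are assumptions.

```ts
// Minimal sketch (assumption: the node-localstorage package; items are
// persisted as files under the "./scratch" directory).
import { LocalStorage } from "node-localstorage";

const localStorage = new LocalStorage("./scratch");

// Cache a value fetched from a web server so later runs can skip the request.
localStorage.setItem("profile:42", JSON.stringify({ id: 42, name: "Ada" }));

const cached = localStorage.getItem("profile:42"); // string | null
if (cached !== null) {
  console.log(JSON.parse(cached));
}

localStorage.removeItem("profile:42"); // delete a single item
```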

Conclusion:

LocalStorage in Node.js provides efficient client-side data storage and retrieval. It enhances application performance, offers a simple API, and supports various browsers. Node.js and Linux developers enjoy LocalStorage's versatility in browser and server-side technologies. 

localForage   

- It helps in storing data locally within the browser.  

- It supports key-value pair storage for small to medium-sized data.  

- It stores user preferences, session data, or caching. 

lowdb:   

- It helps create and manage a lightweight JSON database on the local file system.  

- It supports querying, filtering, and updating data using a familiar syntax.  

- It is useful for small to medium-sized applications that require a simple database.  

nedb:   

- It helps in creating an in-memory or persistent embedded database.  

- It supports indexing, querying, and sorting data.  

- It is suitable for small to medium-sized apps requiring lightweight database solutions.  

store:   

- It helps in managing a simple key-value store in memory.  

- It supports storing and retrieving data with an easy-to-use API.  

- It is used for temporary storage or caching data within a Node.js application. 

keyv:   

- It helps create a key-value store that supports various storage backends (e.g., memory, Redis, SQLite).  

- It supports TTL (Time-to-Live) for automatic data expiration.  

- It is suitable for distributed applications or scenarios with different storage options.  

level:   

- It helps in creating a fast and simple key-value store.  

- It supports various storage backends, including in-memory, disk-based, and cloud-based options.  

- It is useful for applications that require efficient data storage and retrieval.  

memcached:   

- It helps in utilizing a Memcached server for distributed caching.  

- It supports storing and retrieving data using a simple API.  

- It is helpful for applications with high read-intensive workloads or caching requirements.  

node-persist:   

- It helps in persisting data on the local file system.  

- It supports automatic serialization and deserialization of JavaScript objects.  

- It is suitable for storing larger data sets or complex data structures.

keyv-file:   

- It helps in creating a file-based key-value store.  

- It supports the persistence and synchronization of data across multiple instances.  

- It is suitable for scenarios where a file-based storage solution is preferred.  

FAQ 

1. What is the localStorage API, and what are its advantages?   

The localStorage API is a browser-native API that allows web apps to store data on the client side. Its simple key-value storage mechanism helps developers save user browser data. The advantages of using the localStorage API include:  

Simplicity:  

The API offers a straightforward interface with methods like setItem(), getItem(), and removeItem(). This helps with storing, retrieving, and removing data.  

Persistence:   

Data stored using localStorage persists even after the browser window is closed. This provides a reliable storage solution for web applications.  

Larger Storage Capacity:  

localStorage allows for larger storage capacity compared to other client-side storage options.  

Better Performance:   

Accessing data from localStorage is faster than making server requests. Thus, it helps in improving the performance of the web application.  

Security:   

Data stored in localStorage is only accessible by the web application itself. This is because it is tied to the origin and cannot be accessed by other websites or scripts.  

 

2. How does browser local storage work with Web Storage API?   

Browser local storage works hand in hand with the Web Storage API. It includes two storage mechanisms: localStorage and sessionStorage. Both mechanisms provide a key-value storage interface but differ in data persistence. localStorage stores data that remains available even after the browser window is closed. On the other hand, sessionStorage stores data for the duration of the page session.  

 

To use local browser storage, you can interact with it through the Web Storage API methods. These can include localStorage.setItem(), localStorage.getItem(), and localStorage.removeItem(). These methods allow storing, retrieving, and removing data from the localStorage object. Data is stored as strings and can be accessed using unique keys.  
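
A small browser-side sketch of these methods; the prefs object and key names are illustrative.

```ts
// Values are always stored as strings, so objects are serialized with JSON
// before writing and parsed after reading.
const prefs = { theme: "dark", pageSize: 25 };

localStorage.setItem("prefs", JSON.stringify(prefs)); // store
const raw = localStorage.getItem("prefs");            // retrieve (string | null)
const restored = raw ? JSON.parse(raw) : null;
console.log(restored);
localStorage.removeItem("prefs");                     // remove one key

// sessionStorage has the same API but is cleared when the page session ends.
sessionStorage.setItem("step", "2");
```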

 

3. What are the considerations for JavaScript files when using the Node.js LocalStorage library?   

Considerations when using JavaScript files with the Node.js localStorage library:  

  • Firstly, ensure you import the localStorage library into your JavaScript file, either using the require() function or by including it in your project dependencies.  
  • Secondly, localStorage is a browser-native API and is unavailable by default in the Node.js runtime. To use localStorage in Node.js, install and import a library that emulates the browser functionality.  

 

4. What browser support is available for the nodejs localstorage library?   

The browser support for Node.js localStorage library depends on the chosen module. Libraries vary in compatibility with browsers. Some support major browsers, and others have limitations or specific requirements.  

 

Check the localStorage library documentation for the browser support matrix. Modern browsers have good support for localStorage, including Chrome, Firefox, Safari, and more. However, testing your application across different browsers is a good idea to ensure consistency.  

 

5. How do web requests interact with the nodejs localstorage library?   

The nodejs localStorage library does not interact with web requests. Node.js LocalStorage enables local storage within a runtime environment, like the browser localStorage. It stores and retrieves server data locally, avoiding external HTTP requests.  

 

When using web requests in Node.js, the localStorage library can cache fetched data. Storing frequently accessed data locally minimizes repetitive requests. This enhances performance and reduces network latency.  

 

Thus, the nodejs localStorage library provides local storage functionality within its runtime environment. Web requests in Node.js can use local storage to optimize data fetching and performance. 

Node.js SQLite ORM (Object-Relational Mapping) libraries simplify working with databases. They reduce the amount of boilerplate code required by providing a higher level of abstraction. These libraries play a crucial role in developing applications that interact with databases.  

 

SQLite ORM libraries enable developers to write code that is independent of the underlying database: if you switch from SQLite to a different database engine, you can do so with minimal code changes. ORM libraries provide the same interface regardless of the underlying database, which makes it easier to work with different database systems.  

 

SQLite ORM libraries provide a higher level of abstraction and simplify database operations. They improve productivity, enhance portability, promote best practices, and offer performance optimizations. They are tools for developers working with SQLite databases in Node.js applications, helping them focus on building features rather than dealing with low-level database operations.  
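
As a brief illustration of that abstraction, here is a hedged TypeORM sketch against SQLite; the typeorm, reflect-metadata, and sqlite3 packages, the app.sqlite file, and the User entity are assumptions, and switching engines would mostly mean changing the DataSource options rather than the model code.

```ts
// Minimal sketch (assumptions: typeorm + reflect-metadata + sqlite3 installed,
// with "experimentalDecorators" enabled in tsconfig.json).
import "reflect-metadata";
import { DataSource, Entity, PrimaryGeneratedColumn, Column } from "typeorm";

@Entity()
class User {
  @PrimaryGeneratedColumn() id!: number;
  @Column() name!: string;
}

// Switching from SQLite to another engine mostly means changing these options
// (e.g. type: "postgres" plus connection details), not the entity definitions.
const dataSource = new DataSource({
  type: "sqlite",
  database: "./app.sqlite",
  entities: [User],
  synchronize: true, // create tables from entities; convenient for demos
});

async function main() {
  await dataSource.initialize();
  const repo = dataSource.getRepository(User);
  await repo.save(repo.create({ name: "Ada" }));
  console.log(await repo.find({ where: { name: "Ada" } }));
  await dataSource.destroy();
}

main().catch(console.error);
```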

 

Let's look at each library in detail. The links allow you to access package commands, installation notes, and code snippets. 

sequelize:  

  • It provides a high-level abstraction over the underlying database system like SQLite.  
  • It promotes a model-driven development approach.  
  • It generates SQL queries based on the defined models and their relationships.  
  • It includes built-in validation mechanisms for ensuring data integrity.  

typeorm:  

  • It simplifies working with databases by providing a high-level abstraction layer.  
  • It supports many database systems, including SQLite, MySQL, PostgreSQL, and more.  
  • It helps you define entities, which are JavaScript/TypeScript classes representing database tables.  
  • It provides a migration system that allows you to version and apply database schema changes using TypeScript or JavaScript files. 

waterline:  

  • The waterline concept in Node.js SQLite ORM libraries might not be applicable.  
  • It is a database-agnostic ORM that supports SQLite and other databases.  
  • It provides a unified API for working with different databases. It offers features like query building, associations, and automatic schema management.  
  • It often provides query builders and optimization techniques to generate efficient SQL queries.  

objection.js:  

  • It follows the Model-View-Controller (MVC) architectural pattern and encourages model-driven development.  
  • It includes data validation and modeling features. It is crucial for maintaining data integrity and consistency in your SQLite database.  
  • It also supports middleware and hooks. It allows you to add custom logic and behavior to the database operations.  
  • It helps define models that represent database tables. These models encapsulate the logic for interacting with the database. 

massive.js:  

  • It is a JavaScript database library. It provides a convenient and efficient way to interact with databases, particularly PostgreSQL.  
  • It makes it easy to integrate your Node.js applications with PostgreSQL databases.  
  • It simplifies the mapping of data between JavaScript objects and database tables.  
  • It helps build scalable and maintainable applications by promoting the separation of concerns. 

node-orm2:  

  • This programming technique interacts with a relational database using object-oriented paradigms.  
  • It is a specific ORM library for Node.js designed to work with SQLite databases.  
  • It maps database tables to JavaScript objects. It provides a more intuitive and object-oriented way to use data.  
  • They are designed to work with database systems like SQLite, MySQL, and PostgreSQL.  

bookshelf:  

  • It is a popular object-relational mapping (ORM) library for Node.js. It is designed to work with SQL databases such as SQLite.  
  • It simplifies interacting with an SQLite database by providing an intuitive API.  
  • It offers an ORM layer, which maps JavaScript objects to database tables and vice versa.  
  • It includes a powerful query builder. It simplifies the creation of complex database queries. 

FAQ:  

1. What are the benefits of using a sqlite ORM library with type-safe and modern JavaScript?  

Type-safe and modern JavaScript can bring several benefits. Here are some of them:  

  • Type Safety  
  • Productivity  
  • Simplicity  
  • Portability  
  • Maintainability  
  • Testing  


2. How does the query builder work in nodejs sqlite?  

In Node.js, an SQLite query builder is a library or module that provides a set of functions or methods to build SQL queries. It simplifies constructing complex SQL statements by providing an intuitive API. Here's a general overview of how a query builder for SQLite in Node.js might work (a small sketch follows the list):  

  • Installation  
  • Database Connection  
  • Query Builder Initialization  
  • Table Selection  
  • Column Selection  
  • Conditions  
  • Sorting  
  • Limit and Offset  
  • Query Execution  
  • Handling Results  
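
As one possible illustration of these steps, here is a hedged sketch using Knex, a widely used query builder that supports SQLite; the knex and sqlite3 packages, the blog.sqlite3 file, and the posts table are assumptions for the example.

```ts
// Each chained call maps to one of the steps listed above.
import knex from "knex";

// Database connection
const db = knex({
  client: "sqlite3",
  connection: { filename: "./blog.sqlite3" },
  useNullAsDefault: true,
});

async function recentPosts() {
  return db("posts")                       // table selection
    .select("id", "title", "created_at")   // column selection
    .where("published", true)              // conditions
    .orderBy("created_at", "desc")         // sorting
    .limit(10)                             // limit
    .offset(0);                            // offset
  // Awaiting the builder is the query execution step.
}

recentPosts()
  .then(console.log)                       // handling results
  .finally(() => db.destroy());
```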


3. What is Sequelize ORM, and how does it compare to other nodejs libraries?  

Sequelize is an Object-Relational Mapping (ORM) library for Node.js. It provides an interface for interacting with relational databases. It supports many database systems, including PostgreSQL, MySQL, SQLite, and MSSQL. It helps write database queries and manipulate data using JavaScript. You can do so instead of raw SQL statements.  

 

4. Is there any toolkit for managing database schema through an Object Relational Mapper?  

Several toolkits are available for managing database schema in SQLite using an ORM. Here are a few popular options:  

  • SQLAlchemy  
  • Peewee  
  • Django ORM  
  • Pony ORM  


5. How do I use Prisma Client with my nodejs application?  

To use Prisma Client with your Node.js application, you need to follow these steps (a minimal sketch follows the list):  

  • Install Prisma Client. 
  • Configure Prisma. 
  • Generate Prisma Client. 
  • Use Prisma Client in your application. 
  • Run your application. 
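
A minimal sketch of the final steps, assuming prisma and @prisma/client are installed, `npx prisma generate` has been run, and schema.prisma defines a User model with name and email fields.

```ts
// Minimal sketch (assumption: a User model exists in schema.prisma).
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  const created = await prisma.user.create({
    data: { name: "Ada", email: "ada@example.com" },
  });
  const users = await prisma.user.findMany({ where: { name: "Ada" } });
  console.log(created.id, users.length);
}

main()
  .catch(console.error)
  .finally(() => prisma.$disconnect());
```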

Here are the top NodeJS MongoDB libraries you can use to connect to and manage connections to MongoDB databases, with options such as encryption and authentication. The exact features depend on the specific library used. These libraries also support MongoDB's GridFS feature, which allows storing and retrieving binary data and large files.  


MongoDB library offers an easy-to-use interface for performing basic CRUD operations on the MongoDB collections. You can also get features for using MongoDB’s powerful Aggregation Framework for performing advanced data analysis and queries. This library provides schema validation to ensure data confirms a specific format or rule. It offers features that help create and manage indexes on MongoDB collections to optimize query performance. It offers support for MongoDB’s sharding and replication features, which will allow for high scalability and availability of the database. It also offers features for debugging and testing applications, like mock debuggers and databases.  
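
As a quick illustration of these CRUD and schema features, here is a hedged sketch using Mongoose (listed below); the connection string, database name, and User schema are assumptions for the example.

```ts
// Minimal sketch (assumptions: the mongoose package and a MongoDB instance
// reachable at the connection string below).
import mongoose from "mongoose";

// Schema validation: documents must match this shape; email gets an index.
const User = mongoose.model(
  "User",
  new mongoose.Schema({
    name: { type: String, required: true },
    email: { type: String, index: true },
  })
);

async function main() {
  await mongoose.connect("mongodb://localhost:27017/app_db");

  await User.create({ name: "Ada", email: "ada@example.com" });   // create
  const found = await User.find({ name: "Ada" });                 // read
  await User.updateOne({ name: "Ada" }, { email: "ada@new.io" }); // update
  await User.deleteOne({ name: "Ada" });                          // delete

  console.log(found.length);
  await mongoose.disconnect();
}

main().catch(console.error);
```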


Here is the list of the top 22 NodeJS MongoDB libraries that are handpicked to help developers choose the appropriate one as per their requirements:  

prisma: 

  • Is a next-generation ORM (Object Relational Mapping) with tools like Prisma Studio, Migrate, and Client. 
  • Can be used in any NodeJS or TypeScript backend application, including microservices and serverless applications, GraphQL APIs, gRPC APIs, REST APIs, and anything else that requires a database.  
  • Allows developers to define their application models in an intuitive data modeling language, connect to a database, and generate a type-safe client from the schema. 

mongoose: 

  • Is a MongoDB object modeling tool designed for working in an asynchronous environment that supports Deno and NodeJS. 
  • Commercial maintenance and support for this open source dependency are available through Tidelift.  
  • Buffers all commands until it is connected to the database, so you do not have to wait for the MongoDB connection before defining models, running queries, and more (see the sketch below). 
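
A small sketch of typical Mongoose usage; the User schema, database name, and field values are assumptions. The model is defined before the connection is opened, which works because Mongoose buffers commands until it is connected:

const mongoose = require('mongoose');

// Defining the schema and model does not require an open connection.
const userSchema = new mongoose.Schema({ name: String, email: String });
const User = mongoose.model('User', userSchema);

async function main() {
  await mongoose.connect('mongodb://localhost:27017/app');
  await User.create({ name: 'Ada', email: 'ada@example.com' });
  console.log(await User.find({ name: 'Ada' }));
  await mongoose.disconnect();
}

main().catch(console.error);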

reaction: 

  • Is a headless commerce platform, API-first, built using MongoDB, GraphQL, and NodeJS, which plays nicely with Docker, Kubernetes, and npm.  
  • Returns data in a split second; quicker queries mean quicker web pages. 
  • A flexible plugin system will allow you to pick and select which integrations work best for you.  

node-elm: 

  • Is a package that offers functionality for using the Elm programming language with NodeJS.  
  • You can compile Elm code to JavaScript, run tests, and generate documentation.  
  • Offers various utility functions for working with Elm code, like parsing source files and generating Elm code programmatically.  

node-mongodb-native: 

  • Is the official MongoDB driver for NodeJS, maintained and developed by MongoDB Inc. 
  • Is designed to be scalable and performant, with support for features like streaming cursors and connection pooling. 
  • Also offers various utility functions for working with BSON, the binary format used by MongoDB. 

payload: 

  • Is a CMS designed for developers from the ground up to deliver what they require to build a great digital product.  
  • Provides you with everything you need but takes a step back and lets you build what you want in JavaScript or TypeScript with no unnecessary complexity brought by GUIs.  
  • Lets you completely control the admin panel by using your own React components, easily swapping out fields or even entire views.  

nodeclub: 

  • Is a free and open-source community platform built on top of the NodeJS platform. 
  • Is a web-based discussion and forum board that allows users to create and participate in discussions on diverse topics.  
  • Supports features like user authentication and authorization, search functionality, and email notifications. 

mikro-orm: 

  • Is an open source TypeScript ORM which simplifies working with databases in NodeJS applications. 
  • Supports different databases like PostgreSQL, SQLite, MongoDB, and MySQL. 
  • You can easily define your data models using TypeScript decorators and classes and perform common CRUD operations on your data with a few lines of code.  

mean: 

  • Is a full-stack JavaScript open source solution that provides a solid starting point for NodeJS, AngularJS, MongoDB, and Express based applications. 
  • Builds a robust framework to support daily development requirements, which will help developers use better practices when working with JavaScript components.  
  • Offers reusable tools, guidelines, and modules to help developers build web applications efficiently and quickly. 

mongo-express: 

  • Is a web-based administrative UI for MongoDB, a popular NoSQL database that provides an intuitive GUI that allows users to view and edit database indexes, documents, and collections.  
  • You can perform common administrative tasks like deleting or creating collections and databases, managing users and permissions, and removing or adding indexes. 
  • Allows you to execute MongoDB shell commands directly from the interface.  

project_mern_memories: 

  • Is a code repository that contains the source code for web applications built using the MERN stack.  
  • Is a popular web development stack that includes four technologies: MongoDB, Express, React, and NodeJS. 
  • Includes features like image uploading and storage, pagination, and user authorization and authentication.  

uptime: 

  • Monitor thousands of websites, check the presence of a pattern in the response body and tweak the frequency of monitoring on a per-check basis up to the second.  
  • Helps receive notifications whenever a check goes down by email, on the console, and screen. 
  • Can record availabilities statistics for further reporting and detailed uptime reports with animated charts.  

builderbook: 

  • Is a comprehensive set of resources and tools for helping developers build and launch web applications efficiently and quickly. 
  • Includes various components and features which can be customized and integrated into any web application, like user management and authentication, email handling, and payment processing.  
  • Is focused on helping developers create a high-quality web application using modern technologies and best practices. 

DoraCMS: 

  • Is a free and open source CMS built on top of the NodeJS platform, which offers a web-based administrative interface for managing users, site settings, and content.  
  • Is designed to be extensible and modular with a flexible plugin option, allowing developers to add new functionality and features to the CMS.  
  • Offers a RESTful API that can be used for interacting with the CMS programmatically.  

mongodb-memory-server: 

  • Is an open source NodeJS library which offers an in-memory MongoDB server for development and testing purposes. 
  • Allows you to quickly spin up a MongoDB server instance in memory without requiring a separate MongoDB configuration or installation.  
  • You can write tests and develop applications that rely on MongoDB without requiring a separate MongoDB instance running on a remote server or your local machine, as the sketch below shows.  
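
A minimal sketch of spinning up an in-memory server for a test; the database and collection names are illustrative assumptions:

const { MongoMemoryServer } = require('mongodb-memory-server');
const { MongoClient } = require('mongodb');

async function main() {
  const mongod = await MongoMemoryServer.create(); // start an in-memory MongoDB instance
  const client = new MongoClient(mongod.getUri());
  await client.connect();

  const docs = client.db('test').collection('docs');
  await docs.insertOne({ ok: true });
  console.log(await docs.findOne({}));

  await client.close();
  await mongod.stop(); // tear the in-memory server down
}

main().catch(console.error);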

monk: 

  • Is a lightweight MongoDB driver for NodeJS, offering an easy-to-use and simple API for interacting with MongoDB databases.  
  • Offers a high-level API that abstracts away many low-level details for working with MongoDB making it easy for developers to get started with minimal setup.  
  • Provides a simple and flexible query API that allows you to build any query easily.  

mongojs: 

  • Is a lightweight NodeJS library that offers a simple API for interacting with MongoDB databases.  
  • Allows you to perform common database operations like map-reduce, CRUD operations, etc.  
  • Offers an intuitive and simple API that is easy to learn and use and supports various features like replication, sharding, and authentication. 

myDrive: 

  • Is an open source cloud file storage server for hosting myDrive on your server or trusted platform and then accessing myDrive through your web browser.  
  • Uses MongoDB for storing folder/file metadata and will support multiple databases for storing the file chunks, like the Filesystem and Amazon S3. 
  • Is built using TypeScript and NodeJS and supports uploading and downloading files, with Google Drive support. 

node-login: 

  • Is a NodeJS module that offers a simple authentication system for web applications. 
  • Is designed to be easy to use and customize and supports various authentication strategies, like password authentication and email, two-factor authentication, and OAuth authentication. 
  • Includes features like email verification, user profile management, password hashing, and session management. 

mongorito: 

  • Is based on Redux, which opens the doors for customizing anything from model’s state to the behavior of core methods.  
  • Every model instance has a different Redux store, ensuring isolation between other models and easy extensibility.  
  • Ships with a barebones model with basic set/get, remove/save, and querying functionality and lets you be in control of what is included. 

nodercms: 

  • Is an open source CMS built on top of the NodeJS platform, which offers a web-based administrative interface for managing users, content, and other site settings.  
  • You can create and manage posts, pages, and other content types and supports features like theme customization, plugin support, user authorization, and authentication.  
  • Is designed to be extensible and modular, with a flexible plugin architecture that allows developers to add new functionality and features to the CMS. 

rest-api-nodejs-mongodb: 

  • Is a ready-to-use boilerplate for REST API Development with Express, MongoDB, and NodeJS. 
  • Offers pre-defined response structures with a proper status code, including CORS. 
  • Includes API collection for Postman, linting with Eslint, CI with Travis CI, and test cases with Chai and Mocha. 

Nest.js is a popular framework for building efficient and scalable server-side apps with Node.js.

Scaling Nest.js services in distributed systems requires robust service discovery mechanisms. Service discovery libraries offer tools and methodologies to manage the dynamic nature of microservices architectures, ensuring seamless communication and efficient load balancing across services.   

Here are some of the benefits of these libraries:  

  • Nest.js Integration   
  • Dynamic Service Registration  
  • Service Discovery Mechanisms  
  • Load Balancing  
  • Health Checking  
  • Configuration Management  
  • Fault Tolerance and Resilience  
  • Scalability  

kubernetes:

  • Kubernetes presents a sturdy platform for dealing with containerized applications.  
  • Kubernetes provides built-in service discovery capabilities through its DNS-based service discovery mechanism.   
  • It is used for deploying and managing containerized applications at scale. 

redis:

  • Redis can cache the results of database queries and API responses (a cache-aside sketch follows this list).  
  • Redis can be used to implement rate limiting and throttling mechanisms in Nest.js services.  
  • Redis can be an essential component in the scalability of Nest.js services.  
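
As a hedged illustration of the caching bullet above, here is a minimal cache-aside sketch using the ioredis client; the key naming scheme, the 60-second TTL, and the loadFromDb callback are assumptions:

const Redis = require('ioredis');
const redis = new Redis(); // connects to localhost:6379 by default

async function getUserCached(id, loadFromDb) {
  const key = `user:${id}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached); // cache hit

  const user = await loadFromDb(id);                    // cache miss: fall back to the database
  await redis.set(key, JSON.stringify(user), 'EX', 60); // cache the result for 60 seconds
  return user;
}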

etcd:

  • It is a central registry for storing information about available services.  
  • It can store configuration data that is shared across many instances of Nest.js services.  
  • etcd provides support for distributed consensus algorithms, such as Raft.  

traefik:

  • Traefik is a modern reverse proxy and load balancer.  
  • Traefik supports dynamic routing based on service discovery mechanisms like Docker labels.  
  • Traefik provides built-in support for various load-balancing algorithms.  

istio:

  • Istio is a powerful service mesh platform.  
  • Istio enables sophisticated traffic routing and load-balancing strategies.  
  • Istio includes built-in fault tolerance features such as automatic retries and circuit breaking.  

consul:

  • It allows Nest.js services to register themselves with its agent running on each node in the cluster (a registration sketch follows this list).  
  • Consul maintains a catalog of all registered services and their associated endpoints.  
  • Consul can be integrated with service mesh frameworks such as Envoy or Istio.  
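
A hedged sketch of registering a service instance with a local Consul agent over its HTTP API; the service name, address, port, and health-check endpoint are assumptions for illustration:

const http = require('http');

const registration = JSON.stringify({
  Name: 'orders-service',
  ID: 'orders-service-1',
  Address: '10.0.0.12',
  Port: 3000,
  Check: { HTTP: 'http://10.0.0.12:3000/health', Interval: '10s' },
});

// PUT /v1/agent/service/register against the local Consul agent (default port 8500).
const req = http.request(
  {
    host: 'localhost',
    port: 8500,
    path: '/v1/agent/service/register',
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
  },
  (res) => console.log('Consul responded with status', res.statusCode)
);

req.on('error', console.error);
req.write(registration);
req.end();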

rancher:

  • Rancher is a container management platform.  
  • Rancher includes built-in service discovery features.  
  • It allows services to be registered and discovered within the Rancher environment.  

eureka:

  • Eureka, developed by Netflix, is a service discovery tool.  
  • It plays a significant role in building scalable microservices architectures.  
  • It allows Nest.js services to register themselves with its server upon startup.  

zookeeper:

  • ZooKeeper can be used to register and discover services.  
  • It provides a distributed key-value store that can be used for configuration management.  
  • It is designed to be highly available and fault-tolerant.

coredns:

  • It is a versatile DNS server used as a service discovery tool.  
  • It is used to perform load balancing by distributing DNS queries among many services.  
  • CoreDNS is used as the default DNS server in Kubernetes clusters.  

haproxy:

  • HAProxy is a high-performance TCP/HTTP load balancer.  
  • It is instrumental in the scalability of Nest.js service discovery libraries.  
  • HAProxy distributes incoming traffic across many instances of Nest.js services.  

FAQ

1. What is service discovery, and why is it important in Nest.js applications?  

Service discovery is the process of locating and connecting to services in a distributed system. In Nest.js applications, it is crucial for enabling seamless communication between microservices, allowing them to scale and adapt to changing workloads.  


2. What are some popular service discovery libraries used with Nest.js for scalability?  

Some popular service discovery libraries used with Nest.js include:  

  • Consul  
  • etcd  
  • ZooKeeper  
  • Eureka.  

These libraries provide features such as:  


  • Dynamic service registration  
  • Discovery  
  • Health checking  
  • Load balancing  

These are essential for building scalable microservices architectures.  


3. How does dynamic scaling work in Nest.js applications using service discovery libraries?  

Dynamic scaling involves adding (or removing) instances of services based on demand. Service discovery libraries enable dynamic scaling by allowing new instances to register themselves so that other services can discover them. This ensures that the application can handle increasing workloads.  


4. What role do load balancers play in scalable Nest.js architectures?  

Load balancers such as HAProxy are essential components in scalable Nest.js architectures. They distribute incoming traffic across many instances of services, ensuring optimal resource use and preventing any single instance from becoming overwhelmed. Load balancers help improve the scalability, reliability, and performance of Nest.js applications.  


5. How do service discovery libraries handle fault tolerance and resilience in Nest.js?  

Service discovery libraries use mechanisms such as health checking and automatic failover, which Nest.js applications rely on to ensure fault tolerance and resilience. They check the health of service instances and remove or replace unhealthy ones, so clients are always directed to available and responsive services.  

Hangman is a word-guessing game in which each player tries to build a missing word by guessing one letter at a time. For each mistake, another part of the hangman figure is drawn: first the gallows, then the head, the body, and finally the arms and legs. You win the game if you choose letters to complete the correct word before the hangman figure is finished; otherwise, the figure is hanged and the game ends. Follow these steps to build your own hangman:

  1. Development Environment
  2. Database
  3. String Match
  4. Hangman Creation
  5. Score Calculation

Database

Database is used to store the words.

Hangman Creation

If the player enters a wrong letter, the hangman figure starts to take shape, beginning with the gallows; on the next wrong letter the head is drawn, and likewise the body, arms, and legs are drawn for each further wrong letter.

Development Environment

PyCharm is used for the development of the game.

String Match

String match is used to check whether the word given by the user is present in the database or not.

Score Calculation

For each entered letter that is present in the word, the player earns a score.

A time series database stores and retrieves data based on time and its characteristics. Time series data is a set of observations taken regularly over time.


In business, it's commonly used to study metrics like finances, operations, and more. Time series databases store and analyze time series data using standard SQL queries. Scientists and manufacturing control systems use C# time series databases. They are also used in sensor networks and logistics management systems. TSDB is a database that stores and retrieves time series data efficiently.


Developers have many choices of open source C# Time Series Database libraries like:

InfluxDBStudio  

  • It is a UI management tool for the InfluxDB time series database.  
  • It is a popular open source time series database that handles high write and query loads.  
  • Many use it to save and search for data with timestamps, like sensor readings and events.  

InfluxDB.Net  

  • Simplifies the process of working with InfluxDB from .NET applications.  
  • Provides a higher-level interface to interact with the InfluxDB HTTP API.  
  • This tool can write data, manage databases and measurements, run queries, and define rules.  

daany  

  • This is a library for data analysis in .NET. The package includes data frames, time series decompositions, and linear algebra routines that call into BLAS and LAPACK.  
  • Supports continuous queries and downsampling.  
  • Can store and analyze time series data using standard SQL queries.  

Pisces   

  • It is a software application used for hydrologic modeling and analysis.  
  • It is primarily designed for water resource management and related activities.  
  • It simulates water behavior in different systems, like groundwater, reservoirs, and rivers.  

GraphIoT  

  • It is a .NET 5 project to poll and store historical IoT and smart home sensor data.  
  • Visualizes IoT sensor data in time series graphs.  
  • Includes .NET Core clients for Digitalstorm, Netatmo, WeConnect, Viessmann, and Sonnen APIs.  

projectalpha  

  • It is part of a set of activities, projects, and tools that help protect power grids, related domains, and power systems.  
  • The Grid Solutions Framework - Time-Series library helps jump-start new product development.  
  • It is designed to collect, store, and query timestamped data.  

FAQ  

1. How do I use the NET library to set up a database instance for my project?  

Here is a step-by-step overview of how you might set up a database instance using Entity Framework Core:  

  • Install Entity Framework Core  
  • Create DbContext Class  
  • Define Data Models (Entities)  
  • Migrations and Database Creation  
  • Using the Database  

  

2. Can I connect my relational database with C# Time Series databases?  

You can connect a relational database with a time series database in a C# application. However, because relational and time series databases have different structures and uses, there are additional things to consider and steps to take.  

You can use these steps to connect a relational database with a time series database in a C# application. 

  • Understand data requirements.  
  • Choose a time series database.  
  • Design data schema  
  • C# libraries or APIs  
  • Data Extraction and transformation  
  • Data loading  
  • Querying and analysis  
  • Synchronization and automation  

  

3. How can I efficiently process data stored in C# Time Series Database?  

When designing your application, remember to optimize your queries and consider performance. Here are some strategies to help you process data efficiently from a C# time series database:  

  • Data Model Optimization  
  • Indexing  
  • Time-Based Partitioning  
  • Query Optimization  
  • Batch Processing  
  • Parallelism and Asynchronous Operations  
  • Caching  
  • Compression and Storage Optimization  
  • Use of aggregations  
  • Optimize Network Traffic  
  • Profiling and monitoring  
  • Database Tuning  

  

4. Can Big Data be stored in C Sharp Time Series Databases?  

Yes, it is possible to store and manage large volumes of data, often called Big Data, in C# time series databases. However, there are several strategies you should consider to handle Big Data efficiently:  

  • Data partitioning  
  • Scalability  
  • Retention policies  
  • Compression and encoding  
  • Data archival  
  • Caching  
  • Aggregations and Down-sampling  
  • Optimized queries  
  • Clustered storage  
  • Hardware considerations  
  • Vendor-specific features  
  • Monitoring and performance tuning  

  

5. Does this technology support Machine Learning algorithms or applications?  

Many time series databases can work with machine learning algorithms and applications. You can predict, recognize trends, and identify unusual patterns by analyzing previous data.  

Here is how you can integrate machine learning with a C# time series database:  

  • Anomaly detection  
  • Data preparation  
  • Model training  
  • Feature engineering  
  • Model evaluation  
  • Continuous learning  
  • Visualization and interpretation  
  • Model deployment  
  • Real-time prediction  

Nest.js is a progressive Node.js framework for building efficient, reliable, and scalable server-side applications. It uses TypeScript, a superset of JavaScript with static typing and other advanced features.


It enhances developer productivity and application robustness. 


Key features and characteristics of Nest.js include: 

  1. Modularity 
  2. Dependency Injection 
  3. Express.js Compatibility 
  4. Built-in Support for TypeScript 
  5. Support for GraphQL and REST APIs 
  6. Middleware and Interceptors 
  7. WebSocket Support 
  8. Testing Support 


Nest.js offers a powerful and expressive framework for building server-side applications on Node.js, leveraging TypeScript's features, Express.js' flexibility, and a modular architecture to simplify development and maintainability. It is well-suited for a wide range of applications, from web servers to microservices. 

typeorm: 

  • TypeORM is an Object-Relational Mapping (ORM) library for TypeScript and JavaScript. 
  • TypeORM allows developers to define database models as TypeScript or JavaScript classes. 
  • TypeORM provides lifecycle hooks and events for entities. 


sequelize: 

  • Sequelize is an Object-Relational Mapping (ORM) library for Node.js. 
  • It provides a powerful set of features for interacting with databases using JavaScript objects. 
  • Sequelize allows developers to define database models using JavaScript classes or JSON objects, as shown in the sketch below. 
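
A minimal sketch of defining and using a Sequelize model against SQLite; the User model, its fields, and the ./app.db file are assumptions:

const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize({ dialect: 'sqlite', storage: './app.db' });

const User = sequelize.define('User', {
  name: { type: DataTypes.STRING, allowNull: false },
  email: { type: DataTypes.STRING, unique: true },
});

async function main() {
  await sequelize.sync(); // create tables from the model definitions
  await User.create({ name: 'Ada', email: 'ada@example.com' });
  console.log(await User.findAll({ where: { name: 'Ada' } }));
  await sequelize.close();
}

main().catch(console.error);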


prisma: 

  • Prisma is a modern database toolkit and ORM for Node.js, TypeScript, and other languages. 
  • Prisma generates type-safe client code based on the database schema defined using Prisma. 
  • Prisma generates a query builder API based on the database schema. 


typegoose: 

  • Typegoose is a library for Mongoose, an Object Data Modeling (ODM) library for MongoDB and Node.js. 
  • Typegoose allows developers to define Mongoose models using TypeScript classes and decorators. 
  • Typegoose provides a declarative syntax for defining MongoDB schemas using TypeScript decorators. 


nestjs-typegoose: 

  • nestjs-typegoose is a Nest.js module that integrates Typegoose with Nest.js applications. 
  • nestjs-typegoose integrates Typegoose models with Mongoose. 
  • Nest.js promotes a modular codebase, and nestjs-typegoose follows the same principles. 


waterline: 

  • Waterline is an object-relational mapping library for Node.js that has high-level abstraction. 
  • Waterline abstracts away the differences between different database systems. 
  • Waterline provides a fluent query builder interface for constructing database queries. 


rxdb: 

  • RxDB is a real-time, offline-first database for JavaScript applications. 
  • RxDB enables real-time synchronization of data between clients and servers. 
  • RxDB allows developers to define database schemas using JSON-like objects or TypeScript classes. 


nestjs-prisma: 

  • nestjs-prisma leverages Prisma's type-safe client to interact with the database. 
  • nestjs-prisma integrates Prisma client instances with Nest.js applications using the built-in dependency injection system. 
  • Prisma supports schema migration and versioning out of the box. 


nestjs-typeorm-paginate: 

  • nestjs-typeorm-paginate is a module for Nest.js applications that use TypeORM as ORM. 
  • nestjs-typeorm-paginate provides utilities for paginating database query results in Nest.js apps. 
  • nestjs-typeorm-paginate offers customizable configuration options to control the pagination behavior and appearance. 


nestjs-objection: 

  • nestjs-objection seamlessly integrates Objection.js with NestJS applications. 
  • nestjs-objection provides decorators for defining database models using TypeScript classes. 
  • nestjs-objection leverages NestJS's built-in dependency injection system to manage the lifecycle of database connections. 


nest-couchdb: 

  • nest-couchdb allows developers to connect their NestJS applications with CouchDB. 
  • nest-couchdb provides mechanisms for handling errors that may occur during database operations. 
  • NestJS middleware can be used in conjunction with nest-couchdb to implement cross-cutting concerns. 


FAQ

1. What is Nest.js? 

Nest.js is a progressive Node.js framework for building efficient, reliable, and scalable server-side applications. It leverages modern JavaScript features and architectural patterns, such as Dependency Injection (DI), decorators, and modules, to streamline the development and maintainability of Nest.js applications. 


2. Which databases does Nest.js support? 

Nest.js supports various databases, including relational databases like MySQL, PostgreSQL, SQLite, and MariaDB, as well as NoSQL databases like MongoDB. It offers integrations with popular ORMs and ODMs (Object-Document Mappers), such as TypeORM, Sequelize, Mongoose, and Prisma, for interacting with databases. 


3. How can I perform efficient data management in Nest.js applications? 

To achieve efficient data management in Nest.js applications, consider the following: 

  • Use database indexes to optimize query performance. 
  • Implement data caching mechanisms to reduce database load and improve response times. 
  • Optimize database schema design for data integrity and query efficiency. 
  • Use asynchronous programming techniques, such as async/await and Promises, to handle database operations without blocking (see the sketch after this list). 
  • Implement pagination, filtering, and sorting mechanisms to retrieve and display data in manageable chunks. 
  • Monitor database performance and query execution using profiling tools and performance metrics. 
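
As a small illustration of the async/await and pagination points above, here is a hedged sketch using a Mongoose model; the User model and the sort field are assumptions:

// Runs the page query and the total count concurrently instead of awaiting them one by one.
async function listUsers(User, page = 1, pageSize = 20) {
  const [items, total] = await Promise.all([
    User.find().sort({ createdAt: -1 }).skip((page - 1) * pageSize).limit(pageSize),
    User.countDocuments(),
  ]);
  return { items, total, page, pageSize };
}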


4. How can I handle database transactions in Nest.js applications? 

Nest.js provides built-in support for handling database transactions through the underlying database libraries. You can use the transaction management features provided by ORMs and ODMs such as TypeORM, Sequelize, Mongoose, and Prisma to group many database operations into atomic units of work and keep data consistent. 
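
As a hedged sketch of the idea, here is a managed transaction with Sequelize; the sequelize instance and the Account model are assumptions (defined elsewhere, for example as in the earlier Sequelize sketch):

async function transferFunds(sequelize, Account, fromId, toId, amount) {
  // Sequelize commits the transaction if the callback resolves and rolls it back on any error.
  await sequelize.transaction(async (t) => {
    await Account.decrement('balance', { by: amount, where: { id: fromId }, transaction: t });
    await Account.increment('balance', { by: amount, where: { id: toId }, transaction: t });
  });
}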


5. What are some recommended Nest.js database libraries for efficient data management? 

Some recommended Nest.js database libraries for efficient data management include: 

  • TypeORM: A powerful ORM library for TypeScript and JavaScript applications. 
  • Sequelize: An ORM library for Node.js applications, compatible with various SQL databases. 
  • Mongoose: An ODM library for MongoDB, providing schema-based data modeling and validation. 
  • Prisma: A modern database toolkit for TypeScript and Node.js applications. It offers type-safe database access and schema migrations. 

 

Real-time streaming data processing with NuPIC involves leveraging the capabilities of the NuPIC framework.  

It is used to analyze and make predictions on data streams without significant delay. NuPIC employs principles inspired by the human brain's neocortex, particularly Hierarchical Temporal Memory (HTM), to perform tasks such as anomaly detection and prediction in real time. 

Here's a general description of the process: 

  • Data Ingestion: Real-time streaming data from various sources are ingested into the system. 
  • Preprocessing: This might involve handling missing values, scaling features, or extracting relevant information. 
  • Model Training: NuPIC requires trained models to perform its tasks. 
  • Real-time Processing: These models are deployed to process incoming data streams in real-time. 
  • Anomaly Detection: One of the key capabilities of NuPIC is anomaly detection. 
  • Prediction: It predicts future values based on the patterns from data. 
  • Feedback and Adaptation: Models need to be updated over time to maintain their effectiveness. 
  • Visualization and Monitoring: Dashboards and monitoring ensure that any anomalies are identified and addressed. 

tensorflow:  

  • It is an open-source machine learning framework. 
  • It can be used alongside NuPIC for tasks such as preprocessing, or post-processing of data. 
  • Its serving capabilities can be leveraged to deploy NuPIC models in production environments. 

pytorch: 

  • PyTorch is a powerful deep-learning framework. 
  • It provides a rich set of tools for data preprocessing and transformation. 
  • It offers various options for deploying models, including PyTorch Serve, TorchScript, and ONNX. 

elasticsearch: 

  • It is a powerful distributed search and analytics engine. 
  • It is used for real-time data processing, storage, and retrieval. 
  • Elasticsearch provides robust mechanisms for data ingestion from various sources. 

grafana: 

  • Grafana is an effective open-source analytics and visualization platform. 
  • It is used for monitoring and analyzing time-series data. 
  • It allows us to create customizable and interactive dashboards for visualizing real-time data. 

scikit-learn: 

  • scikit-learn is a popular machine-learning library in Python. 
  • It provides a wide range of preprocessing techniques. 
  • It provides a comprehensive suite of tools for model evaluation and validation. 

prometheus: 

  • Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. 
  • Prometheus can be used to monitor various metrics related to NuPIC's performance. 
  • It can be integrated with long-term storage solutions for storing historical data. 

influxdb: 

  • It is a time-series database designed for handling high volumes of time-stamped data. 
  • It offers powerful querying capabilities that enable real-time analysis of time-series data. 
  • InfluxDB is designed to be scalable and performant. 

kafka: 

  • Apache Kafka is a distributed streaming platform. 
  • It is used for building real-time streaming data pipelines. 
  • Kafka enables real-time stream processing using frameworks like Kafka Streams and Apache Flink. 

flink: 

  • Apache Flink is a powerful stream-processing framework. 
  • It is designed for high-throughput, low-latency, and fault-tolerant processing of streaming data. 
  • Flink integrates with diverse data sources and sinks. 

beam: 

  • It is a unified programming model and set of libraries. 
  • It is used for building both batch and stream processing pipelines. 
  • It integrates with various data sources and sinks, including Kafka, Pub/Sub, and BigQuery. 

NAB: 

  • The Numenta Anomaly Benchmark (NAB) is a benchmarking framework. 
  • It is designed to test the performance of anomaly detection algorithms on data. 
  • NAB provides a framework for parameter tuning and optimization of anomaly detection algorithms. 

FAQ 

1. What is NuPIC and how does it relate to real-time streaming data processing? 

NuPIC (Numenta Platform for Intelligent Computing) is an open-source framework developed by Numenta. It is used for building systems that mimic the neocortex's structure and function. It specializes in temporal pattern recognition and anomaly detection, which makes it suitable for processing streaming data in real time. 


2. What types of data can be processed in real-time with NuPIC? 

NuPIC can process various types of streaming data, including time-series data from sensors, log files, IoT devices, financial transactions, and more. It is particularly effective for detecting anomalies and patterns in sequential data. 


3. How does NuPIC handle real-time streaming data processing? 

NuPIC employs an HTM algorithm inspired by neuroscience principles. It is used to learn and recognize temporal patterns in streaming data. It updates its models based on incoming data and adapts to changing patterns over time. This makes it appropriate for real-time processing. 


4. What are some common use cases for real-time streaming data processing with NuPIC? 

Common use cases for NuPIC in real-time streaming data processing include: 

  • Anomaly detection in cybersecurity 
  • Predictive maintenance in IoT systems 
  • Fraud detection in financial transactions 
  • Health monitoring in medical devices. 


5. How do I get started with real-time streaming data processing using NuPIC? 

You can explore the official documentation, tutorials, and examples available on the Numenta website. Additionally, you can join the NuPIC community forums and mailing lists to connect with other users and developers for support and guidance.

Here are the best Python database access libraries for your web application. You can use these libraries in Python to directly interact with and access a variety of databases and perform a variety of operations, such as create, read, update, and delete records.


You can develop a straightforward application using these Python libraries to interact with SQLite, MySQL, and PostgreSQL databases. With some basic knowledge of Python and SQL, as well as the know-how to work with database management systems, you can develop applications across different databases using a Python script in three simple steps:

  • Connect with a variety of database management systems using Python libraries.
  • Interact with various databases such as SQLite, MySQL, and PostgreSQL.
  • Execute some common database queries using these Python libraries.


We have handpicked top and trending Python libraries based on popularity, licensing and unique features to build database access functions in your applications:

SQLAlchemy:

  • Used for efficient and high-performing database access.
  • A comprehensive SQL toolkit and Object Relational Mapper (ORM).
  • Provides a high-level API to interact with databases.

Django ORM:

  • Used to connect with the database backend of your choice with ORM functionality.
  • It’s the built-in ORM for the Django web framework.
  • Provides an easy-to-use API for performing database operations.
  • Can be used with regular python scripts.

PyMySQL:

  • Used for fast, secure, and reliable interaction with the MySQL databases.
  • A pure Python MySQL client library for MySQL database access.
  • It implements the Python Database API v2.0.

psycopg2:

  • Used in Database, SQL Database, PostgreSQL applications.
  • A PostgreSQL database adapter for Python.
  • Provides a fast and reliable way to interact with PostgreSQL databases.

peewee:

  • Used for basic operations like storing data and retrieving data.
  • A minimalistic ORM that supports SQLite, MySQL, and PostgreSQL databases.
  • Peewee provides a magical helper fn(), used to call any SQL function.

Aiomysql:

  • Used in Database, SQL Database, MariaDB applications.
  • It depends on and reuses most parts of PyMySQL.
  • Preserves the same API, look, and feel as the awesome aiopg library.

Queries:

  • Used for interacting with PostgreSQL by reducing the complexity of psycopg2 library.
  • Makes writing PostgreSQL client applications both fast and easy.
  • It’s a BSD licensed opinionated wrapper of the Python psycopg2 library.

Predictive modeling and forecasting with NuPIC involves leveraging the principles of neuroscience to create intelligent systems capable of learning and making predictions from streaming data. It is based on Hierarchical Temporal Memory (HTM), a model inspired by the neocortex. 

Here's a general description of how predictive modeling and forecasting with NuPIC works: 

  • Understanding Hierarchical Temporal Memory (HTM) 
  • Encoding and Processing Temporal Data 
  • Training the Model 
  • Predictive Modeling 
  • Anomaly Detection 
  • Integration with Other Libraries 
  • Real-World Applications 

tensorflow: 

  • It is a machine learning framework developed by Google. 
  • It is optimized for efficient computation, especially when running on GPUs or TPUs. 
  • It can be integrated with NuPIC to enhance its capabilities. 

pytorch: 

  • It is another popular open-source machine-learning library. 
  • It is known for its dynamic computational graph and ease of use. 
  • It can build and train neural networks for predictive modeling tasks. 

scikit-learn:  

  • It is a widely used machine learning library in Python. 
  • It provides simple and efficient tools for data mining and data analysis. 
  • You can use it for various predictive modeling tasks. 

prophet: 

  • Prophet is a forecasting library developed by Facebook. 
  • It is designed to make it easier to produce high-quality forecasts with time series data. 
  • It is particularly useful for series with strong seasonal patterns. 

dask: 

  • It is a parallel computing library that enables scalable computation in Python. 
  • It integrates with other libraries used in the Python data science ecosystem. 
  • It has a vibrant community of users who develop and maintain the library. 

statsmodels: 

  • It provides classes and functions for the estimation of many different statistical models. 
  • It is used for conducting statistical tests, and statistical data exploration. 
  • It includes various time series analysis and forecasting tools. 

catboost: 

  • It is a popular gradient-boosting library that excels in handling structured data. 
  • It is used for predictive modeling tasks, especially in Kaggle competitions. 
  • It offers high performance and scalability. 

tensortrade:  

  • Tensortrade is a reinforcement learning library for algorithmic trading. 
  • It provides tools and utilities for handling financial time series data. 
  • It includes tools for backtesting and simulating trading strategies in historical market conditions. 

gluonts:  

  • Gluon Time Series is a library for time series modeling and forecasting in Python. 
  • It provides tools for loading and manipulating time series datasets. 
  • It is used in building and training deep-learning models for forecasting. 

neural_prophet: 

  • It is built on top of Facebook's Prophet. 
  • It extends capabilities by adding support for neural networks for time series forecasting. 
  • It allows for more complex and flexible modeling. 

ARIMA: 

  • ARIMA is a classical time series forecasting technique. 
  • Its models provide interpretable parameters such as AR coefficients, the order of differencing (I), and MA coefficients. 
  • It is well-suited for stationary time series data with linear trends and autocorrelation. 

FAQ

1. What is NuPIC, and how does it relate to predictive modeling and forecasting? 

NuPIC, or the Numenta Platform for Intelligent Computing, is a machine learning library inspired by the structure and function of the neocortex. It specializes in temporal pattern recognition, making it suitable for predictive modeling and forecasting tasks involving time series data. 


2. What types of data are suitable for predictive modeling and forecasting with NuPIC? 

NuPIC is particularly well-suited for time series data such as stock prices, weather observations, and sensor data, as well as other sequential data with temporal dependencies. 


3. How does NuPIC differ from traditional machine learning algorithms in predictive modeling? 

Unlike traditional ML algorithms that rely on static models and labeled training data, NuPIC uses HTM to learn temporal patterns and make predictions from streaming data in an unsupervised manner. 


4. What are some common applications of predictive modeling and forecasting with NuPIC? 

Common applications include: 

  • Anomaly detection 
  • Predictive maintenance 
  • Financial forecasting 
  • Energy consumption prediction 

There are other tasks involving the analysis and prediction of time series data. 


5. Can NuPIC be combined with other machine learning libraries for enhanced predictive modeling? 

Yes, NuPIC can be integrated with other libraries such as TensorFlow, scikit-learn, and statsmodels to complement its capabilities and to leverage additional algorithms for feature engineering, model evaluation, and ensemble learning. 

We can use Java cloud database libraries to store data in the cloud and access it from anywhere. It is very easy to use and will help you save time and money. You can easily configure your database in the cloud and get started with your application faster. The use of Java Cloud Database libraries like tx-lcn, galaxysql, cloudgraph is increasing day by day. These libraries can be used to store and retrieve data from the cloud. They provide us with the ability to write applications that connect to remote databases on the Internet. There are several Java Cloud Database Libraries available which are used to develop applications that require high scalability and performance. These libraries are designed specifically for cloud computing environments, which helps in reducing the development time and makes it easier for developers to integrate their applications with cloud services such as AWS or Azure. The tx-lcn Cloud Databases are built on top of the Great Big Graph Database, a fully managed graph database that supports real time data integration. Galaxy is a fully transactional SQL engine that provides high performance, reliable, and scalable multi-tenant cloud computing services. Developers tend to use some of the following open source Java Cloud Database libraries.

The JavaScript Cloud Database libraries can be used with any database that is supported by the USENET protocol (TCP/IP). It supports SQL Server, MySQL and PostgreSQL databases. These libraries are a great way to store data in the cloud. You can use these libraries to store your data and retrieve it later when you need it. ToolJet is a JavaScript Cloud Database library that provides a fully managed service for working with databases. The service includes tools for creating, migrating, and managing databases from the cloud. It also supports using SQL as well as other programming languages such as JavaScript and Python. sqlpad is a JavaScript library that allows you to quickly build scalable, real-time web applications. It's free and open source. node-gcm is a NodeJS library that allows you to communicate with Google Cloud Datastore from nodejs applications without any external dependencies or server side code. There are several popular open source JavaScript Cloud Database libraries available for developers.

Ruby Cloud database libraries are a dime a dozen, and you can find many different tools for developing applications against them. The easiest way to get started with Ruby is to use the official Ruby cloud database libraries, which are available on a number of clouds. Using these cloud database libraries, you can easily create a database and store your data in it. Fog is an open source cloud database library built on top of the PostgreSQL extension for the Ruby programming language. It allows you to write your applications in any language that supports ActiveRecord or SQL. BOSH is a fully-managed cloud database service for Ruby developers. BOSH provides a consistent API across multiple clouds, including Azure, Heroku, Amazon Web Services, Google Cloud Platform and more. Google Cloud Platform Ruby SDKs let you develop applications with Google's services such as App Engine and Compute Engine. Many developers depend on the following open source Ruby Cloud database libraries

Python Cloud Database libraries like jina, gnes, hsds are very useful for building cloud database applications. They allow you to access your data from anywhere in the world, and even provide a variety of security options. They provide support for a wide range of databases such as MySQL, PostgreSQL, MongoDB and more. The jina library (Jini) is a Python interface to Apache HTTP Server's JINI protocol. It allows you to publish information about objects as web pages, or retrieve them as web pages. The hsds library (Hadoop Streaming Data Sources) provides an interface to MapReduce jobs on Hadoop. The hsds library also provides a simple interface that makes it easy to use. It is an open source tool which uses the WebStorage API to store files in the cloud. It can be used for storing images, videos, audio files and other file types in the cloud. The gnes library can be used to store files and folders on the cloud. It provides a simple interface that makes it easy to use. Popular open source Python Cloud Database libraries include

Cloud Database libraries are an essential part of the C# development. They help in making the database operations faster and easier. C# Cloud Database Libraries like squadron, dackup, DarkSoulsCloudSave are available to make your database development process easier. You can access them from within your application using a simple code or you can use them through .NET Framework classes. Squadron is a C# cloud database library that allows you to store data locally and synchronize it with the cloud. It uses SQLite as the backend which means you don’t need to worry about any server side details or setup. Dackup is another open source .NET Core (and .NET Framework) library for working with databases in a cross-platform way. Dackup can be used for managing local SQLite databases as well as accessing remote SQLite databases over http/ftp using either TCP/UDP or localhost connections. Dark Souls Cloud Save is an online game save manager that allows players to keep their savegames on a website and access them from anywhere at any time, even while playing offline! It uses Dark Souls Database Library to store and retrieve data from your games. Popular open source Cloud Database libraries among developers include

Trending Discussions on Database

Javascript dynamically inserted later on: how to make it run?

Unknown host CPU architecture: arm64 , Android NDK SiliconM1 Apple MacBook Pro

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

AngularFireModule and AngularFireDatabaseModule not being found in @angular/fire

ASP.NET Core 6 how to access Configuration during startup

How to fix: "@angular/fire"' has no exported member 'AngularFireModule'.ts(2305) ionic, firebase, angular

pymongo [SSL: CERTIFICATE_VERIFY_FAILED]: certificate has expired on Mongo Atlas

java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): ,

How to solve FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore problem?

How do I get details of a veracode vulnerability report?

QUESTION

Javascript dynamically inserted later on: how to make it run?

Asked 2022-Apr-17 at 14:12

I have scripts In my React app that are inserted dynamically later on. The scripts don't load.

In my database there is a field called content, which contains data that includes html and javascript. There are many records and each record can include multiple scripts in the content field. So it's not really an option to statically specify each of the script-urls in my React app. The field for a record could for example look like:

<p>Some text and html</p>
<div id="xxx_hype_container">
    <script type="text/javascript" charset="utf-8" src="https://example.com/uploads/hype_generated_script.js?499892"></script>
</div>
<div style="display: none;" aria-hidden="true"> 
<div>Some text.</div> 
Etc…

I call on this field in my React app using dangerouslySetInnerHTML:

render() {
    return (
        <div data-page="clarifies">
            <div className="container">
                <div dangerouslySetInnerHTML={{ __html: post.content }} />
                ... some other data
            </div>
        </div>
    );
}
18

It correctly loads the data from the database and displays the html from that data. However, the Javascript does not get executed. I think the script doesn't work because it is dynamically inserted later on. How can I make these scripts work/run?

This post suggest a solution for dynamically inserted scripts, but I don't think I can apply this solution because in my case the script/code is inserted from a database (so how to then use nodeScriptReplace on the code...?). Any suggestions how I might make my scripts work?


Update in response to @lissettdm their answer:

constructor(props) {
    this.ref = React.createRef();
}

componentDidUpdate(prevProps, prevState) {
    if (prevProps.postData !== this.props.postData) {
        this.setState({
            loading: false,
            post: this.props.postData.data,
            //etc
        });
        setTimeout(() => parseElements());

        console.log(this.props.postData.data.content);
        // returns html string like: `<div id="hype_container" style="margin: auto; etc.`
        const node = document.createRange().createContextualFragment(this.props.postData.data.content);
        console.log(JSON.stringify(this.ref));
        // returns {"current":null}
        console.log(node);
        // returns [object DocumentFragment]
        this.ref.current.appendChild(node);
        // produces error "Cannot read properties of null"
    }
}

render() {
    const { history } = this.props;
    /etc.
    return (
        {loading ? (
            some code
        ) : (
            <div data-page="clarifies">
                <div className="container">
                    <div ref={this.ref}></div>
                    ... some other data
                </div>
            </div>
        );
    );
}
59

The this.ref.current.appendChild(node); line produces the error:

TypeError: Cannot read properties of null (reading 'appendChild')

ANSWER

Answered 2022-Apr-14 at 19:05

Rendering raw HTML without using React's recommended method is not a good practice. React recommends the dangerouslySetInnerHTML prop for rendering raw HTML.

Source https://stackoverflow.com/questions/71876427

QUESTION

Unknown host CPU architecture: arm64 , Android NDK SiliconM1 Apple MacBook Pro

Asked 2022-Apr-04 at 18:41

I've got a project that is working fine in windows os but when I switched my laptop and opened an existing project in MacBook Pro M1. I'm unable to run an existing android project in MacBook pro M1. first I was getting

Execution failed for task ':app:kaptDevDebugKotlin'. > A failure occurred while executing org.jetbrains.kotlin.gradle.internal.KaptExecution > java.lang.reflect.InvocationTargetException (no error message)

this error was due to the Room database I applied a fix that was adding below library before Room database and also changed my JDK location from file structure from JRE to JDK.

kapt "org.xerial:sqlite-jdbc:3.34.0"

   //Room components
    kapt "org.xerial:sqlite-jdbc:3.34.0"
    implementation "androidx.room:room-ktx:$rootProject.roomVersion"
    kapt "androidx.room:room-compiler:$rootProject.roomVersion"
    androidTestImplementation "androidx.room:room-testing:$rootProject.roomVersion"

after that now I'm getting an issue which is Unknown host CPU architecture: arm64

there is an SDK in my project that is using this below line.

android {
    externalNativeBuild {
        ndkBuild {
           path 'Android.mk'
        }
    }
    ndkVersion '21.4.7075529'
}

App Gradle

 externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.18.1"
            //version "3.10.2"
        }
    }

[CXX1405] error when building with ndkBuild using /Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk: Build command failed. Error while executing process /Users/mac/Library/Android/sdk/ndk/21.4.7075529/ndk-build with arguments {NDK_PROJECT_PATH=null APP_BUILD_SCRIPT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk APP_ABI=arm64-v8a NDK_ALL_ABIS=arm64-v8a NDK_DEBUG=1 APP_PLATFORM=android-21 NDK_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/obj NDK_LIBS_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/lib APP_SHORT_COMMANDS=false LOCAL_SHORT_COMMANDS=false -B -n} ERROR: Unknown host CPU architecture: arm64

which is causing this issue and whenever I comment on this line

path 'Android.mk'

it starts working fine, is there any way around which will help me run this project with this piece of code without getting this NDK issue?

Update - It seems that Room got fixed in the latest updates, Therefore you may consider updating Room to latest version (2.3.0-alpha01 / 2.4.0-alpha03 or above)

GitHub Issue Tracker

ANSWER

Answered 2022-Apr-04 at 18:41

To solve this on an Apple Silicon M1 I found three options

A

Use NDK 24

android {
    ndkVersion "24.0.8215888"
    ...
}

You can install it with

echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install 'ndk;24.0.8215888'

or

echo "y" | sudo ${ANDROID_HOME}/sdk/cmdline-tools/latest/bin/sdkmanager --install 'ndk;24.0.8215888'

Depending on where sdkmanager is located.

B

Change your ndk-build to use Rosetta x86. Search for your installed ndk with

find ~ -name ndk-build 2>/dev/null

e.g.

vi ~/Library/Android/sdk/ndk/22.1.7171670/ndk-build

and change

DIR="$(cd "$(dirname "$0")" && pwd)"
$DIR/build/ndk-build "$@"

to

DIR="$(cd "$(dirname "$0")" && pwd)"
arch -x86_64 /bin/bash $DIR/build/ndk-build "$@"


C

Convert your ndk-build setup into a CMake build.
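
No code was given for this option; below is a minimal, hedged sketch of such a conversion, assuming a single shared library target (the module name and source file are placeholders, not taken from the original project):

# src/main/cpp/CMakeLists.txt – hypothetical replacement for Android.mk
cmake_minimum_required(VERSION 3.18.1)
project(jnidictionaryv2)

# Placeholder target and sources – list the same files Android.mk compiled
add_library(jnidictionaryv2 SHARED
        native-lib.cpp)

# Link against the NDK's logging library
find_library(log-lib log)
target_link_libraries(jnidictionaryv2 ${log-lib})

// module build.gradle – point the external native build at CMake instead of ndkBuild
android {
    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.18.1"
        }
    }
}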

Source https://stackoverflow.com/questions/69541831

QUESTION

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory

Asked 2022-Apr-04 at 15:46

Not really sure what caused this, but most likely it was exiting the terminal while my Rails server, which was connected to the PostgreSQL database, was still running (not a good practice, I know, but lesson learned!).

I've already tried the following:

  1. Rebooting my machine (using MBA M1 2020)
  2. Restarting PostgreSQL using Homebrew: brew services restart postgresql
  3. Re-installing PostgreSQL using Homebrew
  4. Updating PostgreSQL using Homebrew
  5. I also tried following this link, but when I run cd Library/Application\ Support/Postgres the terminal tells me the Postgres folder doesn't exist, so I'm kind of lost. I do have a feeling that deleting postmaster.pid would fix my issue (see the sketch after this list). Any help would be appreciated!
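
For completeness, here is a hedged sketch of that postmaster.pid check for a Homebrew install of PostgreSQL 14 on an M1 Mac; the data directory path is an assumption, so verify it first (for example with brew info postgresql@14 or brew services list):

# Assumed Homebrew data directory on Apple Silicon – adjust to your install
PGDATA=/opt/homebrew/var/postgresql@14

# Only remove the pid file if no postgres process is actually running
pgrep -l postgres || rm -v "$PGDATA/postmaster.pid"

# Then restart the service
brew services restart postgresql@14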

ANSWER

Answered 2022-Jan-13 at 15:19
Resetting PostgreSQL

My original answer only included the troubleshooting steps below, and a workaround. I now decided to properly fix it via brute force by removing all clusters and reinstalling, since I didn't have any data there to keep. It was something along these lines, on my Ubuntu 21.04 system:

sudo pg_dropcluster --stop 12 main
sudo pg_dropcluster --stop 14 main
sudo apt remove postgresql-14
sudo apt purge postgresql*
sudo apt install postgresql-14

Now I have:

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

And sudo -u postgres psql works fine. The service was started automatically but it can be done manually with sudo systemctl start postgresql.

Incidentally, I can recommend the PostgreSQL docker image, which eliminates the need to bother with a local installation.
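
For example, a throwaway local instance can be started along these lines (image tag and password are illustrative):

# Start a disposable PostgreSQL 14 container listening on the default port
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  postgres:14

# Connect to it with the regular client
psql -h localhost -U postgres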

Troubleshooting

Although I cannot provide an answer to your specific problem, I thought I'd share my troubleshooting steps, hoping that it might be of some help. It seems that you are on Mac, whereas I am running Ubuntu 21.04, so expect things to be different.

This is a client connection problem, as noted by section 19.3.2 in the docs.

The directory in my error message is different:

$ sudo su postgres -c "psql"
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
        Is the server running locally and accepting connections on that socket?

I checked what unix sockets I had in that directory:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

Makes sense: there is a socket for 5433, not 5432. I confirmed this by running:

$ pg_lsclusters
Ver Cluster Port Status                Owner    Data directory              Log file
12  main    5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14  main    5433 online                postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

This explains how it got into this mess on my system. The default port is 5432, but after I upgraded from version 12 to 14, the server was set up to listen on 5433, presumably because it considered 5432 already taken. Two alternatives here: get the server to listen on 5432, which is the client's default, or get the client to use 5433.

Let's try it by changing the client's parameters:

$ sudo su postgres -c "psql --port=5433"
psql (14.0 (Ubuntu 14.0-1.pgdg21.04+1))
Type "help" for help.

postgres=#

It worked! Now, to make it permanent I'm supposed to put this setting in a psqlrc or ~/.psqlrc file. The thin documentation on this (under "Files") was not helpful to me, as I was not sure of the syntax and my attempts did not change the client's default, so I moved on.
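
(An alternative I did not try in the steps above: libpq-based clients such as psql also honour the PGPORT environment variable, so the default port can be set per shell instead of per file; a sketch below.)

# Make 5433 the default port for psql and other libpq clients in this shell
export PGPORT=5433
psql            # now equivalent to psql --port=5433

# Note: plain sudo strips the environment, so pass the variable through explicitly
sudo --preserve-env=PGPORT -u postgres psql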

To change the server I looked for the postgresql.conf mentioned in the documentation but could not find the file. I did, however, see /var/lib/postgresql/14/main/postgresql.auto.conf, so I created postgresql.conf in the same directory with the content:

port = 5432

Restarted the server: sudo systemctl restart postgresql

But the error persisted because, as the logs confirmed, the port did not change:

$ tail /var/log/postgresql/postgresql-14-main.log
...
2021-10-29 16:36:12.195 UTC [25236] LOG:  listening on IPv4 address "127.0.0.1", port 5433
2021-10-29 16:36:12.198 UTC [25236] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5433"
2021-10-29 16:36:12.204 UTC [25237] LOG:  database system was shut down at 2021-10-29 16:36:12 UTC
2021-10-29 16:36:12.210 UTC [25236] LOG:  database system is ready to accept connections

After other attempts did not succeed, I eventually decided to use a workaround: to redirect the client's requests on 5432 to 5433:

ln -s /var/run/postgresql/.s.PGSQL.5433 /var/run/postgresql/.s.PGSQL.5432

This is what I have now:

$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x  4 postgres postgres  160 Oct 29 16:40 .
drwxr-xr-x 36 root     root     1.1K Oct 29 14:08 ..
drwxr-s---  2 postgres postgres   40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s---  2 postgres postgres  120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r--  1 postgres postgres    6 Oct 29 16:36 14-main.pid
lrwxrwxrwx  1 postgres postgres   33 Oct 29 16:40 .s.PGSQL.5432 -> /var/run/postgresql/.s.PGSQL.5433
srwxrwxrwx  1 postgres postgres    0 Oct 29 16:36 .s.PGSQL.5433
-rw-------  1 postgres postgres   70 Oct 29 16:36 .s.PGSQL.5433.lock

This means I can now just run psql without having to explicitly set the port to 5433. Now, this is a hack and I would not recommend it, but on my development system I am happy with it for now, because I don't have more time to spend on this. This is why I shared the steps and the links, so that you can find a proper solution for your case.

Source https://stackoverflow.com/questions/69754628

QUESTION

AngularFireModule and AngularFireDatabaseModule not being found in @angular/fire

Asked 2022-Apr-01 at 12:56

I am trying to implement Firebase Realtime Database in an Angular project and I'm getting stuck at one of the very first steps: importing AngularFireModule and AngularFireDatabaseModule. It gives me the following errors:

Module '"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305)
Module '"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.

And here is how I am importing them:

import {AngularFireModule } from '@angular/fire';
import {AngularFireDatabaseModule} from '@angular/fire/database'

Am I missing something here? I have installed @angular/fire via the command

npm i firebase @angular/fire

and have also installed firebase tools. Here is a list of the Angular packages I currently have installed and their versions:

Angular CLI: 12.2.2
Node: 14.17.4
Package Manager: npm 6.14.14
OS: win32 x64

Angular: 12.2.3
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router

Package                         Version
---------------------------------------------------------
@angular-devkit/architect       0.1202.2
@angular-devkit/build-angular   12.2.2
@angular-devkit/core            12.2.2
@angular-devkit/schematics      12.2.2
@angular/cli                    12.2.2
@angular/fire                   7.0.0
@schematics/angular             12.2.2
rxjs                            6.6.7
typescript                      4.3.5

I do apologise if this is excessive information, but I am completely stuck as to what the issue is. Any help would be GREATLY appreciated. Right now my suspicion is that it's a compatibility issue, or perhaps a feature that doesn't exist anymore in the latest versions, but I really don't know.

ANSWER

Answered 2021-Aug-26 at 13:20

AngularFire 7.0.0 was launched yesterday with a new API that has a lot of bundle size reduction benefits.

Instead of top level classes like AngularFireDatabase, you can now import smaller independent functions.

import { list } from '@angular/fire/database';

The initialization process is a bit different too as it has a more flexible API for specifying configurations.

@NgModule({
    imports: [
        provideFirebaseApp(() => initializeApp(config)),
        provideFirestore(() => {
            const firestore = getFirestore();
            connectEmulator(firestore, 'localhost', 8080);
            enableIndexedDbPersistence(firestore);
            return firestore;
        }),
        provideStorage(() => getStorage()),
    ],
})

If you want to proceed with the older API there's a compatibility layer.

import { AngularFireModule} from '@angular/fire/compat'
import { AngularFireDatabaseModule } from '@angular/fire/compat/database';

Source https://stackoverflow.com/questions/68939014

QUESTION

ASP.NET Core 6 how to access Configuration during startup

Asked 2022-Mar-08 at 11:45

In earlier versions, we had the Startup.cs class, and we got the configuration object as follows in the Startup file.

public class Startup
{
    private readonly IHostEnvironment environment;
    private readonly IConfiguration config;

    public Startup(IConfiguration configuration, IHostEnvironment environment)
    {
        this.config = configuration;
        this.environment = environment;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add Services
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Add Middlewares
    }
}

Now in .NET 6 (with Visual Studio 2022), we don't see the Startup.cs class; it looks like its days are numbered. So how do we get objects such as Configuration (IConfiguration) and Hosting Environment (IHostEnvironment)?

How do we get these objects to, say, read the configuration from appsettings? Currently the Program.cs file looks like this:

using Festify.Database;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorPages();

builder.Services.AddDbContext<FestifyContext>();


////////////////////////////////////////////////
// The following gives me an error because the Configuration
// object is not available; I don't know how to inject it here.
////////////////////////////////////////////////


builder.Services.AddDbContext<FestifyContext>(opt =>
        opt.UseSqlServer(
            Configuration.GetConnectionString("Festify")));


var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();

app.UseRouting();

app.UseAuthorization();

app.MapRazorPages();

app.Run();

I want to know how to read the configuration from appsettings.json?

ANSWER

Answered 2021-Oct-26 at 12:26

WebApplicationBuilder returned by WebApplication.CreateBuilder(args) exposes Configuration and Environment properties:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
...
ConfigurationManager configuration = builder.Configuration;
IWebHostEnvironment environment = builder.Environment;

WebApplication returned by WebApplicationBuilder.Build() also exposes Configuration and Environment:

var app = builder.Build();
IConfiguration configuration = app.Configuration;
IWebHostEnvironment environment = app.Environment;

Also check the migration guide and code samples.
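
Applied to the Program.cs from the question, the failing registration would then look roughly like this (a sketch using the builder.Configuration property described above):

using Festify.Database;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();

// Read the connection string off the builder's Configuration property
builder.Services.AddDbContext<FestifyContext>(opt =>
    opt.UseSqlServer(builder.Configuration.GetConnectionString("Festify")));

var app = builder.Build();
app.MapRazorPages();
app.Run();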

Source https://stackoverflow.com/questions/69722872

QUESTION

How to fix: "@angular/fire"' has no exported member 'AngularFireModule'.ts(2305) ionic, firebase, angular

Asked 2022-Feb-11 at 07:31

I'm trying to connect my app to a Firebase DB, but I receive 4 error messages in app.module.ts:

'"@angular/fire"' has no exported member 'AngularFireModule'.ts(2305),
'"@angular/fire/storage"' has no exported member 'AngularFireStorageModule'.ts(2305)
'"@angular/fire/database"' has no exported member 'AngularFireDatabaseModule'.ts(2305)
'"@angular/fire/auth"' has no exported member 'AngularFireAuthModule'.ts(2305)

Here is my package.json file:

{
  "name": "gescable",
  "version": "0.0.1",
  "author": "Ionic Framework",
  "homepage": "https://ionicframework.com/",
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  },
  "private": true,
  "dependencies": {
    "@angular-devkit/architect": "^0.1202.5",
    "@angular-devkit/architect-cli": "^0.1202.5",
    "@angular/common": "~12.1.1",
    "@angular/core": "~12.1.1",
    "@angular/fire": "^7.0.4",
    "@angular/forms": "~12.1.1",
    "@angular/platform-browser": "~12.1.1",
    "@angular/platform-browser-dynamic": "~12.1.1",
    "@angular/router": "~12.1.1",
    "@ionic/angular": "^5.5.2",
    "ajv": "^8.6.2",
    "angularfire2": "^5.4.2",
    "firebase": "^7.24.0",
    "rxfire": "^6.0.0",
    "rxjs": "~6.6.0",
    "tslib": "^2.2.0",
    "zone.js": "~0.11.4"
  },
  "devDependencies": {
    "@angular-devkit/build-angular": "~12.1.1",
    "@angular-eslint/builder": "~12.0.0",
    "@angular-eslint/eslint-plugin": "~12.0.0",
    "@angular-eslint/eslint-plugin-template": "~12.0.0",
    "@angular-eslint/template-parser": "~12.0.0",
    "@angular/cli": "~12.1.1",
    "@angular/compiler": "~12.1.1",
    "@angular/compiler-cli": "~12.1.1",
    "@angular/language-service": "~12.0.1",
    "@ionic/angular-toolkit": "^4.0.0",
    "@types/jasmine": "~3.6.0",
    "@types/jasminewd2": "~2.0.3",
    "@types/node": "^12.11.1",
    "@typescript-eslint/eslint-plugin": "4.16.1",
    "@typescript-eslint/parser": "4.16.1",
    "eslint": "^7.6.0",
    "eslint-plugin-import": "2.22.1",
    "eslint-plugin-jsdoc": "30.7.6",
    "eslint-plugin-prefer-arrow": "1.2.2",
    "jasmine-core": "~3.8.0",
    "jasmine-spec-reporter": "~5.0.0",
    "karma": "~6.3.2",
    "karma-chrome-launcher": "~3.1.0",
    "karma-coverage": "~2.0.3",
    "karma-coverage-istanbul-reporter": "~3.0.2",
    "karma-jasmine": "~4.0.0",
    "karma-jasmine-html-reporter": "^1.5.0",
    "protractor": "~7.0.0",
    "ts-node": "~8.3.0",
    "typescript": "~4.2.4",
    "@angular-devkit/architect": "^0.1200.0",
    "firebase-tools": "^9.0.0",
    "fuzzy": "^0.1.3",
    "inquirer": "^6.2.2",
    "inquirer-autocomplete-prompt": "^1.0.1",
    "open": "^7.0.3",
    "jsonc-parser": "^3.0.0"
  },
  "description": "An Ionic project"
}

And here is my app.module.ts:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouteReuseStrategy } from '@angular/router';
import { IonicModule, IonicRouteStrategy } from '@ionic/angular';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { ClientPageModule } from './client/client.module';
import { environment } from '../environments/environment';
import { AngularFireModule } from '@angular/fire';
import { AngularFireAuthModule } from '@angular/fire/auth';
import { AngularFireStorageModule } from '@angular/fire/storage';
import { AngularFireDatabaseModule } from '@angular/fire/database';

@NgModule({
  declarations: [AppComponent],
  entryComponents: [],
  imports: [
    BrowserModule,
    IonicModule.forRoot(),
    AppRoutingModule,
    ClientPageModule,
    AngularFireModule.initializeApp(environment.firebaseConfig),
    AngularFireAuthModule,
    AngularFireStorageModule,
    AngularFireDatabaseModule
  ],
  providers: [{ provide: RouteReuseStrategy, useClass: IonicRouteStrategy }],
  bootstrap: [AppComponent],
})
export class AppModule {}

Here is my tsconfig.json file:

{
  "compileOnSave": false,
  "compilerOptions": {
    "baseUrl": "./",
    "outDir": "./dist/out-tsc",
    "sourceMap": true,
    "declaration": false,
    "downlevelIteration": true,
    "experimentalDecorators": true,
    "moduleResolution": "node",
    "importHelpers": true,
    "target": "es2015",
    "module": "es2020",
    "lib": ["es2018", "dom"]
  },
  "angularCompilerOptions": {
    "enableI18nLegacyMessageIdFormat": false,
    "strictInjectionParameters": true,
    "strictInputAccessModifiers": true,
    "strictTemplates": true,
    "skipLibCheck": true
  }
}

ANSWER

Answered 2021-Sep-10 at 12:47

You need to add "compat" to the import paths, like this:

import { AngularFireModule } from "@angular/fire/compat";
import { AngularFireAuthModule } from "@angular/fire/compat/auth";
import { AngularFireStorageModule } from '@angular/fire/compat/storage';
import { AngularFirestoreModule } from '@angular/fire/compat/firestore';
import { AngularFireDatabaseModule } from '@angular/fire/compat/database';

Source https://stackoverflow.com/questions/69128608

QUESTION

pymongo [SSL: CERTIFICATE_VERIFY_FAILED]: certificate has expired on Mongo Atlas

Asked 2022-Jan-29 at 22:03

I am using MongoDB (Mongo Atlas) in my Django app. All was working fine till yesterday, but today, when I ran the server, it showed me the following error on the console:

1Exception in thread django-main-thread:
2Traceback (most recent call last):
3  File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 973, in _bootstrap_inner
4    self.run()
5  File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 910, in run
6    self._target(*self._args, **self._kwargs)
7  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
8    fn(*args, **kwargs)
9  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run
10    self.check_migrations()
11  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\base.py", line 486, in check_migrations
12    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
13  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\executor.py", line 18, in __init__
14    self.loader = MigrationLoader(self.connection)
15  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 53, in __init__
16    self.build_graph()
17  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 220, in build_graph
18    self.applied_migrations = recorder.applied_migrations()
19  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 77, in applied_migrations
20    if self.has_table():
21  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 56, in has_table
22    tables = self.connection.introspection.table_names(cursor)
23  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 52, in table_names
24    return get_names(cursor)
25  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 47, in get_names
26    return sorted(ti.name for ti in self.get_table_list(cursor)
27  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\djongo\introspection.py", line 47, in get_table_list
28    for c in cursor.db_conn.list_collection_names()
29  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 880, in list_collection_names
30    for result in self.list_collections(session=session, **kwargs)]
31  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 842, in list_collections
32    return self.__client._retryable_read(
33  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1514, in _retryable_read
34    server = self._select_server(
35  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1346, in _select_server
36    server = topology.select_server(server_selector)
37  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 244, in select_server
38    return random.choice(self.select_servers(selector,
39  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 202, in select_servers
40    server_descriptions = self._select_servers_loop(
41  File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 218, in _select_servers_loop
42    raise ServerSelectionTimeoutError(
43pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 6155f0c9148b07ff5851a1b3, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-01.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-02.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>]>
44

I am using djongo as the database engine.

DATABASES = {
    'default': {
            'ENGINE': 'djongo',
            'NAME': 'DbName',
            'ENFORCE_SCHEMA': False,
            'CLIENT': {
                'host': 'mongodb+srv://username:password@cluster0.mny7y.mongodb.net/DbName?retryWrites=true&w=majority'
            }
    }
}

And the following dependencies are being used in the app:

dj-database-url==0.5.0
Django==3.2.5
djangorestframework==3.12.4
django-cors-headers==3.7.0
gunicorn==20.1.0
psycopg2==2.9.1
pytz==2021.1
whitenoise==5.3.0
djongo==1.3.6
dnspython==2.1.0

What should be done in order to resolve this error?

ANSWER

Answered 2021-Oct-03 at 05:57

This is because a root CA that Let's Encrypt uses (and MongoDB Atlas uses Let's Encrypt) expired on 2021-09-30, namely the "IdenTrust DST Root CA X3" certificate.

The fix is to manually install the "ISRG Root X1" and "ISRG Root X2" root certificates and the "Let's Encrypt R3" intermediate certificate into the Windows certificate store. They are available on Let's Encrypt's official site: https://letsencrypt.org/certificates/

Copied from the comments: download the .der file from the first category, double-click it, and follow the wizard to install it.
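
If updating the Windows certificate store is not an option (for example on a locked-down machine or a PaaS host), another workaround that is often suggested is to hand PyMongo an up-to-date CA bundle instead, such as the one shipped with the certifi package, which already contains the ISRG Root X1 certificate. This is not part of the original answer; the snippet below is only a minimal sketch, assuming certifi is installed (pip install certifi) and a reasonably recent PyMongo (the tlsCAFile option was added in PyMongo 3.9; older versions use ssl_ca_certs). The connection string is the same placeholder URI as in the settings above.

# settings.py (sketch): point djongo's underlying PyMongo client at certifi's CA bundle
import certifi  # ships a CA bundle that already includes ISRG Root X1

DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'DbName',
        'ENFORCE_SCHEMA': False,
        'CLIENT': {
            'host': 'mongodb+srv://username:password@cluster0.mny7y.mongodb.net/DbName?retryWrites=true&w=majority',
            # djongo forwards CLIENT options to pymongo.MongoClient;
            # certifi.where() returns the path of certifi's bundled CA file
            'tlsCAFile': certifi.where(),
        }
    }
}

# Quick connectivity check outside Django (also a sketch):
# from pymongo import MongoClient
# MongoClient('mongodb+srv://username:password@cluster0.mny7y.mongodb.net/DbName',
#             tlsCAFile=certifi.where()).admin.command('ping')

Installing the ISRG Root X1 certificate at the OS level as described above achieves the same result; the certifi approach simply keeps the fix inside the project's environment.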

Source https://stackoverflow.com/questions/69397039

QUESTION

java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): ,

Asked 2022-Jan-18 at 08:15

I'm having a problem publishing my app to the Play Store after October 2021; the error says that the table media_store_extension doesn't exist. The thing is, I don't use SQLite in the project, so I have no idea what may be causing this exception.

The target SDK is 30, and the minimum is 26.

The full error:

1FATAL EXCEPTION: latency_sensitive_executor-thread-1
2Process: com.google.android.apps.photos, PID: 29478
3java.lang.RuntimeException: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): , while compiling: SELECT id FROM media_store_extension ORDER BY id DESC LIMIT 100 OFFSET 0
4    at nqo.a(PG:3)
5    at aleu.run(PG:6)
6    at krv.a(PG:17)
7    at krw.run(Unknown Source:6)
8    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
9    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
10    at java.lang.Thread.run(Thread.java:764)
11    at ksa.run(PG:5)
12Caused by: android.database.sqlite.SQLiteException: no such table: media_store_extension (code 1): , while compiling: SELECT id FROM media_store_extension ORDER BY id DESC LIMIT 100 OFFSET 0
13    at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method)
14    at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:890)
15    at android.database.sqlite.SQLiteConnection.prepare(SQLiteConnection.java:501)
16    at android.database.sqlite.SQLiteSession.prepare(SQLiteSession.java:588)
17    at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:58)
18    at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:37)
19    at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:46)
20    at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1392)
21    at android.database.sqlite.SQLiteDatabase.queryWithFactory(SQLiteDatabase.java:1239)
22    at android.database.sqlite.SQLiteDatabase.query(SQLiteDatabase.java:1110)
23    at agcm.a(PG:8)
24    at nnw.run(PG:17)
25    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:457)
26    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
27    ... 4 more
28

ANSWER

Answered 2021-Nov-18 at 11:41

This error is reported not only by Flutter developers, but also by Unity developers (https://forum.unity.com/threads/getting-an-odd-error-in-internal-android-build-after-updating-iap.1104352/ and https://forum.unity.com/threads/error-when-submitting-app-to-google-play.1098139/) and, in my case, for a native Android app.

We first got this error six months ago and applied the fix suggested by the Unity developers:

aaptOptions {
    noCompress 'db'
    ...
}

However, yesterday we received the same error again, so the "fix" did not work for us.

The error occurs:

  1. (so far) only during internal testing
  2. only on Xiaomi Redmi 6A.
  3. from time to time (it is not reproduced each time)
  4. always in process com.google.android.apps.photos

The most reasonable explanation that I have seen so far is that the exception occurs when the testing bot attempts to take a screenshot.

This explains why the process is Google Photos', why the error is not reproduced each time and why it is "fixed" by just resubmitting a new build.

This also means that just ignoring the error should be OK.

Source https://stackoverflow.com/questions/69919198

QUESTION

How to solve FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore problem?

Asked 2022-Jan-11 at 15:08

I am trying to set up Firebase with next.js. I am getting this error in the console.

FirebaseError: Expected first argument to collection() to be a CollectionReference, a DocumentReference or FirebaseFirestore

This is one of my custom hooks:

1import { onAuthStateChanged, User } from '@firebase/auth'
2import { doc, onSnapshot, Unsubscribe } from 'firebase/firestore'
3import { useEffect, useState } from 'react'
4import { auth, fireStore } from './firebase'
5
6export const useUserData = () => {
7  const [username, setUsername] = useState<string | null>(null)
8
9  const [currentUser, setCurrentUser] = useState<User | null>(null)
10
11  useEffect(() => {
12    let unsubscribe: void | Unsubscribe
13
14    onAuthStateChanged(auth, (user) => {
15      if (user) {
16        setCurrentUser(user)
17        // The Problem is inside this try blog
18        try {
19          // the onsnapshot function is causing the problem
20          console.log('firestore: ', fireStore)
21          unsubscribe = onSnapshot(doc(fireStore, 'users', user.uid), (doc) => {
22            setUsername(doc.data()?.username)
23          })
24        } catch (e) {
25          console.log(e.message)
26        }
27      } else {
28        setCurrentUser(null)
29        setUsername(null)
30      }
31    })
32
33    return unsubscribe
34  }, [currentUser])
35
36  return { currentUser, username }
37}
38

I also have this firebase.ts file where I initialize my Firebase app:

import { FirebaseApp, getApps, initializeApp } from 'firebase/app'
import { getAuth } from 'firebase/auth'
import { getFirestore } from 'firebase/firestore/lite'
import { getStorage } from 'firebase/storage'

const firebaseConfig = {
  apiKey: 'some-api',
  authDomain: 'some-auth-domain',
  projectId: 'some-project-id',
  storageBucket: 'some-storage-bucket',
  messagingSenderId: 'some-id',
  appId: 'some-app-id',
  measurementId: 'some-measurement-id',
}

let firebaseApp: FirebaseApp

if (!getApps.length) {
  firebaseApp = initializeApp(firebaseConfig)
}

const fireStore = getFirestore(firebaseApp)
const auth = getAuth(firebaseApp)
const storage = getStorage(firebaseApp)

export { fireStore, auth, storage }

I don't know whether the problem is in the project initialization. I am pretty sure the error is generated from my custom hook file. I also found out that there must be something wrong with the onSnapshot function. Am I passing the docRef wrong or something? What am I doing wrong here?

The console.log(firestore) log:

    type: "firestore-lite"
    _app: FirebaseAppImpl
    _automaticDataCollectionEnabled: false
    _config: {name: "[DEFAULT]", automaticDataCollectionEnabled: false}
    _container: ComponentContainer {name: "[DEFAULT]", providers: Map(15)}
    _isDeleted: false
    _name: "[DEFAULT]"
    _options:
    apiKey: 'some-api'
    authDomain: 'some-auth-domain'
    projectId: 'some-project-id'
    storageBucket: 'some-storage-bucket'
    messagingSenderId: 'some-id'
    appId: 'some-app-id'
    measurementId: 'some-measurement-id'
    [[Prototype]]: Object
    automaticDataCollectionEnabled: (...)
    config: (...)
    container: (...)
    isDeleted: (...)
    name: (...)
    options: (...)
    [[Prototype]]: Object
    _credentials: Q {auth: AuthInterop}
    _databaseId: H {projectId: "next-firebase-fireship", database: "(default)"}
    _persistenceKey: "(lite)"
    _settings: ee {host: "firestore.googleapis.com", ssl: true, credentials: undefined, ignoreUndefinedProperties: false, cacheSizeBytes: 41943040, …}
    _settingsFrozen: false
    app: (...)
    _initialized: (...)
    _terminated: (...)

ANSWER

Answered 2022-Jan-07 at 19:07

Using getFirestore from the lite library will not work with onSnapshot. You are importing getFirestore from the lite version:

import { getFirestore } from 'firebase/firestore/lite'

Change the import to:

import { getFirestore } from 'firebase/firestore'

From the documentation,

The onSnapshot method and DocumentChange, SnapshotListenerOptions, SnapshotMetadata, SnapshotOptions and Unsubscribe objects are not included in lite version.


Another reason for this error to show up could be passing an invalid first argument to the collection() or doc() functions. They both take a Firestore instance as their first argument.

// Ensure that "db" is defined and initialized
const db = getFirestore();
// console.log(db);

const colRef = collection(db, "collection_name");

Source https://stackoverflow.com/questions/69047904

QUESTION

How do I get details of a veracode vulnerability report?

Asked 2022-Jan-07 at 21:46

How do I get details of a veracode vulnerability report?

I'm a maintainer of a popular JS library, Ramda, and we've recently received a report that the library is subject to a prototype pollution vulnerability. This has been tracked back to a veracode report that says:

ramda is vulnerable to prototype pollution. An attacker can inject properties into existing construct prototypes via the _curry2 function and modify attributes such as __proto__, constructor, and prototype.

I understand what they're talking about for Prototype Pollution. A good explanation is at snyk's writeup for lodash.merge. Ramda's design is different, and the obvious analogous Ramda code is not subject to this sort of vulnerability. That does not mean that no part of Ramda is subject to it. But the report contains no details, no code snippet, and no means to challenge their findings.

The details of their description are clearly wrong. _curry2 could not possibly be subject to this problem. But as that function is used as a wrapper to many other functions, it's possible that there is a real vulnerability hidden by the reporter's misunderstanding.

Is there a way to get details of this error report? A snippet of code that demonstrates the problem? Anything? I have filled out their contact form. An answer may still be coming, as it was only 24 hours ago, but I'm not holding my breath -- it seems to be mostly a sales form. All the searching I've done leads to information about how to use their security tool and pretty much nothing about how their custom reports are created. And I can't find this in CVE databases.

ANSWER

Answered 2022-Jan-07 at 21:46

Ok, so to answer my own question, here's how to get the details on a Veracode vulnerability report in less than four weeks and in only fifty-five easy steps.


Pre-work Day 1
  • Receive a comment on the issue that says that the user has received

    a VULN ticket to fix this Prototype Pollution vulnerability found in ramda.

  • Carry on a discussion regarding this comment to learn that there is a report that claims that

    ramda is vulnerable to prototype pollution. An attacker can inject properties into existing construct prototypes via the _curry2 function and modify attributes such as __proto__, constructor, and prototype.

    and eventually learn that this is due to a report from the software security company Veracode.

Days 2 & 3
  • Examine that report to find that it has no details, no explanation of how to trigger the vulnerability, and no suggested fix.

  • Examine the report and other parts of the Veracode site to find there is no public mechanism to challenge such a report.

Day 4
  • Report back to the library's issue that the report must be wrong, as the function mentioned could not possibly generate the behavior described.

  • Post an actual example of the vulnerability under discussion and a parallel snippet from the library to demonstrate that it doesn't share the problem.

  • Find Veracode's online support form, and submit a request for help. Keep your expectations low, as this is probably for the sales department.

  • Post a StackOverflow Question2 asking how to find details of a Veracode vulnerability report, using enough details that if the community has the knowledge, it should be easy to answer.

Days 5 & 6
  • Try to enjoy your Friday and Saturday. Don't obsessively check your email to see if Veracode has responded. Don't visit the StackOverflow question every hour to see if anyone has posted a solution. Really, don't do these things; they don't help.
Day 7
  • Add a 250-reputation point bounty to the StackOverflow question, trying to get additional attention from the smart people who must have dealt with this before.
Day 8
  • Find direct email support addresses on the Veracode site, and send an email asking for details of the supposed vulnerability, a snippet that demonstrates the issue, and procedures to challenge their findings.
Day 9
  • Receive a response from a Veracode Support email address that says, in part,

    Are you saying our vuln db is not correct per your github source? If so, I can send it to our research team to ensure it looks good and if not, to update it.

    As for snips of code, we do not provide that.

  • Reply, explaining that you find the report missing the details necessary to challenge it, but that yes, you expect it is incorrect.

  • Receive a response that this has been "shot up the chain" and that you will be hearing from them soon.

Days 10 - 11
  • Again, don't obsessively check your email or the StackOverflow question. But if you do happen to glance at StackOverflow, notice that while there are still no answers to it, there are enough upvotes to cover over half the cost of the bounty. Clearly you're not alone in wanting to know how to do this.
Day 12
  • Receive an email from Veracode:

    Thank you for your interest in Application Security and Veracode.

    Do you have time next week to connect?

    Also, to make sure you are aligned with the right rep, where is your company headquartered?

  • Respond that you're not a potential customer and explain again what you're looking for.

  • Add a comment to the StackOverflow question to explain where the process has gotten to and express your frustration.

Days 13 - 14
  • Watch another weekend go by without any way to address this concern.

  • Get involved in a somewhat interesting discussion about prototype pollution in the comments to the StackOverflow post.

Day 15
  • Receive an actually helpful email from Veracode, sent by someone new, whose signature says he's a sales manager. The email will look like this:

    Hi Scott, I asked my team to help out with your question, here was their response:

    We have based this artifact from the information available in https://github.com/ramda/ramda/pull/3192. In the Pull Request, there is a POC (https://jsfiddle.net/3pomzw5g/2/) clearly demonstrating the prototype pollution vulnerability in the mapObjIndexed function. In the demo, the user object is modified via the __proto__​ property and is
    considered a violation to the Integrity of the CIA triad. This has been reflected in our CVSS scoring for this vulnerability in our vuln db.

    There is also an unmerged fix for the vulnerability which has also been
    included in our artifact (https://github.com/ramda/ramda/pull/3192/commits/774f767a10f37d1f844168cb7e6412ea6660112d )

    Please let me know if there is a dispute against the POC, and we can look further into this.

  • Try to avoid banging your head against the wall for too long when you realize that the issue you thought might have been raised by someone who'd seen the Veracode report was instead the source of that report.

  • Respond to this helpful person that yes you will have a dispute for this, and ask if you can be put directly in touch with the relevant Veracode people so there doesn't have to be a middleman.

  • Receive an email from this helpful person -- who needs a name, let's call him "Kevin" -- adding the research team to the email chain. (I told you he was helpful!)

  • Respond to Kevin and the team with a brief note that you will spend some time to write up a response and get back to them soon.

  • Look again at the Veracode Report and note that the description has been changed to

    ramda is vulnerable to prototype pollution. An attacker is able to inject and modify attributes of an object through the mapObjIndexed function via the proto property.

    but note also that it still contains no details, no snippets, no dispute process.

  • Receive a bounced-email notification because that research team's email is for internal Veracode use only.

  • Laugh because the only other option is to cry.

  • Tell Kevin what happened and make sure he's willing to remain as an intermediary. Again he's helpful and will agree right away.

  • Spend several hours writing up a detailed response, explaining what prototype pollution is and how the examples do not display this behavior. Post it ahead of time on the issue. (Remember the issue? This is a story about the issue.3) Ask those reading for suggestions before you send the email... mostly as a way to ensure you're not sending this in anger.

  • Go ahead and email it right away anyway; if you said something too angry you probably don't want to be talked out of it now, anyhow.

  • Note that the nonrefundable StackOverflow bounty has expired without a single answer being offered.

Days 16 - 21
  • Twiddle your thumbs for a week, but meanwhile...

  • Receive a marketing email from Veracode, who has never sent you one before.

  • Note that Veracode has again updated the description to say

    ramda allows object prototype manipulation. An attacker is able to inject and modify attributes of an object through the mapObjIndexed function via the proto property. However, due to ramda's design where object immutability is the default, the impact of this vulnerability is limited to the scope of the object instead of the underlying object prototype. Nonetheless, the possibility of object prototype manipulation as demonstrated in the proof-of-concept under References can potentially cause unexpected behaviors in the application. There are currently no known exploits.

    If that's not clear, a translation would be, "Hey, we reported this, and we don't want to back down, so we're going to say that even though the behavior we noted didn't actually happen, the behavior that's there is still, umm, err, somehow wrong."

  • Note that a fan of the library whose employer has a Veracode account has been able to glean more information from their reports. It turns out that their details are restricted to logged-in users, leaving it entirely unclear how they think such vulnerabilities should be fixed.

Day 22
  • Send a follow-up email to Kevin4 saying

    I'm wondering if there is any response to this.

    I see that the vulnerability report has been updated but not removed.
    I still dispute the altered version of it. If this behavior is a true vulnerability, could you point me to the equivalent report on JavaScript's Object.assign, which, as demonstrated earlier, has the exact same issue as the function in question.

    My immediate goal is to see this report retracted. But I also want to point out the pain involved in this process, pain that I think Veracode could fix:

    I am not a customer, but your customers are coming to me as Ramda's maintainer to fix a problem you've reported. That report really should have enough information in it to allow me to confirm the vulnerability reported. I've learned that such information is available to a logged-in customer. That doesn't help me or others in my position to find the information. Resorting to email and filtering it through your sales department is a pretty horrible process. Could you alter your public reports to contain or point to a proof of concept of the vulnerability?
    And could you further offer in the report some hint at a dispute process?

Day 23
  • Receive an email from the still-helpful Kevin, which says

    Thanks for the follow up [ ... ], I will continue to manage the communication with my team, at this time they are looking into the matter and it has been raised up to the highest levels.

    Please reach back out to me if you don’t have a response within 72 hrs.

    Thank you for your patience as we investigate the issue, this is a new process for me as well.

  • Laugh out loud at the notion that he thinks you're being patient.

  • Respond, apologizing to Kevin that he's caught in the middle, and read his good-natured reply.

Day 25
  • Hear back from Kevin that your main objective has been met:

    Hi Scott, I wanted to provide an update, my engineering team got back
    to me with the following:

    “updating our DB to remove the report is the final outcome”

    I have also asked for them to let me know about your question regarding the ability to contend findings and will relay that back once feedback is received.

    Otherwise, I hope this satisfies your request and please let me know if any further action is needed from us at this time.

  • Respond gratefully to Kevin and note that you would still like to hear about how they're changing their processes.

  • Reply to your own email to apologize to Kevin for all the misspellings that happen when you try to type anything more than a short text on your mobile device.

  • Check with that helpful Ramda user with Veracode log-in abilities whether the site seems to be updated properly.

  • Reach out to that same user on Twitter when he hasn't responded in five minutes. It's not that you're anxious and want to put this behind you. Really it's not. You're not that kind of person.

  • Read that user's detailed response explaining that all is well.

  • Receive a follow-up from the Veracode Support email address telling you that

    After much consideration we have decided to update our db to remove this report.

    and that they're closing the issue.

  • Laugh about the fact that they are sending this after what seems likely to be the close of business for the week (7:00 PM your time on a Friday).

  • Respond politely to say that you're grateful for the result, but that you would still like to see their dispute process modernized.

Day 27
  • Write a 2257-word answer5 to your own Stack Overflow question explaining in great detail the process you went through to resolve this issue.

And that's all it takes. So the next time you run into this, you can solve it too!




Update

(because you knew it couldn't be that easy!)

Day 61
  • Receive an email from a new Veracode account executive which says

    Thanks for your interest! Introducing myself as your point of contact at Veracode.

    I'd welcome the chance to answer any questions you may have around Veracode's services and approach to the space.

    Do you have a few minutes free to touch base? Please let me know a convenient time for you and I'll follow up accordingly.

  • Politely respond to that email suggesting a talk with Kevin and including a link to this list of steps.



1 This is standard behavior with Ramda issues, but it might be the main reason Veracode chose to report this.

2 Be careful not to get into an infinite loop. This recursion does not have a base case.

3 Hey, this was taking place around Thanksgiving. There had to be an Alice's Restaurant reference!

4 If you haven't yet found a Kevin, now would be a good time to insist that Veracode supply you with one.

5 Including footnotes.

Source https://stackoverflow.com/questions/69936667

Community Discussions contain sources that include Stack Exchange Network
