postgres | Mirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on GitHub. | Database library
Trending Discussions on postgres
QUESTION
Not really sure what caused this, but it was most likely exiting the terminal while my Rails server, which was connected to a PostgreSQL database, was still running (not a good practice, I know, but lesson learned!).
I've already tried the following:
- Rebooting my machine (using MBA M1 2020)
- Restarting PostgreSQL using homebrew
brew services restart postgresql
- Re-installing PostgreSQL using Homebrew
- Updating PostgreSQL using Homebrew
- I also tried following this link but when I run
cd Library/Application\ Support/Postgres
the terminal tells me the Postgres folder doesn't exist, so I'm kind of lost already. Although I have a feeling that deleting postmaster.pid would really fix my issue. Any help would be appreciated!
ANSWER
Answered 2022-Jan-13 at 15:19

My original answer only included the troubleshooting steps below, and a workaround. I later decided to properly fix it via brute force, by removing all clusters and reinstalling, since I didn't have any data there to keep. It was something along these lines, on my Ubuntu 21.04 system:
sudo pg_dropcluster --stop 12 main
sudo pg_dropcluster --stop 14 main
sudo apt remove postgresql-14
sudo apt purge postgresql*
sudo apt install postgresql-14
Now I have:
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
14 main 5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
And sudo -u postgres psql works fine. The service was started automatically, but it can be started manually with sudo systemctl start postgresql.
Incidentally, I can recommend the PostgreSQL docker image, which eliminates the need to bother with a local installation.
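(Not from the original answer, just an illustration I am adding: the stock image can be started along these lines; the container name, password and port mapping are placeholders.)

# run the official postgres image; name and password are placeholders
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres
# then connect from the host with: psql -h localhost -p 5432 -U postgres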
Troubleshooting

Although I cannot provide an answer to your specific problem, I thought I'd share my troubleshooting steps, hoping that they might be of some help. It seems that you are on Mac, whereas I am running Ubuntu 21.04, so expect things to be different.
This is a client connection problem, as noted by section 19.3.2 in the docs.
The directory in my error message is different:
$ sudo su postgres -c "psql"
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
I checked what unix sockets I had in that directory:
$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x 4 postgres postgres 160 Oct 29 16:40 .
drwxr-xr-x 36 root root 1.1K Oct 29 14:08 ..
drwxr-s--- 2 postgres postgres 40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s--- 2 postgres postgres 120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r-- 1 postgres postgres 6 Oct 29 16:36 14-main.pid
srwxrwxrwx 1 postgres postgres 0 Oct 29 16:36 .s.PGSQL.5433
-rw------- 1 postgres postgres 70 Oct 29 16:36 .s.PGSQL.5433.lock
Makes sense: there is a socket for 5433, not 5432. I confirmed this by running:
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
12 main 5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14 main 5433 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
This explains how it got into this mess on my system. The default port is 5432, but after I upgraded from version 12 to 14, the server was set up to listen on 5433, presumably because it considered 5432 to be already taken. There are two alternatives here: get the server to listen on 5432, which is the client's default, or get the client to use 5433.
Let's try it by changing the client's parameters:
$ sudo su postgres -c "psql --port=5433"
psql (14.0 (Ubuntu 14.0-1.pgdg21.04+1))
Type "help" for help.
postgres=#
It worked! Now, to make it permanent, I'm supposed to put this setting in a psqlrc or ~/.psqlrc file. The thin documentation on this (under "Files") was not helpful to me, as I was not sure of the syntax, and my attempts did not change the client's default, so I moved on.
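(An aside I am adding, not part of the original answer: libpq-based clients such as psql also read the PGPORT environment variable, which is another way to change the client's default port.)

# PGPORT is honored by libpq clients such as psql
export PGPORT=5433
psql   # connects to port 5433 without an explicit --port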
To change the server, I looked for the postgresql.conf mentioned in the documentation, but could not find the file. I did, however, see /var/lib/postgresql/14/main/postgresql.auto.conf, so I created postgresql.conf in the same directory with the content:
port = 5432
Restarted the server: sudo systemctl restart postgresql
But the error persisted because, as the logs confirmed, the port did not change:
$ tail /var/log/postgresql/postgresql-14-main.log
...
2021-10-29 16:36:12.195 UTC [25236] LOG: listening on IPv4 address "127.0.0.1", port 5433
2021-10-29 16:36:12.198 UTC [25236] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5433"
2021-10-29 16:36:12.204 UTC [25237] LOG: database system was shut down at 2021-10-29 16:36:12 UTC
2021-10-29 16:36:12.210 UTC [25236] LOG: database system is ready to accept connections
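(Another aside of mine, not from the original answer: on Debian/Ubuntu packaging, the effective postgresql.conf normally lives under /etc/postgresql/14/main/ rather than in the data directory, which would explain why a hand-created file there is ignored. postgresql.auto.conf, in turn, is maintained by ALTER SYSTEM, so a sketch of the supported way to persist the port change would be:)

-- run in psql as a superuser; this writes the setting to postgresql.auto.conf
ALTER SYSTEM SET port = 5432;
-- then restart the server, e.g. sudo systemctl restart postgresql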
After other attempts did not succeed, I eventually decided to use a workaround: redirecting the client's requests on 5432 to 5433:
ln -s /var/run/postgresql/.s.PGSQL.5433 /var/run/postgresql/.s.PGSQL.5432
This is what I have now:
$ ls -lah /var/run/postgresql/
total 8.0K
drwxrwsr-x 4 postgres postgres 160 Oct 29 16:40 .
drwxr-xr-x 36 root root 1.1K Oct 29 14:08 ..
drwxr-s--- 2 postgres postgres 40 Oct 29 14:33 12-main.pg_stat_tmp
drwxr-s--- 2 postgres postgres 120 Oct 29 16:59 14-main.pg_stat_tmp
-rw-r--r-- 1 postgres postgres 6 Oct 29 16:36 14-main.pid
lrwxrwxrwx 1 postgres postgres 33 Oct 29 16:40 .s.PGSQL.5432 -> /var/run/postgresql/.s.PGSQL.5433
srwxrwxrwx 1 postgres postgres 0 Oct 29 16:36 .s.PGSQL.5433
-rw------- 1 postgres postgres 70 Oct 29 16:36 .s.PGSQL.5433.lock
This means I can now just run psql without having to explicitly set the port to 5433. Now, this is a hack and I would not recommend it. But on my development system I am happy with it for now, because I don't have more time to spend on this. This is why I shared the steps and the links, so that you can find a proper solution for your case.
QUESTION
My app.py file
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres:////tmp/test.db'
db = SQLAlchemy(app) # refer https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy
One of my model classes, where I imported db
from app import db
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# User class
class User(db.Model, Base):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username

    def get_user_by_id(self, id):
        return self.query.get(id)
My database has the same set of tables in different schemas (multi-tenancy), and I need to select the schema on the fly, per the request initiated by a particular tenant, using before_request (grabbing tenant_id from the subdomain of the URL).

I found that selecting the schema name on the fly is supported via schema_translate_map (ref. https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names), which is set under execution_options (https://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.Connection.execution_options).
In my code snippet above, where you see db = SQLAlchemy(app): as per the official documentation, two parameters can be set when creating the SQLAlchemy object, session_options and engine_options, but no execution_options (ref. https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy).

But how do I set schema_translate_map when creating the SQLAlchemy object?

I tried this:
db = SQLAlchemy(app,
                session_options={
                    "autocommit": True,
                    "autoflush": False,
                    "schema_translate_map": {
                        None: "public"
                    }
                })
But obviously it did not work, because schema_translate_map belongs under execution_options, as mentioned here: https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names

Does anyone have an idea how to set schema_translate_map at the time of creating the SQLAlchemy object?

My goal is to set it dynamically for each request. I want to control it from this centralized place, rather than going into each model file and specifying it when I execute queries.

I am aware of doing this differently, as suggested here: https://stackoverflow.com/a/56490246/1560470, but my need is to set it somewhere around db = SQLAlchemy(app) in the app.py file only. Then I import db in all my model classes (as shown above), and in those model classes all queries execute under the selected schema.
ANSWER
Answered 2022-Feb-19 at 23:10

I found a way to accomplish it. This is what is needed:
db = SQLAlchemy(app,
                session_options={
                    "autocommit": True,
                    "autoflush": False
                },
                engine_options={
                    "execution_options": {
                        "schema_translate_map": {
                            None: "public",
                            "abc": "xyz"
                        }
                    }
                })
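The engine-level map above is static. For the per-request goal described in the question, a possible direction (my own sketch, not part of the original answer; the subdomain lookup is a hypothetical stand-in for your tenant resolution) is to apply the map on the session's connection in a before_request hook:

from flask import request

@app.before_request
def select_tenant_schema():
    # hypothetical: derive the schema name from the request's subdomain
    tenant_schema = request.host.split(".")[0]
    # hand the translate map to the connection the session is about to use;
    # this has to run before the session has begun its transaction
    db.session.connection(
        execution_options={"schema_translate_map": {None: tenant_schema}}
    )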
QUESTION
I am currently using this command to generate the schema with Rust Diesel:
diesel --database-url postgres://postgres:kZLxttcZSN@127.0.0.1:5432/rhythm \
migration run --config-file="${CURRENT_DIR}"/diesel-rhythm.toml
and this is the toml config:
[print_schema]
file = "src/model/diesel/rhythm/rhythm_schema.rs"
# This will cause only the favorites, songs and playlist tables to be output
filter = { only_tables = ["favorites", "songs", "playlist"] }
Is it possible to make Diesel auto-generate the model entity? The entity may look like this:
#[derive(Serialize, Queryable, Deserialize, Default)]
pub struct Music {
    pub id: i64,
    pub name: String,
    pub source_id: String
}
Right now I write the entity by hand. What should I do to make it generated by the Diesel CLI? I read the documentation and did not find any useful configuration for this.
ANSWER
Answered 2022-Feb-02 at 18:49

You are looking for diesel_cli_ext.
First install diesel_cli_ext:
cargo install diesel_cli_ext
[Then] you would have to generate your schema file the diesel way if you haven't yet:
diesel print-schema > src/schema.rs
Finally you have to generate the models file:
diesel_ext --model > src/models.rs
The models in your schema file would be generated in src/models.rs, e.g.:
#[derive(Queryable)]
pub struct Music {
    pub id: i64,
    pub name: String,
    pub source_id: String
}
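(An addition of mine, not from the original answer: the generated structs derive only Queryable, so if you want the Serialize, Deserialize and Default derives from the question, one safe option is to add them to the generated code by hand, assuming serde is already a dependency.)

// hedged sketch: hand-extending a generated model with extra derives
use diesel::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Queryable, Default)]
pub struct Music {
    pub id: i64,
    pub name: String,
    pub source_id: String
}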
QUESTION
I am following this tutorial on migrating data from an Oracle database to a Cloud SQL PostgreSQL instance.
I am using the Google-provided streaming template Datastream to PostgreSQL.
At a high level this is what is expected:
- Datastream exports in Avro format backfill and changed data into the specified Cloud Bucket location from the source Oracle database
- This triggers the Dataflow job to pickup the Avro files from this cloud storage location and insert into PostgreSQL instance.
When the Avro files are uploaded into the Cloud Storage location, the job is indeed triggered, but when I check the target PostgreSQL database, the required data has not been populated.

When I check the job logs and worker logs, there are no error logs. When the job is triggered, these are the logs that are written:
StartBundle: 4
Matched 1 files for pattern gs://BUCKETNAME/ora2pg/DEMOAPP_DEMOTABLE/2022/01/11/20/03/7e13ac05aa3921875434e51c0c0c63aaabced31a_oracle-backfill_336860711_1_0.avro
FinishBundle: 5
Does anyone know what the issue is? Is it a configuration issue? If needed, I will post the required configurations.
If not, could someone aid me in how to properly debug this particular Dataflow job? Thanks
EDIT 1:
When checking the step info for the steps in the pipeline, I found the following:
Below are all the steps in the pipeline:
The first step (DatastreamIO) seems to work as expected, with the correct number of element counters (2) in the "Output collection".
However, in the second step, these 2 element counters are not found in the output collection. On further inspection, it can be seen that the elements seem to be dropped in the following step (Format to Postgres DML > Format to Postgres DML > Map):
EDIT 2:
This is a screenshot of the Cloud Worker logs for the above step:
EDIT 3:
I individually built and deployed the template from source in order to debug this issue. I found that the code works up to the following line in DatabaseMigrationUtils.java:

return KV.of(jsonString, dmlInfo);

where the jsonString variable contains the dataset read from the .avro file.
But the code does not progress beyond this and seems to abruptly stop without any errors being thrown.
ANSWER
Answered 2022-Jan-26 at 19:14

This answer is accurate as of 19th January 2022.

Upon manually debugging this dataflow, I found that the issue is that the Dataflow job looks for a schema with the exact same name as the value passed for the databaseName parameter, and there is no other input parameter for the job through which a schema name could be passed. Therefore, for this job to work, the tables have to be created/imported into a schema with the same name as the database.
However, as @Iñigo González said, this dataflow is currently in Beta and seems to have some bugs: I ran into another issue as soon as this one was resolved, which required me to change the source code of the Dataflow template job itself and build a custom docker image for it.
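(An illustration I am adding, not from the original answer: with a databaseName of, say, demo, the workaround amounts to creating a matching schema on the target instance before running the job; "demo" is a placeholder.)

-- run against the target Cloud SQL PostgreSQL database; "demo" is a placeholder
CREATE SCHEMA IF NOT EXISTS demo;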
QUESTION
Why does this simple query result in a "division by zero" Error?
select
case when b > 0 then sum(a / b) end
from (values (1,1),(2,1),(1,0),(2,0)) t (a,b)
group by b
I would expect the output:

case
3
NULL

The only explanation I have is that Postgres calculates the sum before doing the grouping and evaluating the CASE.
ANSWER
Answered 2022-Jan-26 at 14:26

See the documentation:

[…] a CASE cannot prevent evaluation of an aggregate expression contained within it, because aggregate expressions are computed before other expressions in a SELECT list or HAVING clause are considered. For example, the following query can cause a division-by-zero error despite seemingly having protected against it:
SELECT CASE WHEN min(employees) > 0
THEN avg(expenses / employees)
END
FROM departments;
So it is expected that the aggregate expression sum(a/b) is computed before other expressions. This applies not only to PostgreSQL, but to SQL Server as well.
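(An addition of mine, not part of the original answer: the usual way around this is to make the division itself safe instead of guarding it with an outer CASE, e.g. with NULLIF, which turns a zero divisor into NULL so the affected rows contribute nothing to the sum:)

-- NULLIF(b, 0) yields NULL when b = 0, and a / NULL is NULL,
-- so the rows with b = 0 are simply ignored by sum()
select sum(a / nullif(b, 0))
from (values (1,1),(2,1),(1,0),(2,0)) t (a,b)
group by b;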
QUESTION
I have the following code for connecting to a Postgres database:
func connectToPostgres(ctx context.Context, url string) (*pgxpool.Pool, error) {
    var err error
    for i := 0; i < 5; i++ {
        p, err := pgxpool.Connect(ctx, url) // note: := declares a new err here, shadowing the outer one
        if err != nil || p == nil {
            time.Sleep(3 * time.Second)
            continue
        }
        log.Printf("pool returned from connect: %s", p)
        return p, nil
    }
    return nil, errors.Wrap(err, "timed out waiting to connect postgres")
}
The use case is to wait for Postgres to become available when starting my server with docker-compose. Even though the code sleeps if p == nil, the log just before the first return prints out:

pool returned from connect: %!s(*pgxpool.Pool=)

Is there some way that a background process in pgxpool could make p == nil? Any thoughts on why this would happen?
EDIT: This appears to only happen while running my app and Postgres via docker-compose. I'm using the following compose file:
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - "db"
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_DB=demo_db
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "8081:5432"
and the Dockerfile for my app:
FROM golang:1.17
WORKDIR /
COPY go.mod .
COPY go.sum .
COPY *.go .
RUN go mod download
RUN go build
EXPOSE 8080
CMD [ "./app" ]
And a minimal reproducible example Go file:
package main

import (
    "context"
    "log"
    "net/http"
    "time"

    "github.com/jackc/pgx/v4/pgxpool"
    "github.com/pkg/errors"
)

func main() {
    log.Printf("connecting to postgres...")
    pgpool, err := connectToPostgres(context.Background(), "postgresql://localhost:5432/demo_db")
    log.Printf("pool: %s", pgpool)
    if err != nil {
        log.Fatalln(err)
    }
    log.Printf("successfully connected to postgres")

    if err := http.ListenAndServe(":8080", nil); err != nil {
        log.Fatal(err)
    }
    log.Println("stopped")
}

func connectToPostgres(ctx context.Context, url string) (*pgxpool.Pool, error) {
    var err error
    for i := 0; i < 5; i++ {
        p, err := pgxpool.Connect(ctx, url)
        if err != nil || p == nil {
            time.Sleep(3 * time.Second)
            continue
        }
        log.Printf("pool returned from connect: %s", p)
        return p, nil
    }
    return nil, errors.Wrap(err, "timed out waiting to connect postgres")
}
ANSWER
Answered 2021-Dec-21 at 21:47

The issue is that when connecting within a docker-compose network, you have to connect to the hostname of the container, in this case db.

You could also use the other container's IP, but that would take an additional amount of work; it's simpler to just use the hostname.

In other words, you have the wrong connection string. I got this as well when connecting to localhost:
app_1 | 2021/12/21 18:53:28 pool: %!s(*pgxpool.Pool=)
app_1 | 2021/12/21 18:53:28 successfully connected to postgres
When connecting with the right connection string:
"postgres://postgres:mysecretpassword@db:5432/postgres"
It works perfectly.
Rest of the logs
db_1 | 2021-12-21 18:56:04.122 UTC [1] LOG: database system is ready to accept connections
app_1 | 2021/12/21 18:56:06 pool returned from connect: &{%!s(*puddle.Pool=&{0xc00007c040 0xc0000280b0 [0xc00007c0c0] [0xc00007c0c0] 0x65cb60 0x65dc80 16 1 9872796 1 0 false}) %!s(*pgxpool.Config=&{0xc0000a2000 3600000000000 1800000000000 16 0 60000000000 false true}) %!s(func(context.Context, *pgx.ConnConfig) error=) %!s(func(context.Context, *pgx.Conn) error=) %!s(func(context.Context, *pgx.Conn) bool=) %!s(func(*pgx.Conn) bool=) %!s(int32=0) %!s(time.Duration=3600000000000) %!s(time.Duration=1800000000000) %!s(time.Duration=60000000000) {%!s(uint32=0) {%!s(int32=0) %!s(uint32=0)}} %!s(chan struct {}=0xc000024060)}
app_1 | 2021/12/21 18:56:06 pool: &{%!s(*puddle.Pool=&{0xc00007c040 0xc0000280b0 [0xc00007c0c0] [0xc00007c0c0] 0x65cb60 0x65dc80 16 1 9872796 1 0 false}) %!s(*pgxpool.Config=&{0xc0000a2000 3600000000000 1800000000000 16 0 60000000000 false true}) %!s(func(context.Context, *pgx.ConnConfig) error=) %!s(func(context.Context, *pgx.Conn) error=) %!s(func(context.Context, *pgx.Conn) bool=) %!s(func(*pgx.Conn) bool=) %!s(int32=0) %!s(time.Duration=3600000000000) %!s(time.Duration=1800000000000) %!s(time.Duration=60000000000) {%!s(uint32=0) {%!s(int32=0) %!s(uint32=0)}} %!s(chan struct {}=0xc000024060)}
app_1 | 2021/12/21 18:56:06 successfully connected to postgres
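(A side note I am adding, beyond the original answer: depends_on only waits for the db container to start, not for Postgres to accept connections, which is why the retry loop is still useful. Assuming a Compose version that supports healthcheck conditions, the readiness wait can also be expressed in the compose file; a hedged sketch:)

# hedged sketch: gate app startup on Postgres readiness
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    environment:
      - POSTGRES_DB=demo_db
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d demo_db"]
      interval: 3s
      timeout: 3s
      retries: 10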
QUESTION
I want to query the Covalent database to find out the amount of gas paid out in the latest 100 rUSDT token transfer transactions on the RSK blockchain.
In the following SQL query I am trying to join these two tables to find out the gas fees paid for each of the latest 100 transactions.
SELECT
t.fees_paid
FROM chain_rsk_mainnet.block_log_events e
INNER JOIN chain_rsk_mainnet.block_transactions t ON
e.block_id = t.block_id
AND e.tx_offset = t.tx_offset
WHERE
e.topics @> array[E'\\xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'::bytea]
AND e.topics[1] = E'\\xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'
AND e.sender = E'\\xEf213441a85DF4d7acBdAe0Cf78004E1e486BB96'
ORDER BY e.block_id DESC, e.tx_offset DESC
LIMIT 100;
Unfortunately this query appears to take too long to process.
How can I modify this query?
More context:

- 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef is the topic ID of the ERC20 Transfer event log.
- 0xEf213441a85DF4d7acBdAe0Cf78004E1e486BB96 is the smart contract of the ERC20 token.
- The \\x prefix in Postgres' bytea format is used to type hexadecimal values as string literals; it may be considered equivalent to the 0x prefix.
- In the Covalent database, chain_rsk_mainnet.block_log_events is a table with all events emitted by smart contracts on RSK Mainnet.
- In the Covalent database, chain_rsk_mainnet.block_transactions is a table with all RSK Mainnet transaction details.
- The reason that e.topics is matched twice is a performance optimisation; strictly speaking, only the latter condition is necessary.
ANSWER
Answered 2021-Dec-01 at 02:23

You need to put a date range on the query, or else it will run for a very long time. There are a huge number of rUSDT Transfer event logs on RSK. Scanning the full table to find all of them, and joining them all in one go, is the root cause of this query taking too long.

To solve this, for each of the tables being joined, add a condition on the time-related fields (block_log_events.block_signed_at and block_transactions.signed_at) to limit them to a certain interval, say a month:
AND e.block_signed_at > NOW() - INTERVAL '1 month' AND e.block_signed_at <= NOW()
AND t.signed_at > NOW() - INTERVAL '1 month' AND t.signed_at <= NOW()
Here's the full query:
SELECT
t.fees_paid
FROM chain_rsk_mainnet.block_log_events e
INNER JOIN chain_rsk_mainnet.block_transactions t ON
e.block_id = t.block_id
AND e.tx_offset = t.tx_offset
WHERE
e.topics @> array[E'\\xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'::bytea]
AND e.topics[1] = E'\\xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'
AND e.sender = E'\\xEf213441a85DF4d7acBdAe0Cf78004E1e486BB96'
AND e.block_signed_at > NOW() - INTERVAL '1 month' AND e.block_signed_at <= NOW()
AND t.signed_at > NOW() - INTERVAL '1 month' AND t.signed_at <= NOW()
ORDER BY e.block_id DESC, e.tx_offset DESC
LIMIT 100;
QUESTION
I'm trying to start up this project on my Mac: https://github.com/realsuayip/django-sozluk. It works on my Windows machine, but I get this error on my Mac:
unexpected character "." in variable name near "127.0.0.1 192.168.2.253\nDJANGO_SETTINGS_MODULE=djdict.settings_prod\n\n\nSQL_ENGINE=django.db.backends.postgresql\nSQL_PORT=5432\nDATABASE=postgres\nSQL_HOST=db\n\nSQL_DATABASE=db_dictionary\nSQL_USER=db_dictionary_user\nSQL_PASSWORD=db_dictionary_password\n\n\nEMAIL_HOST=eh\nEMAIL_PORT=587\nEMAIL_HOST_USER=eh_usr\nEMAIL_HOST_PASSWORD=pw" furkan@MacBook-Air-von-Furkan gs %
Any help would be much appreciated!
ANSWER
Answered 2021-Oct-11 at 12:31

I had a similar problem with a docker container. It probably appeared after a system update on my Linux machine. I can't say anything about the cause, but try the following: quote the variable values in the project's .env file, such as:
DEBUG=0
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS="localhost 127.0.0.1 192.168.2.253"
DJANGO_SETTINGS_MODULE="djdict.settings_prod"
SQL_ENGINE="django.db.backends.postgresql"
# ...
And try again
QUESTION
I am trying to get a brand-new cloud server with a default installation of Ubuntu Server 20.04 working with Apache and Node. The Node server appears to be running without issues, reporting that port 4006 is open. However, I believe my Apache config is not right: requests hang for a very, very long time. No errors are displayed in the Node terminal, so the fault must lie in my Apache config, seeing as we are getting the Apache errors below and no JS errors.

Request error after some time: 502 Proxy Error
[Sun Oct 17 20:58:56.608793 2021] [proxy:error] [pid 1596878] (111)Connection refused: AH00957: HTTP: attempt to connect to [::1]:4006 (localhost) failed
[Sun Oct 17 20:58:56.608909 2021] [proxy_http:error] [pid 1596878] [client 207.46.13.93:27392] AH01114: HTTP: failed to make connection to backend: localhost
<VirtualHost *:80>
    ServerName api.aDomain.com
    Redirect permanent / https://api.aDomain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName api.aDomain.com

    ProxyRequests on
    LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
    LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
    ProxyPass / http://localhost:4006/
    ProxyPassReverse / http://localhost:4006/

    # certificates SSL
    SSLEngine on
    SSLCACertificateFile /etc/ssl/api.aDomain.com/apimini.ca
    SSLCertificateFile /etc/ssl/api.aDomain.com/apimini.crt
    SSLCertificateKeyFile /etc/ssl/api.aDomain.com/apimini.key

    ErrorLog ${APACHE_LOG_DIR}/error_api.aDomain.com.log
    CustomLog ${APACHE_LOG_DIR}/access_api.aDomain.com.log combined
</VirtualHost>
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `babel-node -r dotenv/config --inspect=9229 index.js`
Debugger listening on ws://127.0.0.1:9229/c1fcf271-aea8-47ff-910e-fe5a91fce6d2
For help, see: https://nodejs.org/en/docs/inspector
Browserslist: caniuse-lite is outdated. Please run next command `npm update`
🚀 Server ready at http://localhost:4006
import cors from 'cors'
import scrape from './src/api/routes/scrape'

const express = require('express')
const { ApolloServer, gql } = require('apollo-server-express')
const { postgraphile } = require('postgraphile')
const ConnectionFilterPlugin = require('postgraphile-plugin-connection-filter')

const dbHost = process.env.DB_HOST
const dbPort = process.env.DB_PORT
const dbName = process.env.DB_NAME
const dbUser = process.env.DB_USER
const dbPwd = process.env.DB_PWD

const dbUrl = dbPwd
  ? `postgres://${dbUser}:${dbPwd}@${dbHost}:${dbPort}/${dbName}`
  : `postgres://${dbHost}:${dbPort}/${dbName}`

var corsOptions = {
  origin: '*',
  optionsSuccessStatus: 200, // some legacy browsers (IE11, various SmartTVs) choke on 204
}

async function main() {
  // Construct a schema, using GraphQL schema language
  const typeDefs = gql`
    type Query {
      hello: String
    }
  `

  // Provide resolver functions for your schema fields
  const resolvers = {
    Query: {
      hello: () => 'Hello world!',
    },
  }

  const server = new ApolloServer({ typeDefs, resolvers })
  const app = express()
  app.use(cors(corsOptions))
  app.use(
    postgraphile(process.env.DATABASE_URL || dbUrl, 'public', {
      appendPlugins: [ConnectionFilterPlugin],
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
    })
  )
  server.applyMiddleware({ app })

  // Scraping tools
  scrape(app)

  const port = 4006
  await app.listen({ port })
  console.log(`🚀 Server ready at http://localhost:${port}`)
}

main().catch(e => {
  console.error(e)
  process.exit(1)
})
/etc/apache2/mods-enabled/proxy.conf /etc/apache2/mods-enabled/proxy.load /etc/apache2/mods-enabled/proxy_http.load
Updated Error Logs

[Thu Oct 21 10:59:22.560608 2021] [proxy_http:error] [pid 10273] (70007)The timeout specified has expired: [client 93.115.195.232:8963] AH01102: error reading status line from remote server 127.0.0.1:4006, referer: https://miniatureawards.com/
[Thu Oct 21 10:59:22.560691 2021] [proxy:error] [pid 10273] [client 93.115.195.232:8963] AH00898: Error reading from remote server returned by /graphql, referer: https://miniatureawards.com/
ANSWER
Answered 2021-Oct-20 at 23:51

If you use docker for your node server, then it might be set up incorrectly.
QUESTION
I'm trying to run two Django sites on the same server under different IPs. An error occurred that the port was busy; I fixed the ports, but the site does not start. Can you tell me where the error is, please? The IPs work, but when I go to the second IP I get redirected to the first site. All settings were specified for the second site. At the end, I added the nginx settings of the first site.
This is the second docker-compose file and its settings. I would be very grateful for your help
.env
#Django
# Should be one of dev, prod
MODE=prod
PORT=8008
#postgres
DB_NAME=xxx
DB_USER=xxx
DB_HOST=xxx
DB_PASSWORD=xxxx
DB_PORT=5432
POSTGRES_PASSWORD=mysecretpassword
#WSGI
WSGI_PORT=8008
WSGI_WORKERS=4
WSGI_LOG_LEVEL=debug
# Celery
CELERY_NUM_WORKERS=2
# Email
EMAIL_HOST_USER=xxxx
EMAIL_HOST_PASSWORD=xxxx
docker-compose.yml
version: '3'

services:
  backend:
    build: ./
    container_name: site_container
    restart: always
    command: ./commands/start_server.sh
    ports:
      - "${PORT}:${WSGI_PORT}"
    volumes:
      - ./src:/srv/project/src
      - ./commands:/srv/project/commands
      - static_content:/var/www/site
    env_file:
      - .env
    depends_on:
      - postgres

  postgres:
    image: postgres:12
    volumes:
      - pg_data:/var/lib/postgresql/data
    env_file:
      - .env
    # environment:
    #   - DJANGO_SETTINGS_MODULE=app.settings.${MODE}

  nginx:
    image: nginx:1.19
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - static_content:/var/www/site
    ports:
      - 81:80
      - 444:443
    env_file:
      - .env
    depends_on:
      - backend

volumes:
  pg_data: {}
  static_content: {}
default.conf
server {
    listen 80 default_server;
    server_name 183.22.332.12;

    location /static/ {
        root /var/www/site;
    }

    location /media/ {
        root /var/www/site;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://backend:8010;
    }
}
default.conf for first site
server {
    #listen 80 default_server;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site1 ip_site1;

    ssl_certificate /etc/letsencrypt/live/site1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/site1/chain.pem;

    location /static/ {
        root /var/www/artads;
    }

    location /media/ {
        root /var/www/artads;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://backend:8008;
    }
}

server {
    listen 80 default_server;
    server_name ip_site2 site2;

    location /static/ {
        root /var/www/gdr_mr;
    }

    location /media/ {
        root /var/www/gdr_mr;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://backend:8013;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name www.site1 site1;

    location / {
        return 301 https://site1$request_uri;
    }
}
ANSWER
Answered 2021-Sep-22 at 21:54

If you're running two virtual servers with different IPs on the same machine, you'd want to specify the IP address in the listen directive:
server {
    listen 192.168.1.1:80;
    server_name example.net www.example.net;
    ...
}

server {
    listen 192.168.1.2:80;
    server_name example.com www.example.com;
    ...
}
More on how nginx processes requests can be found here: http://nginx.org/en/docs/http/request_processing.html
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.