kandi X-RAY | master-slave Summary
Top functions reviewed by kandi - BETA
- Get people by name
- Get thread connection
- Determines a random lookup key
- Get a connection
- Insert a user
- Get id by name
- Initialize the slave data source
- Gets user id
- Gets the full url
- Sets the URL to use for this request
Community Discussions
Trending Discussions on master-slave
QUESTION
I have master-slave (primary-standby) streaming replication set up on 2 physical nodes. Although the replication is working correctly and walsender and walreceiver both work fine, the files in the pg_wal
folder on the slave node are not getting removed. This is a problem I have been facing every time I try to bring the slave node back after a crash. Here are the details of the problem:
postgresql.conf on master and slave/standby node
ANSWER
Answered 2021-Jun-14 at 15:00
You didn't describe omitting pg_replslot during your rsync, as the docs recommend. If you didn't omit it, then your replica now has a replication slot which is a clone of the one on the master. But if nothing ever connects to that slot on the replica and advances the cutoff, the WAL never gets released for recycling. To fix it, you just need to shut down the replica, remove that directory, restart it, and wait for the next restart point to finish.
Do they need to go to the wal_archive folder on the disk just like they go to the wal_archive folder on the master node?
No, that is optional, not necessary. It is controlled by archive_mode = always if you want it to happen.
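A sketch of that fix, with hypothetical paths (run on the replica, as the postgres OS user, and adjust the data directory to your installation):

```shell
pg_ctl -D /var/lib/pgsql/data stop
rm -rf /var/lib/pgsql/data/pg_replslot/*    # remove the cloned replication slot(s)
pg_ctl -D /var/lib/pgsql/data start
# then wait for the next restart point to finish before expecting WAL recycling
```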
QUESTION
I am wondering if, using C (or C++ or Rust) and JavaScript, I am able to do CRUD operations on a shared data object. Using the most basic example, here is an example of each of the operations:
ANSWER
Answered 2021-May-24 at 08:54
Yes, this is possible.
WebAssembly stores objects within linear memory, a contiguous array of bytes that the module can read and write to. The host environment (typically JavaScript within the web browser) can also read and write to linear memory, allowing it to access the objects that the WebAssembly module stores there.
There are two challenges here:
- How do you find where your WebAssembly module has stored an object?
- How is the object encoded?
You need to ensure that you can read and write these objects from both the WebAssembly module and the JavaScript host.
I'd pick a known memory location, and a known serialisation format and use that to read/write from both sides.
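To make the "known memory location, known serialisation format" idea concrete, here is a minimal sketch in Python, with a bytearray standing in for linear memory; the offset and the id-plus-name record layout are illustrative assumptions, not a WebAssembly API:

```python
import struct

# A bytearray standing in for WebAssembly linear memory. In a real setup
# this would be the module's exported memory (e.g. instance.exports.memory).
memory = bytearray(256)

USER_OFFSET = 0  # the agreed-upon location of the shared object

def put_user(mem, user_id, name):
    """Write (u32 little-endian id, NUL-terminated UTF-8 name) at the fixed offset."""
    encoded = name.encode("utf-8") + b"\x00"
    struct.pack_into("<I", mem, USER_OFFSET, user_id)
    mem[USER_OFFSET + 4 : USER_OFFSET + 4 + len(encoded)] = encoded

def get_user(mem):
    """Read the record back using the same offset and format."""
    user_id = struct.unpack_from("<I", mem, USER_OFFSET)[0]
    raw = bytes(mem[USER_OFFSET + 4 :])
    return user_id, raw[: raw.index(b"\x00")].decode("utf-8")

put_user(memory, 42, "alice")
print(get_user(memory))  # (42, 'alice')
```

On the JavaScript side, the same bytes would be read and written through a DataView or Uint8Array over WebAssembly.Memory.buffer, using the identical offset and layout.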
QUESTION
I have an AMQ Artemis cluster, shared-store HA (master-slave), version 2.17.0.
I noticed that all my clusters (active servers only) that are idle (no one is using them) are using from 10% to 20% of CPU, except one, which is using around 1% (totally normal). I started investigating...
Long story short - only one cluster has completely normal CPU usage. The only difference I've managed to find is that if I connect to that normal cluster's master node and attempt telnet slave 61616, it will show as connected. If I do the same in any other cluster (that has high CPU usage), it will show as rejected.
In order to better understand what is happening, I enabled DEBUG logs in instance/etc/logging.properties. Here is what the master node is spamming:
ANSWER
Answered 2021-May-10 at 06:50
Turns out the issue was in the broker.xml configuration. In static-connectors I somehow decided to list only the "non-current server" (e.g. I have srv0 and srv1 - in srv0 I only added the connector of srv1 and vice versa).
What it used to be (on the 1st master node):
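Based on that description, the corrected configuration presumably lists a connector for every server in each broker.xml (hostnames and connector names here are hypothetical):

```xml
<connectors>
  <connector name="srv0-connector">tcp://srv0:61616</connector>
  <connector name="srv1-connector">tcp://srv1:61616</connector>
</connectors>

<cluster-connections>
  <cluster-connection name="my-cluster">
    <static-connectors>
      <connector-ref>srv0-connector</connector-ref>
      <connector-ref>srv1-connector</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>
```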
QUESTION
I have set up two ActiveMQ Artemis brokers (version 2.17) in a master-slave configuration with a shared file system for HA. During testing with heavy traffic I would stop the master broker and see that the slave takes over and starts forwarding messages to consumers. After a while I can see that a number of messages appear to be stuck in queues. The "stuck" messages are considered "in delivery", as I can see from the Artemis UI when queried.
My issue is that even when I restart the master broker these messages are not delivered to the consumer and remain stuck, even though more messages keep populating the same queue and the queue has consumers. My assumption was that it had to do with previous connections set up by consumers still remaining active because they were not acknowledged.
So I did try to set it up on the broker, or on the client connection string (tcp://host1:61616,tcp://host2:61616)?ha=true&connectionTtl=60000&reconnectAttempts=-1, but that did not seem to have any effect, since the connection would not be closed and the messages were not released.
For consuming messages I am using the Artemis JMS Spring client with a CachingConnectionFactory, but I also tried JmsPoolConnectionFactory to no avail.
ANSWER
Answered 2021-Mar-03 at 15:05
Using variable concurrency with the CachingConnectionFactory could be the issue. When an "idle" consumer is returned to the cache, the broker doesn't know the consumer is not active and still sends messages to it.
The caching factory is really only needed on the producer side (JmsTemplate) to avoid creating a connection and session for each send. It's best not to use the CachingConnectionFactory together with variable concurrency; either configure it not to cache consumers, or don't use variable concurrency.
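One way to follow the "don't cache consumers" advice, sketched as a Spring XML bean definition (bean names are hypothetical; in Java config the equivalent call is cachingConnectionFactory.setCacheConsumers(false)):

```xml
<bean id="cachingConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
  <!-- keep cached connections/sessions for the producer side... -->
  <property name="targetConnectionFactory" ref="artemisConnectionFactory"/>
  <!-- ...but never cache consumers, so the broker sees them close -->
  <property name="cacheConsumers" value="false"/>
</bean>
```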
QUESTION
I have been using the Windows 10 operating system. I want to use a Slony master-slave setup with PostgreSQL. For this, I downloaded EnterpriseDB version 13.1. After installation I selected the Slony download from StackBuilder Plus. Then I copied these 2 files to an offline Windows 10 machine and installed both of them. After that, I tried to run a simple slonik script to set up the master. While executing the "init cluster" command I get a file-not-found error, c:servershare/slony1_base.2.8.sql. Do you have an idea for the solution?
ANSWER
Answered 2021-Feb-01 at 06:39
After setting SLONY_SHARE_DIR to the location that the slony1 .sql files are in, everything works fine.
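A sketch of the fix on Windows (the path is a placeholder; point it at whatever directory actually contains the slony1_*.sql files):

```shell
set SLONY_SHARE_DIR=C:\path\to\slony\share
```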
QUESTION
I have two DB servers and I am trying to make a Master-Slave replication for my PostgreSQL with TimescaleDB extension.
I went through this process:
- [Both] Install PostgreSQL 12 on CentOS 7.
- [Both] Initialize the DB and install TimescaleDB (followed the official website tutorial).
- [Both] Do all the firewall-cmd, postgresql.conf, and pg_hba.conf work.
- [Master] Create the initial database.
- [Slave] Stop PostgreSQL, remove everything in /var/lib/pgsql/12/data, and pg_basebackup from Master. (Command I used: pg_basebackup -h [MASTER_DB_IP] -D /var/lib/pgsql/12/data -U [REP_USER] -vP -W)
- [Slave] Set hot_standby = on and create recovery.conf.
- [Slave] Start PostgreSQL to check replication works.
- [Slave] This error occurs and PostgreSQL cannot start:
ANSWER
Answered 2020-Dec-03 at 03:20
From version 12 on, PostgreSQL no longer uses recovery.conf for recovery configuration. Instead, you have to add the recovery configuration parameters to postgresql.conf or postgresql.auto.conf (the latter is recommended for automated editing). Then create a file standby.signal in the data directory, which tells PostgreSQL to start and remain in recovery mode. Then start the standby server.
Note: the standby_mode parameter is gone. Instead, use standby.signal if you want standby mode and recovery.signal for recovery mode that ends.
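A sketch of the standby-side configuration described above (connection values are placeholders):

```shell
# in postgresql.auto.conf (or postgresql.conf) on the standby
primary_conninfo = 'host=master.example.com port=5432 user=rep_user'

# then create an empty standby.signal file in the data directory, e.g.:
#   touch /var/lib/pgsql/12/data/standby.signal
```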
QUESTION
My question is built on the question and answers from this question - What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
The question might not be well-formed for some of you.
I am trying to understand the differences between clusterIP, nodePort and Loadbalancer and when to use these with an example. I suppose that my understanding of the following concept is correct.
K8s consists of the following components
- Node - A VM or physical machine. Runs kubectl and docker process
- Pod - unit which encapsulates container(s) and volumes (storage). If a pod contains multiple containers then shared volume could be the way for process communication
- Node can have one or multiple pods. Each pod will have its own IP
- Cluster - replicas of a Node. Each node in a cluster will contain same pods (instances, type)
Here is the scenario:
My application has a web server (always returning 200OK) and a database (always returning the same value) for simplicity. Also, say I am on GCP and I make images of the webserver and of the database. Each of these will be run in their own respective pods and will have 2 replicas.
I suppose I'll have two clusters (cluster-webserver (node1-web (containing pod1-web), node2-web (containing pod2-web)) and cluster-database (node1-db (containing pod1-db), node2-db (containing pod2-db)). Each node will have its own ip address (node1-webip, node2-webip, node1-dbip, node2-dbip).
A client application (browser) should be able to access the web application from outside the web cluster, but the database shouldn't be accessible from outside the database cluster. However, web nodes should be able to access database nodes.
- Question 1 - Am I correct that if I create a service for web (webServiceName) and a service for database, then by default I'll get only a clusterIP and a port (or targetPort)?
- Question 1.2 - Am I correct that clusterIP is an IP assigned to a pod, not the node, i.e. in my example clusterIP gets assigned to pod1-web, not node1-web, even though node1 has only pod1?
- Question 1.3 - Am I correct that, as the cluster IP is accessible only from within the cluster, pod1-web and pod2-web can talk to each other, and pod1-db and pod2-db can talk to each other, using clusterIP/dns:port or clusterIP/dns:targetPort, but web can't talk to database (and vice versa) and an external client can't talk to web? Also, the nodes are not accessible using the cluster IP.
- Question 1.4 - Am I correct that the dns, i.e. servicename.namespace.svc.cluster.local, would map to the clusterIP?
- Question 1.5 - For which types of applications might I use only clusterIP? Where multiple instances of an application need to communicate with each other (e.g. a master-slave configuration)?
If I use nodePort then K8s will open a port on each node and will forward nodeIP/nodePort to cluster IP (on pod)/cluster port.
- Question 2 - Can web nodes now access database nodes using nodeIP:nodePort, which will route the traffic to the database's clusterIP (on pod):clusterPort/targetPort? (I have read that clusterIP/dns:nodePort will not work.)
- Question 2.1 - How do I get a node's IP? Is nodeIP the IP I'll get when I run the describe pods command?
- Question 2.2 - Is there a dns equivalent for the node IP, as the node IP could change during failovers? Or does dns now resolve to the node's IP instead of the clusterIP?
- Question 2.3 - I read that K8s will create endpoints for each service. Is an endpoint the same as a node, or is it the same as a pod? If I run kubectl describe pods or kubectl get endpoints, would I get the same IPs?
As I am on GCP, I can use a Loadbalancer for the web cluster to get an external IP. Using the external IP, the client application can access the web service.
I saw this configuration for a LoadBalancer
ANSWER
Answered 2020-Oct-23 at 13:58
My question is built on the question and answers from this question - What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes? The question might not be well-formed for some of you.
It's ok but in my opinion it's a bit too extensive for a single question and it could be posted as a few separate questions as it touches quite a few different topics.
I am trying to understand the differences between clusterIP, nodePort and Loadbalancer and when to use these with an example. I suppose that my understanding of the following concept is correct. K8s consists of the following components
- Node - A VM or physical machine. Runs kubectl and docker process
Not kubectl but kubelet. You can check it by ssh-ing into your node and running systemctl status kubelet. And yes, it also runs some sort of container runtime environment. It doesn't have to be exactly docker.
- Pod - unit which encapsulates container(s) and volumes (storage). If a pod contains multiple containers then shared volume could be the way for process communication
- Node can have one or multiple pods. Each pod will have its own IP
That's correct.
- Cluster - replicas of a Node. Each node in a cluster will contain same pods (instances, type)
Not really. Kubernetes nodes are not different replicas. They are part of the same kubernetes cluster but they are independent instances, which are capable of running your containerized apps. In kubernetes terminology this is called a workload. A workload isn't part of the kubernetes cluster; it's something that you run on it. Your Pods can be scheduled on different nodes, and it doesn't always have to be an even distribution. Suppose you have a kubernetes cluster consisting of 3 worker nodes (nodes on which workload can be scheduled, as opposed to the master node, which usually runs only kubernetes control plane components). If you deploy your application as a Deployment, e.g. 5 different replicas of the same Pod are created. Usually they are scheduled on different nodes, but a situation where node1 runs 2 replicas, node2 3 replicas and node3 zero replicas is perfectly possible.
You need to keep in mind that there are different clustering levels. You have your kubernetes cluster which basically is an environment to run your containerized workload.
There are also clusters within this cluster, i.e. it is perfectly possible that your workload forms clusters as well, e.g. you can have a database deployed as a StatefulSet and it can run in a cluster. In such a scenario, different stateful Pods will form the members or nodes of such a cluster.
Even if your Pods don't communicate with each other but e.g. serve exactly the same content, the Deployment resource makes sure that a certain number of replicas of such a Pod is always up and running. If one kubernetes node for some reason becomes unavailable, such a Pod needs to be re-scheduled on one of the available nodes. So the replication of your workload isn't achieved by deploying it on different kubernetes nodes, but by assuring that a certain amount of replicas of a Pod of a certain kind is always up and running, and it may be running on the same as well as on different kubernetes nodes.
Here is the scenario:
My application has a web server (always returning 200OK) and a database (always returning the same value) for simplicity. Also, say I am on GCP and I make images of the webserver and of the database. Each of these will be run in their own respective pods and will have 2 replicas.
I suppose I'll have two clusters (cluster-webserver (node1-web (containing pod1-web), node2-web (containing pod2-web)) and cluster-database (node1-db (containing pod1-db), node2-db (containing pod2-db)). Each node will have its own ip address (node1-webip, node2-webip, node1-dbip, node2-dbip).
See above what I wrote about different clustering levels. Clusters formed by your app have nothing to do with the kubernetes cluster or its nodes. And I would say you would rather have 2 different microservices communicating with each other and in some way also dependent on one another. But yes, you may see your database as a separate db cluster deployed within the kubernetes cluster.
A client application (browser) should be able to access the web application from outside the web cluster, but the database shouldn't be accessible from outside the database cluster. However, web nodes should be able to access database nodes.
- Question 1 - Am I correct that if I create a service for web (webServiceName) and a service for database, then by default I'll get only a clusterIP and a port (or targetPort)?
Yes, the ClusterIP service type is often simply called a Service, because it's the default Service type. If you don't specify a type, like in this example, a ClusterIP type is created. To understand the difference between port and targetPort you can take a look at this answer or the kubernetes official docs.
- Question 1.2 - Am I correct that clusterIP is an IP assigned to a pod, not the node, i.e. in my example clusterIP gets assigned to pod1-web, not node1-web, even though node1 has only pod1?
Basically yes. ClusterIP is one of the things that can be easily misunderstood, as it is also used to denote a specific Service type, but in this context yes, it's an internal IP assigned within a kubernetes cluster to a specific resource, in this case to a Pod, though a Service object has its own cluster IP assigned as well. Pods, as part of the kubernetes cluster, get their own internal IPs (from the kubernetes cluster perspective) - cluster IPs. Nodes can have a completely different addressing scheme. They can also be private IPs, but they are not cluster IPs; in other words, they are not internal kubernetes cluster IPs from the cluster's perspective. Apart from those external IPs (from the kubernetes cluster perspective), kubernetes nodes, as legitimate API resources / objects, also have their own cluster IPs assigned.
You can check it by running:
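The command itself is missing from this snippet; a likely check along these lines would be:

```shell
kubectl get nodes -o wide    # shows each node's INTERNAL-IP / EXTERNAL-IP
kubectl get pods -o wide     # shows each Pod's cluster IP and the node it runs on
```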
QUESTION
I'm trying to set up a simple redis configuration with only 2 services: master and slave.
Here's part of my .gitlab-ci.yml with the master-slave setup:
ANSWER
Answered 2020-Oct-22 at 16:47
In your nc command you are using the -n option, which, per the man page, does the following:
-n Do not do any DNS or service lookups on any specified addresses, hostnames or ports.
So essentially you are turning off your dns searches, here is a little test to illustrate:
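The little test itself is cut off in this snippet; an illustration of the same point might look like this (the service name is hypothetical, and the exact error text varies between nc implementations):

```shell
# without -n, "redis-master" is resolved through DNS before connecting
nc -z -w1 redis-master 6379

# with -n, no DNS lookup happens, so a hostname cannot be used at all
nc -n -z -w1 redis-master 6379
```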
QUESTION
I have had to redefine the description of the problem.
I have a PostgreSQL cloud-based database handling 1.5M requests per day. I checked the statistics of the individual queries themselves with different variants of the extracted data. In general, the individual queries seem to be okay (they are really simple and are unlikely to be the delay). The problem occurs while the application is running.
The application is an internet game. During one gaming session, new records (with the current state of the game) are constantly being written to the database. A lot of inserts are made at this time. The user may wish to see the history of the game at any time (while such inserts are in progress). At this point, when the writing service is adding new records into the database, the reading service reads the data. Such reading is very rare compared to writing; it occurs in a ratio of 1:100 (but such reading will occur more often in the future). The reading service usually reads data in 0-6 seconds. Sometimes the reading time increases to over 40 or even 100 seconds. Rare jumps like 10-20 seconds would be acceptable, but I absolutely need to get rid of jumps over 40 seconds. For this particular problem I am thinking about MASTER-SLAVE replication (write_only-read_only).
Additional information the commenter asked about:
- Cloud service: GCP
- Service limits: memory: 1500Mi, cpu: 500m
- Postgres version: 10
If it would help, I could present the structure of the queries and tables. Everything is written in Spring.
ANSWER
Answered 2020-Oct-02 at 15:04
Simply speaking, you have to find and remove the bottleneck.
A few pointers:
- Look at the operating system and see how the I/O system and the CPU are doing.
- Reduce the number of concurrent database connections, perhaps using a connection pool.
- Employ pg_stat_statements to find the statements that cause the most load.
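A sketch of that last pointer (pg_stat_statements must be in shared_preload_libraries, which requires a restart; on PostgreSQL 10 the timing column is total_time):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- top statements by cumulative execution time
SELECT calls, total_time, rows, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```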
QUESTION
We have a master-slave setup in AWS RDS using MySQL. We have a db.m4.xlarge for both of the instances. One of the tables is performing very slowly on the read replica.
Table - user_exercises
Table Size - 3.9GB
Rows - 273499
Query - select count(*) from user_exercises;
The table has a primary key only as an index. On the master it takes < 0.1 second. On the read replica it runs forever.
ANSWER
Answered 2020-Oct-14 at 16:36
- Is the Query cache turned on? That may explain why the Primary ran so fast.
- Adding a secondary index on some small column will speed up the query. Yeah, it is a kludge -- the Optimizer will pick the smallest index for doing SELECT COUNT(*).
- 3.9GB / 273K rows -- sounds like the table has a big text column?
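A sketch of the suggested kludge (the column name is hypothetical; any small, indexed column gives the optimizer a compact index to scan for the count):

```sql
-- hypothetical: index a small column so the count can scan a compact index
ALTER TABLE user_exercises ADD INDEX idx_small (user_id);
SELECT COUNT(*) FROM user_exercises;
```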
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install master-slave
You can use master-slave like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the master-slave component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.