drbd | LINBIT DRBD kernel module
kandi X-RAY | drbd Summary
LINBIT DRBD kernel module
Community Discussions
Trending Discussions on drbd
QUESTION
I have a high-availability cluster with two nodes, with a resource for DRBD, a virtual IP, and the MariaDB files shared on the DRBD partition.
Everything seems to work OK, but DRBD is not syncing the latest files I have created, even though drbd status tells me they are UpToDate.
...ANSWER
Answered 2022-Feb-23 at 09:15
I have found a split-brain that did not appear in the pcs status output.
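For reference, a minimal sketch of manual split-brain recovery with drbdadm; the resource name r0 is assumed, and the "victim" is whichever node's changes you are willing to discard:

# On the split-brain victim:
drbdadm secondary r0
drbdadm connect --discard-my-data r0
# On the surviving node (only needed if it reports StandAlone):
drbdadm connect r0
# On both nodes, verify the resource reconnects and resyncs to UpToDate:
drbdadm status r0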
QUESTION
I am working on MongoDB HA. I don't want to go with the HA approach mentioned in the official MongoDB docs due to resource limitations.
I have done MySQL (Active-Active) HA with DRBD, Corosync & Pacemaker, and MongoDB HA (Active-Standby) with DRBD, Corosync & Pacemaker. I have tested it with small-scale data and it works fine.
I have read that MongoDB on DRBD is not a good approach and that it can lead to data corruption.
Should I go with this approach? If not, is there any other approach apart from the official one?
...ANSWER
Answered 2021-Jun-02 at 19:57
If you're doing Active/Passive (Active/Standby) clustering, there is no difference between MongoDB on DRBD and MongoDB on any other block device.
If you had multiple active MongoDB instances accessing a dual-primary (Active/Active) DRBD device, that's where the potential for corruption would come in.
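As a hedged illustration (the resource name r0 is assumed), you can check that a resource is running single-primary and that dual-primary is not enabled in its configuration:

drbdadm role r0                                 # should report Primary on exactly one node
drbdadm dump r0 | grep -i allow-two-primaries   # dual-primary is only possible if this is set to "yes"
drbdadm status r0                               # DRBD 9: also shows the role of each peer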
QUESTION
I am currently trying out LINSTOR in my lab. I am trying to set up a separation of compute and storage nodes: the storage node runs LINSTOR, whereas the compute node runs Docker Swarm or K8s. I have set up 1 LINSTOR node and 1 Docker Swarm node in this test, and the LINSTOR node is configured successfully.
DRBD 9.1.2
...ANSWER
Answered 2021-May-28 at 07:49
LINSTOR manages storage in a cluster of nodes, replicating disk space inside an LVM or ZFS volume (or a bare partition, I'd say) by using DRBD (Distributed Replicated Block Device) to replicate data across the nodes, as per the official docs.
So I'd say yes, you really need to have the driver on every node on which you want to use it (I did see Docker's storage plugin try to mount the DRBD volume locally).
However, you do not necessarily need to have the storage space itself on the compute node, since you can mount a diskless DRBD resource backed by volumes that are replicated on separate nodes. So I'd say your idea should work, unless there is some bug in the driver itself that I haven't discovered yet: your compute node(s) need to be registered as diskless nodes for all the required pools (I didn't try this, but I remember reading it's not only possible but recommended for some types of data migration).
Of course, if you don't have more than one storage node you don't gain much from using LINSTOR/DRBD (a node or disk failure will leave you diskless). My use case for it was to have replicated storage across different servers in different datacenters, so that the next time one burns to a crisp 😅 I can have my data and containers running after minutes instead of several days...
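If it helps, a rough sketch of registering a compute node and attaching it disklessly to an existing resource; the node name compute1, its address, and the resource name myres are made up for illustration, and the exact flags can differ between LINSTOR versions:

linstor node create compute1 192.168.10.20 --node-type Satellite
linstor resource create compute1 myres --diskless   # diskless attach: the data stays on the storage nodes
linstor resource list                               # compute1 should list the resource as Diskless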
QUESTION
I changed the path of my MariaDB data files to /mnt/datosDRBD/mariaDB
...ANSWER
Answered 2021-Mar-31 at 11:08
OK, I solved it by changing the resource in Pacemaker.
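For anyone else hitting this, a hedged sketch of what that change could look like with pcs; the resource name mariadb_res is hypothetical and the datadir parameter assumes the ocf:heartbeat:mysql resource agent:

pcs resource update mariadb_res datadir=/mnt/datosDRBD/mariaDB
pcs resource cleanup mariadb_res   # clear old failures so Pacemaker retries with the new path
pcs status                         # confirm the resource starts on the expected node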
QUESTION
I'm looking at the options in ActiveMQ Artemis for data recovery if we lose an entire data centre. We have two data centres, one on the east coast and one on the west coast.
From the documentation and forums I've found four options:
Disk-based methods:
Block-based replication of the data directory between the sites, running Artemis on one site (using Ceph or DRBD with protocol A). In the event of a disaster (or a failover test), stop Artemis on the dead site and start it on the live site.
The same thing, but with both Artemis servers active, using an ha-policy to indicate the master and the slave using a shared store.
Network replication:
Like number 2, but with data replication enabled in Artemis, so Artemis handles the replication.
Our IT team uses / is familiar with MySQL replication, NFS, and rsync for our other services. We are currently handling JMS with a JBoss 4 server replicated over MySQL.
My reaction from reading the documentation is that high-availability data replication is the way to go, but are there trade-offs I'm not seeing? The only option that mentions DR and cross-site use is the mirror broker connection, but on the surface it looks like a more difficult-to-manage version of the same thing?
Our constraints are that we need high performance on the live cluster (on the order of tens of thousands of messages per second, all small). We can afford to lose messages (as few as possible, preferably) in an emergency failover. We should not lose messages in a controlled failover.
We do not want clients in site A connecting to Artemis in site B; we will enable clients on site B in the event of a failover.
...ANSWER
Answered 2021-Mar-02 at 18:26
The first thing to note is that the high-availability functionality (both shared-store and replication, options #2 & #3) configured via ha-policy is designed for use in a local data center with high-speed, low-latency network connections. It is not designed for disaster recovery.
The problem specifically with network-based data replication for you is that replication is synchronous, which means there's a very good chance it will negatively impact performance, since every durable message will have to be sent across the country from one data center to the other. Also, if the replicating broker fails, clients will automatically fail over to the backup in the other data center.
Using a solution based on block storage (e.g. Ceph or DRBD) is viable, but it's really an independent thing outside the control of ActiveMQ Artemis.
The mirror broker connection was designed with the disaster-recovery use case in mind. It is asynchronous, so it won't have nearly the performance impact of replication, and if the mirroring broker fails, clients will not automatically fail over to the mirror.
QUESTION
I have a script that sends an email when one of my DRBD nodes is down. The script runs every 4 minutes; the problem is that when one of the nodes fails, the script keeps sending emails every 4 minutes. How can I make the script send only once a day, or every 24 hours?
I have posted the current script.
...ANSWER
Answered 2020-Oct-08 at 09:30
Assuming you want to keep doing the check every 4 minutes, but suppress repeated mails for the same day or for 24 hours, you have to save the time when you sent the mail.
You can use the modification time of a file for this purpose.
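A minimal sketch of that idea in bash; the stamp-file path, the node-down check, and the mail command are placeholders to adapt to your existing script:

#!/bin/bash
STAMP=/var/run/drbd_alert.stamp        # records when the last alert mail was sent

if ! drbdadm status >/dev/null 2>&1; then                  # replace with your real node-down check
    # send mail only if no stamp exists or it is older than 24 hours (1440 minutes)
    if [ -z "$(find "$STAMP" -mmin -1440 2>/dev/null)" ]; then
        echo "DRBD node down" | mail -s "DRBD alert" admin@example.com
        touch "$STAMP"                                     # remember when we alerted
    fi
else
    rm -f "$STAMP"                                         # node recovered: re-arm the alert
fi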
QUESTION
How do I change the node IP of DRBD?
This is my config:
...ANSWER
Answered 2020-May-20 at 15:03
# drbdadm disconnect <resource>   # on both nodes
Change the IP address within the /etc/drbd.d/<resource>.res file on both nodes.
# drbdadm adjust <resource>   # on both nodes
When DRBD starts, it runs through a series of steps; if any one of them fails, it skips the later steps. One of these steps is creating a TCP socket. If DRBD fails to do that, it skips the later steps, one of which is attaching to the disk.
I suspect that in your case DRBD fails to find the IP address it is supposed to use on the system, and thus skips the later step of attaching to the disk, and so starts up connectionless and diskless. Make sure the IP address you're changing DRBD to use is already present on the systems.
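A short, hedged checklist after editing the addresses (the resource name r0 is assumed):

ip -br addr show      # the address written in the .res file must already exist on a local interface
drbdadm adjust r0     # re-reads the configuration and applies the new address
drbdadm status r0     # DRBD 9: peer should show Connected, not StandAlone/Diskless (on 8.x check /proc/drbd)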
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported