ceph-deploy | Deploy Ceph with minimal infrastructure | DevOps library
kandi X-RAY | ceph-deploy Summary
Deploy Ceph with minimal infrastructure, using just SSH access
Top functions reviewed by kandi - BETA
- Create a new cluster
- Given a list of ips returns the public ip address
- Format a log record
- Returns a tuple of names and host names
- Install a custom repo
- Load a remote module
- Purge data from cluster
- Find the path to the given executable
- Decorator to handle exceptions
- Make an exception message
- Write a configuration file
- Add a repository
- Install repository
- Check if we can connect to the remote host
- Check if the connection is upstart
- Return the nonlocal ip address
- Install the given packages
- Override subcommands
- Vendorize requirements
- Remove one or more packages
- Add a new RPM repository
- Gather keys from mon
- Return the contents of osd file
- Return a set of IP addresses for the given interface
- Uninstall ceph packages
- Add admin keys and conf to cluster
Community Discussions
Trending Discussions on ceph-deploy
QUESTION
I have been trying to deploy a Ceph cluster via Ansible. When I tried to render my deploy_ceph_cluster.sh.j2 template into a shell script, I ran into some problems. For better illustration, I will provide a minimal working example.
Here is my inventory file:
...ANSWER
Answered 2021-Apr-02 at 06:51
QUESTION
I have a cluster running nautilus v14.2.4 and want to upgrade it to the latest nautilus version.
Is this possible with ceph-deploy?
I see the package upgrade in apt, but can't find any documentation for how to do the upgrade on nautilus.
Any suggestions?
Thanks!
...ANSWER
Answered 2020-Oct-30 at 13:21
ceph-deploy is not really supported anymore and its functionality might be broken. I think it is still able to perform the basic tasks, but I wouldn't rely on it. I haven't tried this myself, but one way would be to change the repos (if necessary) to newer packages, run ceph-deploy install --release nautilus, and see if that works. If updates are applied you can continue with the rest; if not, you'll have to run the update manually on each node.
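As a rough, hedged illustration of what that attempt could look like (node names are placeholders, not taken from the question):

    # Point each node at a newer nautilus repository if needed, then let
    # ceph-deploy attempt the package upgrade on every node:
    ceph-deploy install --release nautilus node1 node2 node3
    # Restart the daemons and confirm the running versions afterwards:
    ceph versions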
QUESTION
I'm new to Ceph, and I'm trying to install and configure a ceph-cluster.
After successfully installing the ceph-cluster I ran into some issues regarding storage and decided to re-install after purging everything, following this guide:
https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/
which went well the first time I installed it.
But in my second attempt I get this error after running the install command:
ceph-deploy install ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1
And this is my output:
...ANSWER
Answered 2020-Aug-23 at 11:58
Managed to sort it out by changing the repository it is downloading from to debian-15.2.4, by installing Ceph like this:
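The exact command is cut off above; a hedged guess at its general shape, using ceph-deploy's --repo-url option (the repository URL is an assumption inferred from the answer, not quoted from it):

    # Install from an explicitly chosen repository instead of the default one:
    ceph-deploy install --repo-url https://download.ceph.com/debian-15.2.4 \
        ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1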
QUESTION
With the previous Jewel release I had no problems. I have created a test cluster of 5 VMs, all with CentOS 7 and the Nautilus release of Ceph. One VM is a monitor, three are OSDs, and one is admin-mgr. The deployment of the cluster is OK, the health is OK, but after creating MDS and pools...
...ANSWER
Answered 2020-Apr-14 at 08:33
In most cases such peering/unknown PG states are related to connectivity issues. Are the monitor and OSDs able to reach each other? Might there be a firewall problem or some faulty routing causing the issues?
Additionally the OSD and monitor logs are also worth checking out. Are there errors in the logs (most certainly)?
Checking all this will guide you to solve your problem.
See also the Ceph troubleshooting guide.
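A few generic checks of the kind the answer suggests (commands are standard Ceph/CentOS tooling; the ports shown are the usual Ceph defaults, not values from the question):

    # Cluster, PG and OSD state:
    ceph health detail
    ceph osd tree
    # On CentOS 7, make sure monitor and OSD ports are reachable:
    firewall-cmd --permanent --add-port=6789/tcp --add-port=6800-7300/tcp
    firewall-cmd --reload
    # Look for errors in the daemon logs on each node:
    tail -n 100 /var/log/ceph/ceph-mon.*.log /var/log/ceph/ceph-osd.*.log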
QUESTION
I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow the container to move to another host in the event of failure I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:
Installed a Ceph Storage Cluster across all nodes of the Swarm and created a RADOS Block Device (RBD). http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Installed Rexray on each node and configured it to use the RBD created above. https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
Deployed a Docker stack that mounts a volume using the rexray driver, e.g.
...
ANSWER
Answered 2018-Oct-17 at 02:57
Unfortunately the answer is no; in this use case rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:
Rexray does not support multiple mounts:
Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
But Rexray does support a feature called pre-emption, where:
...if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
However, pre-emption is not supported by the Ceph RBD. (https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
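For context, a single-host mount of this kind is typically created and consumed roughly as follows (volume and service names are illustrative assumptions, not taken from the question):

    # Create a volume backed by the Ceph RBD via the rexray driver:
    docker volume create --driver rexray --name appdata
    # Attach it to a single-replica service; a concurrent mount from a second
    # host would be refused, as described above:
    docker service create --name app --replicas 1 \
        --mount type=volume,source=appdata,target=/data,volume-driver=rexray nginx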
QUESTION
I'm playing with Ceph in a Vagrant environment and trying to create a minimal cluster. I have two nodes: 'master' and 'slave'. The master acts as admin, monitor, and manager; the slave is for an OSD.
I'm following the official ceph-deploy guides and facing a problem with OSD creation. On the slave node I created a 10 GB loop device and mounted it at /media/vdevice, then on the master node I tried to create the OSD:
...ANSWER
Answered 2018-Feb-14 at 16:55
Ceph requires a block device for an OSD. To turn a disk image file into a loopback block device you can use the losetup utility.
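A minimal sketch of that approach, assuming a 10 GB image file and the host names from the question:

    # On the slave node: create a sparse image and attach it as a loop device
    truncate -s 10G /media/vdevice/osd0.img
    losetup /dev/loop0 /media/vdevice/osd0.img
    # On the master node: hand the resulting block device to ceph-deploy
    ceph-deploy osd create --data /dev/loop0 slave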
QUESTION
I am learning Ceph storage (luminous) with one admin node and two nodes for OSD, MON, etc. I am following the doc http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to set up my initial storage cluster and got stuck after executing the command below. As per the document, the command should output 6 files, but the file "ceph.bootstrap-rbd.keyring" is missing in the admin node directory where I execute ceph-deploy commands.
ceph-deploy --username sanadmin mon create-initial
I am not sure whether this is normal behaviour or I am really missing something. I appreciate your help on this.
Thanks.
...ANSWER
Answered 2019-Jul-17 at 07:45
It is not important, because RBD is a native service for Ceph. Do not worry about that.
QUESTION
I want to create a Ceph cluster and then connect to it through the S3 RESTful API. So, I've deployed a Ceph cluster (mimic 13.2.4) on "Ubuntu 16.04.5 LTS" with 3 OSDs (one per 10 GB HDD).
Using this tutorials:
1) http://docs.ceph.com/docs/mimic/start/quick-start-preflight/#ceph-deploy-setup
2) http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/
At this point, ceph status is OK:
...ANSWER
Answered 2019-Jan-17 at 15:34
Your issue is
QUESTION
I set up a test cluster and followed the documentation.
I created the cluster with the command ceph-deploy new node1. After that, a ceph configuration file appeared in the current directory, which contains information about the monitor on the node with hostname node1. Then I added two OSDs to the cluster.
So now I have a cluster with 1 monitor and 2 OSDs. The ceph status command says that the status is HEALTH_OK.
Following the same documentation, I moved on to the section "Expanding your cluster" and added two new monitors with the commands ceph-deploy mon add node2 and ceph-deploy mon add node3. Now I have a cluster with three monitors in the quorum and status HEALTH_OK, but there is one little discrepancy for me. The ceph.conf is still the same. It contains old information about only one monitor. Why didn't the ceph-deploy mon add {node-name} command update the configuration file? And the main question is: why does ceph status display correct information about the new cluster state with 3 monitors, while ceph.conf doesn't contain this information? Where is the real configuration file, and why does ceph-deploy know it but I don't?
And it works even after a reboot. All ceph daemons start, read the incorrect ceph.conf (I checked this with strace) and, ignoring it, work fine with the new configuration.
And the last question: why didn't the ceph-deploy osd activate {ceph-node}:/path/to/directory command update the configuration file either? After all, why do we need the ceph.conf file if we have such a smart ceph-deploy now?
ANSWER
Answered 2017-Jun-07 at 06:52
You have multiple questions here.
1) ceph.conf doesn't need to be identical on all nodes for them to run. E.g. an OSD only needs the osd configuration it cares about, and a MON only needs the mon configuration (unless you run everything on the same node, which is also not recommended). So maybe your MON1 only has MON1, MON2 has MON2, and MON3 has MON3.
2) When a MON is created and then added, the MON map is updated, so the MONs themselves already know which other MONs are required for quorum. A MON doesn't rely on ceph.conf for quorum information, only for changing run-time configuration.
3) ceph-deploy is just a Python script that prepares and runs the ceph commands for you. If you look into the details, ceph-deploy uses e.g. ceph-disk zap/prepare/activate. Once your OSD has been prepared and activated, and the disk has been formatted as a Ceph partition, udev knows where to mount it. Then the systemd ceph-osd service activates ceph-osd at boot. That's why it doesn't need OSD information in ceph.conf at all.
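As a practical follow-up (not part of the original answer): if you want ceph.conf to reflect the new monitors anyway, one common approach is to edit the file in the admin working directory and push it out; the commands below are a sketch with placeholder node names:

    # Inspect the live monitor map the cluster actually uses:
    ceph mon dump
    # After adding the new mon entries to the local ceph.conf, distribute it:
    ceph-deploy --overwrite-conf config push node1 node2 node3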
QUESTION
I am trying to install Ceph on a CentOS 7 machine. When I run the disk prepare command it fails; the root cause is mkfs failing with the block size.
...ANSWER
Answered 2017-Jul-02 at 20:52
You probably have a wrong value for osd_journal_size in ceph.conf. See http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
"Ceph’s default osd journal size is 0, so you will need to set this in your ceph.conf file."
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Install ceph-deploy
You can use ceph-deploy like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
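A minimal sketch of such an installation, assuming Python 3 and a virtual environment (paths and environment name are illustrative):

    python3 -m venv ceph-deploy-env
    . ceph-deploy-env/bin/activate
    pip install --upgrade pip setuptools wheel
    pip install ceph-deploy
    ceph-deploy --version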