ceph-deploy | Deploy Ceph with minimal infrastructure | DevOps library

 by ceph | Python | Version: 2.0.1 | License: MIT

kandi X-RAY | ceph-deploy Summary


ceph-deploy is a Python library typically used in DevOps and Ansible applications. ceph-deploy has a build file available, a permissive license, and low support. However, ceph-deploy has 7 bugs and 2 vulnerabilities. You can install it with 'pip install ceph-deploy' or download it from GitHub or PyPI.

Deploy Ceph with minimal infrastructure, using just SSH access

            kandi-support Support

              ceph-deploy has a low active ecosystem.
              It has 374 star(s) with 299 fork(s). There are 136 watchers for this library.
              It had no major release in the last 12 months.
              ceph-deploy has no open issues reported. There are 6 open pull requests and 0 closed ones.
              It has a neutral sentiment in the developer community.
              The latest version of ceph-deploy is 2.0.1.

            kandi-Quality Quality

              ceph-deploy has 7 bugs (4 blocker, 0 critical, 3 major, 0 minor) and 88 code smells.

            kandi-Security Security

              ceph-deploy has 2 vulnerability issues reported (0 critical, 0 high, 0 medium, 2 low).
              ceph-deploy code analysis shows 0 unresolved vulnerabilities.
              There are 56 security hotspots that need review.

            kandi-License License

              ceph-deploy is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              ceph-deploy releases are not available; you will need to build from source code and install.
              A deployable package is available on PyPI.
              A build file is available, so you can build the component from source.
              ceph-deploy saves you 4340 person hours of effort in developing the same functionality from scratch.
              It has 9197 lines of code, 914 functions and 136 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed ceph-deploy and discovered the below as its top functions. This is intended to give you an instant insight into the functionality ceph-deploy implements, and to help you decide if it suits your requirements.
            • Create a new cluster
            • Given a list of ips returns the public ip address
            • Format a log record
            • Returns a tuple of names and host names
            • Install a custom repo
            • Load a remote module
            • Purge data from cluster
            • Find the path to the given executable
            • Decorator to handle exceptions
            • Make an exception message
            • Write a configuration file
            • Add a repository
            • Install repository
            • Check if we can connect to the remote host
            • Check if the connection is upstart
            • Return the nonlocal ip address
            • Install the given packages
            • Override subcommands
            • Vendorize requirements
            • Remove one or more packages
            • Add a new RPM repository
            • Gather keys from mon
            • Return the contents of osd file
            • Return a set of IP addresses for the given interface
            • Uninstall ceph packages
            • Add admin keys and conf to cluster

            ceph-deploy Key Features

            No Key Features are available at this moment for ceph-deploy.

            ceph-deploy Examples and Code Snippets

            No Code Snippets are available at this moment for ceph-deploy.

            Community Discussions

            QUESTION

            When I iterate over a string list generated by Jinja2, I get a series of characters rather than strings. Why does this happen?
            Asked 2021-Apr-02 at 06:51

            I have been trying to deploy a Ceph cluster via Ansible. When I tried to render my deploy_ceph_cluster.sh.j2 template into a shell script, I ran into some problems. For better illustration, I will provide a minimal working example.

            Here is my inventory file:

            ...

            ANSWER

            Answered 2021-Apr-02 at 06:51
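
            The answer body is not preserved here, but a likely cause can be sketched: Jinja2 iterates a string character by character, so a list that reaches the template in its string form loops per char rather than per item. A minimal demonstration (the template text and variable are illustrative, not taken from the question):

              python3 -c 'import jinja2; print(jinja2.Template("{% for i in v %}[{{ i }}]{% endfor %}").render(v="abc"))'
              # prints [a][b][c]: a string iterates per character
              python3 -c 'import jinja2; print(jinja2.Template("{% for i in v %}[{{ i }}]{% endfor %}").render(v=["abc"]))'
              # prints [abc]: a list iterates per item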

            QUESTION

            How to perform minor upgrades with ceph-deploy?
            Asked 2020-Nov-24 at 14:52

            I have a cluster running Nautilus v14.2.4 and want to upgrade it to the latest Nautilus version.

            Is this possible with ceph-deploy?

            I see the package upgrade in apt, but can't find any documentation on how to do the upgrade on Nautilus.

            Any suggestions?

            Thanks!

            ...

            ANSWER

            Answered 2020-Oct-30 at 13:21

            ceph-deploy is not really supported anymore and its functionality might be broken. I think it is still able to perform the basic tasks, but I wouldn't rely on it. I haven't tried it myself, but one way would be to change the repos (if necessary) for newer packages, run ceph-deploy install --release nautilus, and see if that works. If the updates are applied you can continue with the rest; if not, you'll have to run the update manually on each node.
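
            A hedged sketch of that approach (host names are illustrative; it assumes the Nautilus repos are already configured on each node):

              ceph-deploy install --release nautilus mon1 osd1 osd2
              # if the packages were updated, restart the daemons per node, mons first:
              ssh mon1 'sudo systemctl restart ceph-mon.target'
              ssh osd1 'sudo systemctl restart ceph-osd.target'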

            Source https://stackoverflow.com/questions/64607059

            QUESTION

            Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
            Asked 2020-Aug-23 at 11:58

            I'm new to Ceph, and I'm trying to install and configure a ceph cluster. After successfully installing the cluster I ran into some issues regarding storage, so I decided to re-install after purging everything, following this guide https://www.howtoforge.com/tutorial/how-to-install-a-ceph-cluster-on-ubuntu-16-04/ which went well the first time I installed it. But on my second attempt I get this error after running the install command: ceph-deploy install ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1

            And this is my output:

            ...

            ANSWER

            Answered 2020-Aug-23 at 11:58

            Managed to sort it out by changing the repository it downloads from to debian-15.2.4, installing Ceph like this:
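
            The original snippet is not preserved above; a hedged reconstruction using ceph-deploy's --repo-url/--gpg-url options (the URLs are illustrative, the host names are from the question):

              ceph-deploy install \
                  --repo-url https://download.ceph.com/debian-15.2.4 \
                  --gpg-url https://download.ceph.com/keys/release.asc \
                  ceph-admin ceph-osd1 ceph-osd2 ceph-osd3 mon1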

            Source https://stackoverflow.com/questions/63447868

            QUESTION

            Ceph Luminous, what I miss?
            Asked 2020-Apr-14 at 08:33

            With the previous Jewel release I had no problems. I have created a test cluster of 5 VMs, all with CentOS 7 and the Nautilus release of Ceph. One VM is a monitor, three are OSDs, and one is admin-mgr. The deployment of the cluster is OK and its health is OK, but after creating the MDS and pools...

            ...

            ANSWER

            Answered 2020-Apr-14 at 08:33

            In most cases such peering/unknown PGs are related to connectivity issues. Are the monitor and the OSDs able to reach each other? Might there be a firewall problem or some foul routing causing the issues?

            Additionally, the OSD and monitor logs are also worth checking out. Are there errors in the logs (most certainly)?

            Checking all of this will guide you to solving your problem.

            See also the Ceph troubleshooting guide.
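
            A sketch of those checks (commands are illustrative; run them on the mon and OSD nodes):

              ceph health detail                        # which PGs and OSDs are affected?
              ceph osd tree                             # are all OSDs up and in?
              firewall-cmd --list-all                   # CentOS 7: are mon/OSD ports (3300, 6789, 6800-7300) open?
              tail -n 50 /var/log/ceph/ceph-osd.0.log   # any errors in an OSD log?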

            Source https://stackoverflow.com/questions/61164763

            QUESTION

            Mount rexray/ceph volume in multiple containers on Docker swarm
            Asked 2020-Jan-03 at 11:56
            What I have done

            I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow a container to move to another host in the event of failure, I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:

            1. Installed a Ceph storage cluster across all nodes of the Swarm and created a RADOS Block Device (RBD). http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

            2. Installed Rexray on each node and configured it to use the RBD created above. https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/

            3. Deployed a Docker stack that mounts a volume using the rexray driver, e.g.

              ...

            ANSWER

            Answered 2018-Oct-17 at 02:57

            Unfortunately the answer is no: in this use case rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:

            1. Rexray does not support multiple mounts:

              Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)

            2. But Rexray does support a feature called pre-emption where:

              ..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)

            3. However, pre-emption is not supported by the Ceph RBD. (https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)

            Source https://stackoverflow.com/questions/52731504

            QUESTION

            How to create osd for Ceph on loop device
            Asked 2019-Nov-04 at 19:47

            I'm playing with Ceph in a Vagrant environment and trying to create a minimal cluster. I have two nodes, 'master' and 'slave': master acts as admin, monitor, and manager; slave is for the OSD.

            I'm following the official ceph-deploy guides and facing a problem with OSD creation. On the slave node I created a 10 GB loop device and mounted it at /media/vdevice; then on the master node I tried to create the OSD:

            ...

            ANSWER

            Answered 2018-Feb-14 at 16:55

            Ceph requires a block device for an OSD. To turn a disk image file into a loopback block device you can use the losetup utility.
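
            A hedged sketch of that setup (image path, size, and loop device are illustrative; note that a plain loop device will not survive a reboot):

              # on the slave node: back a loop device with a 10 GB image file
              dd if=/dev/zero of=/media/vdevice/osd.img bs=1M count=10240
              losetup /dev/loop0 /media/vdevice/osd.img

              # on the master (admin) node: hand the block device to ceph-deploy
              ceph-deploy osd create --data /dev/loop0 slave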

            Source https://stackoverflow.com/questions/48784830

            QUESTION

            Ceph-deploy is not creating ceph.bootstrap-rbd.keyring file
            Asked 2019-Jul-17 at 07:45

            I am learning Ceph storage (Luminous) with one admin node and two nodes for OSD, MON, etc. I am following the doc http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to set up my initial storage cluster, and I am stuck after executing the command below. Per the document, the command should output 6 files, but the file "ceph.bootstrap-rbd.keyring" is missing from the admin node directory where I execute ceph-deploy commands.

            ceph-deploy --username sanadmin mon create-initial

            I am not sure whether this is normal behaviour or I am really missing something. I would appreciate your help with this.

            Thanks.

            ...

            ANSWER

            Answered 2019-Jul-17 at 07:45

            It is not important: RBD is a native Ceph service, so no separate bootstrap keyring is generated for it. Do not worry about it.
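
            For reference, a Luminous-era mon create-initial typically leaves files like these in the working directory (the listing is illustrative); ceph.bootstrap-rbd.keyring is simply not among them:

              ls ceph.*
              # ceph.conf                   ceph.mon.keyring
              # ceph.client.admin.keyring   ceph.bootstrap-mds.keyring
              # ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring
              # ceph.bootstrap-rgw.keyring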

            Source https://stackoverflow.com/questions/46683150

            QUESTION

            Ceph status HEALTH_WARN while adding an RGW Instance
            Asked 2019-Jan-17 at 15:34

            I want to create a Ceph cluster and then connect to it through the S3 RESTful API. So I've deployed a Ceph cluster (Mimic 13.2.4) on "Ubuntu 16.04.5 LTS" with 3 OSDs (one per HDD, 10 GB each).

            Using these tutorials:

            1) http://docs.ceph.com/docs/mimic/start/quick-start-preflight/#ceph-deploy-setup

            2) http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/

            At this point, ceph status is OK:

            ...

            ANSWER

            Answered 2019-Jan-17 at 15:34

            QUESTION

            Ceph configuration file and ceph-deploy
            Asked 2017-Jul-26 at 21:57

            I set up a test cluster, following the documentation.

            I created the cluster with the command ceph-deploy new node1. After that, a ceph configuration file appeared in the current directory, containing information about the monitor on the node with hostname node1. Then I added two OSDs to the cluster.

            So now I have a cluster with 1 monitor and 2 OSDs, and the ceph status command says the status is HEALTH_OK.

            Following the same documentation, I moved on to the section "Expanding your cluster" and added two new monitors with the commands ceph-deploy mon add node2 and ceph-deploy mon add node3. Now I have a cluster with three monitors in the quorum and status HEALTH_OK, but there is one little discrepancy for me: ceph.conf is still the same. It contains the old information about only one monitor. Why didn't the ceph-deploy mon add {node-name} command update the configuration file? And the main question: why does ceph status display correct information about the new cluster state with 3 monitors while ceph.conf doesn't contain this information? Where is the real configuration, and why does ceph-deploy know about it while I don't?

            And it works even after a reboot: all ceph daemons start, read the outdated ceph.conf (I checked this with strace) and, ignoring it, work fine with the new configuration.

            And the last question: why didn't the ceph-deploy osd activate {ceph-node}:/path/to/directory command update the configuration file either? After all, why do we need the ceph.conf file if we now have such a smart ceph-deploy?

            ...

            ANSWER

            Answered 2017-Jun-07 at 06:52

            You have multiple questions here.

            1) ceph.conf doesn't need to be identical on all nodes for them to run. E.g. OSDs only need the osd configuration they care about, and MONs only need the mon configuration they care about (unless you run everything on the same node, which is also not recommended). So maybe your MON1 has MON1's settings, MON2 has MON2's, and MON3 has MON3's.

            2) When a MON is created and then added, the MON map is updated, so the MONs themselves already know which other MONs are required for quorum. A MON doesn't count on ceph.conf for quorum information, only for run-time configuration changes.

            3) ceph-deploy is just a Python script that prepares and runs ceph commands for you. If you look into the details, ceph-deploy uses, e.g., ceph-disk zap/prepare/activate. Once your OSD has been prepared and activated, and the disk has been formatted as a ceph partition, udev knows where to mount it. Then the systemd ceph-osd service activates ceph-osd at boot. That's why it doesn't need the OSD information in ceph.conf at all.
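
            If you do want the on-disk ceph.conf to match the running cluster, one hedged option (host names illustrative) is to edit the copy in the admin working directory and push it out with ceph-deploy's config push subcommand:

              # after editing ceph.conf (e.g. adding the new mon hosts):
              ceph-deploy --overwrite-conf config push node1 node2 node3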

            Source https://stackoverflow.com/questions/44346046

            QUESTION

            Ceph prepare disk agsize (251 blocks) too small, need at least 4096 blocks
            Asked 2017-Jul-02 at 20:52

            I am trying to install Ceph on a CentOS 7 machine. When I run the disk prepare command it fails; the root cause is mkfs failing on the block size.

            ...

            ANSWER

            Answered 2017-Jul-02 at 20:52

            You probably have a wrong value for osd_journal_size in ceph.conf. See http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/

            "Ceph’s default osd journal size is 0, so you will need to set this in your ceph.conf file."

            Source https://stackoverflow.com/questions/44557476

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Install ceph-deploy

            You can install using 'pip install ceph-deploy' or download it from GitHub or PyPI.
            You can use ceph-deploy like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.
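
            A minimal sketch of such a setup (the environment name is illustrative):

              python3 -m venv ceph-deploy-env
              . ceph-deploy-env/bin/activate
              pip install --upgrade pip setuptools wheel
              pip install ceph-deploy
              ceph-deploy --version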

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for and ask them on the Stack Overflow community page.

            Install
          • PyPI

            pip install ceph-deploy

          • Clone (HTTPS)

            https://github.com/ceph/ceph-deploy.git

          • GitHub CLI

            gh repo clone ceph/ceph-deploy

          • SSH

            git@github.com:ceph/ceph-deploy.git


            Consider Popular DevOps Libraries

            • ansible by ansible
            • devops-exercises by bregman-arie
            • core by dotnet
            • semantic-release by semantic-release
            • Carthage by Carthage

            Try Top Libraries by ceph

            • ceph by ceph (C++)
            • ceph-ansible by ceph (Python)
            • ceph-container by ceph (Shell)
            • ceph-csi by ceph (Go)
            • go-ceph by ceph (Go)