CMAK | A tool for managing Apache Kafka clusters | Pub Sub library

by yahoo | Scala | Version: 3.0.0.5 | License: Apache-2.0

kandi X-RAY | CMAK Summary

CMAK (Cluster Manager for Apache Kafka, previously known as Kafka Manager) is a Scala library typically used in Messaging, Pub Sub, and Kafka applications. CMAK has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

Support

  • CMAK has a moderately active ecosystem.
  • It has 10302 star(s) with 2342 fork(s). There are 548 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 453 open issues and 177 have been closed. On average, issues are closed in 147 days. There are 28 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of CMAK is 3.0.0.5.

Quality

  • CMAK has 0 bugs and 0 code smells.

Security

  • CMAK has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • CMAK code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • CMAK is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • CMAK releases are available to install and integrate.
  • Installation instructions are not available. Examples and code snippets are available.
  • It has 18893 lines of code, 953 functions and 158 files.
  • It has low code complexity. Code complexity directly impacts maintainability of the code.

CMAK Key Features

  • Manage multiple clusters
  • Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
  • Run preferred replica election
  • Generate partition assignments with option to select brokers to use
  • Run reassignment of partitions (based on generated assignments)
  • Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+)
  • Delete topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in broker config)
  • Topic list now indicates topics marked for deletion (only supported on 0.8.2+)
  • Batch generate partition assignments for multiple topics with option to select brokers to use
  • Batch run reassignment of partitions for multiple topics
  • Add partitions to existing topic
  • Update config for existing topic
  • Optionally enable JMX polling for broker-level and topic-level metrics
  • Optionally filter out consumers that do not have ids / owners / offsets directories in ZooKeeper

Requirements
------------
1. [Kafka 0.8.*.* or 0.9.*.* or 0.10.*.* or 0.11.*.*](http://kafka.apache.org/downloads.html)
2. Java 11+

Configuration
-------------

The minimum configuration is the zookeeper hosts which are to be used for CMAK (pka Kafka Manager) state.
This can be found in the application.conf file in the conf directory. The same file will be packaged
in the distribution zip file; you may modify settings after unzipping the file on the desired server.

    cmak.zkhosts="my.zookeeper.host.com:2181"

You can specify multiple zookeeper hosts by comma delimiting them, like so:

    cmak.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

Alternatively, use the environment variable `ZK_HOSTS` if you don't want to hardcode any values.

    ZK_HOSTS="my.zookeeper.host.com:2181"
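
Multiple hosts work the same way with the environment variable, comma-delimited like the config value. A minimal shell sketch (hostnames are placeholders):

```shell
# Supply the ZooKeeper hosts via the environment instead of editing
# application.conf; multiple hosts are comma-delimited (placeholders shown).
export ZK_HOSTS="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

# CMAK reads ZK_HOSTS at startup, e.g.:
#     bin/cmak
echo "$ZK_HOSTS"
```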

You can optionally enable/disable the following functionality by modifying the default list in application.conf:

    application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

 - KMClusterManagerFeature - allows adding, updating, deleting cluster from CMAK (pka Kafka Manager)
 - KMTopicManagerFeature - allows adding, updating, deleting topic from a Kafka cluster
 - KMPreferredReplicaElectionFeature - allows running of preferred replica election for a Kafka cluster
 - KMReassignPartitionsFeature - allows generating partition assignments and reassigning partitions

Consider setting these parameters for larger clusters with JMX enabled:

 - cmak.broker-view-thread-pool-size=< 3 * number_of_brokers>
 - cmak.broker-view-max-queue-size=< 3 * total # of partitions across all topics>
 - cmak.broker-view-update-seconds=< cmak.broker-view-max-queue-size / (10 * number_of_brokers) >

Here is an example for a Kafka cluster with 10 brokers and 100 topics, each topic having 10 partitions, giving 1000 total partitions, with JMX enabled:

 - cmak.broker-view-thread-pool-size=30
 - cmak.broker-view-max-queue-size=3000
 - cmak.broker-view-update-seconds=30
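
Expressed as an application.conf fragment, the same example tuning might look like this (a sketch; adjust the numbers to your own broker and partition counts):

```
# Tuning for 10 brokers / 1000 total partitions with JMX polling enabled
cmak.broker-view-thread-pool-size=30
cmak.broker-view-max-queue-size=3000
cmak.broker-view-update-seconds=30
```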

The following control the consumer offset cache's thread pool and queue:

 - cmak.offset-cache-thread-pool-size=< default is # of processors>
 - cmak.offset-cache-max-queue-size=< default is 1000>
 - cmak.kafka-admin-client-thread-pool-size=< default is # of processors>
 - cmak.kafka-admin-client-max-queue-size=< default is 1000>

You should increase the above for a large number of consumers with consumer polling enabled, though it mainly affects ZK-based consumer polling.

Kafka managed consumer offsets are now consumed by KafkaManagedOffsetCache from the "__consumer_offsets" topic. Note: this has not been tested with a large number of offsets being tracked. There is a single thread per cluster consuming this topic, so it may not be able to keep up with a large number of offsets being pushed to the topic.

### Authenticating a User with LDAP
Warning: you need to have SSL configured for CMAK (pka Kafka Manager) to ensure your credentials aren't passed unencrypted.
Authenticating a user with LDAP is possible by passing the user credentials with the Authorization header.
LDAP authentication is done on the first visit; if successful, a cookie is set.
On the next request, the cookie value is compared with the credentials from the Authorization header.
LDAP support is through the basic authentication filter.

1. Configure basic authentication
- basicAuthentication.enabled=true
- basicAuthentication.realm=< basic authentication realm>

2. Encryption parameters (optional, otherwise randomly generated on startup):
- basicAuthentication.salt="some-hex-string-representing-byte-array"
- basicAuthentication.iv="some-hex-string-representing-byte-array"
- basicAuthentication.secret="my-secret-string"

3. Configure LDAP/LDAPS authentication
- basicAuthentication.ldap.enabled=< Boolean flag to enable/disable ldap authentication >
- basicAuthentication.ldap.server=< fqdn of LDAP server>
- basicAuthentication.ldap.port=< port of LDAP server>
- basicAuthentication.ldap.username=< LDAP search username>
- basicAuthentication.ldap.password=< LDAP search password>
- basicAuthentication.ldap.search-base-dn=< LDAP search base>
- basicAuthentication.ldap.search-filter=< LDAP search filter>
- basicAuthentication.ldap.connection-pool-size=< number of connections to LDAP server>
- basicAuthentication.ldap.ssl=< Boolean flag to enable/disable LDAPS>

4. (Optional) Limit access to a specific LDAP Group
- basicAuthentication.ldap.group-filter=< LDAP group filter>
- basicAuthentication.ldap.ssl-trust-all=< Boolean flag to allow non-expired invalid certificates>

#### Example (Online LDAP Test Server):

- basicAuthentication.ldap.enabled=true
- basicAuthentication.ldap.server="ldap.forumsys.com"
- basicAuthentication.ldap.port=389
- basicAuthentication.ldap.username="cn=read-only-admin,dc=example,dc=com"
- basicAuthentication.ldap.password="password"
- basicAuthentication.ldap.search-base-dn="dc=example,dc=com"
- basicAuthentication.ldap.search-filter="(uid=$capturedLogin$)"
- basicAuthentication.ldap.group-filter="cn=allowed-group,ou=groups,dc=example,dc=com"
- basicAuthentication.ldap.connection-pool-size=10
- basicAuthentication.ldap.ssl=false
- basicAuthentication.ldap.ssl-trust-all=false


Deployment
----------

The command below will create a zip file which can be used to deploy the application.

    ./sbt clean dist

Please refer to the Play Framework documentation on [production deployment/configuration](https://www.playframework.com/documentation/2.4.x/ProductionConfiguration).

If java is not in your path, or you need to build against a specific java version,
please use the following (the example assumes zulu java11):

    $ PATH=/usr/lib/jvm/zulu-11-amd64/bin:$PATH \
      JAVA_HOME=/usr/lib/jvm/zulu-11-amd64 \
      /path/to/sbt -java-home /usr/lib/jvm/zulu-11-amd64 clean dist

This ensures that the 'java' and 'javac' binaries in your path are looked up in the
correct location first. Next, for all downstream tools that only read JAVA_HOME, it points
them to the Java 11 location. Lastly, it tells sbt to use the Java 11 location as well.

Starting the service
--------------------

After extracting the produced zip file and changing the working directory to it, you can
run the service like this:

    $ bin/cmak

By default, it will choose port 9000. This is overridable, as is the location of the
configuration file. For example:

    $ bin/cmak -Dconfig.file=/path/to/application.conf -Dhttp.port=8080

Again, if java is not in your path, or you need to run against a different version of java,
add the -java-home option as follows:

    $ bin/cmak -java-home /usr/lib/jvm/zulu-11-amd64

Starting the service with Security
----------------------------------

To add JAAS configuration for SASL, add the config file location at start:

    $ bin/cmak -Djava.security.auth.login.config=/path/to/my-jaas.conf

NOTE: Make sure the user running CMAK (pka Kafka Manager) has read permissions on the JAAS config file.
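
What goes in the JAAS file depends on the SASL mechanism your brokers use. A minimal sketch for SASL/PLAIN (the login module class is Kafka's standard PlainLoginModule; the username and password are placeholder values):

```
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="cmak"
  password="cmak-secret";
};
```

For SASL/GSSAPI (Kerberos) you would use a Krb5LoginModule entry instead, with keytab and principal options appropriate to your environment.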


Packaging
---------

If you'd like to create a Debian or RPM package instead, you can run one of:

    sbt debian:packageBin

    sbt rpm:packageBin

Credits
-------

Most of the utils code has been adapted to work with [Apache Curator](http://curator.apache.org) from [Apache Kafka](http://kafka.apache.org).

Name and Management
-------

CMAK was renamed from its previous name due to [this issue](https://github.com/yahoo/kafka-manager/issues/713). CMAK is designed to be used with Apache Kafka and is offered to support the needs of the Kafka community. This project is currently managed by employees at Verizon Media and by the community that supports this project.

License
-------

Licensed under the terms of the Apache License 2.0. See accompanying LICENSE file for terms.

Consumer/Producer Lag
-------

Producer offset is polled. Consumer offset is read from the offset topic for Kafka-based consumers. This means the reported lag may be negative, since we consume offsets from the offset topic faster than we poll the producer offset. For example, if the producer offset is polled as 100 and, a moment later, the consumer's committed offset is read as 105, the reported lag is -5. This is normal and not a problem.

Migration from Kafka Manager to CMAK
-------

1. Copy config files from old version to new version (application.conf, consumer.properties)
2. Change start script to use bin/cmak instead of bin/kafka-manager
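
The two steps can be sketched in shell (the paths below are throwaway examples created for illustration, not a real install layout):

```shell
# Illustrative sketch of the migration; real installs would live under
# your own paths (e.g. /opt). Throwaway directories stand in here.
OLD=$(mktemp -d)   # old Kafka Manager install
NEW=$(mktemp -d)   # new CMAK install
mkdir -p "$OLD/conf" "$NEW/conf"
echo 'cmak.zkhosts="my.zookeeper.host.com:2181"' > "$OLD/conf/application.conf"
touch "$OLD/conf/consumer.properties"

# 1. Copy config files from the old version to the new version
cp "$OLD/conf/application.conf" "$OLD/conf/consumer.properties" "$NEW/conf/"

# 2. Change the start script to call bin/cmak instead of bin/kafka-manager
printf 'exec bin/kafka-manager "$@"\n' > "$NEW/start.sh"
sed -i 's#bin/kafka-manager#bin/cmak#' "$NEW/start.sh"
```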

KeeperErrorCode = Unimplemented for /kafka-manager/mutex
--------------------------------------------------------

If you see this error in CMAK's logs, a known workaround is to create the missing mutex znodes manually with the ZooKeeper CLI:
    WATCHER::

    WatchedEvent state:SyncConnected type:None path:null
    [zk: localhost:2181(CONNECTED) 0] ls /kafka-manager
    [configs, deleteClusters, clusters]
    [zk: localhost:2181(CONNECTED) 1] create /kafka-manager/mutex ""
    Created /kafka-manager/mutex
    [zk: localhost:2181(CONNECTED) 2] create /kafka-manager/mutex/locks ""
    Created /kafka-manager/mutex/locks
    [zk: localhost:2181(CONNECTED) 3] create /kafka-manager/mutex/leases ""
    Created /kafka-manager/mutex/leases
    [zk: localhost:2181(CONNECTED) 4]




Install CMAK

You can download it from GitHub.

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.


  • © 2022 Open Weaver Inc.