
opentsdb | A scalable, distributed Time Series Database. | Database library

by OpenTSDB | Java | Version: v2.4.1 | License: LGPL-2.1

kandi X-RAY | opentsdb Summary

opentsdb is a Java library typically used in Database applications. opentsdb has no vulnerabilities, it has a Weak Copyleft License, and it has high support. However, opentsdb has 85 bugs and its build file is not available. You can download it from GitHub or Maven.
OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase's scalability, OpenTSDB allows you to collect thousands of metrics from tens of thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store hundreds of billions of data points. OpenTSDB is free software and is available under both LGPLv2.1+ and GPLv3+. Find more about OpenTSDB at http://opentsdb.net.
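
For example, a collector can push a data point to a running TSD through the HTTP API's /api/put endpoint. The following is a minimal Java sketch using only the standard library; the localhost:4242 address, metric name, and tag values are illustrative placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PutExample {
    public static void main(String[] args) throws Exception {
        // One data point in the JSON format accepted by /api/put.
        String json = "{\"metric\":\"sys.cpu.user\","
                + "\"timestamp\":" + (System.currentTimeMillis() / 1000) + ","
                + "\"value\":42.5,"
                + "\"tags\":{\"host\":\"web01\"}}";

        // Placeholder address; a TSD listens on tsd.network.port (4242 by default).
        URL url = new URL("http://localhost:4242/api/put");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        // 204 No Content indicates the point was accepted.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}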

Support

  • opentsdb has a highly active ecosystem.
  • It has 4615 star(s) with 1254 fork(s). There are 345 watchers for this library.
  • There was 1 major release in the last 12 months.
  • There are 503 open issues and 575 have been closed. On average issues are closed in 382 days. There are 15 open pull requests and 0 closed requests.
  • It has a positive sentiment in the developer community.
  • The latest version of opentsdb is v2.4.1.

Quality

  • opentsdb has 85 bugs (5 blocker, 2 critical, 45 major, 33 minor) and 10333 code smells.

Security

  • opentsdb has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • opentsdb code analysis shows 0 unresolved vulnerabilities.
  • There are 23 security hotspots that need review.

License

  • opentsdb is licensed under the LGPL-2.1 License. This license is Weak Copyleft.
  • Weak Copyleft licenses have some restrictions, but you can use them in commercial projects.

Reuse

  • opentsdb releases are available to install and integrate.
  • Deployable package is available in Maven.
  • opentsdb has no build file. You will need to create the build yourself to build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • opentsdb saves you 102750 person hours of effort in developing the same functionality from scratch.
  • It has 110633 lines of code, 7959 functions and 403 files.
  • It has high code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed opentsdb and discovered the below as its top functions. This is intended to give you an instant insight into the functionality opentsdb implements, and help you decide if it suits your requirements.

  • Process a list of datapoints.
  • Run the fsck.
  • Format a query asynchronously.
  • Called when a module is loaded.
  • Handles a query.
  • Handles a POST request.
  • Processes the TSeries meta object.
  • Initialize the filters.
  • Fetches a branch from storage.
  • Imports a file.

opentsdb Key Features

A scalable, distributed Time Series Database.

default

   ___                 _____ ____  ____  ____
  / _ \ _ __   ___ _ _|_   _/ ___||  _ \| __ )
 | | | | '_ \ / _ \ '_ \| | \___ \| | | |  _ \
 | |_| | |_) |  __/ | | | |  ___) | |_| | |_) |
  \___/| .__/ \___|_| |_|_| |____/|____/|____/
       |_|    The modern time series database.

Tick_charts deployment.yaml cannot apply with kubectl

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller    # or: helm init --service-account tiller --upgrade (if helm init has already been run)
helm repo update
helm install --name my-release stable/influxdb

LAST DEPLOYED: Fri Nov 22 14:10:41 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                 DATA  AGE
my-release-influxdb  1     1s

==> v1/Pod(related)
NAME                   READY  STATUS   RESTARTS  AGE
my-release-influxdb-0  0/1    Pending  0         1s

==> v1/Service
NAME                 TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)            AGE
my-release-influxdb  ClusterIP  *.*.*.*   <none>       8086/TCP,8088/TCP  1s

==> v1/StatefulSet
NAME                 READY  AGE
my-release-influxdb  0/1    1s


NOTES:
InfluxDB can be accessed via port 8086 on the following DNS name from within your cluster:

- http://my-release-influxdb.default:8086

You can easily connect to the remote instance with your local influx cli. To forward the API port to localhost:8086 run the following:

- kubectl port-forward --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{ .items[0].metadata.name }') 8086:8086

You can also connect to the influx cli from inside the container. To open a shell session in the InfluxDB pod run the following:

- kubectl exec -i -t --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{.items[0].metadata.name}') /bin/sh

To tail the logs for the InfluxDB pod run the following:

- kubectl logs -f --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{ .items[0].metadata.name }')



kubectl get all -l app=my-release-influxdb
NAME                        READY   STATUS    RESTARTS   AGE
pod/my-release-influxdb-0   1/1     Running   0          11m

NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/my-release-influxdb   ClusterIP   *.*.*.*      <none>        8086/TCP,8088/TCP   11m

NAME                                   READY   AGE
statefulset.apps/my-release-influxdb   1/1     11m
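
Note that these steps assume Helm 2: Helm 3 removed Tiller, so the helm init and Tiller service-account commands are not needed there.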

HBase schema design to store video frames: rows vs columns, rows as columns

key = hash(video_id, timestamp) + video_id + timestamp
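
The hash prefix guards against region hotspotting when timestamps increase monotonically, while the full video_id and timestamp remain recoverable from the key itself. A minimal Java sketch of that layout; the one-byte prefix width and fixed-width long fields are illustrative assumptions, not part of the answer:

import java.nio.ByteBuffer;

public class FrameKey {
    // Row key layout: hash(video_id, timestamp) + video_id + timestamp.
    static byte[] rowKey(long videoId, long timestamp) {
        byte prefix = (byte) Long.hashCode(videoId * 31 + timestamp);
        return ByteBuffer.allocate(1 + Long.BYTES + Long.BYTES)
                .put(prefix)       // spreads consecutive frames across regions
                .putLong(videoId)  // recoverable from bytes 1..8
                .putLong(timestamp) // recoverable from bytes 9..16
                .array();
    }
}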

Kubernetes ConfigMap YAML into Terraform Kubernetes

resource "aws_instance" "example" {
  ami = "abc123"

  network_interface {
    # ...
  }
}
-----------------------
resource "kubernetes_config_map" "opentsdb" {
  metadata {
    name = "opentsdb-config"
    namespace = "dev"
  }

  data = {
    "opentsdb.conf" = <<EOF
google.bigtable.project.id = ${var.project_id}
google.bigtable.instance.id = ${var.bigtable_instance_id}
google.bigtable.zone.id = ${var.zone}
hbase.client.connection.impl = com.google.cloud.bigtable.hbase1_2.BigtableConnection
google.bigtable.auth.service.account.enable = true

tsd.network.port = 4242
tsd.core.auto_create_metrics = true
tsd.core.meta.enable_realtime_ts = true
tsd.core.meta.enable_realtime_uid = true
tsd.core.meta.enable_tsuid_tracking = true
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 131072
tsd.storage.fix_duplicates = true
tsd.storage.enable_compaction = false
tsd.storage.max_tags = 12
tsd.http.staticroot = /opentsdb/build/staticroot
tsd.http.cachedir = /tmp/opentsdb
    EOF
  }
}
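
The ConfigMap rendered by this resource can then be mounted into the OpenTSDB pod as a volume (or referenced from its pod spec) so the TSD reads the generated opentsdb.conf at startup.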

What does it mean when the tag value returned in an opentsdb query response is "node"?

tsd.storage.salt.width = 1
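
Salting must be configured identically on the TSDs that wrote the data and the TSDs serving queries; if tsd.storage.salt.width differs between them, row keys are parsed at the wrong byte offsets and queries can resolve metric and tag UIDs to unrelated strings, which appears to be what produces the stray "node" tag value here.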

OpenTSDB no JDK found

JDK_DIRS=" Path_to_your_JDK_here  \
  /usr/lib/jvm/java-8-oracle /usr/lib/jvm/java-8-openjdk \
  /usr/lib/jvm/java-8-openjdk-amd64/ /usr/lib/jvm/java-8-openjdk-i386/ \
        \
  /usr/lib/jvm/java-7-oracle /usr/lib/jvm/java-7-openjdk \
  /usr/lib/jvm/java-7-openjdk-amd64/ /usr/lib/jvm/java-7-openjdk-i386/ \
        \
  /usr/lib/jvm/default-java"
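
This JDK_DIRS list comes from the script that launches OpenTSDB; prepending the path to your own JDK installation (the Path_to_your_JDK_here placeholder above) lets it locate a JDK installed somewhere it does not search by default.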

OpenTSDB integration with kerberized HBase

# opentsdb.conf
tsd.network.port = 4242
tsd.storage.hbase.zk_basedir = /hbase-secure
tsd.storage.hbase.zk_quorum = ZKhostname1,ZKhostname2,ZKhostname3
hbase.security.auth.enable=true
hbase.security.authentication=kerberos
hbase.kerberos.regionserver.principal=hbase/hostname@FORSYS.LAN
hbase.sasl.clientconfig=Client

# tsdb startup script: add the src dir so we can find logback.xml
CLASSPATH="$CLASSPATH:$abs_srcdir/src:/usr/hdp/2.4.2.0-258/zookeeper/lib/:/usr/hdp/2.4.2.0-258/zookeeper/:/etc/hadoop/2.4.2.0-258/0/:/usr/hdp/2.4.2.0-258/hbase/:/etc/hbase/2.4.2.0-258/0/:/home/user/phoenix-4.4.0-HBase-1.1-client.jar"

JVMARGS=${JVMARGS-'-Djava.security.krb5.conf=/etc/krb5.conf -Dhbase.security.authentication=kerberos -Dhbase.kerberos.regionserver.principal=hbase/hostname@FORSYS.LAN -Dhbase.rpc.protection=authentication -Dhbase.sasl.clientconfig=Client -Djava.security.auth.login.config=/home/user/opentsdb-jaas.conf -enableassertions -enablesystemassertions'}

# JAAS configuration referenced by -Djava.security.auth.login.config (opentsdb-jaas.conf)
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true;
}

Python string template substitution with dict of regex groups?

import re

label_name = "${1}_${2}_${3}_${6}"
metric = "connstats_by.vip.nested._Common_Domain.89.44.250.117.conncount:40|g"
rx = re.compile('connstats_by\\.vip\\.nested\\.([^.]*)\\.([^.]*)\\.([^.]*)\\.([^.]*)\\.([^.]*)\\.([^\:]*)(?:\:)([^\|]*)(?:\|)([^\n]*)')
# str.format() fills {1}..{6} positionally from the zero-indexed groups() tuple,
# and the bare "$" characters pass through untouched:
print(label_name.format(*rx.match(metric).groups()))
# prints: $89_$44_$250_$40

Regex stripping start and end of string

str = "15-min-sum:rate:proc.stat.cpu{host=foo,type=idle}"
print(str[str.rfind(":")+1 : str.find("{")])
-----------------------
https://regex101.com/r/4aIVLr/4
(?![\w-]*:)([\w\.]*)({.*})
creates two groups

API redirect with ProxyPassMatch

ProxyPassMatch "/api/*" "http://127.0.0.1:4343"

How JDK 8's type inference works with generics?

AAA(Collections.singletonList(new B())); // returns List<A> NOT List<B>
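
The call above is a poly expression: the type argument of Collections.singletonList is inferred from the target type of the enclosing context, not from the argument alone. A small self-contained sketch (the aaa method is a hypothetical stand-in for AAA above):

import java.util.Collections;
import java.util.List;

class A {}
class B extends A {}

public class InferenceDemo {
    // Mirrors the AAA(...) call; the parameter's target type drives inference.
    static void aaa(List<A> list) {}

    public static void main(String[] args) {
        // T is inferred as A from the target type List<A>, so passing a B compiles:
        aaa(Collections.singletonList(new B()));

        // With a List<B> target, T is inferred as B instead:
        List<B> bs = Collections.singletonList(new B());
    }
}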

Community Discussions

Trending Discussions on opentsdb
  • Nifi JSON ETL: Custom Transformation Class not found with JoltTransformJSON Processor
  • Tick_charts deployment.yaml cannot apply with kubectl
  • HBase schema design to store video frames: rows vs columns, rows as columns
  • Kubernetes ConfigMap YAML into Terraform Kubernetes
  • How To Convert CSV File to OpenTSDB Format
  • In which case should I use System.in.close()?
  • Which time series database to use for persistent and accurate historical data
  • Writing OpenTSDB to Bigtable with HTTP POST not working (using Kubernetes)
  • What does it mean when the tag value returned in an opentsdb query response is "node"?
  • OpenTSDB no JDK found

QUESTION

Nifi JSON ETL: Custom Transformation Class not found with JoltTransformJSON Processor

Asked 2020-Jan-16 at 18:48

I'd like to use my custom JSON transformation that implements the com.bazaarvoice.jolt.Transform interface.

I use "Custom Transformation Class Name" and "Custom Module Directory" like this:

However, I cannot get the JoltTransformJSON processor to use it; I get a ClassNotFoundException:

2019-04-01 14:30:54,196 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.JoltTransformJSON JoltTransformJSON[id=b407714f-0169-1000-d9b2-1709069238d7] Unable to transform StandardFlowFileRecord[uuid=72dc471b-c587-4da9-b54c-eb46247b0cf4,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1554129053747-21203, container=default, section=723], offset=607170, length=5363],offset=0,name=72dc471b-c587-4da9-b54c-eb46247b0cf4,size=5363] due to java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB: java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
        at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3373)
        at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2039)
        at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
        at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2037)
        at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2020)
        at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:112)
        at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:67)
        at org.apache.nifi.processors.standard.JoltTransformJSON.getTransform(JoltTransformJSON.java:316)
        at org.apache.nifi.processors.standard.JoltTransformJSON.onTrigger(JoltTransformJSON.java:277)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
        at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:205)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.nifi.processors.standard.util.jolt.TransformFactory.getCustomTransform(TransformFactory.java:65)
        at org.apache.nifi.processors.standard.JoltTransformJSON.createTransform(JoltTransformJSON.java:346)
        at org.apache.nifi.processors.standard.JoltTransformJSON.lambda$setup$0(JoltTransformJSON.java:324)
        at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3366)
        ... 19 common frames omitted

I compiled the class together with all its dependencies with the maven-assembly-plugin and placed it in the directory "/data/bin/nifi-1.9.1/jolt_modules". The directory and the jar are readable.

I have also tried adding the class name to the operation in the spec (as suggested here), but it seems that the "Custom Module Directory" has no effect for some reason...

EDIT: I complete the question with the code of ElasticsearchToOpenTSDB, in case somebody finds it useful. It just converts Sentilo messages stored in Elasticsearch to OpenTSDB data points, flattening some nested JSON structures on the way.

package org.sentilo.nifi.elasticsearch;

import com.bazaarvoice.jolt.SpecDriven;
import com.bazaarvoice.jolt.Transform;
import com.bazaarvoice.jolt.exception.TransformException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.beanutils.BeanUtils;
import org.sentilo.agent.historian.domain.OpenTSDBDataPoint;
import org.sentilo.agent.historian.utils.OpenTSDBValueConverter;
import org.sentilo.common.domain.EventMessage;
import org.sentilo.nifi.elasticsearch.model.Hits;
import org.springframework.util.StringUtils;

import javax.inject.Inject;
import java.lang.reflect.InvocationTargetException;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import static org.sentilo.agent.historian.utils.OpenTSDBValueConverter.replaceIllegalCharacters;


public class ElasticsearchToOpenTSDB implements SpecDriven, Transform {

    private final Object spec;

    private final ObjectMapper mapper = new ObjectMapper();

    public ElasticsearchToOpenTSDB() {
        this.spec = "{}";
    }

    @Inject
    public ElasticsearchToOpenTSDB( Object spec ) {
        this.spec = spec;
    }

    public Object transform( final Object input ) {

        try{
            Hits hits = mapper.readValue(input.toString(), Hits.class);
            List<EventMessage> newEventList = new ArrayList<EventMessage>();
            List<OpenTSDBDataPoint> dataPoints = new ArrayList<OpenTSDBDataPoint>();

            for(EventMessage event : hits.hits) {

                if (OpenTSDBValueConverter.isComplexValue(event.getMessage())) {
                    addComplexValueToQueue(event,newEventList);
                } else {
                    addSimpleValueToQueue(event, newEventList);
                }
            }

            for(EventMessage event2 : newEventList) {
                OpenTSDBDataPoint dp = unmarshal(event2);
                dataPoints.add(dp);
            }

            return dataPoints;

        }catch(Exception e) {
            throw new TransformException(e.getMessage());
        }


    }


    private void addComplexValueToQueue(final EventMessage event, List<EventMessage> eventList) throws IllegalAccessException, InvocationTargetException {
        // Flatten JSON message into N measures
        final String metricName = OpenTSDBValueConverter.createMetricName(event);
        final Map<String, Object> unfoldValues = OpenTSDBValueConverter.extractMeasuresFromComplexType(metricName, event.getMessage());
        for (final Map.Entry<String, Object> e : unfoldValues.entrySet()) {
            final EventMessage newEvent = new EventMessage();
            BeanUtils.copyProperties(newEvent, event);
            newEvent.setTopic(e.getKey());
            newEvent.setMessage(e.getValue().toString());
            eventList.add(newEvent);
        }
    }

    private void addSimpleValueToQueue(final EventMessage event, List<EventMessage> eventList) {
        // The value should be long, float or boolean
        try {
            final Object numericValue = OpenTSDBValueConverter.getSimpleValue(event.getMessage());
            final String metricName = OpenTSDBValueConverter.createMetricName(event);
            event.setMessage(numericValue.toString());
            event.setTopic(metricName);
            eventList.add(event);

        } catch (final ParseException e) {
            // Probably String or some non-numeric value that we cannot store in OpenTSDB. Pass
            return;
        }
    }

    public static OpenTSDBDataPoint unmarshal(final EventMessage event) throws ParseException {
        final OpenTSDBDataPoint dataPoint = new OpenTSDBDataPoint();

        dataPoint.setMetric(event.getTopic());
        dataPoint.setValue(OpenTSDBValueConverter.getSimpleValue(event.getMessage()));
        if (event.getPublishedAt() != null) {
            dataPoint.setTimestamp(event.getPublishedAt());
        } else {
            dataPoint.setTimestamp(event.getTime());
        }

        dataPoint.setTags(createTags(event));

        return dataPoint;

    }

    private static Map<String, String> createTags(final EventMessage event) {
        final Map<String, String> tags = new LinkedHashMap<String, String>();
        putTag(tags, OpenTSDBDataPoint.Tags.type.name(), replaceIllegalCharacters(event.getType()));
        putTag(tags, OpenTSDBDataPoint.Tags.sensor.name(), replaceIllegalCharacters(event.getSensor()));
        putTag(tags, OpenTSDBDataPoint.Tags.provider.name(), replaceIllegalCharacters(event.getProvider()));
        putTag(tags, OpenTSDBDataPoint.Tags.component.name(), replaceIllegalCharacters(event.getComponent()));
        putTag(tags, OpenTSDBDataPoint.Tags.alertType.name(), replaceIllegalCharacters(event.getAlertType()));
        putTag(tags, OpenTSDBDataPoint.Tags.sensorType.name(), replaceIllegalCharacters(event.getSensorType()));
        putTag(tags, OpenTSDBDataPoint.Tags.publisher.name(), replaceIllegalCharacters(event.getPublisher()));
        putTag(tags, OpenTSDBDataPoint.Tags.tenant.name(), replaceIllegalCharacters(event.getTenant()));
        putTag(tags, OpenTSDBDataPoint.Tags.publisherTenant.name(), replaceIllegalCharacters(event.getPublisherTenant()));

        return tags;
    }

    private static void putTag(final Map<String, String> tags, final String tagName, final String tagValue) {
        if (StringUtils.hasText(tagValue)) {
            tags.put(tagName, tagValue);
        }
    }
}

Update

As indicated in the comments, the issue is not resolved yet and has been filed as a bug report. The latest status can be seen here: https://issues.apache.org/jira/browse/NIFI-6213

ANSWER

Answered 2020-Jan-16 at 18:48

The problem is not resolved yet and has been filed as a bug report. The latest status can be seen here:

https://issues.apache.org/jira/browse/NIFI-6213

Source https://stackoverflow.com/questions/55458351

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install opentsdb

You can download it from GitHub or Maven.
You can use opentsdb like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the opentsdb component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
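
As a sketch of embedded use, a TSD client can be created from a configuration file and used to write a point. The class and method names below (net.opentsdb.utils.Config, net.opentsdb.core.TSDB, addPoint, shutdown) reflect the OpenTSDB 2.x API as far as we can tell; the config path, metric, and tag values are placeholders, so verify against the Javadoc for your version.

import java.util.HashMap;
import java.util.Map;

import net.opentsdb.core.TSDB;
import net.opentsdb.utils.Config;

public class EmbeddedTsdbExample {
    public static void main(String[] args) throws Exception {
        // Load opentsdb.conf (path is a placeholder).
        Config config = new Config("/etc/opentsdb/opentsdb.conf");
        TSDB tsdb = new TSDB(config);

        Map<String, String> tags = new HashMap<>();
        tags.put("host", "web01");

        // addPoint is asynchronous and returns a Deferred; join() blocks until HBase acks.
        tsdb.addPoint("sys.cpu.user", System.currentTimeMillis() / 1000, 42L, tags)
            .join();

        // Flush pending writes and release resources.
        tsdb.shutdown().join();
    }
}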

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

  • © 2022 Open Weaver Inc.