A scalable, distributed Time Series Database.
   ___                 _____ ____  ____  ____
  / _ \ _ __   ___ _ _|_   _/ ___||  _ \| __ )
 | | | | '_ \ / _ \ '_ \| | \___ \| | | |  _ \
 | |_| | |_) |  __/ | | | | ___) | |_| | |_) |
  \___/| .__/ \___|_| |_|_| |____/|____/|____/
       |_|    The modern time series database.
Tick_charts deployment.yaml cannot apply with kubectl
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller   # or: helm init --service-account tiller --upgrade (if you have already run helm init)
helm repo update
helm install --name my-release stable/influxdb
LAST DEPLOYED: Fri Nov 22 14:10:41 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
my-release-influxdb 1 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
my-release-influxdb-0 0/1 Pending 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-release-influxdb ClusterIP *.*.*.* <none> 8086/TCP,8088/TCP 1s
==> v1/StatefulSet
NAME READY AGE
my-release-influxdb 0/1 1s
NOTES:
InfluxDB can be accessed via port 8086 on the following DNS name from within your cluster:
- http://my-release-influxdb.default:8086
You can easily connect to the remote instance with your local influx cli. To forward the API port to localhost:8086 run the following:
- kubectl port-forward --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{ .items[0].metadata.name }') 8086:8086
You can also connect to the influx cli from inside the container. To open a shell session in the InfluxDB pod run the following:
- kubectl exec -i -t --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{.items[0].metadata.name}') /bin/sh
To tail the logs for the InfluxDB pod run the following:
- kubectl logs -f --namespace default $(kubectl get pods --namespace default -l app=my-release-influxdb -o jsonpath='{ .items[0].metadata.name }')
kubectl get all -l app=my-release-influxdb
NAME READY STATUS RESTARTS AGE
pod/my-release-influxdb-0 1/1 Running 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-release-influxdb ClusterIP *.*.*.* <none> 8086/TCP,8088/TCP 11m
NAME READY AGE
statefulset.apps/my-release-influxdb 1/1 11m
HBase schema design to store video frames: rows vs columns, rows as columns
key = hash(video_id, timestamp) + video_id + timestamp
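As a rough illustration of that key layout (not from the original answer: the MD5 digest, the 4-byte salt width, and the big-endian encodings are assumptions), a minimal Java sketch could look like this; the hash prefix spreads hot, time-ordered writes across regions, while keeping the raw id and timestamp in the key still allows a point GET when both are known:
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class VideoRowKey {
    // key = hash(video_id, timestamp) + video_id + timestamp
    static byte[] rowKey(String videoId, long timestamp) throws Exception {
        byte[] id = videoId.getBytes(StandardCharsets.UTF_8);
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        md5.update(id);
        md5.update(ByteBuffer.allocate(8).putLong(timestamp).array());
        byte[] hash = md5.digest();

        ByteBuffer key = ByteBuffer.allocate(4 + id.length + 8);
        key.put(hash, 0, 4);      // short hash prefix distributes writes across regions
        key.put(id);              // raw video_id keeps the key self-describing
        key.putLong(timestamp);   // big-endian long, so byte order follows time order
        return key.array();
    }
}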
Kubernetes ConfigMap YAML into Terraform Kubernetes
resource "aws_instance" "example" {
ami = "abc123"
network_interface {
# ...
}
}
-----------------------
resource "kubernetes_config_map" "opentsdb" {
metadata {
name = "opentsdb-config"
namespace = "dev"
}
data = {
"opentsdb.conf" = <<EOF
google.bigtable.project.id = ${var.project_id}
google.bigtable.instance.id = ${var.bigtable_instance_id}
google.bigtable.zone.id = ${var.zone}
hbase.client.connection.impl = com.google.cloud.bigtable.hbase1_2.BigtableConnection
google.bigtable.auth.service.account.enable = true
tsd.network.port = 4242
tsd.core.auto_create_metrics = true
tsd.core.meta.enable_realtime_ts = true
tsd.core.meta.enable_realtime_uid = true
tsd.core.meta.enable_tsuid_tracking = true
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 131072
tsd.storage.fix_duplicates = true
tsd.storage.enable_compaction = false
tsd.storage.max_tags = 12
tsd.http.staticroot = /opentsdb/build/staticroot
tsd.http.cachedir = /tmp/opentsdb
EOF
}
}
What does it mean when the tag value returned in an opentsdb query response is "node"?
This usually points to a salt-width mismatch: if the TSD answering the query is configured with a different tsd.storage.salt.width than the TSDs that wrote the data, row keys are decoded at the wrong offset and tag UIDs resolve to unrelated names such as "node". Make sure the setting is identical on every TSD, for example:
tsd.storage.salt.width = 1
OpenTSDB no JDK found
JDK_DIRS=" Path_to_your_JDK_here \
/usr/lib/jvm/java-8-oracle /usr/lib/jvm/java-8-openjdk \
/usr/lib/jvm/java-8-openjdk-amd64/ /usr/lib/jvm/java-8-openjdk-i386/ \
\
/usr/lib/jvm/java-7-oracle /usr/lib/jvm/java-7-openjdk \
/usr/lib/jvm/java-7-openjdk-amd64/ /usr/lib/jvm/java-7-openjdk-i386/ \
\
/usr/lib/jvm/default-java"
OpenTSDB integration with kerberized HBase
tsd.network.port = 4242
tsd.storage.hbase.zk_basedir = /hbase-secure
tsd.storage.hbase.zk_quorum = ZKhostname1,ZKhostname2,ZKhostname3
hbase.security.auth.enable=true
hbase.security.authentication=kerberos
hbase.kerberos.regionserver.principal=hbase/hostname@FORSYS.LAN
hbase.sasl.clientconfig=Client
# Add the src dir so we can find logback.xml
CLASSPATH="$CLASSPATH:$abs_srcdir/src:/usr/hdp/2.4.2.0-258/zookeeper/lib/:/usr/hdp/2.4.2.0-258/zookeeper/:/etc/hadoop/2.4.2.0-258/0/:/usr/hdp/2.4.2.0-258/hbase/:/etc/hbase/2.4.2.0-258/0/:/home/user/phoenix-4.4.0-HBase-1.1-client.jar"
JVMARGS=${JVMARGS-'-Djava.security.krb5.conf=/etc/krb5.conf -Dhbase.security.authentication=kerberos -Dhbase.kerberos.regionserver.principal=hbase/hostname@FORSYS.LAN -Dhbase.rpc.protection=authentication -Dhbase.sasl.clientconfig=Client -Djava.security.auth.login.config=/home/user/opentsdb-jaas.conf -enableassertions -enablesystemassertions'}
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true;
};
Python string template substitution with dict of regex groups?
import re

label_name = "${1}_${2}_${3}_${6}"
metric = "connstats_by.vip.nested._Common_Domain.89.44.250.117.conncount:40|g"
rx = re.compile(r'connstats_by\.vip\.nested\.([^.]*)\.([^.]*)\.([^.]*)\.([^.]*)\.([^.]*)\.([^:]*)(?::)([^|]*)(?:\|)([^\n]*)')
print(label_name.format(*rx.match(metric).groups()))
$89_$44_$250_$40
The leading $ characters survive because str.format only replaces the {n} placeholders, which index the match groups positionally: {1} is "89", {2} is "44", {3} is "250", and {6} is "40".
Regex stripping start and end of string
str = "15-min-sum:rate:proc.stat.cpu{host=foo,type=idle}"
print(str[str.rfind(":")+1 : str.find("{")])
-----------------------
https://regex101.com/r/4aIVLr/4
(?![\w-]*:)([\w\.]*)({.*})
creates two groups: group 1 is the metric name (proc.stat.cpu) and group 2 is the tag block ({host=foo,type=idle})
API redirect with ProxyPassMatch
ProxyPassMatch "/api/*" "http://127.0.0.1:4343"
Note that ProxyPassMatch takes a regular expression, so "/api/*" matches "/api" followed by any number of slashes; to forward everything under /api/ and preserve the rest of the path, the documented form is a pattern like "^/api/(.*)$" with the target "http://127.0.0.1:4343/$1".
How does JDK 8's type inference work with generics?
AAA(Collections.singletonList(new B())); // returns List<A> NOT List<B>
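A self-contained sketch of what that one-liner is getting at (the class names A, B and the method aaa are hypothetical stand-ins for the question's types): JDK 8 uses the target type of the argument position during inference, so singletonList's type parameter is inferred as A even though the element is a B; under JDK 7's inference the call would not compile:
import java.util.Collections;
import java.util.List;

public class InferenceDemo {
    static class A {}
    static class B extends A {}

    // hypothetical stand-in for the question's AAA(...)
    static void aaa(List<A> list) { }

    public static void main(String[] args) {
        // Target typing: T in Collections.singletonList is inferred from the
        // parameter type List<A>, so the result is a List<A>, not a List<B>.
        aaa(Collections.singletonList(new B()));
    }
}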
QUESTION
Nifi JSON ETL: Custom Transformation Class not found with JoltTransformJSON Processor
Asked 2020-Jan-16 at 18:48
I'd like to use my custom JSON transformation that implements the com.bazaarvoice.jolt.Transform interface.
I use "Custom Transformation Class Name" and "Custom Module Directory" like this:
However, I cannot get the JoltTransformJSON processor to use it; I get a ClassNotFoundException:
2019-04-01 14:30:54,196 ERROR [Timer-Driven Process Thread-4] o.a.n.p.standard.JoltTransformJSON JoltTransformJSON[id=b407714f-0169-1000-d9b2-1709069238d7] Unable to transform StandardFlowFileRecord[uuid=72dc471b-c587-4da9-b54c-eb46247b0cf4,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1554129053747-21203, container=default, section=723], offset=607170, length=5363],offset=0,name=72dc471b-c587-4da9-b54c-eb46247b0cf4,size=5363] due to java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB: java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
java.util.concurrent.CompletionException: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3373)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2039)
at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2037)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2020)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:112)
at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:67)
at org.apache.nifi.processors.standard.JoltTransformJSON.getTransform(JoltTransformJSON.java:316)
at org.apache.nifi.processors.standard.JoltTransformJSON.onTrigger(JoltTransformJSON.java:277)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:205)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.nifi.processors.standard.util.jolt.TransformFactory.getCustomTransform(TransformFactory.java:65)
at org.apache.nifi.processors.standard.JoltTransformJSON.createTransform(JoltTransformJSON.java:346)
at org.apache.nifi.processors.standard.JoltTransformJSON.lambda$setup$0(JoltTransformJSON.java:324)
at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3366)
... 19 common frames omitted
I compiled the class together with all its dependencies with the maven-assembly-plugin and placed it in the directory "/data/bin/nifi-1.9.1/jolt_modules". The directory and the jar are readable.
I have also tried adding the class name to the operation in the spec (as suggested elsewhere), but it seems that the "Custom Module Directory" has no effect for some reason...
EDIT: I'm completing the question with the code of ElasticsearchToOpenTSDB, in case somebody finds it useful. It just converts Sentilo messages stored in Elasticsearch to OpenTSDB datapoints, flattening some nested JSON structures along the way.
package org.sentilo.nifi.elasticsearch;
import com.bazaarvoice.jolt.SpecDriven;
import com.bazaarvoice.jolt.Transform;
import com.bazaarvoice.jolt.exception.TransformException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.beanutils.BeanUtils;
import org.sentilo.agent.historian.domain.OpenTSDBDataPoint;
import org.sentilo.agent.historian.utils.OpenTSDBValueConverter;
import org.sentilo.common.domain.EventMessage;
import org.sentilo.nifi.elasticsearch.model.Hits;
import org.springframework.util.StringUtils;
import javax.inject.Inject;
import java.lang.reflect.InvocationTargetException;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import static org.sentilo.agent.historian.utils.OpenTSDBValueConverter.replaceIllegalCharacters;
public class ElasticsearchToOpenTSDB implements SpecDriven, Transform {
private final Object spec;
private final ObjectMapper mapper = new ObjectMapper();
public ElasticsearchToOpenTSDB() {
this.spec = "{}";
}
@Inject
public ElasticsearchToOpenTSDB( Object spec ) {
this.spec = spec;
}
public Object transform( final Object input ) {
try{
Hits hits = mapper.readValue(input.toString(), Hits.class);
List<EventMessage> newEventList = new ArrayList<EventMessage>();
List<OpenTSDBDataPoint> dataPoints = new ArrayList<OpenTSDBDataPoint>();
for(EventMessage event : hits.hits) {
if (OpenTSDBValueConverter.isComplexValue(event.getMessage())) {
addComplexValueToQueue(event,newEventList);
} else {
addSimpleValueToQueue(event, newEventList);
}
}
for(EventMessage event2 : newEventList) {
OpenTSDBDataPoint dp = unmarshal(event2);
dataPoints.add(dp);
}
return dataPoints;
}catch(Exception e) {
throw new TransformException(e.getMessage());
}
}
private void addComplexValueToQueue(final EventMessage event, List<EventMessage> eventList) throws IllegalAccessException, InvocationTargetException {
// Flatten JSON message into N measures
final String metricName = OpenTSDBValueConverter.createMetricName(event);
final Map<String, Object> unfoldValues = OpenTSDBValueConverter.extractMeasuresFromComplexType(metricName, event.getMessage());
for (final Map.Entry<String, Object> e : unfoldValues.entrySet()) {
final EventMessage newEvent = new EventMessage();
BeanUtils.copyProperties(newEvent, event);
newEvent.setTopic(e.getKey());
newEvent.setMessage(e.getValue().toString());
eventList.add(newEvent);
}
}
private void addSimpleValueToQueue(final EventMessage event, List<EventMessage> eventList) {
// The value should be long, float or boolean
try {
final Object numericValue = OpenTSDBValueConverter.getSimpleValue(event.getMessage());
final String metricName = OpenTSDBValueConverter.createMetricName(event);
event.setMessage(numericValue.toString());
event.setTopic(metricName);
eventList.add(event);
} catch (final ParseException e) {
// Probably String or some non-numeric value that we cannot store in OpenTSDB. Pass
return;
}
}
public static OpenTSDBDataPoint unmarshal(final EventMessage event) throws ParseException {
final OpenTSDBDataPoint dataPoint = new OpenTSDBDataPoint();
dataPoint.setMetric(event.getTopic());
dataPoint.setValue(OpenTSDBValueConverter.getSimpleValue(event.getMessage()));
if (event.getPublishedAt() != null) {
dataPoint.setTimestamp(event.getPublishedAt());
} else {
dataPoint.setTimestamp(event.getTime());
}
dataPoint.setTags(createTags(event));
return dataPoint;
}
private static Map<String, String> createTags(final EventMessage event) {
final Map<String, String> tags = new LinkedHashMap<String, String>();
putTag(tags, OpenTSDBDataPoint.Tags.type.name(), replaceIllegalCharacters(event.getType()));
putTag(tags, OpenTSDBDataPoint.Tags.sensor.name(), replaceIllegalCharacters(event.getSensor()));
putTag(tags, OpenTSDBDataPoint.Tags.provider.name(), replaceIllegalCharacters(event.getProvider()));
putTag(tags, OpenTSDBDataPoint.Tags.component.name(), replaceIllegalCharacters(event.getComponent()));
putTag(tags, OpenTSDBDataPoint.Tags.alertType.name(), replaceIllegalCharacters(event.getAlertType()));
putTag(tags, OpenTSDBDataPoint.Tags.sensorType.name(), replaceIllegalCharacters(event.getSensorType()));
putTag(tags, OpenTSDBDataPoint.Tags.publisher.name(), replaceIllegalCharacters(event.getPublisher()));
putTag(tags, OpenTSDBDataPoint.Tags.tenant.name(), replaceIllegalCharacters(event.getTenant()));
putTag(tags, OpenTSDBDataPoint.Tags.publisherTenant.name(), replaceIllegalCharacters(event.getPublisherTenant()));
return tags;
}
private static void putTag(final Map<String, String> tags, final String tagName, final String tagValue) {
if (StringUtils.hasText(tagValue)) {
tags.put(tagName, tagValue);
}
}
}
As indicated in the comments, the issue is not resolved yet and has been filed as a bug report. The latest status can be seen here: https://issues.apache.org/jira/browse/NIFI-6213
ANSWER
Answered 2020-Jan-16 at 18:48
The problem is not resolved yet and has been filed as a bug report. The latest status can be seen here: https://issues.apache.org/jira/browse/NIFI-6213
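In the meantime, a quick way to rule out a packaging problem is to try loading the class by name from the assembled jar, roughly mimicking what NiFi's TransformFactory does (a sketch; the jar file name below is a placeholder for the actual assembly in jolt_modules):
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoadCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder path: point this at the jar actually placed in the module directory.
        URL jar = new URL("file:///data/bin/nifi-1.9.1/jolt_modules/your-assembly.jar");
        try (URLClassLoader loader = new URLClassLoader(new URL[] { jar })) {
            Class<?> clazz = loader.loadClass("org.sentilo.nifi.elasticsearch.ElasticsearchToOpenTSDB");
            System.out.println("Loaded: " + clazz.getName());
        }
    }
}
If this also throws ClassNotFoundException, the class or package name does not match what is inside the jar; if it loads, the failure is on the NiFi side, consistent with the bug report above.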
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.