fluentd | Docker image with docker-gen and fluentd | Continuous Deployment library
kandi X-RAY | fluentd Summary
fluentd is a Docker image with docker-gen and fluentd that tails Docker container logs and sends them to an Elasticsearch host. By default it adds some additional tags using the record-modifier plugin and sends only stderr logs. The Elasticsearch connection is controlled by the ES_HOST and ES_PORT variables, which are overwritten if you link an Elasticsearch container.
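A minimal usage sketch based on the description above; the image name, socket path, and mounts are assumptions, since the summary does not give them:

```sh
# Sketch: run the log shipper with Elasticsearch settings taken from
# the environment. ES_HOST/ES_PORT come from the summary above; the
# image name and the mounted paths are illustrative assumptions.
docker run -d \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -e ES_HOST=es.example.com \
  -e ES_PORT=9200 \
  <your-fluentd-image>
```

Linking a container named elasticsearch instead (e.g. `--link my-es:elasticsearch`) would override ES_HOST and ES_PORT, per the description above.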
Community Discussions
Trending Discussions on fluentd
QUESTION
I have a docker-compose setup with containers logging to fluentd. To support different demo environments, I have events being output to multiple destinations (Elasticsearch, Splunk, syslog, etc.).
I would like to maintain a single configuration file, but disable output plugins that are not needed. If I have 4 potential output destinations, I would have to maintain 10 different configuration files to support all the different possible combinations.
I know that plugins can use environment variables for configuration parameters, which would be ideal. However, I don't see a common 'enabled' or 'disabled' parameter in the underlying plugin architecture.
Is there any way to disable a plugin externally? Or will I have to dynamically build my configuration file from an external script?
...ANSWER
Answered 2021-Jun-12 at 05:20
I ended up doing this with environment variables by specifying the plugin type externally:
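The answer's snippet was not captured in this page; a sketch of the technique, with illustrative variable names (ES_OUTPUT_TYPE): each output's `@type` is read from the environment, and pointing it at the built-in `null` output disables that destination without editing the file.

```
# fluent.conf sketch: @type comes from an environment variable, so
# setting e.g. ES_OUTPUT_TYPE=null disables this destination.
<match **>
  @type copy
  <store>
    @type "#{ENV['ES_OUTPUT_TYPE'] || 'elasticsearch'}"
    host "#{ENV['ES_HOST'] || 'localhost'}"
    port "#{ENV['ES_PORT'] || '9200'}"
  </store>
</match>
```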
QUESTION
In our fluentd Kubernetes DaemonSet we wanted to store the pod name in a separate field named application_id, but the type of this field must be keyword. For that we need to provide an index mapping to the Elasticsearch output plugin so it creates the index as we need, but we did not find any key in the Elasticsearch output plugin for providing an index mapping. Can anyone help us resolve this issue?
...ANSWER
Answered 2021-Jun-04 at 17:17
You can provide an index template in the Elasticsearch output plugin using the template_name and template_file options:
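The configuration from the answer was not captured; a sketch of those options in fluent-plugin-elasticsearch, with illustrative names and paths:

```
# Elasticsearch output sketch: registers an index template at startup.
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  template_name application-id-template            # illustrative name
  template_file /fluentd/etc/application-id-template.json
</match>
```

where the template file maps application_id to keyword, for example (the index pattern is illustrative):

```
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "application_id": { "type": "keyword" }
    }
  }
}
```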
QUESTION
I use a custom fluentd plugin, and it does not work on Ubuntu 20, although there is no problem on other Ubuntu versions.
Here is my error:
...ANSWER
Answered 2021-May-28 at 04:22
I fixed this problem; the plugin needs to require the in_tail class:
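The fix itself was not captured here; a sketch of what requiring in_tail in a custom plugin typically looks like (the plugin class and registered name below are illustrative):

```ruby
# Sketch: explicitly require the in_tail plugin before subclassing it,
# so Fluent::Plugin::TailInput is defined when this file is loaded.
require 'fluent/plugin/in_tail'

module Fluent
  module Plugin
    class MyCustomTailInput < TailInput   # illustrative class name
      Fluent::Plugin.register_input('my_custom_tail', self)
    end
  end
end
```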
QUESTION
I installed Docker on a CentOS 7 machine and DNS is not working within containers. If I run `nslookup google.com` on my host, it resolves correctly. However, if I run `docker container run busybox nslookup google.com`, I get:
...ANSWER
Answered 2021-May-23 at 21:09
As your error (`Can't find google.com`) shows, the container does not have network access, and therefore it cannot resolve google.com. I also can't see your Dockerfile or docker-compose.yml (if you use one) in the question above.
First, it is better to create a dedicated network with `docker network create` (run `docker network create --help` to see which options you want for your container networking, per the Docker docs).
Second, EXPOSE the port in your Dockerfile (see the Docker docs and articles on the concept of EXPOSE).
Last, try checking the container's networking another way: simply use `docker run` and open a shell in your base image (CentOS) to inspect the container's network, as sketched below.
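A sketch of that kind of check, with an assumed network name; the daemon DNS setting at the end is a common CentOS 7 fix and an assumption, not part of the answer:

```sh
# Sketch: create a user-defined network and retry DNS resolution
# from a container attached to it ("demo-net" is illustrative).
docker network create demo-net
docker run --rm --network demo-net busybox nslookup google.com

# If resolution still fails, setting explicit DNS servers for the
# daemon often helps on CentOS 7 (assumption):
#   /etc/docker/daemon.json:  { "dns": ["8.8.8.8", "8.8.4.4"] }
#   then: sudo systemctl restart docker
```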
QUESTION
I am publishing data to Elasticsearch using fluentd. It has a field `Data.CPU` which is currently set to `string`; the index name is health_gateway. I have made some changes in the Python code that generates the data, so this field `Data.CPU` has now become an `integer`, but Elasticsearch still shows it as a string. How can I update its data type?
I tried running the commands below in Kibana dev tools:
...ANSWER
Answered 2021-May-21 at 06:13
You can update the mapping by indexing the same field in multiple ways, i.e. by using multi-fields. Using the mapping below, `Data.CPU.raw` will be of `integer` type:
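The mapping itself was stripped from this page; a sketch of such a multi-field mapping in Kibana dev tools syntax (the existing field is assumed to be mapped as text, and documents indexed before the change need a reindex or update before the sub-field is populated):

```
PUT /health_gateway/_mapping
{
  "properties": {
    "Data": {
      "properties": {
        "CPU": {
          "type": "text",
          "fields": {
            "raw": { "type": "integer" }
          }
        }
      }
    }
  }
}
```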
QUESTION
I have the following JSON message on the Fluentd input:
{"foo":{"bar":{"abc":"[\n {\n \"ip\":\"192.168.1.1\",\n \"hostname\":\"pc\",\n \"mac\":\"01:02:03:04:05:06\"\n} \n]"}}}
and I want to get the output message:
...ANSWER
Answered 2021-May-18 at 05:06
You can use record_transformer by enabling Ruby, like this:
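The answer's filter was not captured; a sketch of the technique, which parses the embedded JSON string in `foo.bar.abc` back into a real array (the desired output was also stripped from the question, so this sketch simply lifts the parsed array into a top-level field; the match pattern and field names are illustrative):

```
# record_transformer sketch: enable_ruby lets ${...} run Ruby against
# the record, so the escaped JSON string can be parsed in place.
<filter **>
  @type record_transformer
  enable_ruby true
  <record>
    abc ${JSON.parse(record.dig("foo", "bar", "abc"))}
  </record>
  remove_keys foo
</filter>
```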
QUESTION
We're using https://github.com/fluent/fluentd-kubernetes-daemonset to deploy Fluentd in our K8s cluster. We have 5 nodes in the cluster, which means there are 5 Fluentd pods.
Each Fluentd pod in the DaemonSet exposes Prometheus metrics on the localhost:24231/metrics endpoint via the fluentd prometheus plugin. I'm having trouble finding the relevant bits of documentation on how to configure Prometheus to collect those metrics from every pod's localhost:24231/metrics endpoint. To summarize:
- there are N pods in a DaemonSet
- each pod has Prometheus metrics exposed on localhost:24231
I need to configure Prometheus so it's able to scrape those metrics somehow. Any tips on how to solve this, or examples of such configurations, would be much appreciated!
...ANSWER
Answered 2021-Mar-04 at 20:35
The solution in our case was using the PodMonitor CRD from the Prometheus Operator, with `podMetricsEndpoints` pointing at the right port:
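The manifest from the answer was not captured; a sketch of such a PodMonitor (the namespace, labels, and port name are illustrative and must match the fluentd DaemonSet's pod spec):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: fluentd
  namespace: logging            # illustrative
spec:
  selector:
    matchLabels:
      app: fluentd              # must match the DaemonSet's pod labels
  podMetricsEndpoints:
    - port: metrics             # container port 24231, named "metrics" in the pod spec
      path: /metrics
```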
QUESTION
Hope you're all well during this pandemic.
I've got a Kubernetes cluster running. The communication between pods is done through Kafka. It is currently logging to stdout only: no files, no Kafka topic. This is obviously pretty bad.
I want to set up a Grafana instance that lets me centralize all logs there. The storage would be Loki + S3.
To do that, I found that many people use tools like Fluentd, Fluent Bit and Promtail, which centralize the logs and send them to Loki. However, I already have Kafka running, and I can't see why I'd use a tool like Fluentd if I can send all logs to Kafka through a "logging" topic.
My question is: how could I send all messages inside the logging topic to Loki? Fluentd cannot get input from Kafka.
Would I have to set up some script that runs periodically, sorts the data and sends it to Loki directly?
...ANSWER
Answered 2021-Apr-14 at 09:27
I recommend you use Promtail, because it is also from Grafana, rather than the Kafka solution.
If you send the logs from your apps to Kafka, then you need to:
- modify your apps to send to Kafka instead of stdout
- configure a log forwarder to send messages from Kafka to Loki (it can be fluentd)
And if you use the normally proposed approach, you need to:
- configure a log forwarder to send messages from the Docker stdout to Loki (you can use Promtail's default configuration)
But if you want to go with your solution, with Kafka in the middle, there are fluentd plugins to configure Kafka as input and output, as sketched below: https://github.com/fluent/fluent-plugin-kafka
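A sketch of that Kafka-in, Loki-out pipeline, assuming fluent-plugin-kafka and Grafana's fluent-plugin-grafana-loki are installed; broker addresses, the Loki URL, and labels are illustrative:

```
# Consume the "logging" topic from Kafka...
<source>
  @type kafka_group
  brokers kafka:9092
  consumer_group fluentd-loki
  topics logging
  format json
</source>

# ...and push the records to Loki (records are tagged with the topic name).
<match logging>
  @type loki
  url "http://loki:3100"
  extra_labels {"job": "fluentd"}
</match>
```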
QUESTION
I have to run two CronJobs in Kubernetes (AWS EKS), and I have the configuration below. When I apply the template, only one CronJob is created, and it is always the second one, so it looks like the first one is being overwritten by the second. I am unable to figure out what I am doing wrong.
...ANSWER
Answered 2021-Apr-09 at 23:20
I could solve this by separating the documents with `---` between the CronJob entries:
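The fixed manifest was not captured; a sketch of the shape, with illustrative names, images, and schedules (batch/v1 assumes Kubernetes 1.21+; older clusters use batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-one
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: job-one
              image: busybox
              args: ["echo", "job one"]
          restartPolicy: OnFailure
--- # the document separator that fixes the overwrite
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-two
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: job-two
              image: busybox
              args: ["echo", "job two"]
          restartPolicy: OnFailure
```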
QUESTION
I am trying to bring up an on-prem Kubernetes cluster using Kubespray, with 3 master and 5 worker nodes. The node IPs are from 2 different subnets.
Ansible inventory:
...ANSWER
Answered 2021-Apr-09 at 09:49
Thanks to @laimison for giving me those pointers. Posting all my observations, so they can be useful to somebody.
On M1,
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported