fancy | fancy lets you fan out rsyslog to Loki | Dashboard library
kandi X-RAY | fancy Summary
fancy lets you fan out rsyslog to Loki and is meant to be executed by rsyslog under omprog.
Top functions reviewed by kandi - BETA
- Unmarshal implements the unmarshaler interface.
- skipLogproto skips a log message.
- Main entry point.
- parseLine parses and returns a LogLine.
- getSeverity returns the severity of the given byte.
- Set severity.
- NewLoki returns a new Loki instance.
- encodeBatch encodes a batch of log messages.
- encodeVarintLogproto encodes a varint.
- Batch scan value.
fancy Key Features
fancy Examples and Code Snippets
Community Discussions
Trending Discussions on fancy
QUESTION
I'm trying to make sure gcc vectorizes my loops. It turns out that by using -march=znver1 (or -march=native), gcc skips some loops even though they can be vectorized. Why does this happen? In this code, the second loop, which multiplies each element by a scalar, is not vectorized:
...ANSWER
Answered 2022-Apr-10 at 02:47

The default -mtune=generic has -mprefer-vector-width=256, and -mavx2 doesn't change that. znver1 implies -mprefer-vector-width=128, because that's all the native width of the HW. An instruction using 32-byte YMM vectors decodes to at least 2 uops, more if it's a lane-crossing shuffle. For simple vertical SIMD like this, 32-byte vectors would be ok; the pipeline handles 2-uop instructions efficiently. (And I think Zen 1 is 6 uops wide but only 5 instructions wide, so max front-end throughput isn't available using only 1-uop instructions.) But when vectorization would require shuffling, e.g. with arrays of different element widths, GCC code-gen can get messier with 256-bit or wider.
And vmovdqa ymm0, ymm1 mov-elimination only works on the low 128-bit half on Zen1. Also, normally using 256-bit vectors would imply one should use vzeroupper afterwards, to avoid performance problems on other CPUs (but not Zen1).
I don't know how Zen1 handles misaligned 32-byte loads/stores where each 16-byte half is aligned but in separate cache lines. If that performs well, GCC might want to consider increasing the znver1 -mprefer-vector-width to 256. But wider vectors mean more cleanup code if the size isn't known to be a multiple of the vector width.
Ideally GCC would be able to detect easy cases like this and use 256-bit vectors there. (Pure vertical, no mixing of element widths, constant size that's a multiple of 32 bytes.) At least on CPUs where that's fine: znver1, but not bdver2 for example, where 256-bit stores are always slow due to a CPU design bug.
You can see the result of this choice in the way it vectorizes your first loop, the memset-like loop, with a vmovdqu [rdx], xmm0: https://godbolt.org/z/E5Tq7Gfzc
So given that GCC has decided to only use 128-bit vectors, which can only hold two uint64_t elements, it (rightly or wrongly) decides it wouldn't be worth using vpsllq / vpaddd to implement qword *5 as (v<<2) + v, vs. doing it with integer in one LEA instruction.

Almost certainly wrongly in this case, since it still requires a separate load and store for every element or pair of elements. (And loop overhead, since GCC's default is not to unroll except with PGO, -fprofile-use. SIMD is like loop unrolling, especially on a CPU that handles 256-bit vectors as 2 separate uops.)
I'm not sure exactly what GCC means by "not vectorized: unsupported data-type". x86 doesn't have a SIMD uint64_t multiply instruction until AVX-512, so perhaps GCC assigns it a cost based on the general case of having to emulate it with multiple 32x32 => 64-bit pmuludq instructions and a bunch of shuffles. And it's only after it gets over that hump that it realizes that it's actually quite cheap for a constant like 5 with only 2 set bits?
That would explain GCC's decision-making process here, but I'm not sure it's exactly the right explanation. Still, these kinds of factors are what happen in a complex piece of machinery like a compiler. A skilled human can easily make smarter choices, but compilers just do sequences of optimization passes that don't always consider the big picture and all the details at the same time.
-mprefer-vector-width=256 doesn't help: not vectorizing uint64_t *= 5 seems to be a GCC9 regression.
(The benchmarks in the question confirm that an actual Zen1 CPU gets a nearly 2x speedup, as expected from doing 2x uint64 in 6 uops vs. 1x in 5 uops with scalar. Or 4x uint64_t in 10 uops with 256-bit vectors, including two 128-bit stores which will be the throughput bottleneck along with the front-end.)
Even with -march=znver1 -O3 -mprefer-vector-width=256, we don't get the *= 5 loop vectorized with GCC9, 10, or 11, or current trunk. As you say, we do with -march=znver2: https://godbolt.org/z/dMTh7Wxcq
We do get vectorization with those options for uint32_t (even leaving the vector width at 128-bit). Scalar would cost 4 operations per vector uop (not instruction), regardless of 128 or 256-bit vectorization on Zen1, so this doesn't tell us whether *= is what makes the cost-model decide not to vectorize, or just the 2 vs. 4 elements per 128-bit internal uop.
With uint64_t, changing to arr[i] += arr[i]<<2; still doesn't vectorize, but arr[i] <<= 1; does (https://godbolt.org/z/6PMn93Y5G). Even arr[i] <<= 2; and arr[i] += 123 in the same loop vectorize, to the same instructions that GCC thinks aren't worth it for vectorizing *= 5, just different operands, constant instead of the original vector again. (Scalar could still use one LEA.) So clearly the cost-model isn't looking as far as final x86 asm machine instructions, but I don't know why arr[i] += arr[i] would be considered more expensive than arr[i] <<= 1; which is exactly the same thing.
GCC8 does vectorize your loop, even with 128-bit vector width: https://godbolt.org/z/5o6qjc7f6
QUESTION
This is a follow-up to, or rather a simplification of, this question: Error: File header.tex not found in resource path in a rmarkdown generated pdf report from a shiny app
With this Rmarkdown code I can achieve what I want:
logo.png
report.Rmd
...ANSWER
Answered 2022-Apr-09 at 16:36

Basically you already figured out what the issue is. Hence one approach to fix it would be to copy both the report template and the logo to the same temporary directory.
QUESTION
I am working on a p2p application and to make testing simple, I am currently using UDP broadcast for the peer discovery in my local network. Each peer binds one UDP socket to port 29292 of the IP address of each local network interface (discovered via GetAdaptersInfo) and each socket periodically sends a packet to the broadcast address of its network interface/local address. The sockets are set to allow port reuse (via setsockopt SO_REUSEADDR), which enables me to run multiple peers on the same local machine without any conflicts. In this case there is only a single peer on the entire network though.
This all works perfectly fine (tested with 2 peers on 1 machine and 2 peers on 2 machines) UNTIL a network interface is disconnected. When deactivating the network adapter of either my wifi or a USB-to-LAN adapter in the Windows dialog, or just unplugging the USB cable of the adapter, the next call to sendto will fail with return code 10049. It doesn't matter if the other adapter is still connected, or was at the beginning; it will fail. The only thing that doesn't make it fail is deactivating wifi through the fancy Win10 dialog in the taskbar, but that isn't really a surprise because that doesn't deactivate or remove the adapter itself.
I initially thought that this makes sense, because when the NIC is gone, how should the system route the packet? But: the fact that the packet can't reach its target has absolutely nothing to do with the address itself being invalid (which is what the error means), so I suspect I am missing something here. I was looking for any information I could use to detect this case and distinguish it from simply trying to sendto INADDR_ANY, but I couldn't find anything. I started to log every bit of information which I suspected could have changed, but it's all the same on a successful sendto and the one that fails (retrieved via getsockopt):
ANSWER
Answered 2022-Mar-01 at 16:01

This is an issue people have been facing for a while, and the common suggestion is to read the documentation Microsoft provides on it. (I don't know whether these are exactly the same issue, but the error code thrown back is the same, which is why the link is attached.)
QUESTION
I followed the amazing tutorials from Stack Overflow for move semantics and operator overloading (e.g. What are the basic rules and idioms for operator overloading?), and the following situation is baffling me. Nothing fancy in the code, just printing when special member functions are called.
The main code:
...ANSWER
Answered 2022-Feb-10 at 18:27

Returning a local variable of type T from a function with the same¹ return type T is a special case. It at least automatically moves the variable, or, if the compiler is smart enough to perform so-called NRVO, eliminates the copy/move entirely and constructs the variable directly in the right location.
Function parameters (unlike regular local variables) are not eligible for NRVO, so you always get an implicit move in (2).
This doesn't happen in (1). The compiler isn't going to analyze += to understand what it returns; this rule only works when the operand of return is a single variable. Since += returns an lvalue reference, and you didn't std::move it, the copy constructor is called.

¹ Or a type that differs only in cv-qualifiers.
QUESTION
I have a list of elements which is internally separated by 0s. The format is like this:
...ANSWER
Answered 2022-Feb-06 at 13:12

You can use this:
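The answer's snippet did not survive this page's extraction. If the question is about a Python list (an assumption, since the original code is missing), one common way to split on 0 separators uses itertools.groupby:

```python
from itertools import groupby

def split_on_zeros(xs):
    """Split a flat list into sublists, dropping the 0 separators."""
    # group consecutive items by whether they are a separator,
    # then keep only the non-separator groups
    return [list(group) for is_sep, group in groupby(xs, key=lambda x: x == 0)
            if not is_sep]

print(split_on_zeros([1, 2, 0, 3, 0, 0, 4, 5]))  # [[1, 2], [3], [4, 5]]
```

Note that consecutive 0s collapse into a single split here; keep the empty groups instead if empty segments matter.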
QUESTION
In Python, the @ operator dispatches to the __matmul__ method of an object. This comes in handy when implementing a method that stays agnostic of the actual backend. For example
ANSWER
Answered 2022-Jan-13 at 20:56

The @ operator was added in PEP 465 for __matmul__. There is no such operator (and no dunder method) for the outer product. In fact, the outer product is a simple multiplication (*) once the first array is reshaped:
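A small sketch of that equivalence with NumPy (the array values are illustrative, not from the original post):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# np.outer and broadcasting after a reshape produce the same result:
outer1 = np.outer(a, b)
outer2 = a.reshape(-1, 1) * b   # column vector times row vector

assert np.array_equal(outer1, outer2)
print(outer1.shape)  # (3, 2)
```

The reshape turns `a` into a (3, 1) column, so broadcasting against the (2,) row yields the full (3, 2) outer product.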
QUESTION
I have a complex nested structured array (often used as a recarray). It's simplified for this example, but in the real case there are multiple levels.
...ANSWER
Answered 2022-Jan-12 at 16:43

The statement zeros['data_val'] creates a view into the array, which may already be non-contiguous at that point. You can extract multiple values of x because c is an array type, meaning that x has clearly defined strides and shape. The semantics of the statement zeros[:, 'x'] are very unclear. For example, what happens to data_string, which has no x? I would expect an error; you might expect something else.

The only way I can see the index being simplified is if you expand c into A directly, sort of like an anonymous structure in C, except you can't do that easily with an array.
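A minimal sketch of the nested-dtype behavior being discussed (the field names here are illustrative, simplified from the question):

```python
import numpy as np

# a nested structured dtype: field 'c' is a (3,)-shaped sub-array of (x, y) pairs
dt = np.dtype([('name', 'U8'),
               ('c', [('x', 'f8'), ('y', 'f8')], (3,))])
arr = np.zeros(2, dtype=dt)

# indexing by field name returns a strided *view*, not a copy
view = arr['c']['x']       # shape (2, 3)
view[:] = 7.0              # writes through to the parent array
print(arr['c']['x'][0, 0])

# the view skips over the interleaved 'y' values, so it is not contiguous
print(view.flags['C_CONTIGUOUS'])
```

Because 'x' and 'y' are interleaved in each record, the 'x' view has well-defined strides but is non-contiguous within the parent buffer, which is exactly why some further fancy indexing on it gets awkward.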
QUESTION
I was looking for a solution but I couldn't find any.
I have a bash script which executes something like this:
...ANSWER
Answered 2022-Jan-09 at 08:55

You may use Helm's lookup function to query the cluster. The lookup function can be used to look up resources in a running cluster. Its synopsis is lookup apiVersion, kind, namespace, name -> resource or resource list, where all four parameters (apiVersion, kind, namespace, name) are strings. Both name and namespace are optional and can be passed as an empty string ("").
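A sketch of calling lookup from a chart template (the ConfigMap name, namespace, and key here are hypothetical):

```
{{- $cm := lookup "v1" "ConfigMap" "default" "my-config" }}
{{- if $cm }}
valueFromCluster: {{ index $cm.data "someKey" }}
{{- end }}
```

Note that during a plain helm template render (no cluster connection), lookup returns an empty result, hence the if guard around its use.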
QUESTION
If I have some code like this:
...ANSWER
Answered 2022-Jan-05 at 17:11

You shouldn't have a problem with the code. As long as the function is referenced with self.open() and not open(), it should work. Just make sure the class does not already have an open() function.
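A tiny sketch of the distinction (the class and method names are illustrative, not from the original post):

```python
class Reader:
    def open(self):
        # defining a method named open does not shadow the builtin globally
        return "opened via method"

    def read_file(self, path):
        self.open()              # calls the method above
        with open(path) as f:    # plain open() is still the builtin
            return f.read()

r = Reader()
print(r.open())  # opened via method
```

The method is only reachable through the instance (or class), so the builtin open() inside the same class body keeps working unchanged.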
QUESTION
I am trying to write a program that removes a specific element from a string, but most of the things I use (like filter) only work for [Char]. I really just don't want to have to type "['h','e','l','l','o']" instead of "hello". I realize that technically a String is just a fancy [Char], but how would I unfancify it into a standard [Char]? Also, if you have another way to write normal words instead of in an array format, please tell me.
ANSWER
Answered 2022-Jan-03 at 12:31

In Haskell, the square brackets mean a list, as they also do in Python. A String is simply a type synonym for [Char], so a literal like "hello" already is a [Char] and works with filter directly; no conversion is needed. You can tell what type a String is in Haskell by using :t in the ghci REPL.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install fancy
- Make fancy executable: chmod +x fancy
- Move fancy to /opt: mv fancy /opt/
- Edit /etc/rsyslog.conf and paste the following: vim /etc/rsyslog.conf
- Make sure you have set the right Loki URL.
- Restart rsyslog: systemctl restart rsyslog
- Check the logs under /var/log/syslog and /var/log/fancy.log.
- Check the example and build a fancy dashboard! Uh, fancy :)
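The configuration block to paste was not captured on this page. A minimal omprog stanza, assuming the binary was moved to /opt/fancy (check fancy's own docs for any flags it expects, e.g. where the Loki URL is set), might look like:

```
# load the omprog output module
module(load="omprog")

# hand each syslog message to the fancy binary on stdin
action(type="omprog"
       binary="/opt/fancy"
       template="RSYSLOG_TraditionalFileFormat")
```

RSYSLOG_TraditionalFileFormat is one of rsyslog's built-in templates; swap in whatever line format fancy's parser actually expects.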