epoll | A low-level Node.js binding for the Linux epoll API | Runtime Environment library
kandi X-RAY | epoll Summary
A low-level Node.js binding for the Linux epoll API for monitoring multiple file descriptors to see if I/O is possible on any of them. This module was initially written to detect EPOLLPRI events indicating that urgent data is available for reading. EPOLLPRI events are triggered by interrupt-generating GPIO pins. The epoll module is used by onoff to detect such interrupts. epoll supports Node.js versions 10, 12, 14, 15 and 16.
epoll Key Features
epoll Examples and Code Snippets
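A minimal usage sketch of the module itself, based on its documented Epoll class; the sysfs GPIO path is only an illustrative example and would normally be replaced with a file descriptor that can generate EPOLLPRI events:

const Epoll = require('epoll').Epoll;
const fs = require('fs');

// Open a GPIO value file that generates EPOLLPRI on interrupt (illustrative path).
const valueFd = fs.openSync('/sys/class/gpio/gpio4/value', 'r');
const buffer = Buffer.alloc(1);

// The callback receives (err, fd, events) for every reported event.
const poller = new Epoll((err, fd, events) => {
  if (err) throw err;
  fs.readSync(fd, buffer, 0, 1, 0);   // read to clear the pending interrupt
  console.log('GPIO value:', buffer.toString());
});

// An initial read is required before EPOLLPRI events are reported for sysfs GPIOs.
fs.readSync(valueFd, buffer, 0, 1, 0);
poller.add(valueFd, Epoll.EPOLLPRI);

// Later: poller.remove(valueFd); poller.close();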
version: '3.3'
services:
  wechat-1:
    image: nginx
    container_name: wechat-1
    ports:
      - 81:80
    networks:
      - web
    depends_on:
      - wechat-2
  wechat-2:
    image: nginx
    container_name: wechat-2
    ports:
      - 82:80   # assumed port mapping; the original snippet is truncated here
    networks:
      - web
networks:
  web:          # assumed network definition so the snippet is a valid compose file
Community Discussions
Trending Discussions on epoll
QUESTION
I've been running Apache ActiveMQ Artemis 2.17.0 inside a VM for a month now and just noticed that after around 90 always-connected MQTT clients the Artemis broker stops accepting new connections. I need Artemis to support at least 200 MQTT clients.
What could be the reason for that? How can I remove this "limit"? Could VM resources like low memory be causing this?
After restarting the Artemis service, all connections are dropped and I'm able to connect again.
I was receiving this message in logs:
ANSWER
Answered 2021-Jun-05 at 14:53
ActiveMQ Artemis has no default connection limit. I just wrote a quick test based on this which uses the Paho 1.2.5 MQTT client. It spun up 500 concurrent connections using both normal TCP and WebSockets. The test finished in less than 20 seconds with no errors. I'm just running this test on my laptop.
I noticed that your journal-buffer-timeout is 700000, which seems quite high and implies a very low write speed of 1.43 writes per millisecond (i.e. a slow disk). The journal-buffer-timeout that is calculated, for example, on my laptop is 4000, which translates into a write speed of 250 writes per millisecond, significantly faster than yours. My laptop is nothing special, but it does have an SSD. That said, SSDs are pretty common. If this low write speed is indicative of the overall performance of your VM, it may simply be too weak to handle the load you want. To be clear, this value isn't related directly to MQTT connections. It's just something I noticed while reviewing your configuration that may be indirect evidence of your issue.
The journal-buffer-timeout value is calculated and configured automatically when the instance is created. You can re-calculate this value later and configure it manually using the bin/artemis perf-journal command.
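For reference, the recalculation is a single command run from the broker instance directory (the exact output varies by version):

./bin/artemis perf-journal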
Ultimately, your issue looks environmental to me. I recommend you inspect your VM and network. TCP dumps may be useful to see perhaps how/why the connection is being reset. Thread dumps from the server during the time of the trouble would also be worth inspecting.
QUESTION
When I run ActiveMQ Artemis in docker I see this basically empty screen:
That doesn't look right... I was expecting this, like I get when using the zip file:
Regardless of whether I use docker or the zip file, it doesn't matter what username or password I enter; I just get logged in, which is a little concerning...
What am I doing wrong?
Longer version: I'm attempting a "Hello World" style installation of ActiveMQ. It sounds like ActiveMQ Artemis is what I should be using. We'll be using this on Kubernetes, so I found and have followed https://artemiscloud.io/. There is a "Quickly deploy a basic Container image that runs the broker" guide right there on the front page. It suggests:
ANSWER
Answered 2021-May-28 at 14:35
The ArtemisCloud container image for ActiveMQ Artemis is designed to run inside a container, so the container IP address should be used to access the console or other resources.
The container IP address can be obtained by using the command docker inspect or by reading the container log, i.e.:
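As a sketch, the IP address can be pulled directly out of docker inspect with a Go-template filter; <container-name> below is a placeholder for the actual container name:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>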
QUESTION
I am currently programming a system that communicates via packets. This also works. I have one server and theoretically infinitely many clients. When I start only one client it works very well, but when I start several, I always get a java.lang.IndexOutOfBoundsException exception after the program has worked for 2-3 minutes.
The Exception
ANSWER
Answered 2021-May-17 at 16:25
So there are multiple issues.
First off, in PacketDecoder you need to check that at least 4 bytes are readable before calling byteBuf.readInt(), for example by testing byteBuf.readableBytes() >= 4 and returning without decoding if fewer bytes are available.
QUESTION
Intro:
Suppose we have a server, running a single thread, which manages events via epoll. We also have two clients A, B which are connected to the server via sockets. If A or B now sends a message to the server, an EPOLLIN event is normally triggered and this is processed, e.g., with method a(). This means that the EPOLLIN events for both clients are processed with exactly the same method a().
Desired:
Is there a way to structure this so that the EPOLLIN events triggered by two different clients are processed with two different methods? E.g. A sends a message to the server. The epoll fd detects an EPOLLIN event. This is processed with method a(). B sends a message to the server. The epoll fd again detects an EPOLLIN event. However, this is processed with method b().
ANSWER
Answered 2021-May-13 at 05:02
epoll itself does not associate a specific callback with a file descriptor. It just returns, via epoll_wait, which file descriptors an event occurred on and which kind of event. It is fully up to the application what to do with this information: handle it directly in the same function where epoll_wait was called, call a single function a() for all EPOLLIN events, or call different functions a(), b(), ... for EPOLLIN on different file descriptors.
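The answer above is about the raw epoll_wait API, but the same dispatch-by-fd idea can be sketched with this repo's Node.js binding, since its callback receives the file descriptor that triggered the event. fdA, fdB and the handler functions below are placeholders for whatever descriptors and logic the application actually has:

const Epoll = require('epoll').Epoll;

// Hypothetical per-client handlers keyed by file descriptor.
const handlers = new Map();
handlers.set(fdA, (events) => handleClientA(events));
handlers.set(fdB, (events) => handleClientB(events));

// One epoll instance; the application decides how to route each event.
const poller = new Epoll((err, fd, events) => {
  if (err) throw err;
  const handler = handlers.get(fd);
  if (handler) handler(events);
});

poller.add(fdA, Epoll.EPOLLIN);
poller.add(fdB, Epoll.EPOLLIN);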
QUESTION
Here is some background about the issue that I have:
I have a unix socket of type stream, server_fd = socket(AF_UNIX, SOCK_STREAM, 0). On the server side, the socket is listen(2)ed to via listen(server_fd, 128) and bound to an epoll handler that handles EPOLLIN. When reading from said socket (using the epoll callback), I use accept(2) to create a new socket for the client, which is bound to its own epoll handler handling EPOLLIN | EPOLLOUT | EPOLLHUP | EPOLLERR. So far pretty standard.
Here is the problem:
Because the data on the server side is dispersed across multiple sources, and the aim is for the client side to get the data in neat packages, I do something with a gist like this:
ANSWER
Answered 2021-May-11 at 21:26
From the Linux unix(7) man page:
    The send(2) MSG_MORE flag is not supported by UNIX domain sockets.
and also of interest:
    The SO_SNDBUF socket option does have an effect for UNIX domain sockets, but the SO_RCVBUF option does not.
So there's no point in using MSG_MORE in your code; it's for TCP and UDP sockets.
Also, the number of reads on a stream (be it a TCP socket, UNIX domain socket, pipe, etc.) isn't related to the number of writes on the far end. You have to include things like message boundaries in a higher-level protocol that uses the stream. If doing that is an issue, you might look into a SOCK_SEQPACKET unix socket instead, combined with writev(2) to send scattered data (I think that'll cause all the data to be in a single packet).
QUESTION
I'm trying to learn more about the async abstractions used by this codebase I'm working on.
I'm reading Folly's documentation for two async executor pools in the library, IOThreadPoolExecutor for IO-bound tasks and CPUThreadPoolExecutor for CPU-bound tasks (https://github.com/facebook/folly/blob/master/folly/docs/Executors.md).
I'm reading through the descriptions but I don't understand the main difference. It seems like IOThreadPoolExecutor is built around event_fd and an epoll loop, and CPUThreadPoolExecutor uses a queue and semaphore.
But that doesn't tell me that much about the benefits and trade-offs.
ANSWER
Answered 2021-May-11 at 16:18
At a high level, IOThreadPoolExecutor should be used only if you need a pool of EventBases. If you need a pool of workers, then use CPUThreadPoolExecutor.
CPUThreadPoolExecutor contains a series of priority queues which are constantly drained by a set of worker threads. Each worker thread executes threadRun() after it is created; threadRun() is essentially an infinite loop which pulls one task from the task queue and executes it. If the task has already expired when it is fetched, the expiration callback is executed instead of the task itself.
IOThreadPoolExecutor: each IO thread runs its own EventBase. Instead of pulling tasks from a task queue like the CPUThreadPoolExecutor, the IOThreadPoolExecutor registers an event with the EventBase of the next IO thread. Each IO thread then calls loopForever() on its EventBase, which essentially calls epoll() to perform async IO.
So most of the time you should probably be using a CPUThreadPoolExecutor, as that is the usual use case for having a pool of workers.
QUESTION
I am setting up a cluster of Artemis in Kubernetes with 3 groups of master/slave:
ANSWER
Answered 2021-May-11 at 23:49
First, it's important to note that there's no feature to make a client reconnect to the broker from which it disconnected after the client crashes/restarts. Generally speaking, the client shouldn't really care about which broker it connects to; that's one of the main goals of horizontal scalability.
It's also worth noting that if the number of messages on the brokers and the number of connected clients are low enough that this condition arises frequently, that almost certainly means you have too many brokers in your cluster.
That said, I believe the reason your client isn't getting the messages it expects is that you're using the default redistribution-delay (i.e. -1), which means messages will not be redistributed to other nodes in the cluster. If you want to enable redistribution (which it seems like you do) then you should set it to >= 0, e.g.:
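A minimal broker.xml fragment illustrating this; the catch-all match "#" is only an example and would normally be scoped to the relevant addresses:

<address-settings>
   <address-setting match="#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>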
QUESTION
I'm trying to add basic authentication to my Spring Boot non-web application. Tests work fine with the @WithMockUser annotation, but I can't get a proper cURL call working with (seemingly) correct credentials. My security configuration:
ANSWER
Answered 2021-May-11 at 09:51
When you add Spring Security to an application you automatically get a default security setup, which is described here in the Spring Security documentation.
But as soon as you declare a custom security configuration, you are basically on your own.
What is missing in the above configuration is to explicitly enable HTTP Basic authentication; this is done by calling httpBasic() on the HttpSecurity builder in your security configuration.
QUESTION
I have an AMQ Artemis cluster, shared-store HA (master/slave), version 2.17.0.
I noticed that all my clusters (active servers only) that are idle (no one is using them) are using from 10% to 20% CPU, except one, which is using around 1% (totally normal). I started investigating...
Long story short, only one cluster has completely normal CPU usage. The only difference I've managed to find is that if I connect to that normal cluster's master node and attempt telnet slave 61616, it shows as connected. If I do the same in any other cluster (those with high CPU usage), it shows as rejected.
In order to better understand what is happening, I enabled DEBUG logs in instance/etc/logging.properties. Here is what the master node is spamming:
ANSWER
Answered 2021-May-10 at 06:50
Turns out the issue was in the broker.xml configuration. In static-connectors I somehow decided to list only the "non-current server" (e.g. I have srv0 and srv1; in srv0 I only added the connector for srv1 and vice versa).
What it used to be (on the 1st master node):
QUESTION
Is it possible to write a single-threaded TCP server in pure Rust? In C I would use the select syscall to "listen" on multiple sockets. I only find solutions where people use unsafe to call epoll/select, but I really want to avoid this. I think this is a basic task, and I can't imagine that there is no pure Rust solution for it. Basically, I am looking for an abstraction in the standard library.
Here is what I want in C: https://www.gnu.org/software/libc/manual/html_node/Server-Example.html
E.g. using unsafe with select/epoll: https://www.zupzup.org/epoll-with-rust/index.html
ANSWER
Answered 2021-Apr-16 at 14:18
select and friends are syscalls. To use select and friends, at some point Rust needs to invoke those syscalls. Those syscalls are not expressed in Rust; they use C semantics (when called via libc) or assembly (when called directly).
From the perspective of Rust that means they're unsafe: the Rust compiler has no way to know what they're doing at all.
That means in order to use them from Rust you have two choices:
- call them directly, using unsafe
- or use a higher-level package which ends up calling them (internally using unsafe), like tokio
Even if there were a (Linux-only) pure-Rust and Rust-targeted reinvention of libc, it would ultimately have to use unsafe in order to craft the actual syscalls and call into the kernel. So would a hypothetical pure-Rust bare-metal unikernel.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install epoll
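The module is published on npm, so installation is typically:

npm install epoll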