ipc-bench | Latency benchmarks of Unix IPC mechanisms
Some very crude IPC benchmarks. This software is distributed under the MIT License.
Community Discussions on ipc-bench
QUESTION
I want to create an ultra-fast message-processing C++ solution which will be CPU-bound and microservices-based. It will process lots of request/response messages that are small (32 bytes to 1 KB per message).
Some services will be placed in different hosts. Some will be on the same host, but in different processes. And some inside the same process, but in different threads.
Currently I'm thinking about communication protocols for such a services topology. My first idea was to use TCP-based communication, which allows using loopback for IPC on the same host and even for inter-thread communication. The benefit is having a single communication codebase, which makes it easy to experiment with the services topology: a service can be hosted inside some process or moved to a remote host with little effort.
However, I understand that if I want a low-latency solution with maximum RPS and message-delivery speed, I have to split the transport depending on the communication type:
Thread communication - the best results could be achieved with a circular buffer or the LMAX Disruptor pattern (see the ring-buffer sketch after this list).
IPC communication - I think pipes or a circular buffer in shared memory are good solutions. Is there a better way to do IPC?
Remote communication - async TCP server/client for sequential message delivery and UDP for multicasting.
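As a concrete illustration of the circular-buffer option in the first item, here is a minimal single-producer/single-consumer ring buffer sketch. The class name SpscRing and the capacity handling are made up for illustration; a real LMAX-Disruptor-style queue layers sequencing, batching, and multi-consumer support on top of this idea.

#include <atomic>
#include <cstddef>
#include <optional>

// Minimal single-producer/single-consumer ring buffer (hypothetical SpscRing).
template <typename T, std::size_t Capacity>
class SpscRing {
    static_assert((Capacity & (Capacity - 1)) == 0, "Capacity must be a power of two");
public:
    bool try_push(const T& item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t tail = tail_.load(std::memory_order_acquire);
        if (head - tail == Capacity) return false;          // full
        buffer_[head & (Capacity - 1)] = item;
        head_.store(head + 1, std::memory_order_release);   // publish to the consumer
        return true;
    }

    std::optional<T> try_pop() {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        const std::size_t head = head_.load(std::memory_order_acquire);
        if (tail == head) return std::nullopt;              // empty
        T item = buffer_[tail & (Capacity - 1)];
        tail_.store(tail + 1, std::memory_order_release);   // free the slot
        return item;
    }

private:
    T buffer_[Capacity];
    alignas(64) std::atomic<std::size_t> head_{0};  // written only by the producer
    alignas(64) std::atomic<std::size_t> tail_{0};  // written only by the consumer
};

One thread calls try_push while another calls try_pop; the alignas(64) keeps the two indices on separate cache lines to avoid false sharing. The same layout also works across processes if the buffer is placed in shared memory.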
Also, I'm targeting Linux, but having a cross-platform solution would be great!
I believe the Asio C++ library is a good starting point for remote communication.
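To make the loopback-TCP baseline concrete, a minimal blocking round-trip measurement with Boost.Asio could look like the sketch below. This is illustrative only; the port number, the 128-byte message size, and the iteration count are arbitrary choices.

#include <boost/asio.hpp>
#include <array>
#include <chrono>
#include <iostream>
#include <thread>

using boost::asio::ip::tcp;

int main() {
    constexpr int kIters = 100000;                  // arbitrary iteration count
    std::array<char, 128> msg{};                    // 128-byte payload

    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));  // arbitrary port

    // Echo "server" on a separate thread, standing in for a second service.
    std::thread server([&] {
        tcp::socket peer(io);
        acceptor.accept(peer);
        peer.set_option(tcp::no_delay(true));
        std::array<char, 128> buf{};
        for (int i = 0; i < kIters; ++i) {
            boost::asio::read(peer, boost::asio::buffer(buf));
            boost::asio::write(peer, boost::asio::buffer(buf));
        }
    });

    tcp::socket client(io);
    client.connect(tcp::endpoint(boost::asio::ip::make_address("127.0.0.1"), 5555));
    client.set_option(tcp::no_delay(true));         // avoid Nagle delays on tiny messages

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {              // blocking ping-pong loop
        boost::asio::write(client, boost::asio::buffer(msg));
        boost::asio::read(client, boost::asio::buffer(msg));
    }
    const auto elapsed = std::chrono::steady_clock::now() - start;
    server.join();

    std::cout << "avg loopback round trip: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count() / kIters
              << " ns\n";
}

Disabling Nagle (tcp::no_delay) matters for small request/response messages; without it, round trips can stall on the Nagle/delayed-ACK interaction.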
My questions are the following:
- Is it worth creating custom IPC/threading communication solutions, or should I start with TCP-based localhost communication?
- Can anyone provide performance comparison results for different IPC techniques (localhost TCP vs. pipes vs. shared memory) for the best RPS with small messages up to 1 KB? (E.g. shared memory is 10 times faster than localhost TCP and 3 times faster than pipes, or approximate RPS values to compare.)
- Maybe I missed some good low-latency IPC/RPC technique or library that I should look into?
- Or perhaps a production-ready solution already exists for my problem that I can use or license?
Thanks in advance for your answers and suggestions!
IPC benchmarks

I just found and ran some low-level IPC benchmarks under Linux (Ubuntu, kernel 4.4.0, x86_64, i7-6700K @ 4.00 GHz). The message size was 128 bytes and the message count was 1,000,000. I got the following results:
Pipe benchmark:
...
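For reference, a pipe round-trip benchmark of this kind is typically structured like the following sketch. This is a hypothetical illustration, not the code that produced the (elided) numbers above; the message size and iteration count simply mirror the setup described.

#include <chrono>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    constexpr int kIters = 1000000;                 // matches the setup above
    constexpr ssize_t kMsgSize = 128;
    char buf[kMsgSize] = {};

    int ping[2], pong[2];                           // parent -> child, child -> parent
    if (pipe(ping) != 0 || pipe(pong) != 0) { perror("pipe"); return 1; }

    const pid_t pid = fork();
    if (pid == 0) {                                 // child: echo every message back
        for (int i = 0; i < kIters; ++i) {
            if (read(ping[0], buf, kMsgSize) != kMsgSize) _exit(1);
            if (write(pong[1], buf, kMsgSize) != kMsgSize) _exit(1);
        }
        _exit(0);
    }

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {              // parent: measure round trips
        if (write(ping[1], buf, kMsgSize) != kMsgSize) return 1;
        if (read(pong[0], buf, kMsgSize) != kMsgSize) return 1;
    }
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                        std::chrono::steady_clock::now() - start).count();

    waitpid(pid, nullptr, 0);
    std::printf("avg pipe round trip: %lld ns\n", static_cast<long long>(ns / kIters));
}

Socket, message-queue, and shared-memory variants follow the same ping-pong shape, which is essentially what a tool like ipc-bench automates across mechanisms.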
ANSWER

Answered 2019-Jan-08 at 12:48

You wrote:
an ultra fast message processing C++ solution
That usually means getting your hands into everything yourself. It sounds like it would be an interesting library in the end, though, if you pull it off.
Overall, your question is (way) too broad; nevertheless, here are my thoughts:
Hard to give any advice here...
Comparisons will be platform/system-specific; e.g. TCP may be faster or slower depending on the system.
OpenMP and Boost.Interprocess come to mind (see the sketch below). You may also want to look into, or start out with, for example the Apache Thrift library (albeit it's also cross-language; originally developed for Facebook's backend servers, I believe); you could do some early experimenting with it and get a feel for the issues to consider.
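As a starting point for the Boost.Interprocess suggestion, a minimal sketch using its message_queue might look like this. The queue name, message sizes, and the producer/consumer command-line switch are assumptions for illustration, not anything taken from ipc-bench itself.

#include <boost/interprocess/ipc/message_queue.hpp>
#include <cstring>
#include <iostream>

namespace bip = boost::interprocess;

int main(int argc, char** argv) {
    const char* name = "ipc_bench_demo";            // made-up queue name
    const std::size_t max_messages = 1024;
    const std::size_t max_msg_size = 128;

    if (argc > 1 && std::strcmp(argv[1], "producer") == 0) {
        bip::message_queue::remove(name);           // start from a clean queue
        bip::message_queue mq(bip::create_only, name, max_messages, max_msg_size);
        char payload[128] = "hello from producer";
        mq.send(payload, sizeof(payload), 0);       // blocks if the queue is full
    } else {
        bip::message_queue mq(bip::open_only, name);
        char buf[128] = {};
        bip::message_queue::size_type received = 0;
        unsigned int priority = 0;
        mq.receive(buf, sizeof(buf), received, priority);  // blocks until a message arrives
        std::cout << "got " << received << " bytes: " << buf << "\n";
    }
}

Run the producer instance first, then the consumer. message_queue blocks on full/empty conditions; for the lowest latency, a busy-polled ring buffer in a shared-memory segment (e.g. via managed_shared_memory) would typically beat it, at the cost of burning a core.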
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.