reproxy | Simple edge server / reverse proxy | Continuous Deployment library
kandi X-RAY | reproxy Summary
Reproxy is a simple edge HTTP(S) server / reverse proxy supporting various providers (docker, static, file, consul catalog). One or more providers supply information about the requested server, requested URL, destination URL, and health check URL. It is distributed as a single binary or as a docker container.

The server (host) can be set as an FQDN, e.g. s.example.com, as * (catch-all), or as a regex. An exact match takes priority, so if there are two rules with servers example.com and example\.(com|org), a request to example.com/some/url will match the former.

The requested URL can be a regex, for example ^/api/(.*), and the destination URL may contain the matched groups, e.g. $1; whatever the group captures in the request is substituted into the destination. For convenience, requests with a trailing / and without regex groups are expanded to /(.*), and the corresponding destinations are expanded to /$1; i.e. a source /api/ is translated to ^/api/(.*) and /$1 is appended to its destination.

Both HTTP and HTTPS are supported. For HTTPS, a static certificate can be used, as well as automated ACME (Let's Encrypt) certificates. An optional assets server can be used to serve static files. Starting reproxy requires at least one provider defined; the rest of the parameters are strictly optional and have sane defaults.
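As an illustration of the routing just described, here is a sketch of static-provider rules; the field order (server,sourceurl,destination[,ping-url]) is an assumption based on reproxy's static provider, and backend:8080 is a placeholder upstream:

    # assumed rule format: server,sourceurl,destination[,ping-url]
    *,^/api/(.*),http://backend:8080/$1

    # trailing-slash shorthand; expands to the regex rule above
    *,/api/,http://backend:8080/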
reproxy Examples and Code Snippets
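As one hedged example of the docker provider mentioned above (the --docker.enabled flag and the reproxy.* label names are assumptions based on reproxy's documented conventions; containous/whoami is just a convenient echo service for testing):

    # run reproxy with the docker provider, watching the docker socket
    docker run -d -p 80:8080 -v /var/run/docker.sock:/var/run/docker.sock \
        umputun/reproxy --docker.enabled

    # a backend container announces its routing via labels
    docker run -d \
        --label 'reproxy.server=*' \
        --label 'reproxy.route=^/whoami/(.*)' \
        --label 'reproxy.dest=/$1' \
        containous/whoami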
Community Discussions
Trending Discussions on reproxy
QUESTION
I ran into trouble with bad network performance on CentOS. The issue was observed on the latest OpenVZ RHEL7 kernel (3.10 based) on a Dell server with 24 cores and a Broadcom 5720 NIC, regardless of whether it was the host system or an OpenVZ container. The server receives RTMP connections and re-proxies RTMP streams to other consumers. Reads and writes were unstable and streams froze periodically for a few seconds.
I started to check the system with strace and perf. Strace affects the system heavily, so it seems only perf can help. I used an OpenVZ debug kernel with debugfs enabled. The system spends too much time in the swapper process (according to the perf data). I built a flame graph for the system under load (100 Mbit in, 200 Mbit out) and noticed that the kernel spent too much time in tcp_write_xmit and tcp_ack. On top of these calls I see save_stack calls.
On the other hand, I tested the same scenario on an Amazon EC2 instance (latest Amazon Linux AMI 2017.09) and perf doesn't show such issues there. The total number of samples was 300000; the system spends 82% of its time in swapper according to the perf samples, but net_rx_action (and consequently tcp_write_xmit and tcp_ack) within swapper takes only 1797 samples (0.59% of the total). On top of the net_rx_action call in the flame graph I don't see any calls related to stack traces.
The output of the OpenVZ system looks different: among 1833152 samples, 500892 (27%) were in the swapper process and 194289 (10.5%) were in net_rx_action.
The full SVG of calls on vzkernel7 is here and the SVG of the EC2 instance calls is here. You can download them and open them in a browser to interactively explore the flame graphs.
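For reference, flame graphs like the ones linked above are typically produced with perf plus Brendan Gregg's FlameGraph scripts; a sketch of that pipeline, with the script paths as placeholders for wherever FlameGraph is checked out:

    # sample all CPUs at 99 Hz with call stacks for 30 seconds
    perf record -F 99 -a -g -- sleep 30
    # fold the stacks and render an interactive SVG
    perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg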
So I want to ask for help, and I have a few questions.
- Why doesn't the flame graph from the EC2 instance contain as many save_stack calls as the one from my server?
- Does perf force the system to call save_stack, or is it some kernel setting? Can it be disabled, and how?
- Does Xen on EC2 process all the tcp_ack and other kernel calls on the guest's behalf? Is it possible that the host system on the EC2 server does some of the work and the guest system doesn't see it?
Thanks for any help.
ANSWER
Answered 2018-Jan-25 at 09:51
I've read the kernel sources and have an answer to my questions.
The save_stack calls are caused by the Kernel Address Sanitizer (KASAN) feature, which was enabled in the OpenVZ debug kernel by the CONFIG_KASAN option. When this option is enabled, the kernel calls __cache_free on each kmem_cache_free, and KASAN records the stack trace via save_stack.
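To check whether a given kernel was built with this option, one can grep the kernel config; a quick sketch, noting that the config file location varies by distribution:

    # KASAN is a compile-time feature; it cannot be toggled at runtime
    grep CONFIG_KASAN /boot/config-$(uname -r)
    # some kernels expose the config here instead
    zcat /proc/config.gz | grep CONFIG_KASAN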
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install reproxy
- For a binary distribution, download the appropriate file from the releases section.
- A docker container is available on Docker Hub as well as on the GitHub Container Registry, i.e. docker pull umputun/reproxy or docker pull ghcr.io/umputun/reproxy.
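As a quick smoke test after installing (a sketch: the --static.enabled and --static.rule flags are assumptions based on the static provider described earlier, and http://backend:8080 is a placeholder upstream):

    docker pull umputun/reproxy
    docker run -p 8080:8080 umputun/reproxy \
        --static.enabled --static.rule='*,^/api/(.*),http://backend:8080/$1'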