reproxy | Reverse http proxy | Proxy library
kandi X-RAY | reproxy Summary
Reverse http proxy
Top functions reviewed by kandi - BETA
- Type script
- Type proxy
- All proxy types
- Set a new entry
- NewReproxy creates a new reproxy API
- Generate Go code
- Save to disk
- Create a config file
- Get returns the current configuration
- Load reads a file
reproxy Key Features
reproxy Examples and Code Snippets
go run src/genstatic/genstatic.go --dir=static/ --package=files > src/reproxy/files/data.go
Community Discussions
Trending Discussions on reproxy
QUESTION
I ran into trouble with bad network performance on CentOS. The issue was observed on the latest OpenVZ RHEL7 kernel (3.10-based) on a Dell server with 24 cores and a Broadcom 5720 NIC, regardless of whether it was the host system or an OpenVZ container. The server receives RTMP connections and reproxies RTMP streams to other consumers. Reads and writes were unstable, and streams froze periodically for a few seconds.
I started checking the system with strace and perf. strace affects the system heavily, so it seems only perf can help. I used an OpenVZ debug kernel with debugfs enabled. The system spends too much time in the swapper process (according to perf data). I built a flame graph for the system under load (100 Mbit/s in, 200 Mbit/s out) and noticed that the kernel spent too much time in tcp_write_xmit and tcp_ack. On top of these calls I see save_stack calls.
On the other hand, I tested the same scenario on an Amazon EC2 instance (latest Amazon Linux AMI 2017.09), and perf doesn't show such issues. The total number of samples was 300000; the system spends 82% of its time in swapper according to the perf samples, but net_rx_action (and consequently tcp_write_xmit and tcp_ack) in swapper takes only 1797 samples (0.59% of the total). On top of the net_rx_action call in the flame graph I don't see any calls related to saving stack traces.
The output of the OpenVZ system looks different. Of 1833152 samples, 500892 (27%) were in the swapper process and 194289 (10.5%) were in net_rx_action.
The full SVG of calls on vzkernel7 is here, and the SVG of the EC2 instance calls is here. You can download them and open them in a browser to interactively examine the flame graphs.
So I want to ask for help, and I have a few questions.
- Why doesn't the flame graph from the EC2 instance contain as many save_stack calls as my server's?
- Does perf force the system to call save_stack, or is it some kernel setting? Can it be disabled, and how?
- Does Xen on the EC2 host process all tcp_ack and the other calls? Is it possible that the host system on the EC2 server does some of the work and the guest system doesn't see it?
Thank you for the help.
...ANSWER
Answered 2018-Jan-25 at 09:51

I've read the kernel sources and have an answer to my questions. The save_stack calls are caused by the Kernel Address Sanitizer (KASAN) feature, which was enabled in the OpenVZ debug kernel by the CONFIG_KASAN option. When this option is enabled, on each kmem_cache_free call the kernel calls __cache_free
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install reproxy
How to build for all architectures: