warp17 | The Stateful Traffic Generator for Layer 1 to Layer 7 | Proxy library

 by Juniper | Language: C | Version: v1.8 | License: BSD-3-Clause

kandi X-RAY | warp17 Summary

warp17 is a C library typically used in Networking and Proxy applications. warp17 has no reported bugs or vulnerabilities, carries a permissive license, and has low support activity. You can download it from GitHub.

WARP17, The Stateful Traffic Generator for L1-L7, is a lightweight solution for generating high volumes of session-based traffic with very high setup rates. WARP17 currently focuses on L5-L7 application traffic (e.g., HTTP) running on top of TCP, as this kind of traffic requires a complete TCP implementation. Nevertheless, WARP17 also supports application traffic running on top of UDP.

            kandi-support Support

              warp17 has a low-activity ecosystem.
              It has 394 stars and 75 forks. There are 50 watchers for this library.
              It had no major release in the last 12 months.
              There are 31 open issues and 58 closed issues. On average, issues are closed in 90 days. There are 5 open pull requests and 0 closed ones.
              It has a neutral sentiment in the developer community.
              The latest version of warp17 is v1.8.

            kandi-Quality Quality

              warp17 has 0 bugs and 0 code smells.

            kandi-Security Security

              warp17 has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              warp17 code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              warp17 is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              warp17 releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.
              It has 4003 lines of code, 447 functions and 18 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionality of libraries and avoid rework. It currently covers the most popular Java, JavaScript, and Python libraries, so no verified functions are available for warp17 (a C library).

            warp17 Key Features

            No Key Features are available at this moment for warp17.

            warp17 Examples and Code Snippets

            No Code Snippets are available at this moment for warp17.

            Community Discussions

            QUESTION

            DPDK Switch Representation testpmd flow commands not working
            Asked 2020-Aug-25 at 05:04

            My question is related to a question I asked earlier: Forward packets between SR-IOV Virtual Function (VF) NICs. Basically, what I want to do is use 4 SR-IOV virtual functions of an Intel 82599ES and direct traffic between the VFs as I need. The setup is something like this (don't mind the X710; I use the 82599ES now).

            For the sake of simplicity in testing, I'm only using one VM running warp17 to generate traffic, send it through VF1, and receive it back from VF3. Since newer DPDK versions have a switching function, as described in https://doc.dpdk.org/guides-18.11/prog_guide/switch_representation.html?highlight=switch , I'm trying to use testpmd to configure switching. But testpmd doesn't seem to accept any flow commands I enter; all I get is "Bad argument". For example, it doesn't work with this command,

            ...

            ANSWER

            Answered 2020-Aug-25 at 05:04

            Please note that the 82599ES uses the ixgbe PMD and the X710 uses the i40e PMD. They are different and have different properties. Per the documentation for the ixgbe PMD (http://doc.dpdk.org/guides/nics/ixgbe.html) and the i40e PMD (http://doc.dpdk.org/guides/nics/i40e.html), the flow director functionality applies to ingress packets (packets received from the external port into the ASIC). The feature you need is Floating VEB, but it is only present on the X710, not on the 82599ES.

            To enable the floating VEB on an X710, one needs to pass -w 84:00.0,enable_floating_veb=1. But this limits your functionality: you will not be able to receive and send on the physical port.
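
            As a rough illustration only, that device argument is passed on the EAL command line when launching testpmd; the binary path, core list (-l) and memory-channel count (-n) below are placeholders, not values taken from this thread.

                # Hedged example: launch testpmd with floating VEB enabled on an X710 PF.
                ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 \
                    -w 84:00.0,enable_floating_veb=1 -- -i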

            The best option is to use 2 x 10Gbps ports, where dpdk-0 is used by warp17/pktgen/trex and dpdk-1 is used by vm-1/vm-2/vm-3. The easiest approach is to control the destination MAC address so that it matches the intended VF.

            Setup (a rough shell sketch follows the list):

            1. Create the necessary VFs for port-0 and port-1.
            2. Pass the VFs through to the relevant VMs.
            3. Bind the DPDK VF ports to igb_uio.
            4. From the traffic generator on port-0, send traffic whose destination MAC address is that of the relevant VF.
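
            The following is a minimal sketch of those steps, assuming a make-built DPDK tree; the interface names, PCI addresses, and paths are placeholders, not values from this thread.

                # 1. Create the VFs on the two PFs (run as root; PF names are placeholders).
                echo 2 > /sys/class/net/enp132s0f0/device/sriov_numvfs
                echo 2 > /sys/class/net/enp132s0f1/device/sriov_numvfs

                # 2. Pass the VFs through to the VMs (e.g., as QEMU/libvirt hostdev devices).

                # 3. Inside each VM, bind the VF ports to igb_uio (paths/addresses are placeholders).
                modprobe uio
                insmod /path/to/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
                /path/to/dpdk/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:05.0 0000:00:06.0

                # 4. On the traffic generator attached to port-0 (warp17/pktgen/trex),
                #    set the destination MAC of generated packets to the MAC of the target VF.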

            [P.S.] This is the information we discussed over Skype.

            Source https://stackoverflow.com/questions/63516636

            QUESTION

            Forward packets between SR-IOV Virtual Function (VF) NICs
            Asked 2020-Aug-17 at 11:00

            I have an Intel 82599ES 10G NIC which supports Intel SR-IOV. I have successfully created 8 virtual functions (VFs) from it and assigned them to 2 qemu/kvm VMs (2 VFs per VM). Both VMs run DPDK applications (warp17 on one and my custom application on the other) using the assigned VFs. What I need to do is test my custom DPDK application by sending traffic through it using warp17. My test setup looks like this; the red arrow represents the traffic path.

            My physical NIC (PF) uses the DPDK poll-mode driver (igb_uio). What I need to do is route traffic between the VFs as shown by the red arrows. I think https://doc.dpdk.org/guides/prog_guide/switch_representation.html explains the switching behavior, but I cannot understand it. warp17 and my custom DPDK application both work perfectly on physical hardware. What I'm trying to do is virtualize my test setup to conserve resources. Has anyone tried such a configuration?

            ...

            ANSWER

            Answered 2020-Aug-17 at 11:00

            Neither the X710 (Fortville) nor the 82599ES (Niantic) ASIC has an internal bridging or forwarding (VEB) feature. The best option is to have a software virtual switch such as SPP, OVS-DPDK, or a custom application forward the packets via virtio or tap.

            If you still want to use a physical NIC (X710 or 82599ES), you will need a connection at the other end and logic to direct packets to the relevant VF (by modifying the destination MAC).
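
            For the software-virtual-switch option mentioned above, a minimal OVS-DPDK sketch might look like the following; the bridge name, PCI address, and vhost-user port names are placeholders and are not quoted from the thread.

                # Enable DPDK support in OVS (one-time configuration).
                ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

                # Userspace datapath bridge with one physical DPDK port and two VM-facing
                # vhost-user ports; OVS forwards packets between them.
                ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
                ovs-vsctl add-port br0 dpdk-p0 \
                    -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:04:00.0
                ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
                ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser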

            Source https://stackoverflow.com/questions/63449237

            Community Discussions and Code Snippets include sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install warp17

            NOTE: When we only want to test the TCP control implementation (i.e., the TCP 3-way handshake and TCP CLOSE sequence), WARP17 achieved a maximum setup rate of 8.5M clients/s and 8.5M servers/s, so a total of 17M TCP sessions are handled every second. The tests set up 20 million TCP sessions (i.e., 10 million TCP clients and 10 million TCP servers) on which the clients continuously send fixed-size requests (with random payload) and wait for fixed-size responses from the servers.
            TCP raw traffic link utilization reaches line rate (40Gbps) as we increase the size of the requests and responses. When line rate is achieved, the number of packets that actually make it onto the wire decreases (due to the link bandwidth).
            TCP raw traffic setup rate is stable at approximately 7M sessions per second (3.5M TCP clients and 3.5M TCP servers per second)
            The tests set up 20 million TCP sessions (i.e., 10 million TCP clients and 10 million TCP servers) on which the clients continuously send HTTP GET requests and wait for the HTTP responses from the servers.
            HTTP traffic link utilization reaches line rate (40Gbps) as we increase the size of the requests and responses. When line rate is achieved, the number of packets that actually make it onto the wire decreases (due to the link bandwidth).
            HTTP traffic setup rate is stable at approximately 7M sessions per second (3.5M HTTP clients and 3.5M HTTP servers per second)
            The tests continuously send UDP fixed-size packets (with random payload) from 10 million clients, which are processed on the receiving side by 10 million UDP listeners.
            UDP packets are generated at approximately 22 Mpps (for small packets) and as we reach the link bandwidth the rate decreases.
            Run the automated script with <version> as 19.11.3 (the latest LTS supported by warp17).
            Run dep_install.sh as root from the source folder.
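
            A minimal sketch of those two steps, assuming a make-built DPDK tree; the DPDK install path and the RTE_SDK/RTE_TARGET variables follow the usual DPDK convention and are assumptions here, not quoted from the warp17 README.

                cd warp17                # warp17 source folder cloned from GitHub
                sudo ./dep_install.sh    # install build dependencies (run as root)

                # Build against DPDK 19.11.3, the latest LTS supported by warp17
                # (the install path below is a placeholder).
                export RTE_SDK=/opt/dpdk-stable-19.11.3
                export RTE_TARGET=x86_64-native-linuxapp-gcc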

            Support

            You can find how to contribute to our project and how to add new L7 application implementations here.


            Consider Popular Proxy Libraries

            frp by fatedier
            shadowsocks-windows by shadowsocks
            v2ray-core by v2ray
            caddy by caddyserver
            XX-Net by XX-net

            Try Top Libraries by Juniper

            py-junos-eznc (Python)
            ansible-junos-stdlib (Python)
            libxo (C)
            open-nti (Python)
            go-netconf (Go)