ethtool | A simple ethtool-like library for Go

by safchain | Go | Version: v0.3.0 | License: Apache-2.0

kandi X-RAY | ethtool Summary

ethtool is a Go library typically used in Utilities applications. It has no reported bugs or vulnerabilities, has a Permissive License, and has low support. You can download it from GitHub.

The ethtool package aims to provide a library giving simple access to the Linux SIOCETHTOOL ioctl operations. It can be used to retrieve information from a network device, such as statistics, driver-related information, or even the peer of a VETH interface.
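
As a quick orientation, here is a minimal usage sketch built around the functions listed in the Top functions section below (NewEthtool, DriverName, Stats); the interface name "eth0" is a placeholder, and the exact signatures should be checked against the package documentation:

package main

import (
    "fmt"

    "github.com/safchain/ethtool"
)

func main() {
    // NewEthtool opens a handle for issuing SIOCETHTOOL ioctls.
    e, err := ethtool.NewEthtool()
    if err != nil {
        panic(err)
    }
    defer e.Close()

    // Driver name of the interface, e.g. "e1000e" or "virtio_net".
    driver, err := e.DriverName("eth0")
    if err != nil {
        panic(err)
    }
    fmt.Println("driver:", driver)

    // NIC statistics as a name -> counter map.
    stats, err := e.Stats("eth0")
    if err != nil {
        panic(err)
    }
    for name, value := range stats {
        fmt.Printf("%s: %d\n", name, value)
    }
}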

            kandi-support Support

              ethtool has a low-activity ecosystem.
              It has 87 stars, 55 forks, and 6 watchers.
              It had no major release in the last 12 months.
              There are 11 open issues and 12 closed issues. On average, issues are closed in 244 days. There are 6 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of ethtool is v0.3.0

            kandi-Quality Quality

              ethtool has 0 bugs and 0 code smells.

            kandi-Security Security

              ethtool has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              ethtool code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              ethtool is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              ethtool releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed ethtool and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality ethtool implements, and to help you decide if it suits your requirements.
            • main is the entry point for testing.
            • setFeatureBit sets a bit at the specified index.
            • NewEthtool creates a new ethtool handle.
            • SetChannels sets the channels for a given interface name.
            • MsglvlSet sets the message level for a given interface name.
            • DriverName returns the driver name for the given interface name.
            • BusInfo gets the bus information for the given interface name.
            • PermAddr returns the permanent hardware address of the given interface name.
            • Stats returns the statistics for the given interface name.
            • MsglvlGet returns the message level for the given interface name.

            ethtool Key Features

            No Key Features are available at this moment for ethtool.

            ethtool Examples and Code Snippets

            No Code Snippets are available at this moment for ethtool.

            Community Discussions

            QUESTION

            RSS hash for IP over GRE packet
            Asked 2022-Mar-14 at 15:03

            I am using a Mellanox Technologies MT27800 Family [ConnectX-5] NIC, using DPDK multi RX queue with RSS "ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP".

            I receive packets with ETH:IP:GRE:ETH:IP:UDP.

            I want the load balancing to be based on the inner IP + port and not on the GRE IP.

            I tried adding ETH_RSS_LEVEL_INNERMOST to the rss_hf, but I got an error: rss invalid value 0x800000003afbc should be 0xf00000000003afbc.

            I am using DPDK 21.11.0 - is it possible to do this, and how? If not, how else can I achieve it?

            Is it also supported in DPDK 19.11?

            ...

            ANSWER

            Answered 2022-Mar-12 at 16:36

            Support for GRE and/or inner RSS is touch and go for many NICs, so most NICs do not work out of the box. For example:

            1. For Intel Fortville (10Gbps, 25Gbps, 40Gbps) NICs: one needs to update the firmware and DDP profile to parse and compute inner RSS.
            2. For Intel Columbiaville (100Gbps, 50Gbps, 25Gbps) NICs: one needs to update the firmware and then update the driver. As I recollect, the default DDP profile parses GRE.
            3. In the case of Mellanox MLX5 (there are multiple variants of ConnectX-5 and ConnectX-6), some of them support GRE-based parsing and RSS, while others require ESWITCH to perform such actions.

            Hence, using testpmd, one needs to:

            1. enable MQ_RSS in the port and RX queue configuration
            2. enable the specific inner RSS via the RTE_FLOW API

            With the MT2892 Family [ConnectX-6 Dx], one can enable inner 5-tuple RSS for GRE-encapsulated packets with testpmd.

            1. Start packet generator (DPDK pktgen) use sudo ./usr/local/bin/pktgen --file-prefix=3 -a81:00.0,txq_inline_mpw=128 -l 6-27 -- -P -m "[8-11:12-27].0" -N -s"0:rtp_balanced_gre.pcap"
            2. Start testpmd in interactive mode with multiple RX queues using dpdk-testpmd --socket-mem=1024 --file-prefix=2 -l 7,8-23 -a0000:41:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128,rxq_pkt_pad_en=1,rxq_cqe_comp_en=4 -- --port-numa-config=0,0 --socket-num=0 --burst=128 --txd=8192 --rxd=8192 --mbcache=512 --rxq=16 --txq=16 --nb-cores=8 -a --forward-mode=io --numa --rss-udp --enable-rx-cksum --no-mlockall --no-lsc-interrupt -a --enable-rx-cksum --enable-drop-en --eth-link-speed=100000 --no-rmv-interrupt -i
            3. In interactive shell configure the rule as flow create 0 ingress pattern eth / ipv4 / gre / eth / ipv4 / udp / end actions rss level 2 types ip udp end queues 0 1 2 3 end / end.

            Note:

            1. Ensure you use the right format in the match field: eth / ipv4 / gre / eth / ipv4 / udp /
            2. If the rule is not set on the device, only RX queue 0 will receive the packets.

            Source https://stackoverflow.com/questions/71338802

            QUESTION

            Understanding Makefile rule
            Asked 2022-Mar-12 at 17:23

            I have been debugging a linking error for a specific target (Android); for all other targets, the build is successful.

            Error is something like

            ...

            ANSWER

            Answered 2022-Mar-12 at 17:23

            "@" simply silences the current line.

            The "-x" option determines the language, thus $(gcc -x c++) $< compiles the file (prerequisite) as C++ file.

            Source https://stackoverflow.com/questions/69945245

            QUESTION

            Why do I see strange UDP fragmentation on my C++ server?
            Asked 2021-Dec-21 at 11:44

            I have built a UDP server with C++ and I have a couple of questions about it.

            Goal:

            I have incoming TCP traffic and I need to send it onward as UDP traffic. My own UDP server then processes this UDP data. The size of the TCP packets can vary.

            Details:

            In my example I have a TCP packet that consists of a total of 2000 bytes (4 random bytes, 1995 'a' (0x61) bytes and the last byte being 'b' (0x62)).

            My UDP server has a buffer (the recvfrom buffer) with a size larger than 2000 bytes. My MTU size is 1500 everywhere.

            My server is receiving this packet correctly. In my UDP server I can see the received packet has a length of 2000 and if I check the last byte buffer[1999], it prints 'b' (0x62), which is correct. But if I open tcpdump -i eth0 I see only one UDP packet: 09:06:01.143207 IP 192.168.1.1.5472 > 192.168.1.2.9000: UDP, bad length 2004 > 1472. With the tcpdump -i eth0 -X command, I see the data of the packet, but only ~1472 bytes, which does not include the 'b' (0x62) byte.

            The ethtool -k eth0 command prints udp-fragmentation-offload: off.

            So my questions are:

            1. Why do I only see one packet and not two (fragmented parts 1 and 2)?
            2. Why don't I see the 'b' (0x62) byte in the tcpdump output?
            3. In my C++ server, what buffer size is best to use? I have it at 65535 now because the incoming TCP packets can be any size.
            4. What will happen if the size exceeds 65535 bytes? Will I have to implement my own fragmentation scheme before sending the TCP packet as UDP?
            ...

            ANSWER

            Answered 2021-Dec-21 at 08:58

            The size of the TCP packets can vary.

            While there is no code shown, the sentence above and your description suggest that you are working with wrong assumptions about how TCP works.

            Contrary to UDP, TCP is not a message-based protocol but a byte stream. This especially means that it does not guarantee that a single send at the sender will be matched by a single recv at the recipient. Thus, even if the send is done with 2000 bytes, it might still be that the first recv only gets 1400 bytes while another recv gets the rest - no matter whether everything would fit into the socket buffer at once.
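
            To make the byte-stream point concrete, here is a small Go sketch (Go rather than C++, matching the language of this library; the 4-byte length prefix, port, and framing are assumptions for illustration) in which the receiver keeps reading until a complete application-level message has arrived instead of assuming that one recv matches one send:

            package main

            import (
                "encoding/binary"
                "fmt"
                "io"
                "net"
            )

            // readMessage reads one length-prefixed message from a TCP byte stream.
            // TCP provides no message boundaries, so io.ReadFull keeps calling Read
            // until the requested number of bytes has arrived (or an error occurs).
            func readMessage(conn net.Conn) ([]byte, error) {
                var length uint32
                if err := binary.Read(conn, binary.BigEndian, &length); err != nil {
                    return nil, err
                }
                buf := make([]byte, length)
                if _, err := io.ReadFull(conn, buf); err != nil {
                    return nil, err
                }
                return buf, nil
            }

            func main() {
                ln, err := net.Listen("tcp", ":9000") // placeholder port
                if err != nil {
                    panic(err)
                }
                conn, err := ln.Accept()
                if err != nil {
                    panic(err)
                }
                defer conn.Close()

                msg, err := readMessage(conn)
                if err != nil {
                    panic(err)
                }
                if len(msg) > 0 {
                    fmt.Printf("received %d bytes, last byte %#x\n", len(msg), msg[len(msg)-1])
                }
            }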

            Source https://stackoverflow.com/questions/70432649

            QUESTION

            Kubelet service is not running. It seems like the kubelet isn't running or healthy
            Asked 2021-Dec-05 at 08:04

            I have configured 1 master and 2 workers. After successfully installing Kubernetes, worker1 joins the cluster fine, but I cannot join worker2 to the cluster because the kubelet service is not running. It seems like the kubelet isn't running or healthy.

            sudo kubectl get nodes:

            NAME STATUS ROLES AGE VERSION
            master1 Ready control-plane,master 23m v1.22.2
            node1 NotReady 4m13s v1.22.2

            I want to know why the kubelet service is not running.

            Here are the kubelet logs.

            ...

            ANSWER

            Answered 2021-Dec-05 at 08:04

            First, check that swap is disabled on your node, as you MUST disable swap in order for the kubelet to work properly.

            Source https://stackoverflow.com/questions/70229371

            QUESTION

            DPDK for general purpose workload
            Asked 2021-Nov-24 at 04:50

            I have deployed OpenStack and configured OVS-DPDK on compute nodes for high-performance networking. My workload is a general-purpose one, running haproxy, MySQL, Apache, XMPP, etc.

            When I did load testing, I found performance was average, and after a 200kpps packet rate I noticed packet drops. I have heard and read that DPDK can handle millions of packets, but in my case it's not true. In the guest I am using virtio-net, which processes packets in the kernel, so I believe my bottleneck is the guest VM.

            I don't have any guest-based DPDK application like testpmd etc. Does that mean OVS+DPDK isn't useful for my cloud? How do I take advantage of OVS+DPDK with a general-purpose workload?

            Updates

            We have our own load-testing tool which generates audio RTP traffic (pure UDP-based, 150-byte packets), and we noticed that after 200kpps the audio quality goes down and becomes choppy. In short, the DPDK host hits high PMD CPU usage and the load test shows bad audio quality. When I do the same test with an SRIOV-based VM, performance is really good.

            ...

            ANSWER

            Answered 2021-Nov-24 at 04:50

            When I did load-testing, I found performance is average and after 200kpps packet rate I noticed packet drops. In short DPDK host hit high PMD cpu usage and loadtest showing bad audio quality. when i do same test with SRI

            [Answer] This observation does not hold, based on the live debugging done so far. The reasons are stated below:

            1. The qemu instances launched were not pinned to specific cores.
            2. The comparison of PCIe pass-through (VF) against vhost-client is not an apples-to-apples comparison.
            3. With the OpenStack approach, there are at least 3 bridges the packets flow through before reaching the VM.
            4. The OVS threads were not pinned, which led to all the PMD threads running on the same core (causing latency and drops) at each bridge stage.

            To have a fair comparison against the SRIOV approach, the following changes have been made with respect to the similar question

            Source https://stackoverflow.com/questions/69783902

            QUESTION

            Proper DPDK device and port initialization for transmission
            Asked 2021-Sep-11 at 14:23

            While writing a simple DPDK packet generator I noticed some additional initialization steps that are required for reliable and successful packet transmission:

            1. calling rte_eth_link_get() or rte_eth_timesync_enable() after rte_eth_dev_start()
            2. waiting 2 seconds before sending the first packet with rte_eth_tx_burst()

            So these steps are necessary when I use the ixgbe DPDK vfio driver with an Intel X553 NIC.

            When I'm using the AF_PACKET DPDK driver, it works without those extra steps.

            • Is this a bug or a feature?
            • Is there a better way than waiting 2 seconds before the first send?
            • Why is the wait required with the ixgbe driver? Is this a limitation of that NIC or the involved switch (Mikrotik CRS326 with Marvell chipset)?
            • Is there a more idiomatic function to call than rte_eth_link_get() in order to complete the device initialization for transmission?
            • Is there some way to keep the VFIO NIC initialized (while keeping its link up) to avoid re-initializing it over and over again during the usual edit/compile/test cycle? (i.e. to speed up that cycle ...)

            Additional information: When I connect the NIC to a mirrored port (which is configured via Mikrotik's mirror-source/mirror-target ethernet switch settings) and the sleep(2) is removed then I see the first packet transmitted to the mirror target but not to the primary destination. Thus, it seems like the sleep is necessary to give the switch some time after the link is up (after the dpdk program start) to completely initialize its forwarding table or something like that?

            Waiting just 1 second before the first transmission works less reliably, i.e. the packet reaches the receiver only every other time.

            My device/port initialization procedure implements the following setup sequence:

            ...

            ANSWER

            Answered 2021-Sep-10 at 21:01

            The difference between the two attempted approaches

            In order to use af_packet PMD, you first bind the device in question to the kernel driver. At this point, a kernel network interface is spawned for that device. This interface typically has the link active by default. If not, you typically run ip link set dev up. When you launch your DPDK application, af_packet driver does not (re-)configure the link. It just unconditionally reports the link to be up on device start (see https://github.com/DPDK/dpdk/blob/main/drivers/net/af_packet/rte_eth_af_packet.c#L266) and vice versa when doing device stop. Link update operation is also no-op in this driver (see https://github.com/DPDK/dpdk/blob/main/drivers/net/af_packet/rte_eth_af_packet.c#L416).

            In fact, with the af_packet approach, the link is already active at the time you launch the application. Hence there is no need to await the link.

            With the VFIO approach, the device in question has its link down, and it is the responsibility of the corresponding PMD to activate it. Hence the need to test the link status in the application.

            Is it possible to avoid waiting on application restarts?

            Long story short, yes. Awaiting link status is not the only problem with application restarts. You effectively re-initialise EAL as a whole when you restart, and that procedure is also eye-wateringly time consuming. In order to cope with that, you should probably check out multi-process support available in DPDK (see https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html).

            This requires that you re-implement your application to have its control logic in one process (the primary process) and the Rx/Tx datapath logic in another (the secondary process). This way, you can keep the first one running all the time and restart the second one when you need to change the Rx/Tx logic or re-compile. Restarting the secondary process re-attaches to the existing EAL instance each time. Hence no PMD restart is involved, and there is no longer a need to wait.

            Source https://stackoverflow.com/questions/69133896

            QUESTION

            Why Is WOL (Wake-on-LAN) Related To The Operating System?
            Asked 2021-Sep-11 at 07:16

            Wikipedia says:

            Wake-on-LAN (WoL) is an Ethernet or Token Ring computer networking standard that allows a computer to be turned on or awakened by a network message.

            But, in another section:

            Responding to the magic packet ... Most WoL hardware functionality is typically blocked by default and needs to be enabled using the system BIOS. Further configuration from the OS is required in some cases, for example via the Device Manager network card properties on Windows operating systems.

            Why? Why do we need to also enable WoL in the OS?

            The Problem:

            My actual problem arose when I implemented a WOL program to turn on other PCs in a network (connected by LAN) from a local server. It failed, because it needs some extra configuration on the PCs:

            1. The configurations differ from OS to OS (and from version to version).
            2. Some of the configurations are not permanent and need to be redone at every OS startup (for example, in Ubuntu 16.04 I had to run ethtool -s eno1 wol g).

            Is there any way to bypass the OS configuration and enable WOL only from the BIOS settings? Or is it a problem with my code?

            WOL Example: ...

            ANSWER

            Answered 2021-Sep-11 at 07:16

            The OS is involved only to the extent that there's not a standardized way to enable WoL for all hardware. Therefore, you typically need a device driver for the specific hardware to be able to enable the hardware's capability. Loading the OS usually gives you such a device driver.

            Running ethtool at every startup should be fairly trivial, especially since (at least, if memory serves) running it twice (or more) should be harmless, so you can add it to (for one example) your .bashrc. If you need to ensure it really only happens once when you start up, instead of every time you log in, you can add an init script to do that. man init-d-script should get you going pretty easily.

            You have to enable it because most BIOSes leave it disabled by default, so unless you enable it, it won't work.

            As to why they disable it by default: I'm less certain, but my guess is that it's simply because most people don't use it.
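
            For reference, the "network message" that wakes the machine is the WoL magic packet: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually sent as a UDP broadcast. Below is a minimal Go sketch; the MAC, broadcast address, and port 9 are placeholders, since the NIC only pattern-matches the payload:

            package main

            import "net"

            // magicPacket builds the Wake-on-LAN payload: 6 bytes of 0xFF followed by
            // the target MAC address repeated 16 times.
            func magicPacket(mac net.HardwareAddr) []byte {
                pkt := make([]byte, 0, 6+16*len(mac))
                for i := 0; i < 6; i++ {
                    pkt = append(pkt, 0xFF)
                }
                for i := 0; i < 16; i++ {
                    pkt = append(pkt, mac...)
                }
                return pkt
            }

            func main() {
                // Placeholder MAC of the machine to wake.
                mac, err := net.ParseMAC("aa:bb:cc:dd:ee:ff")
                if err != nil {
                    panic(err)
                }
                // Subnet broadcast address and UDP port 9 are common conventions;
                // the hardware does not care about the port, only the payload.
                conn, err := net.Dial("udp", "192.168.1.255:9")
                if err != nil {
                    panic(err)
                }
                defer conn.Close()
                if _, err := conn.Write(magicPacket(mac)); err != nil {
                    panic(err)
                }
            }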

            Source https://stackoverflow.com/questions/69140443

            QUESTION

            Why am I receiving packets bigger than the MTU with raw packets?
            Asked 2021-Aug-23 at 08:52

            I am trying to transfer packets from one interface to another using raw packets (just for playing around). First I focused on received packets.

            On my machine (Arch Linux, with IP 192.168.30.3) I created this code:

            ...

            ANSWER

            Answered 2021-Aug-23 at 08:52

            Since the observed total packet length is way greater than that of a typical jumbo frame (MTU 9k), it's apparent that the receiver side employs either Large Receive Offload (LRO) or Generic Receive Offload (GRO), thus reassembling smaller packets into larger ones at the network interface driver level. This might explain why the packet socket in question sees already reassembled (large) packets.

            In this specific case, ethtool -k output indicates clearly that LRO is always disabled whilst GRO is indeed active and can be adjusted. As per the discussion in comments, disabling GRO indeed bears fruit.
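
            Since this page is about the Go ethtool library, the same check can in principle be done programmatically. The sketch below assumes the library exposes Features and Change helpers mirroring ethtool -k / -K, and that "rx-gro" is the relevant kernel feature string; verify both against the package documentation:

            package main

            import (
                "fmt"

                "github.com/safchain/ethtool"
            )

            func main() {
                e, err := ethtool.NewEthtool()
                if err != nil {
                    panic(err)
                }
                defer e.Close()

                // List offload features, similar to `ethtool -k eth0`.
                features, err := e.Features("eth0")
                if err != nil {
                    panic(err)
                }
                fmt.Println("rx-gro enabled:", features["rx-gro"])

                // Turn GRO off, similar to `ethtool -K eth0 gro off` (needs privileges).
                if err := e.Change("eth0", map[string]bool{"rx-gro": false}); err != nil {
                    panic(err)
                }
            }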

            Source https://stackoverflow.com/questions/68883831

            QUESTION

            Docker compose fails to start a service with an error 'unknown option' but docker-compose build on the same command is a success
            Asked 2021-Jun-07 at 12:56

            I have a project which has a docker-compose file and a Dockerfile. The project is available on GitHub.

            I'm building a demo project with:

            • Traefik
            • Snort 3
            • A NodeJS API dummy for testing

            The issue is that in my Dockerfile I have a command like this to run Snort

            ...

            ANSWER

            Answered 2021-Jun-07 at 12:56

            Your entrypoint is conflicting with the command you want to run:

            Source https://stackoverflow.com/questions/67869735

            QUESTION

            Low throughput with XDP_TX in comparison with XDP_DROP/REDIRECT
            Asked 2021-Mar-19 at 12:55

            I have developed an XDP program that filters packets based on some specific rules and then either drops them (XDP_DROP) or redirects them (xdp_redirect_map) to another interface. This program was easily able to process a synthetic load of ~11Mpps (that's all my traffic generator is capable of) on just four CPU cores.

            Now I've changed the program to use XDP_TX to send the packets out on the interface they were received on, instead of redirecting them to another interface. Unfortunately, this simple change caused a big drop in throughput, and now it barely handles ~4Mpps.

            I don't understand what could be the cause of this or how to debug it further, which is why I'm asking here.

            My minimal test setup to reproduce the issue:

            • Two machines with Intel x520 SFP+ NICs directly connected to each other; both NICs are configured to have as many "combined" queues as the machine has CPU cores.
            • Machine 1 runs pktgen using a sample application from the linux sources: ./pktgen_sample05_flow_per_thread.sh -i ens3 -s 64 -d 1.2.3.4 -t 4 -c 0 -v -m MACHINE2_MAC (4 threads, because this was the config that resulted in the highest generated Mpps even though the machine has way more than 4 cores)
            • Machine 2 runs a simple program that drops (or reflects) all packets and counts the pps. In that program, I've replaced the XDP_DROP return code with XDP_TX. Whether I swap the src/dst MAC addresses before reflecting the packet never made a difference in throughput, so I'm leaving that out here.

            When running the program with XDP_DROP, 4 cores on Machine 2 are slightly loaded with ksoftirqd threads while dropping around ~11Mpps. That only 4 cores are loaded makes sense, given that pktgen sends out 4 different packets that fill only 4 rx queues because of how the hashing in the NIC works.

            But when running the program with XDP_TX, one of the cores is ~100% busy with ksoftirqd and only ~4Mpps are processed. Here I'm not sure why that happens.

            Do you have an idea, what might be causing this throughput drop and CPU usage increase?

            Edit: Here are some more details about the configuration of Machine 2:

            ...

            ANSWER

            Answered 2021-Mar-19 at 12:55

            I've now upgraded the kernel of Machine 2 to 5.12.0-rc3 and the issue disappeared. Looks like this was a kernel issue.

            If somebody knows more about this or has a changelog regarding this, please let me know.

            Source https://stackoverflow.com/questions/66694072

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install ethtool

            In order to run te.
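
            Assuming a standard Go modules setup, the library is typically fetched with the Go toolchain (the module path is taken from the clone URLs below):

            go get github.com/safchain/ethtool

            It is then imported in code as "github.com/safchain/ethtool".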

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/safchain/ethtool.git

          • CLI

            gh repo clone safchain/ethtool

          • sshUrl

            git@github.com:safchain/ethtool.git

            Consider Popular Go Libraries

            • go by golang
            • kubernetes by kubernetes
            • awesome-go by avelino
            • moby by moby
            • hugo by gohugoio

            Try Top Libraries by safchain

            • goebpf (Go)
            • ejabberd-go-auth (Go)
            • asterisk-scale-poc (Python)
            • sa_dsp (C)
            • jstreamsourcer (Java)