lkml | Linux Kernel Mailing List viewer | TCP library

 by sjp38 | Go | Version: Current | License: No License

kandi X-RAY | lkml Summary


lkml is a Go library typically used in networking and TCP applications. It has no reported bugs or vulnerabilities, but community support is low. You can download it from GitHub.

lkml is a simple, stupid lkml (Linux Kernel Mailing List) viewer.

            Support

              lkml has a low-activity ecosystem.
              It has 5 star(s) with 1 fork(s). There are 2 watchers for this library.
              It had no major release in the last 6 months.
              lkml has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of lkml is current.

            Quality

              lkml has no bugs reported.

            Security

              lkml has no reported vulnerabilities, and neither do its dependent libraries.

            License

              lkml does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            Reuse

              lkml releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.


            lkml Key Features

            No Key Features are available at this moment for lkml.

            lkml Examples and Code Snippets

            No Code Snippets are available at this moment for lkml.

            Community Discussions

            QUESTION

            "address_space()" definition as a "Sparse" annotation in the linux kernel
            Asked 2020-Oct-29 at 00:59

            I came across some macros with the definitions like this:

            ...

            ANSWER

            Answered 2020-Jul-25 at 00:18

            address_space() and noderef are attributes. What may be confusing is that they are not GCC attributes. They are Sparse attributes and so they only are meaningful to Sparse when __CHECKER__ is defined and Sparse is enabled.

            The address_space() attribute puts a specific restriction on a pointer, marking it as belonging to a given class. My understanding is that the argument numbers were chosen arbitrarily and denote that a pointer belongs to a certain class. Thus, if you follow the rules, you should clearly annotate a pointer as belonging to a specific class (such as __user or __iomem, etc.) and not mix pointers from different classes. Using these annotations, Sparse as a static checker helps you spot cases of incorrect usage.

            For address_space(2), a.k.a. __iomem, I have found a good description here: What is the use of __iomem in linux while writing device drivers? The post from Linus is an excellent description as well.

            Besides {0,1,2,3} there's also {4} which marks __rcu pointers as a separate class. I don't think there are more at this time.

            Source https://stackoverflow.com/questions/63071847

            QUESTION

            Perf Imprecise Call-Graph Report
            Asked 2020-Apr-23 at 18:10

            Recent Intel processors provide a hardware feature, Precise Event-Based Sampling (PEBS), that captures precise information about the CPU state when some sampled CPU event (call it e) fires. Here is an extract from the Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 3:

            18.15.7 Processor Event-Based Sampling (PEBS)

            The debug store (DS) mechanism in processors based on Intel NetBurst microarchitecture allows two types of information to be collected for use in debugging and tuning programs: PEBS records and BTS records.

            Based on Chapter 17 of the same reference, the DS format for the x86-64 architecture is as follows: the BTS buffer records the last N executed branches (N depends on the microarchitecture), while the PEBS buffer records the following registers. If I understand correctly, a counter is set and each occurrence of the event (e) increments its value. When the counter overflows, an entry is added to both of these buffers. Finally, when these buffers reach a certain size (BTS Absolute Maximum and PEBS Absolute Maximum), an interrupt is generated and the contents of the two buffers are dumped to disk. This happens periodically. It seems that the --call-graph dwarf backtrace data is also extracted in the same handler, right?

            1) Does this mean that LBR and PEBS (--call-graph --lbr) state, perfectly, match together?

            2) How about the --call-graph dwarf output, which is not part of PEBS (as seems obvious in the above reference)? (Some RIP/RSPs do not match the backtrace)

            Precisely, here is an LKML thread where Milian Wolff shows that the answer to the second question is no, though I do not fully understand the reason.

            The answer to the first question is also no (as expressed by Andi Kleen in later messages of the thread), which I do not understand at all.

            3) Does this mean that the whole DWARF call-graph information is completely corrupted?

            The above thread does not show this, and in my experiments I do not see any RIP not matching the backtrace. In other words, can I trust the majority of the backtraces?

            I would prefer to avoid the LBR method, which may itself be imprecise. It is also limited in the size of the backtrace. There is a patch to overcome the size issue, but it is recent and may be bogus.

            UPDATE:

            • How is it possible to force Perf to store only a single record in PEBS Buffer? Is it only possible to force this configuration, indirectly, e.g., when call-graph information is required for a PEBS event?
            ...

            ANSWER

            Answered 2020-Apr-19 at 03:54

            1) The section of the manual you quoted talks about BTS, not LBR: they are not the same thing. Later in that same thread you quoted, Andi Kleen seems to indicate that the LBR snap time is actually the moment of the PMI (the interrupt that runs the handler) and not the PEBS moment. So I think all three stack approaches have the same problem.

            2) DWARF stack captures definitely do not correspond exactly to the PEBS entry. The PEBS event is recorded by the hardware at runtime, and then only some time later is the CPU interrupted, at which point the stack is unwound. If the PEBS buffer is configured to hold only a single entry, these two things should at least be close and if you are lucky, the PEBS IP will be in the same function that is still at the top of the stack when the handler runs. In that case, the stack is basically correct. Since perf shows you the actual PEBS IP at the top, plus the frames below that from the capture, this ends up working in that case.

            3) If you aren't lucky, the function will have changed between the PEBS capture and the handler running. In this case you get a franken-stack that doesn't make sense: the top function may not be callable from the second-from-the-top function (or something). It is not totally corrupted: it's just that everything except the top frame comes from a point after the PEBS stack was captured, and the top frame comes from PEBS, or something like that. This applies also to --call-graph fp, for the same reasons.

            Most likely you never saw an invalid IP because perf shows the IP from the PEBS sample (that's the theme of that whole thread). I think if you look into the raw sample, you can see both the PEBS IP, and the handler IP, and you can see they usually won't match.

            Overall, you can trust the backtraces for "time" or "cycle" profiling, since they are in some sense an accurate sampling representation of execution time: they just don't correspond to the PEBS moment but to some time later (and why would that later time be any worse than the PEBS time?). Basically, for this type of profiling you don't really need PEBS at all.

            If you are using a different type of event, and you want fine-grained accounting of where the event took place, that's what PEBS is for. You often don't need a stack trace: just the top frame is enough. If you want stack traces, use them, but know they come from a moment in time a bit later, or use --lbr (if that works).

            Source https://stackoverflow.com/questions/61296464

            QUESTION

            Raspberry Pi 4 used as BLE central device connection issues
            Asked 2020-Mar-23 at 16:00

            For a professional project, I have to use an RPi 4 as a central device to connect to a particular peripheral device. For testing, I developed a program to simulate a peripheral on an RPi (thanks to the bleno Node.js module) by setting up a GATT server, and I use another RPi as my central with the bluepy Python library.

            Everything worked fine, but when I set the advertising interval higher than 4000 ms on my peripheral, the connection no longer works (with either the RPi GATT server or the production peripheral).

            I tried gatttool/hcitool on my central: same issue; it works well, but only if the advertising interval is less than 4000 ms. Yet connecting to my GATT server from my phone with a dedicated application (nRF Connect) works.

            After some research, I found that the Linux kernel only validates connection-interval values within the range 7.5 ms - 4000 ms (https://lkml.org/lkml/2019/8/2/358), which matches my experimental values exactly. But unless I misunderstand something about BLE, the connection interval and the advertising interval are totally independent, so this should not be a problem. In the Bluetooth documentation, I found that the maximum advertising interval should be 10240 ms. There is something I don't understand.

            Here is my GATT server running on one RPi4 if you want to reproduce that. I start it using sudo BLENO_ADVERTISING_INTERVAL=xxxx node my_gatt_server.js with xxxx the advertising interval I want in ms.

            ...

            ANSWER

            Answered 2019-Aug-05 at 14:59

            4000 ms is a quite large advertising interval. Anyway, bluepy uses BlueZ, and when BlueZ connects, it first scans and when it finds the wanted device, it stops scanning and initiates a connection. During this connection attempt, there is a timeout before starting scanning again. It could be that this 4000 ms advertising interval is larger than the timeout. It's commonly 2 seconds I think.

            Please start "sudo btmon" on your central device before attempting to connect, let it capture and print the HCI packets and post the output here.

            Advertising interval and Connection interval are completely independent.

            Source https://stackoverflow.com/questions/57360580

            QUESTION

            What is a retpoline and how does it work?
            Asked 2019-Jun-20 at 14:24

            In order to mitigate against kernel or cross-process memory disclosure (the Spectre attack), the Linux kernel[1] will be compiled with a new gcc option, -mindirect-branch=thunk-extern, that performs indirect calls through a so-called retpoline.

            This appears to be a newly invented term as a Google search turns up only very recent use (generally all in 2018).

            What is a retpoline and how does it prevent the recent kernel information disclosure attacks?

            [1] It's not Linux-specific, however; a similar or identical construct seems to be used as part of the mitigation strategies on other OSes.

            ...

            ANSWER

            Answered 2018-Jul-20 at 08:39

            The article written by Google's Paul Turner, mentioned by sgbj in the comments, explains the following in much more detail, but I'll give it a shot:

            As far as I can piece this together from the limited information at the moment, a retpoline is a return trampoline that uses an infinite loop that is never executed to prevent the CPU from speculating on the target of an indirect jump.

            The basic approach can be seen in Andi Kleen's kernel branch addressing this issue:

            It introduces the new __x86.indirect_thunk call that loads the call target, whose memory address (which I'll call ADDR) is stored on top of the stack, and executes the jump using the RET instruction. The thunk itself is then called using the NOSPEC_JMP/CALL macro, which was used to replace many (if not all) indirect calls and jumps. The macro simply places the call target on the stack and sets the return address correctly, if necessary (note the non-linear control flow):

            Source https://stackoverflow.com/questions/48089426

            QUESTION

            Is using array arguments in C considered bad practice?
            Asked 2019-Jan-14 at 22:41

            When declaring a function that accesses several consecutive values in memory, I usually use array arguments like

            ...

            ANSWER

            Answered 2019-Jan-14 at 22:41

            QUESTION

            TCP: When is EPOLLHUP generated?
            Asked 2018-Oct-25 at 19:04

            Also see this question, unanswered as of now.

            There is a lot of confusion about EPOLLHUP, even in the man pages and kernel docs. People seem to believe it is returned when polling on a descriptor locally closed for writing, i.e. after shutdown(SHUT_WR), the same call that causes an EPOLLRDHUP at the peer. But this is not true: in my experiments I get EPOLLOUT, and no EPOLLHUP, after shutdown(SHUT_WR). (Yes, it's counterintuitive to get "writable" when the writing half is closed, but this is not the main point of the question.)

            The man page is poor: it says EPOLLHUP arrives when "hang up happened on the associated file descriptor", without saying what "hang up" means. What did the peer do? What packets were sent? This other article just confuses things further and seems outright wrong to me.

            My experiments show that EPOLLHUP arrives once EOFs (FIN packets) have been exchanged both ways, i.e. once both sides have issued shutdown(SHUT_WR). It has nothing to do with SHUT_RD, which I never call, and nothing to do with close either. In terms of packets, I suspect that EPOLLHUP is raised on the ACK of the host's own sent FIN: the termination initiator raises this event in step 3 of the 4-way shutdown handshake, and the peer in step 4 (see here). If confirmed, this is great, because it fills a gap I've been trying to close, namely how to poll non-blocking sockets for the final ACK without LINGER. Is this correct?

            (note: I'm using ET, but I don't think it's relevant for this)

            Sample code and output.

            Since the code lives in a framework, I extracted the meat of it, except for TcpSocket::createListener, TcpSocket::connect and TcpSocket::accept, which do what you'd expect (not shown here).

            ...

            ANSWER

            Answered 2018-Oct-25 at 19:04

            For this kind of question, use the source! Among other interesting comments, there is this text:

            EPOLLHUP is UNMASKABLE event (...). It means that after we received EOF, poll always returns immediately, making impossible poll() on write() in state CLOSE_WAIT. One solution is evident --- to set EPOLLHUP if and only if shutdown has been made in both directions.

            And then the only code that sets EPOLLHUP:

            Source https://stackoverflow.com/questions/52976152

            QUESTION

            Cannot open /proc/self/oom_score_adj when I have the right capability
            Asked 2018-Jun-21 at 13:00

            I'm trying to set the OOM killer score adjustment for a process, inspired by oom_adjust_setup in OpenSSH's port_linux.c. To do that, I open /proc/self/oom_score_adj, read the old value, and write a new value. Obviously, my process needs to be root or have the capability CAP_SYS_RESOURCE to do that.

            I'm getting a result that I can't explain. When my process doesn't have the capability, I'm able to open that file and read and write values, though the value I write doesn't take effect (fair enough):

            ...

            ANSWER

            Answered 2018-Jun-20 at 17:05

            This one was very interesting to crack; it took me a while.

            The first real hint was this answer to a different question (credit where due): https://unix.stackexchange.com/questions/364568/how-to-read-the-proc-pid-fd-directory-of-a-process-which-has-a-linux-capabil

            The reason it does not work as is

            The real reason you get "permission denied" there is that files under /proc/self/ are owned by root if the process has any capabilities: it's not about CAP_SYS_RESOURCE or about the oom_* files specifically. You can verify this by calling stat with different capabilities. Quoting man 5 proc:

            /proc/[pid]

            There is a numerical subdirectory for each running process; the subdirectory is named by the process ID.

            Each /proc/[pid] subdirectory contains the pseudo-files and directories described below. These files are normally owned by the effective user and effective group ID of the process. However, as a security measure, the ownership is made root:root if the process's "dumpable" attribute is set to a value other than 1. This attribute may change for the following reasons:

            • The attribute was explicitly set via the prctl(2) PR_SET_DUMPABLE operation.

            • The attribute was reset to the value in the file /proc/sys/fs/suid_dumpable (described below), for the reasons described in prctl(2).

            Resetting the "dumpable" attribute to 1 reverts the ownership of the /proc/[pid]/* files to the process's real UID and real GID.

            This already hints to the solution, but first let's dig a little deeper and see that man prctl:

            PR_SET_DUMPABLE (since Linux 2.3.20)

            Set the state of the "dumpable" flag, which determines whether core dumps are produced for the calling process upon delivery of a signal whose default behavior is to produce a core dump.

            In kernels up to and including 2.6.12, arg2 must be either 0 (SUID_DUMP_DISABLE, process is not dumpable) or 1 (SUID_DUMP_USER, process is dumpable). Between kernels 2.6.13 and 2.6.17, the value 2 was also permitted, which caused any binary which normally would not be dumped to be dumped readable by root only; for security reasons, this feature has been removed. (See also the description of /proc/sys/fs/suid_dumpable in proc(5).)

            Normally, this flag is set to 1. However, it is reset to the current value contained in the file /proc/sys/fs/suid_dumpable (which by default has the value 0), in the following circumstances:

            • The process's effective user or group ID is changed.

            • The process's filesystem user or group ID is changed (see credentials(7)).

            • The process executes (execve(2)) a set-user-ID or set-group-ID program, resulting in a change of either the effective user ID or the effective group ID.

            • The process executes (execve(2)) a program that has file capabilities (see capabilities(7)), but only if the permitted capabilities gained exceed those already permitted for the process.

            Processes that are not dumpable can not be attached via ptrace(2) PTRACE_ATTACH; see ptrace(2) for further details.

            If a process is not dumpable, the ownership of files in the process's /proc/[pid] directory is affected as described in proc(5).

            Now it's clear: our process has a capability that the shell used to launch it did not have, thus the dumpable attribute was set to false, thus files under /proc/self/ are owned by root rather than the current user.

            How to make it work

            The fix is as simple as re-setting that dumpable attribute before trying to open the file. Stick the following or something similar before opening the file:

            Source https://stackoverflow.com/questions/50863306

            QUESTION

            Detecting Integer Constant Expressions in Macros
            Asked 2018-Jun-18 at 02:55

            There was a discussion in the Linux kernel mailing list regarding a macro that tests whether its argument is an integer constant expression and is an integer constant expression itself.

            One particularly clever approach that does not use builtins, proposed by Martin Uecker (taking inspiration from glibc's tgmath.h), is:

            ...

            ANSWER

            Answered 2018-Mar-26 at 03:27

            Use the same idea, where the type of a ?: expression depends on whether an argument is a null pointer constant or an ordinary void *, but detect the type with _Generic:

            Source https://stackoverflow.com/questions/49480442

            QUESTION

            why does "1 ? (int*)1 : ((void*)((x) * 0l))" work correctly?
            Asked 2018-Apr-02 at 19:53

            I have been following the C-language hack for detecting integer constant expressions using macros, an idea by Martin Uecker: https://lkml.org/lkml/2018/3/20/805

            When I started playing around with the code, I found weird behavior when interchanging the expressions of the ternary operator.

            See the below code,

            ...

            ANSWER

            Answered 2018-Apr-02 at 19:53

            So let's get through the standard. C11 6.3.2.3p3

            1. An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. [...]

            In the construct (void*)((x) * 0l), the value is clearly always 0 for any integral x, so on most platforms it always yields a null pointer; but it is a null pointer constant iff x is an integer constant expression.

            Now, the question is why x ? (void *)0 : (int *)1 works the same as x ? (int *)1 : (void *)0 when wrapped in sizeof(* (...)). For that we need to read 6.5.15p6:

            1. If both the second and third operands are pointers or one is a null pointer constant and the other is a pointer, the result type is a pointer to a type qualified with all the type qualifiers of the types referenced by both operands. Furthermore, if both operands are pointers to compatible types or to differently qualified versions of compatible types, the result type is a pointer to an appropriately qualified version of the composite type; if one operand is a null pointer constant, the result has the type of the other operand; otherwise, one operand is a pointer to void or a qualified version of void, in which case the result type is a pointer to an appropriately qualified version of void.

            So if either the 2nd or 3rd expression is a null pointer constant, the result of the conditional operator has the type of the other expression, i.e. both x ? (void *)0 : (int *)1 and x ? (int *)1 : (void *)0 have type int *!

            And in other case, i.e. x ? (void *)non_constant : (int *)1 the latter part of the bolded text says that the type of that expression must be void *.

            The type is completely decided during the compilation phase, based on the types of the second and the third operands; and the value of the first operand plays no role in it. If this is then used in a sizeof then the entire expression becomes just another integer constant expression.

            Then comes the dubious part: sizeof(* ...). The pointer is "dereferenced", and the sizeof of the dereferenced value is used in the calculation. ISO C does not allow evaluating sizeof(void) (indeed, it does not even allow dereferencing a void *), but GCC defines sizeof(void) to be 1.

            As Linus said: "That is either genius, or a seriously diseased mind. - I can't quite tell which."

            By the way, for many similar uses the GCC builtin __builtin_constant_p would be sufficient; but while it returns 1 for all integer constant expressions, it also returns 1 for any other expression for which the optimizer can substitute a constant. The difference matters because only integer constant expressions can be used, for example, as non-VLA array dimensions, in static initializers, and as bit-field widths.

            Source https://stackoverflow.com/questions/49615338

            QUESTION

            Communicating with Linux kernel developers
            Asked 2018-Feb-02 at 20:57

            What is the correct protocol for communicating with Linux kernel developers with a question or potential bug in a particular section of code?

            I can do git blame -e and email the person who last touched the code, but they may not be the best person to look into it, and if they are on vacation (or too busy to respond) it could be a black hole.

            I can ask on linux-kernel@vger.kernel.org, but that looks like a very high volume mailing list and I'd be concerned about the email being lost in the noise or bothering too many people about a very specific question.

            I can ask on this site, but if it's a highly specific question about code that isn't easily understandable, it is unlikely I'd get a good response.

            Is there an officially recommended way to determine who is the best person to ask about a section of Linux kernel code?

            ...

            ANSWER

            Answered 2017-Aug-23 at 19:41

            You could use the get_maintainer.pl script to see who's responsible for the specific file in question. Use it like so from the linux dir:

            perl scripts/get_maintainer.pl [OPTIONS] -f <file>

            The kernel newbies mailing list can also be a good place to start.

            Source https://stackoverflow.com/questions/45847915

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install lkml

            Installation is very simple: if you have already set up a Go development environment, just use the go get tool.

            Support

            For new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check and ask on Stack Overflow.

            CLONE
          • HTTPS: https://github.com/sjp38/lkml.git
          • GitHub CLI: gh repo clone sjp38/lkml
          • SSH: git@github.com:sjp38/lkml.git


            Consider Popular TCP Libraries

            masscan by robertdavidgraham
            wait-for-it by vishnubob
            gnet by panjf2000
            Quasar by quasar
            mumble by mumble-voip

            Try Top Libraries by sjp38

            ash by sjp38 (Python)
            kakaobot by sjp38 (Go)
            CPU2006-Express by sjp38 (Shell)
            goOnAndroidFA by sjp38 (Java)
            stream-track by sjp38 (Python)