procfs | Rust library for reading the Linux procfs filesystem | File Utils library

 by eminence | Language: Rust | Version: v0.15.1 | License: Non-SPDX

kandi X-RAY | procfs Summary

procfs is a Rust library typically used in Utilities and File Utils applications. It has no reported bugs or vulnerabilities, but it has low support and a Non-SPDX license. You can download it from GitHub or GitLab.

This crate is an interface to the proc pseudo-filesystem on Linux, which is normally mounted as /proc. Long-term, this crate aims to be fairly feature complete, but at the moment not all files are exposed. See the docs for info on what’s supported, or view the support.md file in the code repository.

            kandi-support Support

              procfs has a low-activity ecosystem.
              It has 277 stars, 83 forks, and 6 watchers.
              It has had no major release in the last 12 months.
              There are 7 open issues and 54 closed issues; on average, issues are closed in 117 days. There are 3 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of procfs is v0.15.1.

            kandi-Quality Quality

              procfs has no bugs reported.

            kandi-Security Security

              procfs has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              procfs has a Non-SPDX License.
              A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

            kandi-Reuse Reuse

              procfs releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.


            procfs Key Features

            No Key Features are available at this moment for procfs.

            procfs Examples and Code Snippets

            No Code Snippets are available at this moment for procfs.

            Community Discussions

            QUESTION

            How to disable reading or writing functionality on a proc file?
            Asked 2021-Apr-16 at 23:22

             I am creating a proc file (/proc/key) that a user can write a decryption key to; that key is then used to decrypt the contents of a buffer stored inside a kernel module. I also have another proc entry (/proc/decrypted) that is used to read the contents of the buffer holding the decrypted text.

             The problem is that I don't want the user to be able to write anything to /proc/decrypted, and I don't want them to read anything from /proc/key. How can this be implemented?

             I have pointed the corresponding function pointers inside the file_operations struct to NULL, but obviously this causes segmentation faults once the user tries them.

             How can I prevent reading from or writing to a procfs entry? Should I just create functions with empty bodies and point the file_operations struct at them when needed?

            ...

            ANSWER

            Answered 2021-Apr-16 at 22:45

             If you want to disallow reading, you can just omit explicitly setting the .read field of struct file_operations. If the structure is defined as static and therefore initialized to 0, all fields that are not explicitly overridden will default to NULL, and the kernel will simply not do anything and return an error (I believe -EINVAL) whenever user code tries to call read on your open file.

            Alternatively, if you want to return a custom error, you can define a dummy function which only returns an error (like for example return -EFAULT;).

             Do you think the way I am "writing" the key into the buffer is the right way to do it?

            This is wrong for multiple reasons.

             First, your copy_from_user() blindly trusts the user-supplied count, so this can result in a kernel buffer overflow of the temp variable, which is pretty bad. You need to check and/or limit the size first. You are also not checking the return value of copy_from_user(), which you should (and it is not an int, but rather an unsigned long).
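
             As a rough illustration of both points, here is a hedged sketch for a pre-5.6 kernel, which still registers proc entries through struct file_operations (newer kernels use struct proc_ops); the names key_write, key_buf, and KEY_MAX_LEN are hypothetical:

                 #include <linux/fs.h>
                 #include <linux/module.h>
                 #include <linux/proc_fs.h>
                 #include <linux/uaccess.h>

                 #define KEY_MAX_LEN 64                        /* hypothetical size limit */
                 static char key_buf[KEY_MAX_LEN];

                 /* Write handler for /proc/key: bound the count, check copy_from_user(). */
                 static ssize_t key_write(struct file *file, const char __user *ubuf,
                                          size_t count, loff_t *ppos)
                 {
                     if (count > KEY_MAX_LEN - 1)
                         count = KEY_MAX_LEN - 1;              /* clamp instead of overflowing */
                     if (copy_from_user(key_buf, ubuf, count)) /* returns bytes NOT copied */
                         return -EFAULT;
                     key_buf[count] = '\0';
                     return count;
                 }

                 /* .read is deliberately left NULL: reads fail instead of oopsing. */
                 static const struct file_operations key_fops = {
                     .owner = THIS_MODULE,
                     .write = key_write,
                 };

             With .read left unset, read() on /proc/key fails cleanly; registering the entry as proc_create("key", 0200, NULL, &key_fops) would additionally enforce write-only permission bits.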

            Source https://stackoverflow.com/questions/67132366

            QUESTION

            Kube-Prometheus-Stack Helm Chart v14.40 : Node-exporter and scrape targets unhealthy in Docker For Mac Kubernetes Cluster on macOS Catalina 10.15.7
            Asked 2021-Apr-02 at 11:15

            I have installed kube-prometheus-stack as a dependency in my helm chart on a local Docker for Mac Kubernetes cluster v1.19.7.

             The myrelease-name-prometheus-node-exporter service is failing with errors from the node-exporter daemonset after the kube-prometheus-stack helm chart is installed in a Docker Desktop for Mac Kubernetes cluster environment.

            release-name-prometheus-node-exporter daemonset error log

            ...

            ANSWER

            Answered 2021-Apr-01 at 08:10

            This issue was solved recently. Here is more information: https://github.com/prometheus-community/helm-charts/issues/467 and here: https://github.com/prometheus-community/helm-charts/pull/757

            Here is the solution (https://github.com/prometheus-community/helm-charts/issues/467#issuecomment-802642666):

             [you need to] opt out of the rootfs host mount (preventing the crash). In order to do that, you need to specify the following value in the values.yaml file:

            Source https://stackoverflow.com/questions/66893031

            QUESTION

             When will the column values for the UID and GID fields in the /proc/[pid]/status file differ?
            Asked 2021-Feb-26 at 14:02

            Here is the output of a sample /proc/pid/status file.

             From the procfs(5) man page, I found that those are the real, effective, saved, and filesystem UIDs

            ...

            ANSWER

            Answered 2021-Feb-26 at 14:02

            Taking this as a question:

             So, is there any chance those four columns will show different UIDs?

            Yes. Subject to various limitations, processes can change their effective and saved UIDs and GIDs. This is what the setuid(), setgid(), seteuid(), and setegid() functions do.

            The filesystem uid and gid are Linux-specific features that are used mainly, if not entirely, in the context of NFS (see filesystem uid and gid in linux). These can be manipulated with setfsuid() and setfsgid(), subject, again, to limitations.

             For most processes all the UIDs will be the same, and likewise all the GIDs, but it is conceivable that they could all be different. It is a function of the behavior of the process.
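
             A small hypothetical demo of the effective UID diverging from the real one (run it as root so that seteuid() is permitted):

                 #define _GNU_SOURCE
                 #include <stdio.h>
                 #include <unistd.h>

                 static void show_uids(const char *when)
                 {
                     uid_t ruid, euid, suid;
                     getresuid(&ruid, &euid, &suid);   /* real, effective, saved UIDs */
                     printf("%s: real=%d effective=%d saved=%d\n",
                            when, (int)ruid, (int)euid, (int)suid);
                 }

                 int main(void)
                 {
                     show_uids("before");      /* as root: 0 0 0 */
                     if (seteuid(1000) < 0) {  /* drop the effective UID only */
                         perror("seteuid");
                         return 1;
                     }
                     show_uids("after");       /* real=0 effective=1000 saved=0 */
                     return 0;
                 }

             After the seteuid() call, the Uid: line of /proc/self/status reads 0 1000 0 1000, since the filesystem UID follows the effective UID.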

            Source https://stackoverflow.com/questions/66387017

            QUESTION

            How does OCI/runc system path constraining work to prevent remounting such paths?
            Asked 2021-Jan-30 at 16:26

            The background of my question is a set of test cases for my Linux-kernel Namespaces discovery Go package lxkns where I create a new child user namespace as well as a new child PID namespace inside a test container. I then need to remount /proc, otherwise I would see the wrong process information and cannot lookup the correct process-related information, such as the namespaces of the test process inside the new child user+PID namespaces (without resorting to guerilla tactics).

            The test harness/test setup is essentially this and fails without --privileged (I'm simplifying to all caps and switching off seccomp and apparmor in order to cut through to the real meat):

            ...

            ANSWER

            Answered 2021-Jan-30 at 16:26

             Quite some more digging turned up this answer to "About mounting and unmounting inherited mounts inside a newly-created mount namespace", which points in the correct direction but needs additional explanation (not least because it builds on a misleading man-page paragraph about mount namespaces being hierarchical, which Michael Kerrisk fixed some time ago).

             Our starting point is when runc sets up the (test) container: to mask system paths, especially in the container's future /proc tree, it creates a set of new mounts, masking out individual files using /dev/null and subdirectories using tmpfs. This results in procfs being mounted on /proc, as well as further sub-mounts.

            Now the test container starts and at some point a process unshares into a new user namespace. Please keep in mind that this new user namespace (again) belongs to the (real) root user with UID 0, as a default Docker installation won't enable running containers in new user namespaces.

            Next, the test process also unshares into a new mount namespace, so this new mount namespace belongs to the newly created user namespace, but not to the initial user namespace. According to section "restrictions on mount namespaces" in mount_namespaces(7):

            If the new namespace and the namespace from which the mount point list was copied are owned by different user namespaces, then the new mount namespace is considered less privileged.

            Please note that the criterion here is: the "donor" mount namespace and the new mount namespace have different user namespaces; it doesn't matter whether they have the same owner user (UID), or not.

            The important clue now is:

            Mounts that come as a single unit from a more privileged mount namespace are locked together and may not be separated in a less privileged mount namespace. (The unshare(2) CLONE_NEWNS operation brings across all of the mounts from the original mount namespace as a single unit, and recursive mounts that propagate between mount namespaces propagate as a single unit.)

             As it is now no longer possible to separate the /proc mount point from its masking submounts, it is not possible to (re)mount /proc (question 1). In the same sense, it is impossible to unmount /proc/kcore, because that would allow unmasking (question 2).

             Now, when deploying the test container using --security-opt systempaths=unconfined, this results in a single /proc mount only, without any of the masking submounts. In consequence, and according to the man page rules cited above, there is only a single mount, which we are allowed to (re)mount, given the CAP_SYS_ADMIN capability (which covers mounting, besides tons of other interesting functionality).

             Please note that it is possible to unmount masked /proc/ paths inside the container while still in the original (= initial) user namespace, when possessing (not surprisingly) CAP_SYS_ADMIN. The (b)lock only kicks in with a separate user namespace, which is why some projects strive to deploy containers in their own new user namespaces (which unfortunately has effects not least on container networking).
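
             To make the mechanics concrete, here is a hedged, self-contained C sketch (not taken from lxkns or its tests) that unshares into new user, mount, and PID namespaces and then tries to remount /proc; in a container with masked /proc paths the mount fails with EPERM, while with --security-opt systempaths=unconfined it succeeds:

                 #define _GNU_SOURCE
                 #include <fcntl.h>
                 #include <sched.h>
                 #include <stdio.h>
                 #include <string.h>
                 #include <sys/mount.h>
                 #include <sys/wait.h>
                 #include <unistd.h>

                 static void write_file(const char *path, const char *buf)
                 {
                     int fd = open(path, O_WRONLY);
                     if (fd < 0 || write(fd, buf, strlen(buf)) < 0)
                         perror(path);
                     if (fd >= 0)
                         close(fd);
                 }

                 int main(void)
                 {
                     char map[64];
                     uid_t uid = getuid();
                     gid_t gid = getgid();

                     /* New user, mount and PID namespaces, as in the test setup. */
                     if (unshare(CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID) < 0) {
                         perror("unshare");
                         return 1;
                     }

                     /* Map ourselves to root inside the new user namespace. */
                     snprintf(map, sizeof map, "0 %d 1", (int)uid);
                     write_file("/proc/self/uid_map", map);
                     write_file("/proc/self/setgroups", "deny");
                     snprintf(map, sizeof map, "0 %d 1", (int)gid);
                     write_file("/proc/self/gid_map", map);

                     /* The new PID namespace only applies to children, hence fork. */
                     if (fork() == 0) {
                         /* Keep mount changes private to this mount namespace. */
                         mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);
                         /* Fails with EPERM when /proc carries locked masking submounts. */
                         if (mount("proc", "/proc", "proc", 0, NULL) < 0)
                             perror("mount /proc");
                         else
                             puts("remounted /proc");
                         return 0;
                     }
                     wait(NULL);
                     return 0;
                 }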

            Source https://stackoverflow.com/questions/65917162

            QUESTION

             Kubernetes DaemonSet Pods schedule on all nodes except one
            Asked 2020-Nov-10 at 12:03

            I'm trying to deploy a Prometheus nodeexporter Daemonset in my AWS EKS K8s cluster.

            ...

            ANSWER

            Answered 2020-Nov-10 at 12:03

            As posted in the comments:

            Please add to the question the steps that you followed (editing any values in the Helm chart etc). Also please check if the nodes are not over the limit of pods that can be scheduled on it. Here you can find the link for more reference: LINK.

            no processes occupying 9100 on the given node. @DawidKruk The POD limit was reached. Thanks! I expected them to give me some error regarding that rather than vague node selector property not matching

            Not really sure why the following messages were displayed:

            • node(s) didn't have free ports for the requested pod ports
            • node(s) didn't match node selector

            The issue that Pods couldn't be scheduled on the nodes (Pending state) was connected with the Insufficient pods message in the $ kubectl get events command.

             The above message is displayed when the nodes have reached their maximum capacity of pods (for example: node1 can schedule a maximum of 30 pods).

            More on the Insufficient Pods can be found in this github issue comment:

            That's true. That's because the CNI implementation on EKS. Max pods number is limited by the network interfaces attached to instance multiplied by the number of ips per ENI - which varies depending on the size of instance. It's apparent for small instances, this number can be quite a low number.

            Docs.aws.amazon.com: AWSEC2: User Guide: Using ENI: Available IP per ENI

            -- Github.com: Kubernetes: Autoscaler: Issue 1576: Comment 454100551
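
             For a concrete check (assuming the usual EKS formula, max pods = ENIs × (IPv4 addresses per ENI − 1) + 2): a t2.large instance offers 3 ENIs with 12 IPv4 addresses each, giving 3 × (12 − 1) + 2 = 35 pods per node.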

            Additional resources:

            Source https://stackoverflow.com/questions/64724219

            QUESTION

            APM Go Agent isn't Sending Data to the APM Server
            Asked 2020-Aug-19 at 05:40

            I have an Elastic APM-Server up and running and it has successfully established connection with Elasticsearch.

            Then I installed an Elastic APM Go agent:

            ...

            ANSWER

            Answered 2020-Aug-19 at 05:40

            Since you didn't mention it above: did you instrument a Go application? The Elastic APM Go "Agent" is a package which you use to instrument your application source code. It is not an independent process, but runs within your application.

            So, first (if you haven't already) instrument your application. See https://www.elastic.co/guide/en/apm/agent/go/current/getting-started.html#instrumenting-source

            Here's an example web server using Echo, and the apmechov4 instrumentation module:

            Source https://stackoverflow.com/questions/63480314

            QUESTION

            Yocto Patch Linux Kernel In-Tree-Module with extern symbol exported from Out-Of-Tree Module
            Asked 2020-Aug-05 at 15:01

            I am using Yocto to build an SD Card image for my Embedded Linux Project. The Yocto branch is Warrior and the Linux kernel version is 4.19.78-linux4sam-6.2.

            I am currently working on a way to read memory from an external QSPI device in the initramfs and stick the contents into a file in procfs. That part works and I echo data into the proc file and read it out successfully later in user space Linux after the board has booted.

            Now I need to use the Linux Kernel module EXPORT_SYMBOL() functionality to allow an in-tree kernel module to know about my out-of-tree custom kernel module exported symbol.

            In my custom module, I do this:

            ...

            ANSWER

            Answered 2020-Aug-05 at 15:01

             There are several ways to achieve what you want (taking into account different aspects, such as whether the code is built into the kernel or is a loadable module).

             1. Convert the out-of-tree module into an in-tree one (in your custom kernel build). This requires the simple export and import you have basically already done, and nothing special beyond perhaps providing a header with the symbol (see the sketch after this list) and running depmod -a after module installation. Note that you have to load the in-tree module with modprobe, which reads and satisfies dependencies.
             2. Turn it the other way around, i.e. export the symbol from the in-tree module and fill it in from the out-of-tree one. In this case you simply have to check whether it has been filled in or not (since it's a MAC address, a check against all 0's will work; no additional flags are needed).
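
             For reference, the export/import in option 1 boils down to something like this hedged sketch (board_mac_addr is a hypothetical symbol name):

                 /* Provider module: define and export the symbol. */
                 #include <linux/if_ether.h>
                 #include <linux/module.h>

                 unsigned char board_mac_addr[ETH_ALEN];   /* hypothetical shared MAC */
                 EXPORT_SYMBOL(board_mac_addr);
                 MODULE_LICENSE("GPL");

                 /* Consumer (the in-tree driver), ideally via a shared header: */
                 extern unsigned char board_mac_addr[ETH_ALEN];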

             BUT, these ways are simply wrong. The driver and even your patch clearly show that it supports OF (Device Tree), and your board supports it too. So, as the first part of the solution, you may provide the correct MAC to the network card using the Device Tree.

             In case you want to change it at runtime, the procfs approach is very strange to begin with. The network device interface in Linux has all the means to update the MAC from user space at any time the user wants. Just use the ip command, like /sbin/ip link set <$ETH> addr <$MACADDR>, where <$ETH> is a network interface (for example, eth0) and <$MACADDR> is the desired address to set.

             So, if this question is really about module symbols, you need to find a better example, because it really depends on the use case. You may consider reading "How to export symbol from Linux kernel module in this case?" for an alternative way of exporting. Another possibility for doing it right is to use software nodes (a new concept in recent Linux kernels).

            Source https://stackoverflow.com/questions/63256621

            QUESTION

            Passing from manual Docker host network to Docker Compose bridge
            Asked 2020-Jul-14 at 10:40

             I have 2 Docker images, a Modbus server and a client, which I run manually with docker run --network host server (and the same for the client), and they work perfectly. But now I need to add them to a docker-compose file where the network is bridge, which I did like this:

            ...

            ANSWER

            Answered 2020-Jul-14 at 09:35

             You can try using an environment variable like AUTO_SERVER_HOST and reading it in your code.
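
             For example, in C (AUTO_SERVER_HOST being the hypothetical variable name suggested above):

                 #include <stdio.h>
                 #include <stdlib.h>

                 int main(void)
                 {
                     /* Fall back to localhost when the variable is not set. */
                     const char *host = getenv("AUTO_SERVER_HOST");
                     if (!host)
                         host = "127.0.0.1";
                     printf("connecting to modbus server at %s\n", host);
                     return 0;
                 }

             In docker-compose, the variable would be set per service under environment:, and on a bridge network its value can simply be the server's compose service name, which Docker's internal DNS resolves.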

            Source https://stackoverflow.com/questions/62819037

            QUESTION

            How do I expose custom files similar to /procfs on Linux?
            Asked 2020-Jun-09 at 11:30

             I have a writer process which outputs its status at regular intervals as a readable chunk of wchar_t. I need to ensure the following properties:

             1. When there's an update, readers shouldn't read partial/corrupted data
             2. The file should be volatile in memory, so that when the writer quits, the file is gone
             3. The file content size is variable
             4. Multiple readers could read the file in parallel; it doesn't matter whether the content is synced, as long as it is not partial for each client
             5. If using truncate and then write, clients should only read the full file and never observe such partial operations

             How could I implement such a procfs-like file outside the procfs filesystem?

             I was thinking of using the classic C Linux file APIs and creating something under /dev/shm by default, but I find it hard to implement points 1 and 5 effectively. How could I expose such a file?

            ...

            ANSWER

            Answered 2020-Jun-09 at 11:30

            Typical solution is to create a new file in the same directory, then rename (hardlink) it over the old one.

            This way, processes see either an old one or a new one, never a mix; and it only depends on the moment when they open the file.

            The Linux kernel takes care of the caching, so if the file is accessed often, it will be in RAM (page cache). The writer must, however, remember to delete the file when it exits.
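
             A hedged sketch of that write-new-then-rename pattern (write_status is a hypothetical helper; the temporary file must live on the same filesystem for rename() to be atomic):

                 #include <stdio.h>
                 #include <unistd.h>

                 /* Atomically replace `path` with `contents`: readers opening the
                  * file see either the complete old version or the complete new one. */
                 static int write_status(const char *path, const char *contents)
                 {
                     char tmp[4096];
                     snprintf(tmp, sizeof tmp, "%s.tmp.%ld", path, (long)getpid());

                     FILE *f = fopen(tmp, "w");
                     if (!f)
                         return -1;
                     if (fputs(contents, f) == EOF) {
                         fclose(f);
                         unlink(tmp);
                         return -1;
                     }
                     if (fclose(f) == EOF) {
                         unlink(tmp);
                         return -1;
                     }
                     if (rename(tmp, path) < 0) {   /* the atomic replacement step */
                         unlink(tmp);
                         return -1;
                     }
                     return 0;
                 }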

            A better approach is to use fcntl()-based advisory record locks (typically over the entire file, i.e. .l_whence = SEEK_SET, .l_start = 0, .l_len = 0).

            The writer will grab a write/exclusive lock before truncating and rewriting the contents, and readers a read/shared lock before reading the contents.

             This requires cooperation, however, and the writer must be prepared for the lock to be unavailable (or for grabbing it to take an undefined amount of time).
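
             A minimal sketch of such a whole-file advisory lock, assuming all readers and the writer cooperate:

                 #include <fcntl.h>
                 #include <unistd.h>

                 /* Lock the whole file: F_WRLCK for the writer, F_RDLCK for readers. */
                 static int lock_whole_file(int fd, short type)
                 {
                     struct flock fl = {
                         .l_type   = type,
                         .l_whence = SEEK_SET,
                         .l_start  = 0,
                         .l_len    = 0,   /* 0 means "to end of file", i.e. everything */
                     };
                     return fcntl(fd, F_SETLKW, &fl);   /* F_SETLKW blocks until granted */
                 }

                 /* Writer: lock_whole_file(fd, F_WRLCK); truncate and rewrite; then
                  * unlock with l_type = F_UNLCK. Readers: lock_whole_file(fd, F_RDLCK)
                  * before reading. */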

            A Linux-only scheme would be to use atomic replacement (via rename/hardlinking), and file leases.

            (When the writer process has an exclusive lease on an open file, it gets a signal whenever another process wants to open that same file (inode, not file name). It has at least a few seconds to downgrade or release the lease, at which point the opener gets access to the contents.)

            Basically, the writer process creates an empty status file, and obtains exclusive lease on it. Whenever the writer receives a signal that a reader wants to access the status file, it writes the current status to the file, releases the lease, creates a new empty file in the same directory (same mount suffices) as the status file, obtains an exclusive lease on that one, and renames/hardlinks it over the status file.

            If the status file contents do not change all the time, only periodically, then the writer process creates an empty status file, and obtains exclusive lease on it. Whenever the writer receives a signal that a reader wants to access the (empty) status file, it writes the current status to the file, and releases the lease. Then, when the writer process' status is updated, and there is no lease yet, it creates a new empty file in the status file directory, takes an exclusive lease on it, and renames/hardlinks over the status file.

            This way, the status file is always updated just before a reader opens it, and only then. If there are multiple readers at the same time, they can open the status file without interruption when the writer releases the lease.

            It is important to note that the status information should be collected in a single structure or similar, so that writing it out to the status file is efficient. Leases are automatically broken if not released soon enough (but there are a few seconds at least to react), and the lease is on the inode – file contents – not the file name, so we still need the atomic replacement.

            Here's a crude example implementation:

            Source https://stackoverflow.com/questions/62276058

            QUESTION

            Pods Pending for Node-Exporter via Helm on EKS
            Asked 2020-May-12 at 19:27

             For the purposes of troubleshooting, I decided to deploy a very vanilla implementation of Prometheus NodeExporter via helm install exporter stable/prometheus; however, I can't get the pods to start. I've searched high and low, and I'm not sure where else to turn. I'm able to install many other apps on my cluster, with the exception of just this one. I've attached some troubleshooting output for your reference. I believe it may have something to do with "tolerations", but I'm still digging in.

             The EKS cluster is running on 3 t2.large nodes, which can support up to 35 pods per node, and I'm running a total of 43 pods. Any other ideas for troubleshooting would be greatly appreciated.

            Describe Pods Output

            ...

            ANSWER

            Answered 2020-May-12 at 19:27

            3 node(s) didn't have free ports for the requested pod ports.

             The error shows that the allocated node port is already in use. Because you define hostPort: 9100, the pod can only be scheduled in a limited number of places, since each host/port combination must be unique. Ref: https://kubernetes.io/docs/concepts/configuration/overview/#services

            Source https://stackoverflow.com/questions/61745762

             Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install procfs

            You can download it from GitHub or GitLab.
            Rust is installed and managed by the rustup tool. Rust has a 6-week rapid release process and supports a great number of platforms, so there are many builds of Rust available at any time. Please refer to rust-lang.org for more information.

            Support

            Contributions are welcome, especially in the areas of documentation and testing on older kernels. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
            Clone
          • HTTPS: https://github.com/eminence/procfs.git
          • GitHub CLI: gh repo clone eminence/procfs
          • SSH: git@github.com:eminence/procfs.git


            Consider Popular File Utils Libraries
          • hosts by StevenBlack
          • croc by schollz
          • filebrowser by filebrowser
          • chokidar by paulmillr
          • node-fs-extra by jprichardson

            Try Top Libraries by eminence
          • terminal-size (Rust)
          • xmltree-rs (Rust)
          • udt-rs (Rust)
          • lifx (Rust)
          • libgit2-rs (Rust)