kandi X-RAY | util-linux Summary
See also: Documentation/howto-contribute.txt, Documentation/howto-build-sys.txt, Documentation/howto-pull-request.txt.
util-linux Key Features
util-linux Examples and Code Snippets
Community Discussions
Trending Discussions on util-linux
QUESTION
So I've just created my very first docker image (woohoo) and was able to run it on the original host system where it was created (Ubuntu 20.04 Desktop PC). The image was executed using docker run -it. The expected command (defined in CMD, which is just a bash script) was run, and the expected output was seen. I assumed this meant I successfully created my very first docker image and so I pushed it to Docker Hub.
GitHub repo with the original docker-compose.yml and Dockerfile
Here's the Dockerfile:
...ANSWER
Answered 2022-Mar-14 at 23:42
QUESTION
Currently I am building an image for the IMX8M-Plus board with the Yocto Project on Windows using WSL2.
I enlarged the standard size of the WSL2 image from 250G to 400G, as this project gets to around 270G.
The initialization process is identical to the one proposed by CompuLab -> Github-Link
During the build process, the do_configure step of TensorFlow Lite fails.
The log of the failing bitbake process is as follows:
...ANSWER
Answered 2022-Mar-07 at 07:54
Solution
- Uninstalled Docker
- Deleted every .vhdx file
- Installed Docker
- Created a new "empty" .vhdx file (~700MB after starting Docker and VSCode)
- Relocated it to a new hard drive (the one with 500GB+ of free capacity)
- Resized it with diskpart
- Confirmed the resizing from an Ubuntu terminal, as I needed to use resize2fs (see the sketch below)
- Used the same Dockerfile and built just Tensorflow-lite
- Built the whole package afterwards
Not sure what the problem was; it seems to have been some leftover files that persisted across several build-data deletions.
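A rough sketch of the diskpart/resize2fs steps from the list above, assuming the default Docker Desktop WSL2 data-disk path (the path, target size, and device name below are placeholders to adjust):
wsl --shutdown                        # stop WSL2 and Docker Desktop first
diskpart                              # then, inside diskpart (size is in MB):
#   select vdisk file="C:\Users\<user>\AppData\Local\Docker\wsl\data\ext4.vhdx"
#   expand vdisk maximum=400000
#   exit
sudo resize2fs /dev/sdX               # back in a WSL2 Ubuntu terminal, grow the ext4 filesystem (placeholder device)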
QUESTION
I am using a Linux build with Yocto on which my application is running. A few times it happened that after rebuilding the image I had an old version of my application. Now, before every image build, I run bitbake my_recipe -c cleanall in the console. Is there any way to force what cleanall does from within the .bb file of my application's recipe?
EDIT:
This is my recipe. When I test my new branch, I use SRCREV = "${AUTOREV}"; when I prepare a stable version, I set a specific commit hash in SRCREV.
ANSWER
Answered 2022-Jan-21 at 11:42
Bitbake will only rebuild the package if the PV (package version) or PR (recipe revision) changes.
In your recipe, SRCREV changes automatically due to the use of AUTOREV; however, it is not included in PV, so the recipe does not get rebuilt because the cache already contains a build for that PV.
You need to include SRCPV (source version) in PV, for example:
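The example line itself was not captured above; the usual Yocto idiom looks like the following sketch (the "1.0" base version is a placeholder):
PV = "1.0+git${SRCPV}"
With SRCPV folded into PV, every new commit fetched via AUTOREV changes PV, so bitbake rebuilds the recipe instead of reusing the cached build.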
QUESTION
I have a system that is due to be upgraded, but I'm having conflicts with apt-get -f install:
ANSWER
Answered 2022-Jan-21 at 01:27
Try flushing the cache and reinstalling:
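The commands themselves were not captured above; a sketch of the general approach the answer describes, with util-linux used only as an example package name:
sudo apt-get clean                              # flush the local package cache
sudo apt-get update                             # refresh the package lists
sudo apt-get install --reinstall util-linux     # reinstall the conflicting package (example name)
sudo apt-get -f install                         # let apt try to fix the remaining dependencies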
QUESTION
I modified the following Dockerfile to use arm binaries so it works on my M1 MacBook Pro; the original works fine on an i5 MacBook Pro.
...ANSWER
Answered 2022-Jan-06 at 16:14
I changed the platform to amd64 and it worked!
FROM --platform=linux/amd64 alpine:latest
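As a side note (not part of the original answer), the same override can also be passed at build time instead of hard-coding it in the FROM line; the image tag here is a placeholder:
docker build --platform=linux/amd64 -t my-image .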
QUESTION
Context
I have a Jenkins instance that builds a docker image for a Raspberry Pi 2. It uses buildx to emulate the ArmV7 environment during the build. This worked great until recently, when I started getting random errors while installing the apk packages.
Dockerfile
...ANSWER
Answered 2021-Nov-22 at 14:18
OK, looks like I found my solution here: https://gitlab.alpinelinux.org/alpine/aports/-/issues/12406
quote from Lyle Franklin:
I hit this error when trying to build a cross-platform ARM64 docker image from an AMD64 host. However, running docker run --rm --privileged linuxkit/binfmt:v0.8 or update-binfmts --enable prior to running the build seems to avoid the issue. My understanding is that Docker will try to use upstream QEMU if it is installed and registered with the kernel; otherwise Docker will fall back to using a built-in forked version of QEMU. The build error above only showed up for me with the forked QEMU.
So I will probably add docker run --rm --privileged linuxkit/binfmt:v0.8 && update-binfmts --enable to my pipeline file if I encounter the error again; for now, running it once solved the issue.
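A sketch of what such a pipeline step could look like before the buildx build; the registry, image name, and tag are placeholders:
docker run --rm --privileged linuxkit/binfmt:v0.8                                      # register upstream QEMU binfmt handlers
docker buildx build --platform linux/arm/v7 -t registry.example.com/my-image:latest --push .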
QUESTION
I'm stuck in the BusyBox shell and can't find a way out. Here's what I did:
...ANSWER
Answered 2021-Oct-15 at 07:34
I had the same issue with an SD card on my Raspberry Pi. I cloned the broken SD card to a good one using Win32 Disk Imager.
The new card was then successfully verified by fsck and started properly.
Of course, you might be able to do it on another Linux system using:
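The original command was not captured; a typical repair on another Linux machine might look like this sketch, where /dev/sdX2 is a placeholder for the SD card's root partition:
sudo e2fsck -f -y /dev/sdX2    # force a full check and auto-repair of the ext4 filesystem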
QUESTION
When I use the column command to reformat output, I know I need to pass the $'...' format to its -s (separator) option if the separator is an ANSI C backslash-escaped character.
Example:
file1 and file2:
...ANSWER
Answered 2021-Aug-31 at 10:38
"Can anybody explain the result of the tests above?"
test1 and test2: column tries to use the locale (i.e. UTF-8) to parse the input, and 0x99 by itself (not preceded by 0xc2) is an invalid Unicode sequence.
There is a bug in column: it does not check whether the string passed to -s is a valid Unicode string. column calls wcspbrk to find input_separator (i.e. $'\x99') in the input stream. Because the string 0x99 is an invalid UTF-8 sequence, column calls wcspbrk with NULL as the second argument, and that causes the segfault.
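A small sketch illustrating the described behaviour; whether the crash reproduces depends on the util-linux version, and the LC_ALL=C workaround is an assumption based on 0x99 being a valid single-byte character outside UTF-8 locales:
printf 'a\x99b\n' | column -t -s $'\x99'             # invalid UTF-8 separator in a UTF-8 locale: the reported segfault
printf 'a\tb\n' | column -t -s $'\t'                 # valid separator: works as expected
printf 'a\x99b\n' | LC_ALL=C column -t -s $'\x99'    # possible workaround: use a single-byte locale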
QUESTION
When using the standard Dockerfile available here, GraphDB fails to start with the following output:
...ANSWER
Answered 2021-Jul-09 at 13:13
The issue comes from an update in the base image. A few weeks ago, AdoptOpenJDK switched to Alpine 3.14, which has some issues with older container runtimes (runc). The issue can be seen in the release notes: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0
Updating your Docker will fix the issue. However, if you don't wish to update your Docker, there's a workaround.
Some additional info: the cause of the issue is that, for some reason, containers running on older docker versions with Alpine 3.14 seem to have issues with the test flag "-x", so an if [ -x /opt/java/openjdk/bin/java ] returns false although java is there and is executable.
You can work around this for now by:
- Pull the GraphDB distribution
- Unzip it
- Open "setvars.in.sh" in the bin folder
- Find and remove the if block around line 32
if [ ! -x "$JAVA" ]; then echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME" exit 1 fi
- Zip it again and provide it in the Dockerfile without pulling it from maven.ontotext.com
Passing it to the Dockerfile is done with ADD. You can check the GraphDB Free edition's Dockerfile for a reference on how to pass the zip file to the Dockerfile: https://github.com/Ontotext-AD/graphdb-docker/blob/master/free-edition/Dockerfile
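A minimal sketch of the "patched zip + ADD" idea; the file name and paths are placeholders, and this is not the official Dockerfile referenced above:
# Dockerfile fragment: use the locally patched distribution instead of downloading it
ADD graphdb-free-dist-patched.zip /tmp/graphdb.zip
RUN unzip /tmp/graphdb.zip -d /opt/graphdb && rm /tmp/graphdb.zip   # unzip must be available in the base image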
QUESTION
I'm trying to set up my environment to use Yocto's generated SDK to compile my out-of-tree module, but for some reason, I'm getting an error.
cp: cannot stat 'arch/arm/kernel/module.lds': No such file or directory
I'm using the Poky distribution and meta-raspberrypi, which is needed because I'm using the RPi Zero W board. Apart from this, everything works fine: I'm able to compile the entire image and load it onto the board.
Here is the line I've added to local.conf, as I found in the documentation:
TOOLCHAIN_TARGET_TASK_append = " kernel-devsrc"
Also below you can find the whole log from the compilation.
...ANSWER
Answered 2021-Jun-07 at 11:16
The module.lds file is missing in the latest kernel. Apply the following source code as a patch to the kernel and build the image.
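The patch itself was not captured above; in Yocto such a fix is typically carried in a kernel bbappend like the following sketch, where the layer, recipe, and patch file names are placeholders (the old-style _prepend syntax matches the local.conf line shown earlier):
# meta-mylayer/recipes-kernel/linux/linux-raspberrypi_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " file://0001-restore-arch-arm-kernel-module.lds.patch"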
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install util-linux