iproute2 | iproute2 with support for new Qdiscs | Networking library
kandi X-RAY | iproute2 Summary
This is a set of utilities for Linux networking.
Community Discussions
Trending Discussions on iproute2
QUESTION
Trying to build a Docker image whose Dockerfile executes a prerequisites installation script fails when fetching packages via apt-get from archive.ubuntu.com.
Using the apt-get command directly inside the Dockerfile works flawlessly, despite being behind a corporate proxy, which is set up via the ENV command in the Dockerfile.
Likewise, executing the apt-get command from a bash script in a terminal inside the resulting Docker container, or as a "postCreateCommand" in a devcontainer.json for Visual Studio Code, works as expected. But it fails when the bash script is invoked from inside the Dockerfile.
It simply reports:
ANSWER
Answered 2022-Jan-19 at 20:43
As pointed out in the comment section underneath the question: using sudo to launch the command wipes out all the variables set in the current environment, most notably your proxy settings.
So that is the case. The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile, or to keep sudo and have it preserve the ENV variables by applying sudo -E.
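A minimal sketch of the two fixes, assuming a hypothetical install script named install.sh and a placeholder proxy host:

```dockerfile
# Proxy settings defined with ENV survive into RUN steps.
ENV http_proxy=http://proxy.example.com:8080 \
    https_proxy=http://proxy.example.com:8080

COPY install.sh /tmp/install.sh

# Option 1: run the script as root and drop sudo inside it entirely.
RUN /tmp/install.sh

# Option 2: keep sudo in the script, but call it as "sudo -E"
# so the current environment (including the proxy vars) is preserved:
#   sudo -E apt-get update && sudo -E apt-get install -y <packages>
```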
QUESTION
I'm building a Docker image using a Dockerfile. I have put ARG DEBIAN_FRONTEND=noninteractive at the beginning of the Dockerfile to avoid debconf warnings while building.
The warnings do not show up when using apt-get install inside the Dockerfile. However, when executing a sh script (install_dependencies.sh) from the Dockerfile that contains apt-get install commands, the warnings show up again. I also tried to set DEBIAN_FRONTEND=noninteractive inside the sh script itself.
I can solve this by adding echo 'debconf debconf/frontend select Noninteractive' | sudo debconf-set-selections in the sh script before the apt-get install commands, but I would like to avoid that, since any failure in the script would leave the debconf selection at Noninteractive.
Dockerfile:
...
ANSWER
Answered 2021-Dec-05 at 17:24
Drop sudo in your script; there is no point in using it if you're already running as root. This is also the reason that DEBIAN_FRONTEND has no effect: sudo drops the current user's environment for security reasons, so you would have to invoke it with the -E option to make it work.
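A sketch of how the variable can be scoped to the build without resorting to debconf-set-selections (the script path is an assumption): an ARG set in the Dockerfile is visible to the script as long as sudo is not in the way.

```dockerfile
# Applies only for the duration of the build, not in the final image.
ARG DEBIAN_FRONTEND=noninteractive

COPY install_dependencies.sh /tmp/install_dependencies.sh

# RUN executes as root, so the script needs no sudo; DEBIAN_FRONTEND
# is inherited by every apt-get call inside it.
RUN /tmp/install_dependencies.sh
```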
QUESTION
The following webpage contains all the source code URLs for the LFS project:
https://linuxfromscratch.org/lfs/view/systemd/chapter03/packages.html
I've written some Python 3 code to retrieve all these URLs from that page:
...
ANSWER
Answered 2021-Jul-02 at 16:28
You can use the .string property (returns the string passed into the function).
Code Example
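The answer refers to BeautifulSoup's .string property on a matched tag. As an illustrative stdlib-only alternative (an assumption, not the asker's actual code), the same URL harvesting can be sketched with html.parser:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags, as on the LFS packages page."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        # Keep only anchors that actually carry an href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.urls.append(value)

# Stand-in HTML; the real page is the LFS packages page linked above.
page = '<li><a href="https://example.org/pkg-1.0.tar.xz">pkg-1.0</a></li>'
collector = LinkCollector()
collector.feed(page)
print(collector.urls)
```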
QUESTION
I have a project with a docker-compose file and a Dockerfile; the project is open-source on GitHub.
I'm building a demo project with:
- Traefik
- Snort 3
- A NodeJS dummy API for testing
The issue is that in my Dockerfile I have a command like this to run Snort:
...
ANSWER
Answered 2021-Jun-07 at 12:56
Your entrypoint is conflicting with the command you want to run:
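A sketch of the usual shape of this conflict (paths and flags below are placeholders, not the repository's actual values): if the image already declares the binary as its ENTRYPOINT, the command must contain only the arguments, not the binary name again.

```dockerfile
# If the image defines:
ENTRYPOINT ["snort"]

# then a CMD (or a docker-compose "command:") of the form
#   CMD ["snort", "-c", "/etc/snort/snort.lua"]
# effectively runs "snort snort -c ..." and fails.
# Supply only the arguments instead:
CMD ["-c", "/etc/snort/snort.lua"]
```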
QUESTION
I want to create a virtual private network in which I can manage the virtual machines' interfaces with iproute2.
For example, with AZ CLI, I create two virtual machines in subnet 172.16.1.0/24, each machine has one interface:
az network vnet create -g test -n net --address-prefix 172.16.0.0/16 --ddos-protection false --vm-protection false
az network vnet subnet create -g test --vnet-name net -n subnet1 --address-prefixes 172.16.1.0/24 --network-security-group test
az network nic create -g test -n vm1-nic --vnet-name net --subnet subnet1 --private-ip-address 172.16.1.10 --public-ip-address vm1-pub
az network nic create -g test -n vm2-nic --vnet-name net --subnet subnet1 --private-ip-address 172.16.1.11 --public-ip-address vm2-pub
az vm create -g test -n vm1 --image rhel --size Standard_F4 --generate-ssh-keys --nics vm1-nic
az vm create -g test -n vm2 --image rhel --size Standard_F4 --generate-ssh-keys --nics vm2-nic
Then I connect to vm1 over SSH; pinging 172.16.1.11 works.
Is it possible to change the VMs' network interface IP addresses with iproute2 commands? For example, could I put 10.100.0.1/24 on vm1's interface and 10.100.0.2/24 on vm2's interface with iproute2 and then ping 10.100.0.2 from 10.100.0.1?
I want to understand how the virtual machines are connected: is the connection simulated as a wired link whose network interfaces we can configure ourselves?
...
ANSWER
Answered 2021-Mar-24 at 02:19
See the description for the static IP address here:
If you manually set the private IP address within the operating system, make sure it matches the private IP address assigned to the Azure network interface. Otherwise, you can lose connectivity to the VM.
It means that if you want to change the IP address within the VM, you first need to change the configuration of the VM's NIC in Azure; only then can you change the IP address within the VM using the command. Otherwise, you can't change it. Generally, everything about the VM is configured by Azure.
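For reference, the iproute2 side of such a change might look like the sketch below. The interface name and addresses are assumptions; these commands need root and will drop connectivity unless the Azure NIC configuration has been updated first.

```shell
# On vm1: replace the Azure-assigned address with the new one.
ip addr del 172.16.1.10/24 dev eth0
ip addr add 10.100.0.1/24 dev eth0

# On vm2:
ip addr del 172.16.1.11/24 dev eth0
ip addr add 10.100.0.2/24 dev eth0

# Then, from vm1:
ping -c 3 10.100.0.2
```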
QUESTION
Where is the subcommand "depends" of "apt" documented, especially its output format and the meaning of the pipe symbol in the output?
"man apt" doesn't mention this subcommand at all.
Example invocation:
...
ANSWER
Answered 2021-Mar-18 at 18:58
It is documented in the apt-cache manpage:
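As for the pipe symbol: in apt-cache depends output, a dependency line prefixed with "|" marks an OR-group, meaning either that entry or the alternative listed on the following line can satisfy the dependency. A typical invocation:

```shell
# Show the dependency tree of a package; "|Depends:" lines are
# alternatives - any one of the "|"-joined entries satisfies the dependency.
apt-cache depends iproute2
```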
QUESTION
I have three Azure Pipeline agents built on Ubuntu 18.04 images and deployed to a Kubernetes cluster. Agents are running the latest version, 2.182.1, but this problem also happened using 2.181.0.
Executing build pipelines individually works just fine. Build completes successfully every time. But whenever a second pipeline starts while another pipeline is already running, it fails - every time - on the "Checkout" job with the following error:
The working folder U:\azp\agent\_work\1\s is already in use by the workspace ws_1_34;Project Collection Build Service (myaccount) on computer linux-agent-deployment-78bfb76d.
These are three separate and distinct agents running as separate containers. Why would a job in one container impact a job running in a different container? Concurrent builds work all day long on my non-container Windows servers.
The container agents are deployed as a standard Kubernetes "deployment" object:
...
ANSWER
Answered 2021-Mar-05 at 17:59
A solution has been found. Here's how I resolved this, for anyone coming across this post:
I discovered a helm chart for Azure Pipeline agents - emberstack/docker-azure-pipelines-agent - and after poking around in the contents, discovered what was staring me in the face the last couple of days: "StatefulSets"
Simple, easy to test, and working well so far. I refactored my k8s manifest as a StatefulSet object and the agents are up and able to run builds concurrently. Still more testing to do, but looking very positive at this point.
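A minimal sketch of the refactor, with assumed names and a placeholder image (the real manifest lives in the linked chart): the important change is the object kind, since a StatefulSet gives each agent pod a stable identity and its own state rather than the shared footing of a Deployment.

```yaml
apiVersion: apps/v1
kind: StatefulSet          # was: Deployment
metadata:
  name: azp-agent
spec:
  serviceName: azp-agent   # headless service name; required for StatefulSets
  replicas: 3
  selector:
    matchLabels:
      app: azp-agent
  template:
    metadata:
      labels:
        app: azp-agent
    spec:
      containers:
        - name: agent
          image: my-registry/azp-agent:latest   # placeholder image
```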
QUESTION
I want to parse the output of popen(3), but it looks like I have to repeat the popen() call.
ANSWER
Answered 2021-Jan-04 at 16:18
You are parsing the output of the pipe using fscanf(), which consumes input from the stream until it satisfies the specified format. For this reason, a further fscanf will just read whatever new data is present in the stream. What you need instead is to store the input into a string (an array of chars).
To accomplish this, you can use fgets() to read a whole line. That way you can always pass the stored string to sscanf(), or perform any other parsing needed to retrieve the values the string contains:
QUESTION
I am using a VSCode devcontainer to write a Java application. I put it down for about a month, came back to work on it and now I'm getting some unfamiliar errors.
Configuration
Here I'll provide my relevant configuration files for my devcontainer environment.
My Dockerfile is as below:
...
ANSWER
Answered 2021-Jan-03 at 11:20
The problem is that Gson is attempting to use reflection to access a private field of some class in the java.lang package.
In the days before Java 9, this was fine. From Java 9 onward, it can throw exceptions.
The quick and dirty workaround would be to add an --add-opens option to the java command line. See the first reference for more information.
Another option would be to implement a custom object mapper to serialize / deserialize the JDK class (or classes) that is triggering this. It is a bit dodgy for your application's serialization / deserialization to depend on the details of private fields of JDK classes: they may change, causing your application to break without warning.
(My guess is that this is caused by the errorInformation field ...)
For more information about these options, see:
- How to solve InaccessibleObjectException ("Unable to make {member} accessible: module {A} does not 'opens {package}' to {B}") on Java 9?
- Gson Advanced — Custom Serialization for Simplification
I checked the Docker image I'm running this in and it does appear to have been updated recently.
Yes. It looks like you are using the new Java 16 EA release now. I'm not sure this is advisable. It is certainly inadvisable to have your Docker image updated under your feet; you should be developing against a specific target (major) version of Java.
UPDATE
However, for this problem to have "suddenly" started happening due to Java version change in your container, the previous version must have been Java 8 or earlier.
On reviewing the Java 16 page on the OpenJDK site, I see that it is implementing JEP 396: Strongly Encapsulate JDK Internals by Default which would have the effect of stopping Gson from messing with the access of private fields. If you read the JEP there may be another workaround.
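For concreteness, the option takes the form below; the module/package pair matches a reflection failure on a class in java.lang, while the jar name is a placeholder:

```shell
java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar
```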
QUESTION
I have a Docker container with a healthcheck running every 1 min. I read that appending "|| kill 1" to the healthcheck in the Dockerfile can stop the container after the healthcheck fails, but it does not seem to be working for me, and I cannot find an example that works.
Does anybody know how I can stop the container after marked as unhealthy? I currently have this in my dockerfile:
...
ANSWER
Answered 2020-Dec-03 at 05:50
Try changing from kill to exit 1.
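A sketch of the resulting instruction, with a placeholder probe command and endpoint: a non-zero exit from the probe makes the check fail, and once the retry count is exceeded Docker marks the container unhealthy.

```dockerfile
# Probe every minute; a non-zero exit marks the check as failed.
HEALTHCHECK --interval=1m --timeout=10s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```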
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported