runtime-spec | OCI Runtime Specification | Continuous Deployment library

by opencontainers | Go | Version: v1.1.0-rc.3 | License: Apache-2.0

kandi X-RAY | runtime-spec Summary


runtime-spec is a Go library typically used in DevOps, Continuous Deployment, and Docker applications. runtime-spec has no reported bugs, no reported vulnerabilities, a Permissive License, and medium support. You can download it from GitHub.

The Open Container Initiative develops specifications for standards on Operating System process and application containers.

            kandi-support Support

runtime-spec has a moderately active ecosystem.
It has 2839 stars, 532 forks, and 207 watchers.
It has had no major release in the last 12 months.
There are 61 open issues and 159 closed issues; on average, issues are closed in 358 days. There are 23 open pull requests and 0 closed requests.
It has a neutral sentiment in the developer community.
The latest version of runtime-spec is v1.1.0-rc.3.

            kandi-Quality Quality

              runtime-spec has 0 bugs and 0 code smells.

            kandi-Security Security

              runtime-spec has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              runtime-spec code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              runtime-spec is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              runtime-spec releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.
It has 460 lines of code, 2 functions, and 4 files.
It has low code complexity. Code complexity directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
Currently covering the most popular Java, JavaScript, and Python libraries.

            runtime-spec Key Features

            No Key Features are available at this moment for runtime-spec.

            runtime-spec Examples and Code Snippets

            No Code Snippets are available at this moment for runtime-spec.

            Community Discussions

            QUESTION

            snyk container test from private repository
            Asked 2021-Nov-13 at 09:35

            I'm trying to use snyk with a privately hosted repository that is managed using podman.

            snyk container test --username="user" --password="pass" --platform="linux/arm64" oci.example.com/image -d

            I've tried using oci.example.com/image:latest oci.example.com/image:arm64 also and making sure they exist on the repository.

            The error I keep getting is: snyk-test error: FailedToRunTestError: OCI manifest found, but accept header does not support OCI manifests

            I can reproduce the same error using the API directly: curl -u 'user:pass' -i -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://oci.example.com/v2/mailpile/image/latest

            This works though: curl -u 'user:pass' -i -H "Accept: application/vnd.oci.image.manifest.v1+json" https://oci.example.com/v2/[IMAGE]/manifests/latest

I wonder what I'm missing. Maybe snyk relies on a distribution.manifest that podman push oci.example.com/image does not seem to provide; I suspected this after reading https://podman.io/blogs/2021/10/11/multiarch.html:

"Due to the way image-name references are internally processed, you should not use the usual podman push and podman rmi subcommands. THEY WILL NOT DO WHAT YOU EXPECT! Instead, you'll want to use podman manifest push --all and podman manifest rm (similarly for buildah). These will push/remove the manifest list itself instead of the contents. Similarly for tagging: if you're on Podman v3.4, use the buildah tag command instead."

I also verified this by peeking with manifest inspect; indeed, it seems only an image and no distribution.manifest is attached by default.

            The OpenSUSE Debian Podman repo latest version:

            ...

            ANSWER

            Answered 2021-Nov-13 at 09:35

            Steps to fix:

            podman build --format=docker -t oci.example.com/image .

            podman push oci.example.com/image oci.example.com/image
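The root cause is a media-type mismatch between Docker and OCI manifests. The mismatch can be sketched in Python (the accepts() helper is purely illustrative and not part of snyk; the media-type strings are the standard ones):

```python
# Illustrative model of the Accept-header mismatch behind the snyk error.
DOCKER_MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"
OCI_MANIFEST_V1 = "application/vnd.oci.image.manifest.v1+json"

def accepts(accept_header: str, served_type: str) -> bool:
    """True if an Accept header covers the manifest type the registry serves."""
    accepted = {value.strip() for value in accept_header.split(",")}
    return served_type in accepted or "*/*" in accepted

# A client that only accepts Docker v2 manifests rejects an OCI manifest,
# which is what "accept header does not support OCI manifests" reports:
print(accepts(DOCKER_MANIFEST_V2, OCI_MANIFEST_V1))  # False
print(accepts(OCI_MANIFEST_V1, OCI_MANIFEST_V1))     # True
```

Building with --format=docker, as in the fix above, makes podman produce a Docker-type manifest, so the Accept header snyk sends then matches what the registry serves.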

            Source https://stackoverflow.com/questions/69879081

            QUESTION

            Create Nuget package for different platforms, architectures and .net versions
            Asked 2021-Oct-10 at 17:51

I have a C# 7.3 project Foo which I want to publish as a NuGet package. I want my package to be available for different architectures (x86 & x64), different platforms (Windows & Linux), and different .NET versions (4.8 Framework & 5.0).

Foo's C# code itself does not contain any architecture-specific code; it consists only of pure C# 7.3. But this project uses my custom platform-specific .dll (and .so on Linux; say ExternalLib.dll / ExternalLib.dll.so). I have one of them for each of Windows x64, Windows x86, and so on.

I know that in a NuGet package you place runtime-specific components in a /runtimes folder structure like /runtimes/win10-x64/native/ExternalLib.dll. As the documentation says, these will be used only at runtime, and I need to specify compile-time references separately.

I built my Foo project in the Any CPU configuration for each of net4.7 and net5.0 and placed each Foo.dll in the /lib folder. So my final folder structure for the module is
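For illustration, a minimal hypothetical .nuspec matching the layout described above could look like the following. All paths, the package id, and the version here are assumptions for the sketch, not the asker's actual file:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>Foo</id>
    <version>1.0.0</version>
    <authors>example</authors>
    <description>Hypothetical sketch of the lib/runtimes layout.</description>
  </metadata>
  <files>
    <!-- Compile-time references, one per target framework -->
    <file src="bin\Release\net48\Foo.dll" target="lib\net48" />
    <file src="bin\Release\net5.0\Foo.dll" target="lib\net5.0" />
    <!-- Runtime-only native dependencies, one per runtime identifier (RID) -->
    <file src="native\win-x64\ExternalLib.dll" target="runtimes\win-x64\native" />
    <file src="native\win-x86\ExternalLib.dll" target="runtimes\win-x86\native" />
    <file src="native\linux-x64\ExternalLib.dll.so" target="runtimes\linux-x64\native" />
  </files>
</package>
```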

            ...

            ANSWER

            Answered 2021-Oct-10 at 17:51

I finally did it. After some research, this is my approach:

1. If you write a C# library, use only .NetStandard, so both .Net and .NetFramework applications can use your library. I used .NetStandard2.0, which can be used from both .Net5.0 and .NetFramework4.8. See this for all compatible versions.

            2. Clear NuGet cache

            3. Provide xml documentation (optional)

            Final .nuspec file:

            Source https://stackoverflow.com/questions/69393627

            QUESTION

            How does OCI/runc system path constraining work to prevent remounting such paths?
            Asked 2021-Jan-30 at 16:26

The background of my question is a set of test cases for my Linux-kernel namespaces discovery Go package lxkns, where I create a new child user namespace as well as a new child PID namespace inside a test container. I then need to remount /proc; otherwise, I would see the wrong process information and could not look up the correct process-related information, such as the namespaces of the test process inside the new child user+PID namespaces (without resorting to guerilla tactics).

            The test harness/test setup is essentially this and fails without --privileged (I'm simplifying to all caps and switching off seccomp and apparmor in order to cut through to the real meat):

            ...

            ANSWER

            Answered 2021-Jan-30 at 16:26

Quite a bit more digging turned up this answer to "About mounting and unmounting inherited mounts inside a newly-created mount namespace", which points in the right direction but needs additional explanation (not least because it builds on a misleading paragraph about mount namespaces being hierarchical in the man pages, which Michael Kerrisk fixed some time ago).

Our starting point is when runc sets up the (test) container: to mask system paths, especially in the container's future /proc tree, it creates a set of new mounts, either masking individual files using /dev/null or masking subdirectories using tmpfs. This results in procfs being mounted on /proc, together with further sub-mounts.

            Now the test container starts and at some point a process unshares into a new user namespace. Please keep in mind that this new user namespace (again) belongs to the (real) root user with UID 0, as a default Docker installation won't enable running containers in new user namespaces.

            Next, the test process also unshares into a new mount namespace, so this new mount namespace belongs to the newly created user namespace, but not to the initial user namespace. According to section "restrictions on mount namespaces" in mount_namespaces(7):

            If the new namespace and the namespace from which the mount point list was copied are owned by different user namespaces, then the new mount namespace is considered less privileged.

            Please note that the criterion here is: the "donor" mount namespace and the new mount namespace have different user namespaces; it doesn't matter whether they have the same owner user (UID), or not.

            The important clue now is:

            Mounts that come as a single unit from a more privileged mount namespace are locked together and may not be separated in a less privileged mount namespace. (The unshare(2) CLONE_NEWNS operation brings across all of the mounts from the original mount namespace as a single unit, and recursive mounts that propagate between mount namespaces propagate as a single unit.)

As it is now no longer possible to separate the /proc mount point from the masking submounts, it is not possible to (re)mount /proc (question 1). In the same sense, it is impossible to unmount /proc/kcore, because that would allow unmasking (question 2).

Now, deploying the test container with --security-opt systempaths=unconfined results in a single /proc mount only, without any of the masking submounts. In consequence, and according to the man-page rules cited above, there is only a single mount, which we are allowed to (re)mount, subject to the CAP_SYS_ADMIN capability (which, besides tons of other interesting functionality, also covers mounting).

            Please note that it is possible to unmount masked /proc/ paths inside the container while still in the original (=initial) user namespace and when possessing (not surprisingly) CAP_SYS_ADMIN. The (b)lock only kicks in with a separate user namespace, hence some projects striving for deploying containers in their own new user namespaces (which unfortunately has effects not least on container networking).
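The locking rule from mount_namespaces(7) can be modeled as a toy data structure. This is a purely illustrative sketch; MountGroup and the helper functions are inventions for this example, not a kernel API:

```python
from dataclasses import dataclass

@dataclass
class MountGroup:
    """Toy model of mounts that propagate between namespaces as one unit."""
    mounts: list            # e.g. ["/proc", "/proc/kcore"]: mount plus masking submounts
    locked: bool = False    # set once copied into a less privileged namespace

def unshare_mountns(groups: list, new_userns: bool) -> list:
    """Model CLONE_NEWNS: copy all mount groups into a new mount namespace.

    Per mount_namespaces(7), the copy is less privileged (and its groups
    locked) exactly when the new mount namespace is owned by a different
    user namespace than the donor's.
    """
    return [MountGroup(list(g.mounts), g.locked or new_userns) for g in groups]

def can_unmount(group: MountGroup, path: str) -> bool:
    """A submount may not be separated from a locked multi-mount group."""
    return not (group.locked and len(group.mounts) > 1 and path in group.mounts)

# runc's masking yields /proc plus submounts; after unsharing into a new
# user+mount namespace the group is locked, so /proc/kcore stays masked:
masked = unshare_mountns([MountGroup(["/proc", "/proc/kcore"])], new_userns=True)[0]
print(can_unmount(masked, "/proc/kcore"))  # False

# With systempaths=unconfined there is a single /proc mount, nothing to separate:
plain = unshare_mountns([MountGroup(["/proc"])], new_userns=True)[0]
print(can_unmount(plain, "/proc"))  # True
```

The same model also reflects the last point above: staying in the original user namespace (new_userns=False) leaves the group unlocked, so masked paths can be unmounted given CAP_SYS_ADMIN.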

            Source https://stackoverflow.com/questions/65917162

            QUESTION

            In grails 4.0.5, moving datasource definition to runtime.groovy causes DB not to be created during tests
            Asked 2020-Nov-14 at 15:22

            I need to move the configuration of the datasource to runtime.groovy, because that configuration code need to access some of my classes.

In previous versions of Grails, this was not an issue. However, I find that if I move the environments block and the default datasource block to runtime.groovy, Hibernate will not create the database, and my functional tests fail, obviously.
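For reference, an environments/dataSource block of the kind being moved typically looks like this. This is a generic in-memory H2 example with stock Grails-style settings, not the asker's actual configuration:

```groovy
environments {
    test {
        dataSource {
            pooled = true
            dbCreate = "create-drop"   // lets Hibernate create the schema for tests
            driverClassName = "org.h2.Driver"
            url = "jdbc:h2:mem:testDb;DB_CLOSE_DELAY=-1"
            username = "sa"
            password = ""
        }
    }
}
```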

            This is the configuration that I removed from application.yml:

            ...

            ANSWER

            Answered 2020-Nov-14 at 14:01

The config you show for your runtime.groovy should be in grails-app/conf/application.groovy (it does not need to be in runtime.groovy). The problem with your config is you have this:

            Source https://stackoverflow.com/questions/64829311

            QUESTION

            Azure batch windows container workloads remove quotes from provided command line
            Asked 2020-Aug-11 at 21:19

When using Windows container workloads on Azure Batch, quotes are stripped from command-line arguments if they do not contain a space.

We are using the newest C# SDK 13.0.0 and node VMs with the SKU Windows Server 2019 with containers.

Repro: create a job and a task running inside a Docker container (e.g. based on the Docker image mcr.microsoft.com/windows/servercore:ltsc2019), use a command line of cmd /S /C mkdir "c:/foo", and observe that inside the Docker container the command is executed as cmd /S /C mkdir c:/foo, which will fail.

The same issue is described in this opencontainers commit: https://github.com/opencontainers/runtime-spec/commit/deb4d954eafc4fc65f04c00a08e08c3e69df32d0

Edit: I realized that this was more a statement than a question... so here is the question: what is a workaround/fix for this behavior?

Edit (powershell -EncodedCommand solution): I accepted the environment-variable solution, but we are using a different one. We use https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_powershell_exe?view=powershell-5.1#-encodedcommand-base64encodedcommand; doing so hides quotes and backslashes from CommandLineToArgvW and Docker, meaning the command line gets put together properly inside the Docker container on the Azure Batch VM. (In C# this is done by var base64EncodedCmd = System.Convert.ToBase64String(System.Text.Encoding.Unicode.GetBytes(cmd));.) Take care to escape PowerShell properly as well; e.g. the encoded command could be something like "& my.exe --% --input \\\"userprovidedparam\\"" (using PowerShell's "stop parsing" token --%), as otherwise e.g. a value of $(Get-Date) would be evaluated.
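The -EncodedCommand trick is simply Base64 over the UTF-16LE bytes of the command. A sketch of the equivalent of the C# one-liner above, in Python for illustration:

```python
import base64

def encode_powershell_command(cmd: str) -> str:
    # Equivalent to C# Convert.ToBase64String(Encoding.Unicode.GetBytes(cmd)):
    # powershell -EncodedCommand expects Base64 over UTF-16LE ("Unicode") bytes.
    return base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")

encoded = encode_powershell_command('Write-Output "hello"')
# The Base64 string contains no quotes or backslashes for the host or
# CommandLineToArgvW to mangle, and PowerShell decodes it back unchanged:
assert '"' not in encoded and "\\" not in encoded
assert base64.b64decode(encoded).decode("utf-16-le") == 'Write-Output "hello"'
```

Because the Base64 alphabet contains neither quotes nor backslashes, the encoded form survives every quoting layer between Azure Batch and the container intact.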

            ...

            ANSWER

            Answered 2020-Aug-06 at 22:41

            Given this is an issue with the runtime, a potential workaround is to create a .cmd or .bat file that contains the commands you would like to run. You can associate this file as a ResourceFile or bake it into a custom container.

            Source https://stackoverflow.com/questions/63250857

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install runtime-spec

            You can download it from GitHub.

            Support

Development of the spec happens on GitHub. Issues are used for bugs and actionable items, and longer discussions can happen on the mailing list. The specification and code are licensed under the Apache 2.0 license found in the LICENSE file.
CLONE

• HTTPS: https://github.com/opencontainers/runtime-spec.git

• CLI: gh repo clone opencontainers/runtime-spec

• SSH: git@github.com:opencontainers/runtime-spec.git
