s2i | OpenShift S2I images for Java and Karaf applications | Cloud library
kandi X-RAY | s2i Summary
OpenShift S2I images for Java and Karaf applications
Trending Discussions on s2i
QUESTION
I have set up a VON network on my device and am currently trying to set up Permitify on Windows 10. After running ./manage build it shows an error like this:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: docker.io/bcgovimages/von-image:py36-indy1.3.1-dev-441-ew-s2i: not found
I am using Docker version 20.10.12. Any help is appreciated.
...ANSWER
Answered 2022-Feb-20 at 14:34
There is no tag called py36-indy1.3.1-dev-441-ew-s2i on the bcgovimages/von-image image on Docker Hub. It might have been removed and replaced with a newer one. I quickly tried finding one that looked like your tag, but didn't have any luck.
I'd try using bcgovimages/von-image:py36-1.16-1, since that's the newest one that starts with py36.
You can check what tags are available here: https://hub.docker.com/r/bcgovimages/von-image/tags?page=1
QUESTION
I am trying to build a Dockerfile using ubi8/s2i-base:rhel8.5.
The start of my Dockerfile looks like
...ANSWER
Answered 2021-Nov-12 at 04:07
The error is right in the message: pull access denied. You also have a second issue in your FROM statement, in that you aren't specifying the full path to the image you linked.
So you have two things to resolve:
- When you pull from Red Hat's registry, you need to provide the full URL and not just the image name. For the image you linked, that means FROM registry.redhat.io/ubi8/s2i-base:rhel8.5 AS builder.
- When you use Red Hat's registry, they want you to create a service account and log in with it. They provide instructions on how to do this under the "Get This Image" tab, but in short you need to run docker login against registry.redhat.io before you run your docker build command.
After you've resolved those issues, you have a third issue in your Dockerfile:
- You're using the apk package manager to install packages, which is the package manager for Alpine Linux. The image you're attempting to extend uses the yum package manager instead (you can see this in its provided Dockerfile), so you will need to use yum to install your dependencies.
Between those changes, your pull access should be authorized correctly and your build issues resolved.
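Putting the three fixes together, the top of the corrected Dockerfile might look like the following sketch. The packages installed here (gcc, make) are placeholders for illustration, not taken from the original question:

```dockerfile
# Full registry path, as described in the answer above
FROM registry.redhat.io/ubi8/s2i-base:rhel8.5 AS builder

# UBI images use yum (not apk, which is Alpine's package manager)
RUN yum install -y --setopt=tsflags=nodocs gcc make && \
    yum clean all
```

Remember that the build will still fail with "pull access denied" unless you have first run docker login registry.redhat.io with your Red Hat service account credentials.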
QUESTION
I have a challenge with logging in OpenShift with Quarkus: the JSON format I'm trying to get works when testing locally, but when running on OpenShift I keep getting plain log lines. The problem with this is that log messages end up as single lines in the Elasticsearch database and added MDC values are not included.
Main questions:
- Is this default behavior on OpenShift OKD, or might it be a problem with the base image?
- Is there a way to adjust this behavior?
- How else might I make sure that logs are processed in JSON style for my specific project in OpenShift OKD?
Locally, when run with mvnw.cmd compile quarkus:dev, the log is in JSON format, e.g.:
ANSWER
Answered 2020-Oct-26 at 12:28
On OpenShift the log manager was of type java.util.logging.LogManager, while on the local system it was of type org.jboss.logmanager.LogManager.
By starting the Java process on OpenShift with the property -Djava.util.logging.manager=org.jboss.logmanager.LogManager, the log was written as JSON.
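As a sketch of how that flag can be applied in practice: the OpenShift Java S2I base images honor a JAVA_OPTIONS environment variable that is appended to the JVM command line, so one option (assuming such a base image) is to set the property there on the deployment:

```yaml
# Container env fragment for the deployment; the variable name
# JAVA_OPTIONS is what the fabric8/OpenShift Java base images read,
# other images may use a different variable (e.g. JAVA_OPTS).
env:
  - name: JAVA_OPTIONS
    value: "-Djava.util.logging.manager=org.jboss.logmanager.LogManager"
```

Verify the flag took effect by checking the pod's startup log for the JSON-formatted lines you see locally.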
QUESTION
I am very new to OpenShift. I have an Azure Red Hat OpenShift cluster deployed and have access to the web console as administrator.
I have deployed a sample application inside the OpenShift cluster using the built-in S2I mode by directly providing the GitHub URL, and the app is deployed successfully.
Now my requirement is to implement DNSConfig and ingress/egress NetworkPolicy.
I am very new and couldn't understand what exactly DNSConfig and ingress/egress NetworkPolicy are. Can someone please explain what they are and how I can apply them to my demo application deployed in that Azure OpenShift console?
Thanks in advance.
...ANSWER
Answered 2020-Oct-21 at 09:08
Your best bet would be to get familiar with the official docs.
A Pod's DNS config allows users more control over the DNS settings for a Pod.
The dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
Below are the properties a user can specify in the dnsConfig field:
- nameservers: a list of IP addresses that will be used as DNS servers for the Pod. There can be at most 3 IP addresses specified. When the Pod's dnsPolicy is set to "None", the list must contain at least one IP address; otherwise this property is optional. The servers listed will be combined with the base nameservers generated from the specified DNS policy, with duplicate addresses removed.
- searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
- options: an optional list of objects where each object may have a name property (required) and a value property (optional). The contents of this property will be merged with the options generated from the specified DNS policy. Duplicate entries are removed.
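The dnsConfig properties above can be sketched in a Pod spec like this (the nameserver, search domain, and image are illustrative placeholders, not values from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
    - name: demo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "3600"]
  # With dnsPolicy "None", dnsConfig is mandatory
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.0.2.1          # at most 3 entries
    searches:
      - ns1.svc.cluster.local
    options:
      - name: ndots        # name is required
        value: "2"         # value is optional
```

Inside the Pod, the resulting settings can be inspected in /etc/resolv.conf.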
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace- based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
These topics are explained in more detail in the linked documentation. You can also find examples there that will help you implement them in your use case.
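A minimal NetworkPolicy combining the selector-based and IP-block-based styles described above might look like the following sketch (the label app: demo and the CIDR range are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  # Selector: which Pods this policy applies to
  podSelector:
    matchLabels:
      app: demo
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow traffic only from other Pods in the same namespace
    - from:
        - podSelector: {}
  egress:
    # Allow outbound traffic only to this IP block (CIDR range)
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16
```

Note that as soon as any NetworkPolicy selects a Pod, traffic not explicitly allowed by some policy is denied for the policy types listed.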
QUESTION
I am trying to build an image with s2i from a local source code repository, following this documentation: https://github.com/openshift/source-to-image/blob/master/docs/cli.md . So far I have managed to create the image and generate the s2i scripts using s2i create test-image s2i_scripts. After that I tried to build the image locally using s2i build . test-image test-image-app, running this command while in the repository directory.
The result I get after trying to build:
...ANSWER
Answered 2020-Oct-09 at 12:10
The reason it was failing was that I hadn't pulled a base image to use for my source code. So I pulled a CentOS 7 base image with podman pull centos/python-36-centos7, and then the build worked: s2i build . centos/python-36-centos7 test-image-app
QUESTION
For context, what I am trying to do is create a top-down display in which the screen is just text that is printed every time the player gives input. What this script does is move the Y coordinate upward so that I can work out how to move the player when input is given.
...ANSWER
Answered 2020-Jul-24 at 23:40
What happened is that when you
QUESTION
I am trying to move a Spring Boot application onto an OpenShift 3 cluster. I am using the Maven fabric8 plugin to generate most of the OpenShift boilerplate config as well as to perform the S2I build. When my pod starts up I can see in the log output that the application starts, but right after Spring Boot falls back to the default profile (I didn't set a profile yet) the app crashes, and the only output I see in the OpenShift log is "Killed". I couldn't find anything of value googling, except that it could be OpenJDK trying to grab more memory than is available for a single pod. I added a fabric8 fragment that limits the memory a single container is able to use, but I still get the same error when I start up the pod. I ran oc describe pod and saw exit code 143. I'm running out of ideas and would appreciate any suggestions on how to debug this further or resolve this sort of issue.
Also, not sure if this is related, but even though my application.yml is set up to enable SSL, the route that fabric8 creates is always an HTTP URL and not an HTTPS URL. I'm wondering if that could be the cause of the pod going into a crash loop, as the readinessProbe and livenessProbe can't hit the actuator endpoints.
The console output, maven pom file, application.yml and fabric8 config is below.
...ANSWER
Answered 2020-Jun-28 at 09:13
Add the below in the application.yml
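The snippet from the original answer was not captured on this page. As a hedged illustration only (not the author's fix): exit code 143 is SIGTERM, which on OpenShift often means the container was killed for exceeding its memory limit, so a common mitigation is to cap the JVM heap below the container limit in the fabric8 resource fragment. All values below are hypothetical:

```yaml
# fabric8 deployment fragment (src/main/fabric8/deployment.yml);
# memory values and the JAVA_OPTIONS variable name are illustrative
spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              memory: 256Mi
            limits:
              memory: 512Mi    # container ceiling
          env:
            - name: JAVA_OPTIONS
              value: "-Xmx384m" # keep heap below the container limit
```

Comparing oc describe pod output (OOMKilled vs. a failed probe) helps distinguish a memory kill from the probe theory raised in the question.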
QUESTION
I am running the code below in a Google Colab notebook. I am facing a very odd problem related to a File Not Found error. Both 'atis.test.pkl' and 'atis.train.pkl' are in the same folder (c:\users\downloads). The code is not throwing an error for 'atis.train.pkl', but for 'atis.test.pkl' it gives the error 'File not Found'.
The traceback is as follows:
...ANSWER
Answered 2020-Mar-26 at 20:43
Colab runs on a cloud virtual machine and has no access to your local filesystem (i.e. paths such as c:\users\downloads\). To access files from Python scripts running in Colaboratory, you will first have to upload those files to the Colab VM.
This notebook outlines how to upload files from various sources: https://colab.research.google.com/notebooks/io.ipynb
QUESTION
On my local workstation I'm using the following:
- OpenShift: 4.2.13
- CRC version: 1.4.0+d5bb3a3
I'm trying to use the S2I process to deploy an application to my local cluster, starting from a base image and source code stored in a GitHub repository.
To do that I have:
- Created a kubernetes.io/ssh-auth secret for pulling the source code, using the UI
- Created a kubernetes.io/dockerconfigjson secret (called quayio) for pulling the image from quay.io
- Linked the registry secret to the builder and default service accounts
ANSWER
Answered 2020-Mar-03 at 07:14
I found the problem: the image stream tag was wrong.
Here is the correct oc command:
QUESTION
I tried to build a minimal WildFly distribution with Galleon via s2i, which turned out pretty well so far. But now my application has missing dependencies for batch.
So I tried to add batch to GALLEON_PROVISION_LAYERS, but it seems there is no batch layer.
Here is my simplified s2i:
...ANSWER
Answered 2020-Feb-14 at 10:34
There is no layer for batch yet; you must use the default config.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network