shell-core | shell functions designed to ease the development | Command Line Interface library
kandi X-RAY | shell-core Summary
A library of shell functions designed to ease the development of shell scripts written for both bash and zsh.
Community Discussions
Trending Discussions on shell-core
QUESTION
I require the use of F# 4.5 running on the .NET Framework (not .NET Core). I would like this environment running in a docker container as it'll run periodically on our Jenkins build server. I thought that I could use the existing .NET Framework SDK Image but it only has F# for .NET Core/.NET 5.
So I attempted to install F# into the running container (should that work, I would add it to the image itself) but I am not having any luck. Here was my attempt...
- Create a project folder
- Download vs_BuildTools.exe to project
- Create a Dockerfile using the .NET Framework SDK 4.8 image
...
ANSWER
Answered 2021-May-21 at 17:29
I was able to solve the issue by installing the F# Compiler Tools using paket. The F# Compiler Tools for F# 4.5 run on .NET Framework (or Mono), unlike F# 5, which runs on .NET Core (or .NET 5).
DETAILS
Create the Dockerfile
My Dockerfile doesn't look much different from before. I still base it on the Microsoft .NET Framework SDK 4.8 image, as I want access to the .NET SDK.
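As a sketch of the paket approach, the paket.dependencies entry might look like this (the answer only names the package; the NuGet source URL and leaving the version unpinned are assumptions):

```text
source https://api.nuget.org/v3/index.json

nuget FSharp.Compiler.Tools
```

Running paket install should then place fsc.exe under packages/FSharp.Compiler.Tools/tools, giving the container an F# compiler that targets .NET Framework.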
QUESTION
I have a Windows Server 2019 that works as a build node for Jenkins. The Windows box does not have a GUI, there is only SSH access that drops into PowerShell Core. The box has been configured with Ansible and all software is installed using Chocolatey.
...
ANSWER
Answered 2021-Mar-15 at 12:43
That loader loads NUnit projects. You want the VSProjectLoader. Install with:
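A hedged sketch of installing that extension (the package IDs below are my assumptions, not given in the answer; verify them on chocolatey.org and nuget.org before use):

```powershell
# Via Chocolatey (package id is an assumption -- verify first):
choco install nunit-extension-vs-project-loader

# Or via NuGet, into the folder next to the NUnit console runner,
# so the engine can discover the extension:
nuget install NUnit.Extension.VSProjectLoader
```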
QUESTION
I have three Azure Pipeline agents built on Ubuntu 18.04 images and deployed to a Kubernetes cluster. Agents are running the latest version, 2.182.1, but this problem also happened using 2.181.0.
Executing build pipelines individually works just fine. Build completes successfully every time. But whenever a second pipeline starts while another pipeline is already running, it fails - every time - on the "Checkout" job with the following error:
The working folder U:\azp\agent\_work\1\s is already in use by the workspace ws_1_34;Project Collection Build Service (myaccount) on computer linux-agent-deployment-78bfb76d.
These are three separate and distinct agents running as separate containers. Why would a job from one container be impacting a job running on a different container? Concurrent builds work all day long on my non-container Windows servers.
The container agents are deployed as a standard Kubernetes "deployment" object:
...
ANSWER
Answered 2021-Mar-05 at 17:59
Solution has been found. Here's how I resolved this, for anyone coming across this post:
I discovered a Helm chart for Azure Pipelines agents - emberstack/docker-azure-pipelines-agent - and after poking around in the contents, discovered what had been staring me in the face for the last couple of days: "StatefulSets".
Simple, easy to test, and working well so far. I refactored my k8s manifest as a StatefulSet object and the agents are up and able to run builds concurrently. Still more testing to do, but looking very positive at this point.
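A minimal sketch of that refactoring (the names, image, and token secret below are assumptions for illustration, not the poster's actual manifest). A StatefulSet gives each pod a stable identity (azp-agent-0, azp-agent-1, ...), so each agent registers under a consistent name and their workspaces stop colliding:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: azp-agent
spec:
  serviceName: azp-agent        # required for StatefulSets
  replicas: 3
  selector:
    matchLabels:
      app: azp-agent
  template:
    metadata:
      labels:
        app: azp-agent
    spec:
      containers:
      - name: azp-agent
        image: myregistry/azp-agent:latest   # assumption: your agent image
        env:
        - name: AZP_URL
          value: https://dev.azure.com/myaccount
        - name: AZP_POOL
          value: k8s-agents
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azp-token
              key: token
```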
QUESTION
How do I find out more information about xml-rpc for PowerShell?
ANSWER
Answered 2020-Nov-15 at 22:22
There is no XML-RPC functionality built into PowerShell.
Providing such functionality therefore requires third-party code.
In your case, it seems your script requires the XmlRpc module, available from the PowerShell Gallery (which means you can install it with Install-Module XmlRpc).
However, this module was last updated more than 5 years ago, predating PowerShell's cross-platform edition, PowerShell (Core), so you'll have to find out yourself if it works on a Unix-like platform such as Ubuntu.
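A quick way to check for yourself (standard PowerShell Gallery commands; whether the module then actually works on Ubuntu is exactly what you'd be testing):

```powershell
# Install from the PowerShell Gallery for the current user only.
Install-Module XmlRpc -Scope CurrentUser

# Load it and list what it exposes; errors here suggest
# the module isn't compatible with this platform.
Import-Module XmlRpc
Get-Command -Module XmlRpc
```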
QUESTION
I'm new to GitLab and not sure if this is possible. I have an on-premises GitLab set up and working, as well as an Artifactory private repo.
I have always used a DinD configuration: Docker as the main image with a DinD service, then in the stages logging in and pulling different images from the private repo.
But I heard it's possible to do this without DinD, to get shorter execution times, with the needed image pulled at the beginning of a stage.
Instead of this:
...
ANSWER
Answered 2020-Jan-13 at 18:33
Your CI is failing because it doesn't know the credentials for artifactory.example.com when downloading the base image artifactory.example.com:5000/repo/powershell-core:6.3.
To help you understand, I'll explain the different steps of the two CI configurations you gave, and then point toward a solution.
In your first CI (the one using DinD), this is what happens:
- The GitLab-runner executor downloads the image docker:18.09.8-dind and starts it as a service for your CI.
- The GitLab-runner executor downloads the base image docker:latest and uses it to execute your job run_script.
- Inside docker:latest, the job run_script logs in to your private repository with your credentials, through the DinD service.
- It then downloads your image artifactory.example.com:5000/repo/powershell-core:6.3 and runs a script using it, all through the DinD service.
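Those steps correspond to a .gitlab-ci.yml shaped roughly like this (a reconstruction from the description, not the poster's actual file; the variable names and the final command are assumptions):

```yaml
run_script:
  image: docker:latest
  services:
    - docker:18.09.8-dind
  before_script:
    # Credentials assumed to be stored as CI/CD variables.
    - docker login -u "$ARTIFACTORY_USER" -p "$ARTIFACTORY_PASS" artifactory.example.com:5000
  script:
    - docker pull artifactory.example.com:5000/repo/powershell-core:6.3
    - docker run artifactory.example.com:5000/repo/powershell-core:6.3 pwsh -c 'Write-Host "hello"'
```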
In your second CI, you are simply trying to run your image artifactory.example.com:5000/repo/powershell-core:6.3 and execute a script with it.
You are right: for a simple goal like that, no DinD is necessary.
Here is what this CI is doing:
- The GitLab-runner executor tries to download the base image specified by the job run_script: artifactory.example.com:5000/repo/powershell-core:6.3.
- The repository artifactory.example.com asks for credentials.
- The executor doesn't know any credentials, so it returns an error and stops.
As you can see, the job run_script was never executed, because the executor failed to download the base image specified by the job. The before_script part of the job, which is responsible for the login, is not executed either, because the before_script runs inside the base image, and that image couldn't be downloaded by the executor.
Thus, the solution is simply to give the credentials to the executor so it can login and then download the base image of your job.
Also, the before_script part of your job should be removed, because it does not run at the time you intended and is therefore unnecessary.
So what you need is a way to give the credentials for your repository artifactory.example.com to the GitLab-runner executor your job is using.
Sadly, there is no unique way to do such a thing because it depends on the executor that you are using.
Since you didn't specify the executor in your question, I'll give the solutions that, I think, are the most used and convenient for Docker and Kubernetes.
DOCKER_AUTH_CONFIG
One solution, which works with several executors, is to define a (secret) variable directly in GitLab, as explained in the GitLab documentation. Here, I present the second method it describes to prepare your credentials. Adapt the following to generate your own auth field:
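As a sketch of that method (the username, password, and registry host below are placeholders, not real credentials): base64-encode "username:password", then store the resulting JSON in a CI/CD variable named DOCKER_AUTH_CONFIG:

```shell
# Base64-encode "username:password" for the registry.
# (openssl base64 -A produces the same value.)
AUTH=$(printf 'my_username:my_password' | base64)

# The JSON value to store in the DOCKER_AUTH_CONFIG variable:
cat <<EOF
{
  "auths": {
    "artifactory.example.com:5000": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```

The runner reads DOCKER_AUTH_CONFIG before pulling the job's base image, which is exactly the step that was failing here.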
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported