workflow | Neuro4j Workflow is a lightweight workflow engine | BPM library
- Writes the message to System.out.
- Creates a UUID.
- Creates a node.
- Looks up a custom node.
- Checks the type of the parameter.
- Retrieves the configuration for a command.
- Evaluates a parameter value.
- Gets the next transition.
- Executes the given flow.
- Parses a flow parameter.
Trending Discussions on workflow
QUESTION
I have been using GitHub Actions for quite some time, but today my deployments started failing. Below is the error from the GitHub Actions logs:
Command: git
Arguments: ls-remote --tags --heads git://github.com/adobe-webplatform/eve.git
Directory: /home/runner/work/stackstream-fe/stackstream-fe
Output:
fatal: remote error:
The unauthenticated git protocol on port 9418 is no longer supported.
Upon investigation, it appears that the below section in my yml file is causing the issue.
- name: Installing modules
run: yarn install
I have looked into this change log but can't seem to comprehend the issue.
Additional details: the server is an EC2 instance. GitHub Actions steps:
steps:
- name: Checkout
uses: actions/checkout@v2
- id: vars
run: |
if [ '${{ github.ref }}' == 'refs/heads/master' ]; then echo "::set-output name=environment::prod_stackstream" ; echo "::set-output name=api-url::api" ; elif [ '${{ github.ref }}' == 'refs/heads/staging' ]; then echo "::set-output name=environment::staging_stackstream" ; echo "::set-output name=api-url::stagingapi" ; else echo "::set-output name=environment::dev_stackstream" ; echo "::set-output name=api-url::devapi" ; fi
- uses: pCYSl5EDgo/cat@master
id: slack
with:
path: .github/workflows/slack.txt
- name: Slack Start Notification
uses: 8398a7/action-slack@v3
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
ENVIRONMENT: '`${{ steps.vars.outputs.environment }}`'
COLOR: good
STATUS: '`Started`'
with:
status: custom
fields: workflow,job,commit,repo,ref,author,took
custom_payload: |
${{ steps.slack.outputs.text }}
- name: Installing modules
env:
REACT_APP_API_URL: 'https://${{ steps.vars.outputs.api-url }}mergestack.com/api/v1'
run: yarn install
- name: Create Frontend Build
env:
REACT_APP_API_URL: 'https://${{ steps.vars.outputs.api-url }}mergestack.com/api/v1'
run: yarn build
- name: Deploy to Frontend Server DEV
if: ${{ contains(github.ref, 'dev') }}
uses: easingthemes/ssh-deploy@v2.1.5
env:
SSH_PRIVATE_KEY: ${{ secrets.DEV_KEY }}
ARGS: '-rltgoDzvO --delete'
SOURCE: 'deploy/'
REMOTE_HOST: ${{ secrets.DEV_HOST }}
REMOTE_USER: plyfolio-dev
TARGET: '/home/plyfolio-dev/${{ steps.vars.outputs.environment }}/fe/deploy'
package.json file
{
"name": "stackstream-fe",
"version": "1.0.0",
"authors": [
"fayyaznofal@gmail.com"
],
"private": true,
"dependencies": {
"@fortawesome/fontawesome-svg-core": "^1.2.34",
"@fortawesome/free-solid-svg-icons": "^5.15.2",
"@fortawesome/react-fontawesome": "^0.1.14",
"@fullcalendar/bootstrap": "^5.5.0",
"@fullcalendar/core": "^5.5.0",
"@fullcalendar/daygrid": "^5.5.0",
"@fullcalendar/interaction": "^5.5.0",
"@fullcalendar/react": "^5.5.0",
"@lourenci/react-kanban": "^2.1.0",
"@redux-saga/simple-saga-monitor": "^1.1.2",
"@testing-library/jest-dom": "^5.11.9",
"@testing-library/react": "^11.2.3",
"@testing-library/user-event": "^12.6.0",
"@toast-ui/react-chart": "^1.0.2",
"@types/jest": "^26.0.14",
"@types/node": "^14.10.3",
"@types/react": "^16.9.49",
"@types/react-dom": "^16.9.8",
"@vtaits/react-color-picker": "^0.1.1",
"apexcharts": "^3.23.1",
"availity-reactstrap-validation": "^2.7.0",
"axios": "^0.21.1",
"axios-mock-adapter": "^1.19.0",
"axios-progress-bar": "^1.2.0",
"bootstrap": "^5.0.0-beta2",
"chart.js": "^2.9.4",
"chartist": "^0.11.4",
"classnames": "^2.2.6",
"components": "^0.1.0",
"dotenv": "^8.2.0",
"draft-js": "^0.11.7",
"echarts": "^4.9.0",
"echarts-for-react": "^2.0.16",
"firebase": "^8.2.3",
"google-maps-react": "^2.0.6",
"history": "^4.10.1",
"i": "^0.3.6",
"i18next": "^19.8.4",
"i18next-browser-languagedetector": "^6.0.1",
"jsonwebtoken": "^8.5.1",
"leaflet": "^1.7.1",
"lodash": "^4.17.21",
"lodash.clonedeep": "^4.5.0",
"lodash.get": "^4.4.2",
"metismenujs": "^1.2.1",
"mkdirp": "^1.0.4",
"moment": "2.29.1",
"moment-timezone": "^0.5.32",
"nouislider-react": "^3.3.9",
"npm": "^7.6.3",
"prop-types": "^15.7.2",
"query-string": "^6.14.0",
"react": "^16.13.1",
"react-apexcharts": "^1.3.7",
"react-auth-code-input": "^1.0.0",
"react-avatar": "^3.10.0",
"react-bootstrap": "^1.5.0",
"react-bootstrap-editable": "^0.8.2",
"react-bootstrap-sweetalert": "^5.2.0",
"react-bootstrap-table-next": "^4.0.3",
"react-bootstrap-table2-editor": "^1.4.0",
"react-bootstrap-table2-paginator": "^2.1.2",
"react-bootstrap-table2-toolkit": "^2.1.3",
"react-chartist": "^0.14.3",
"react-chartjs-2": "^2.11.1",
"react-color": "^2.19.3",
"react-confirm-alert": "^2.7.0",
"react-content-loader": "^6.0.1",
"react-countdown": "^2.3.1",
"react-countup": "^4.3.3",
"react-cropper": "^2.1.4",
"react-data-table-component": "^6.11.8",
"react-date-picker": "^8.0.6",
"react-datepicker": "^3.4.1",
"react-dom": "^16.13.1",
"react-draft-wysiwyg": "^1.14.5",
"react-drag-listview": "^0.1.8",
"react-drawer": "^1.3.4",
"react-dropzone": "^11.2.4",
"react-dual-listbox": "^2.0.0",
"react-facebook-login": "^4.1.1",
"react-flatpickr": "^3.10.6",
"react-google-login": "^5.2.2",
"react-hook-form": "^7.15.2",
"react-i18next": "^11.8.5",
"react-icons": "^4.2.0",
"react-image-lightbox": "^5.1.1",
"react-input-mask": "^2.0.4",
"react-jvectormap": "^0.0.16",
"react-leaflet": "^3.0.5",
"react-meta-tags": "^1.0.1",
"react-modal-video": "^1.2.6",
"react-notifications": "^1.7.2",
"react-number-format": "^4.7.3",
"react-perfect-scrollbar": "^1.5.8",
"react-rangeslider": "^2.2.0",
"react-rating": "^2.0.5",
"react-rating-tooltip": "^1.1.6",
"react-redux": "^7.2.1",
"react-responsive-carousel": "^3.2.11",
"react-router-dom": "^5.2.0",
"react-script": "^2.0.5",
"react-scripts": "3.4.3",
"react-select": "^4.3.1",
"react-sparklines": "^1.7.0",
"react-star-ratings": "^2.3.0",
"react-super-responsive-table": "^5.2.0",
"react-switch": "^6.0.0",
"react-table": "^7.6.3",
"react-toastify": "^7.0.3",
"react-toastr": "^3.0.0",
"react-twitter-auth": "0.0.13",
"reactstrap": "^8.8.1",
"recharts": "^2.0.8",
"redux": "^4.0.5",
"redux-saga": "^1.1.3",
"reselect": "^4.0.0",
"sass": "^1.37.5",
"simplebar-react": "^2.3.0",
"styled": "^1.0.0",
"styled-components": "^5.2.1",
"toastr": "^2.1.4",
"typescript": "^4.0.2",
"universal-cookie": "^4.0.4"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "^2.27.0",
"@typescript-eslint/parser": "^2.27.0",
"@typescript-eslint/typescript-estree": "^4.15.2",
"eslint-config-prettier": "^6.10.1",
"eslint-plugin-prettier": "^3.1.2",
"husky": "^4.2.5",
"lint-staged": "^10.1.3",
"prettier": "^1.19.1",
"react-test-renderer": "^16.13.1",
"redux-devtools-extension": "^2.13.8",
"redux-mock-store": "^1.5.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build && mv build ./deploy/build",
"build-local": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},
"lint-staged": {
"*.{js,ts,tsx}": [
"eslint --fix"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
ANSWER
Answered 2022-Mar-16 at 07:01
First, this error message is indeed expected as of Jan. 11th, 2022.
See "Improving Git protocol security on GitHub":
January 11, 2022: Final brownout.
This is the full brownout period where we’ll temporarily stop accepting the deprecated key and signature types, ciphers, and MACs, and the unencrypted Git protocol.
This will help clients discover any lingering use of older keys or old URLs.
Second, check your package.json dependencies for any git:// URL, as in this example, fixed in this PR.
As noted by Jörg W Mittag:
There was a 4-month warning. The entire Internet has been moving away from unauthenticated, unencrypted protocols for a decade; it's not like this is a huge surprise. Personally, I consider it less an "issue" and more "detecting unmaintained dependencies".
Plus, this is still only the brownout period, so the protocol will only be disabled for a short period of time, allowing developers to discover the problem. The permanent shutdown is not until March 15th.
For GitHub Actions, as in actions/checkout issue 14, you can add as a first step:
- name: Fix up git URLs
run: echo -e '[url "https://github.com/"]\n insteadOf = "git://github.com/"' >> ~/.gitconfig
That will change any git://github.com/ into https://github.com/.
For all your repositories, you can set:
git config --global url."https://github.com/".insteadOf git://github.com/
You can also use SSH, but GitHub Security reminds us that, as of March 15th, 2022, GitHub stopped accepting DSA keys. RSA keys uploaded after Nov 2, 2021 will work only with SHA-2 signatures.
The deprecated MACs, ciphers, and unencrypted Git protocol are permanently disabled.
So this (with the right key) would work:
git config --global url."git@github.com:".insteadOf git://github.com/
That will change any git://github.com/ (unencrypted Git protocol) into git@github.com: (SSH URL).
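To find which dependency is still fetched over git://, one quick check is to grep the manifest and lockfile (a simple sketch; adjust the file names to whichever lockfile your project uses):
grep -n "git://" package.json yarn.lock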
QUESTION
GitHub Actions were working in my repository until yesterday. I didn't make any changes in the .github/workflows/dev.yml file or in the Dockerfile.
But suddenly, in recent pushes, my GitHub Actions fail with the error:
Setup, Build, Publish, and Deploy
Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under
'/home/runner/work/_actions/GoogleCloudPlatform/github-actions/master/setup-gcloud'.
Did you forget to run actions/checkout before running your local
action?
May I know how to fix this?
This is the sample .yml file I am using.
name: Release to Development
on:
push:
branches:
- 'master'
jobs:
setup-build-publish-deploy:
name: Setup, Build, Publish, and Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
# Setup gcloud CLI
- uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
with:
version: '270.0.0'
service_account_email: ${{ secrets.GCLOUD_EMAIL_DEV }}
service_account_key: ${{ secrets.GCLOUD_AUTH_DEV }}
# Configure docker to use the gcloud command-line tool as a credential helper
- run: |
# Set up docker to authenticate
# via gcloud command-line tool.
gcloud auth configure-docker
# Build the Docker image
- name: Build
run: |
docker build -t "$REGISTRY_HOSTNAME"/"$GKE_PROJECT"/"$IMAGE":"$GITHUB_SHA" \
--build-arg GITHUB_SHA="$GITHUB_SHA" \
--build-arg GITHUB_REF="$GITHUB_REF" .
# Push the Docker image to Google Container Registry
- name: Publish
run: |
docker push $REGISTRY_HOSTNAME/$GKE_PROJECT/$IMAGE:$GITHUB_SHA
# Set up kustomize
- name: Set up Kustomize
run: |
curl -o kustomize --location https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
chmod u+x ./kustomize
# Deploy the Docker image to the GKE cluster
- name: Deploy
run: |
ANSWER
Answered 2021-Jul-27 at 13:24
I fixed it by changing the uses value to:
uses: google-github-actions/setup-gcloud@master
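Applied to the workflow above, the setup step would then read as follows (keeping the inputs from the question; only the action path changes):
- uses: google-github-actions/setup-gcloud@master
  with:
    version: '270.0.0'
    service_account_email: ${{ secrets.GCLOUD_EMAIL_DEV }}
    service_account_key: ${{ secrets.GCLOUD_AUTH_DEV }}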
QUESTION
I would like to limit concurrency to one run for my workflow:
on:
pull_request:
paths:
- 'foo/**'
push:
paths:
- 'foo/**'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
However, I found out that for push events head_ref is empty, and run_id is always unique (as described here: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-using-a-fallback-value).
How can I create a concurrency key that will be constant across pull_request and push events?
ANSWER
Answered 2022-Feb-06 at 21:23
I am using this concurrency key for my workflows in a similar case:
group: ${{ github.workflow }}-${{ github.ref }}
I wanted to limit it to a single workflow run per branch, cancelling previous runs. This still allows multiple runs across different branches at the same time; I'm not sure what your case is exactly.
If you want to have one instance of the workflow running across the whole repository, you can just go for:
group: ${{ github.workflow }}
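For instance, combining the suggested per-branch key with the cancel-in-progress setting from the question gives a concurrency block that behaves the same for push and pull_request events:
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true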
QUESTION
I'm setting up a reusable workflow using GitHub actions: https://docs.github.com/en/actions/learn-github-actions/reusing-workflows
Since the calling workflow and called workflow are both in the same repo, I want to reference the latest commit of the called workflow inside my calling workflow's uses statement.
Example:
uses: owner/repo/.github/workflows/called-workflow.yml@${{GITHUB_SHA}}
That ${{GITHUB_SHA}} doesn't get interpolated, so I get the following error:
Invalid workflow file : .github/workflows/calling-workflow.yml#L1
handling usage of workflow "owner/repo/.github/workflows/called-workflow.yml@${{GITHUB_SHA}}": can't obtain workflow file: reference to workflow should be either a valid branch, tag, or commit
How can I set the ref to the latest commit when calling a workflow within a workflow?
ANSWER
Answered 2021-Nov-08 at 21:48
Unfortunately, it's not possible to use expressions with uses right now.
One possible workaround (that I used myself) is to push the reusable workflow(s) to one of the stable branches (main/master/develop/etc.) and use a SHA as the ref.
An additional benefit is that pinning to a SHA is actually the recommended way, as described here.
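The uses line would then pin a literal commit, for example (the SHA below is a hypothetical placeholder, not a real commit):
uses: owner/repo/.github/workflows/called-workflow.yml@0123456789abcdef0123456789abcdef01234567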
QUESTION
This question is complementary to figuring out why this error (which started as a zef error) occurs.
Apparently, in certain circumstances the repository chain accessible from $*REPO may vary. Namely, in a GitHub action such as this one, where Raku is part of a Docker image, all of a sudden the repository chain becomes:
(inst#/github/home/.raku inst#/usr/share/perl6/site inst#/usr/share/perl6/vendor inst#/usr/share/perl6/core ap# nqp# perl5#)
The first directory does not actually exist; it should be /home/raku/.raku instead. So, a couple of questions:
- Why does Rakudo install that nonexistent directory as part of the repository chain?
- Is there any workaround that allows simply changing that value to the right one?
I don't really understand the cause of this. Initializing the container involves a long command line like this one:
/usr/bin/docker create --name d043d929507d4885927ac95002160d52_jjmereloalpinerakugha202110_1e6e32 --label 6a6825 --workdir /__w/p6-pod-load/p6-pod-load --network github_network_da048828784a46c3b413990beeaed866 -e "HOME=/github/home" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work":"/__w" -v "/home/runner/runners/2.285.1/externals":"/__e":ro -v "/home/runner/work/_temp":"/__w/_temp" -v "/home/runner/work/_actions":"/__w/_actions" -v "/opt/hostedtoolcache":"/__t" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" --entrypoint "tail" jjmerelo/alpine-raku:gha-2021.10 "-f" "/dev/null"
There, effectively, an environment variable seems to be set to that value. So it might be that the HOME environment variable is what determines this, instead of whatever happened during installation. But I don't know if that's a feature or a bug.
ANSWER
Answered 2022-Feb-09 at 18:32
You need to set RAKULIB to wherever your libraries were initially installed, as is done here:
# Environment
ENV PATH="${WORKDIR}/.raku/bin:${WORKDIR}/.raku/share/perl6/site/bin:${PATH}" \
ENV="${WORKDIR}/.profile"\
RAKULIB="inst#/home/raku/.raku"
This part is underdocumented, but the inst# prefix reflects the fact that it includes precomp units, and the directory is where it was initially installed. Thus, no matter where HOME points, Rakudo will always find the originally installed modules.
I don't have an answer to the second part, but I can answer the first part: the first element in the repository chain is taken from $HOME every time Rakudo starts up. If the value of $HOME when zef or any other module was installed differs from the current one (as is the case here), one workaround would be to do something like
HOME=/home/raku zef --version
Maybe create a shell function or an alias that does so, if you don't want to carry the variable around that way. As for a long-term solution, I really have no idea.
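For instance, a one-line alias along those lines (a sketch; it assumes the modules were installed under /home/raku, as above):
alias zef='HOME=/home/raku zef'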
QUESTION
When I run npm ci on GitHub Actions, I get the error:
Run npm ci
npm ERR! bindings not accessible from watchpack-chokidar2:fsevents
npm ERR! A complete log of this run can be found in:
npm ERR! /home/runner/.npm/_logs/2021-09-17T15_18_42_465Z-debug.log
Error: Process completed with exit code 1.
What can the problem be?
My .github/workflows/eslint.yaml:
name: ESLint
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Use Node.js
uses: actions/setup-node@v1
with:
node-version: '14.x'
- run: npm ci
- run: npm run lint
My package.json:
{
"name": "@blinktrade/uikit",
"version": "1.0.0",
"main": "dist/index.js",
"license": "MIT",
"devDependencies": {
"@babel/plugin-transform-react-jsx": "^7.14.9",
"@babel/preset-env": "^7.15.6",
"@babel/preset-react": "^7.14.5",
"@babel/preset-typescript": "^7.15.0",
"@storybook/addon-essentials": "^6.3.8",
"@storybook/react": "^6.3.8",
"@testing-library/jest-dom": "^5.14.1",
"@testing-library/react": "^12.1.0",
"@testing-library/user-event": "^13.2.1",
"@types/jest": "^27.0.1",
"@types/react": "^17.0.21",
"@typescript-eslint/eslint-plugin": "^4.31.1",
"@typescript-eslint/parser": "^4.31.1",
"eslint": "^7.32.0",
"eslint-plugin-react": "^7.25.2",
"husky": "^7.0.2",
"jest": "^27.2.0",
"prettier": "^2.4.1",
"pretty-quick": "^3.1.1",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"rimraf": "^3.0.2",
"typescript": "^4.4.3"
},
"husky": {
"hooks": {
"pre-push": "npm run lint",
"pre-commit": "pretty-quick --staged"
}
},
"scripts": {
"build": "tsc -p .",
"clear": "rimraf dist/",
"format": "prettier '**/*' --write --ignore-unknown",
"lint": "eslint --max-warnings=0 .",
"storybook": "start-storybook -p 4000",
"test": "jest"
}
}
ANSWER
Answered 2021-Sep-20 at 20:57
Solved by removing package-lock.json and running again with Node.js 14 (it was 10).
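A minimal sketch of that fix locally, assuming nvm is available to switch Node versions (regenerating the lockfile so npm ci works again):
nvm use 14
rm package-lock.json
npm install
git add package-lock.json
git commit -m "Regenerate lockfile with Node 14"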
QUESTION
I recently created this post trying to figure out how to reference GitHub Secrets in a GitHub action. I believe I got that solved and figured out, and I'm onto a different issue.
Below is a sample of the workflow code as of right now; the issue I need help with is the "Create and populate .Renviron file" part.
on: [push, pull_request]
name: CI-CD
jobs:
CI-CD:
runs-on: ${{ matrix.config.os }}
name: ${{ matrix.config.os }} (${{ matrix.config.r }})
strategy:
# we keep a matrix for convenience, but we would typically just run on one
# single OS and R version, aligned with the target deployment environment
matrix:
config:
- {os: ubuntu-20.04, r: 'release', rspm: "https://packagemanager.rstudio.com/cran/__linux__/focal/latest"}
env:
# Enable RStudio Package Manager to speed up package installation
RSPM: ${{ matrix.config.rspm }}
# Access token for GitHub
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repo
uses: actions/checkout@v2
- name: Setup R
uses: r-lib/actions/setup-r@v1
with:
r-version: ${{ matrix.config.r }}
- name: Install system dependencies
run: |
while read -r cmd
do
eval sudo $cmd
done < <(Rscript -e 'writeLines(remotes::system_requirements("ubuntu", "20.04"))')
- name: Install R dependencies
run: |
remotes::install_deps(dependencies = TRUE)
remotes::install_cran("rcmdcheck")
shell: Rscript {0}
- name: Create and populate .Renviron file
env:
AWS_HOST: ${{ secrets.AWS_HOST }}
AWS_PORT: ${{ secrets.AWS_PORT }}
AWS_PW: ${{ secrets.AWS_PW }}
AWS_USER: ${{ secrets.AWS_USER }}
DBNAME: ${{ secrets.DBNAME }}
run: |
touch .Renviron
echo aws_host="$AWS_HOST" >> .Renviron
echo aws_port="$AWS_PORT" >> .Renviron
echo aws_pw="$AWS_PW" >> .Renviron
echo aws_user="$AWS_USER" >> .Renviron
echo dbname="$DBNAME" >> .Renviron
ls ${{ github.workspace }}
shell: bash
- name: Deploy to shinyapps.io
# continuous deployment only for pushes to the main / master branch
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
env:
SHINYAPPS_ACCOUNT: ${{ secrets.SHINYAPPS_ACCOUNT }}
SHINYAPPS_TOKEN: ${{ secrets.SHINYAPPS_TOKEN }}
SHINYAPPS_SECRET: ${{ secrets.SHINYAPPS_SECRET }}
run: Rscript deploy/deploy-shinyapps.R
I believe this .Renviron file is getting created, but I don't know where, and it certainly doesn't look like it's where the rest of the files are. I've tried a number of file path destinations: .Renviron, ~/.Renviron, $github.workspace/.Renviron, /home/runner/work/NBA-Dashboard/NBA-Dashboard/.Renviron; none of them work. After I create the file, I list all of the contents of the workspace directory (which is where I want the file to be), and it's never listed there.
I need that .Renviron file to be created and listed in that specific directory along with all of these other files, so that when I continue to the next step and use the rsconnect package to build and deploy my Shiny app, it's able to include that file and retrieve environment variables correctly when someone uses the app.
I thought maybe there was some issue with .gitignore, so I deleted .Renviron from it, but that didn't fix the issue. If anyone has any ideas, I'd appreciate it!
ANSWER
Answered 2021-Sep-01 at 09:23
The file is right where you expect it to be:
- name: Create and populate .Renviron file
env:
AWS_HOST: ${{ secrets.AWS_HOST }}
AWS_PORT: ${{ secrets.AWS_PORT }}
AWS_PW: ${{ secrets.AWS_PW }}
AWS_USER: ${{ secrets.AWS_USER }}
DBNAME: ${{ secrets.DBNAME }}
run: |
touch .Renviron
echo aws_host="$AWS_HOST" >> .Renviron
echo aws_port="$AWS_PORT" >> .Renviron
echo aws_pw="$AWS_PW" >> .Renviron
echo aws_user="$AWS_USER" >> .Renviron
echo dbname="$DBNAME" >> .Renviron
echo "cat .Renviron"
cat .Renviron
echo "ls -a ."
ls -a .
echo "ls -a ${{ github.workspace }}"
ls -a ${{ github.workspace }}
shell: bash
You need to run ls -a to show hidden files: the leading dot makes .Renviron a hidden file, so a plain ls omits it.
Run touch .Renviron
touch .Renviron
echo aws_host="$AWS_HOST" >> .Renviron
echo aws_port="$AWS_PORT" >> .Renviron
echo aws_pw="$AWS_PW" >> .Renviron
echo aws_user="$AWS_USER" >> .Renviron
echo dbname="$DBNAME" >> .Renviron
echo "cat .Renviron"
cat .Renviron
echo "ls -a ."
ls -a .
echo "ls -a /home/runner/work/github-actions-manual/github-actions-manual"
ls -a /home/runner/work/github-actions-manual/github-actions-manual
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
AWS_HOST:
AWS_PORT:
AWS_PW:
AWS_USER:
DBNAME:
cat .Renviron
aws_host=
aws_port=
aws_pw=
aws_user=
dbname=
ls -a .
.
..
.Renviron
.git
.github
.gitignore
LICENSE
README.md
change-workflow.ps1
commit-new-workflow.ps1
dist
public
test.ps1
test2.ps1
ls -a /home/runner/work/github-actions-manual/github-actions-manual
.
..
.Renviron
.git
.github
.gitignore
LICENSE
README.md
change-workflow.ps1
commit-new-workflow.ps1
dist
public
test.ps1
test2.ps1
QUESTION
In my iOS project we were able to replicate Combine's Schedulers implementation, and we have an extensive suite of tests. Everything was fine on Intel machines and all the tests were passing; now we got some M1 machines to see if there is a showstopper in our workflow.
Suddenly some of our library code started failing. The weird thing is that even if we use Combine's implementation, the tests still fail.
Our assumption is that we are misusing DispatchTime(uptimeNanoseconds:), as you can see in the following screenshot (Combine's implementation).
We know by now that initialising DispatchTime with an uptimeNanoseconds value doesn't mean they are the actual nanoseconds on M1 machines; according to the docs:
Creates a DispatchTime relative to the system clock that ticks since boot.
- Parameters:
- uptimeNanoseconds: The number of nanoseconds since boot, excluding
time the system spent asleep
- Returns: A new `DispatchTime`
- Discussion: This clock is the same as the value returned by
`mach_absolute_time` when converted into nanoseconds.
On some platforms, the nanosecond value is rounded up to a
multiple of the Mach timebase, using the conversion factors
returned by `mach_timebase_info()`. The nanosecond equivalent
of the rounded result can be obtained by reading the
`uptimeNanoseconds` property.
Note that `DispatchTime(uptimeNanoseconds: 0)` is
equivalent to `DispatchTime.now()`, that is, its value
represents the number of nanoseconds since boot (excluding
system sleep time), not zero nanoseconds since boot.
So, is the test wrong, or should we not use DispatchTime like this?
We tried to follow Apple's suggestion and use this:
uint64_t MachTimeToNanoseconds(uint64_t machTime)
{
uint64_t nanoseconds = 0;
static mach_timebase_info_data_t sTimebase;
if (sTimebase.denom == 0)
(void)mach_timebase_info(&sTimebase);
nanoseconds = ((machTime * sTimebase.numer) / sTimebase.denom);
return nanoseconds;
}
It didn't help much.
Edit: Screenshot code:
func testSchedulerTimeTypeDistance() {
let time1 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10000))
let time2 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10431))
let distantFuture = DispatchQueue.SchedulerTimeType(.distantFuture)
let notSoDistantFuture = DispatchQueue.SchedulerTimeType(
DispatchTime(
uptimeNanoseconds: DispatchTime.distantFuture.uptimeNanoseconds - 1024
)
)
XCTAssertEqual(time1.distance(to: time2), .nanoseconds(431))
XCTAssertEqual(time2.distance(to: time1), .nanoseconds(-431))
XCTAssertEqual(time1.distance(to: distantFuture), .nanoseconds(-10001))
XCTAssertEqual(distantFuture.distance(to: time1), .nanoseconds(10001))
XCTAssertEqual(time2.distance(to: distantFuture), .nanoseconds(-10432))
XCTAssertEqual(distantFuture.distance(to: time2), .nanoseconds(10432))
XCTAssertEqual(time1.distance(to: notSoDistantFuture), .nanoseconds(-11025))
XCTAssertEqual(notSoDistantFuture.distance(to: time1), .nanoseconds(11025))
XCTAssertEqual(time2.distance(to: notSoDistantFuture), .nanoseconds(-11456))
XCTAssertEqual(notSoDistantFuture.distance(to: time2), .nanoseconds(11456))
XCTAssertEqual(distantFuture.distance(to: distantFuture), .nanoseconds(0))
XCTAssertEqual(notSoDistantFuture.distance(to: notSoDistantFuture),
.nanoseconds(0))
}
ANSWER
Answered 2021-Nov-30 at 15:29
I think your issue lies in this line:
nanoseconds = ((machTime * sTimebase.numer) / sTimebase.denom)
...which is doing integer operations. The actual ratio here for M1 is 125/3 (41.666...), so your conversion factor is truncating to 41. This is a ~1.6% error, which might explain the differences you're seeing.
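To make the truncation concrete, here is a small C illustration using the timebase values mentioned above (numer = 125, denom = 3; the machTime value is a hypothetical tick count):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t machTime = 1000;                  // hypothetical tick count
    uint64_t factor   = 125 / 3;               // 41: the 41.666... ratio, truncated
    uint64_t bad      = machTime * factor;     // 41000
    uint64_t good     = (machTime * 125) / 3;  // 41666: multiplying before dividing avoids the truncation
    printf("truncated factor: %llu, multiply-first: %llu\n",
           (unsigned long long)bad, (unsigned long long)good);
    return 0;
}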
QUESTION
I'm trying to use GitHub Actions for the first time. I've created and followed the tutorial from GitHub, and my .github/workflows/push_main.yml is:
name: Android CI
on:
push:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: set up JDK 11
uses: actions/setup-java@v1
with:
java-version: 11
# Runs ktlint
- name: Lint
run: ./gradlew ktlintCheck
# Execute unit tests
- name: Unit Test
run: ./gradlew testDebugUnitTest
Also, what I'd like to have is this: when trying to do a rebase or merge to main, run this check first, and only if it passes carry out the rebase or merge.
I thought of creating a temporary branch, doing the check there, and if it works doing the rebase or merge into main and then deleting the temporary branch, but I don't know if there's a more efficient way to do so. Also, I've seen that I can run the jobs in parallel; will that make it faster?
ANSWER
Answered 2022-Jan-17 at 16:55
There is a super convenient way to build, test, and aggregate the outcome of changes to some branch before merging, using pull requests.
It's common to create a pull request and trigger a workflow doing the checks. Just add "pull_request:" to reuse your existing workflow to build and test your changes:
name: Android CI
on:
push:
branches: [ main ]
pull_request:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: set up JDK 11
uses: actions/setup-java@v1
with:
java-version: 11
# Runs ktlint
- name: Lint
run: ./gradlew ktlintCheck
# Execute unit tests
- name: Unit Test
run: ./gradlew testDebugUnitTest
Jobs are executed in parallel, so of course that is faster. A common use case is a matrix that defines required test targets, e.g. OS versions or Node or Java versions, as sketched below.
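A minimal sketch of such a matrix for this workflow (the JDK versions chosen here are assumptions; adjust to your targets):
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java-version: [ 11, 17 ]
    steps:
      - uses: actions/checkout@v1
      - name: set up JDK ${{ matrix.java-version }}
        uses: actions/setup-java@v1
        with:
          java-version: ${{ matrix.java-version }}
      - name: Unit Test
        run: ./gradlew testDebugUnitTest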
QUESTION
After upgrading the Jenkins plugin Kubernetes Client to version 1.30.3 (also for 1.31.1), I get the following exceptions in the Jenkins logs when I start a build:
Timer task org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount@2c16d367 failed
java.lang.NoSuchMethodError: 'okhttp3.OkHttpClient io.fabric8.kubernetes.client.HttpClientAware.getHttpClient()'
at org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount.doRun(KubernetesClientProvider.java:150)
at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:90)
at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:67)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
After some of these exceptions, the build itself is cancelled with this error:
java.io.IOException: Timed out waiting for websocket connection. You should increase the value of system property org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout currently set at 30 seconds
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:451)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:338)
at hudson.Launcher$ProcStarter.start(Launcher.java:507)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
Do you have an idea what can be done?
ANSWER
Answered 2022-Jan-05 at 11:55
Downgrade the plugin to kubernetes-client-api:5.10.1-171.vaa0774fb8c20. The latest one has a compatibility issue as of now.
New info: the issue is now solved by upgrading the Kubernetes plugin to version 1.31.2: https://issues.jenkins.io/browse/JENKINS-67483
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install workflow
You can use workflow like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the workflow component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
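If the library is published to a Maven repository, the dependency declaration would look roughly like this (the coordinates and version below are assumptions for illustration; check the project's own documentation for the actual values):
<dependency>
    <!-- hypothetical coordinates; verify against the project's docs -->
    <groupId>org.neuro4j</groupId>
    <artifactId>workflow</artifactId>
    <version>x.y.z</version>
</dependency>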