homebrew-openbsd-pcengines-router | automation pertaining to my home internet appliance
kandi X-RAY | homebrew-openbsd-pcengines-router Summary
This repo contains revised notes and Ansible collateral from building this fully open source router, circa early Feb 2017.

01/19 update: switched to DNS-over-TLS and a much larger blacklist of ad servers and bad sites, collated from multiple sources.

08/18 update: an official WireGuard userspace implementation is available and working on OpenBSD 6.3. The router now features this revolutionary VPN on tun0.

07/18 update: friends don't let friends have Bufferbloat. FQ-CoDel was merged into 6.2, so take advantage of it.

03/18 update: moved from OpenBSD 6.0-CURRENT to 6.3-STABLE. The router had a solid 400-day uptime.

The router is built almost entirely of PCEngines components, running coreboot and OpenBSD, with all modifications to the base install made immutable by way of Ansible.

A couple of reasons, really. Firstly, it undoubtedly would have been more orthodox to put the FreeBSD-based pfSense on this kit and be done with it. However, while I've continued to follow OpenBSD's bi-annual releases with interest, it has been a good few years since I put it on a device for a reason. Plus, with the recent flurry of IoT/consumer router vulns, I get the feeling I'm going to want my home internet appliance protected by OpenBSD in the future.

There was definitely a little nod to nostalgia as well. The project was reminiscent of building a homebrew router as a teenager to share out the family ISDN connection: cobbling together an old Pentium box, NICs and a modem, unable to decide between the BSDs, Slackware and Gentoo, messing around with interface settings, pf, ipchains, pppd and so on. Busting out the null modem cable and screen also brought back fond memories of time spent interning in Sun Microsystems' labs. Slapping OpenWRT or DD-WRT on a consumer router just wouldn't have given the same satisfaction.
Finally, pragmatically speaking, I wanted a little more muscle in my internet appliance for use cases like OpenVPN, IRC bouncing, Tor, hosted git, a torrenting daemon hooked up to my NAS, etc., as opposed to delegating those use cases downstream to yet another device. The apu2c4 is the right mixture of power and efficiency (I read it draws ~6 W but haven't checked).
Community Discussions
Trending Discussions on DevOps
QUESTION
I'm using GitLab CI, and I've installed my gitlab-runner on an EC2 machine (Ubuntu Server 18.04 LTS, t2.micro). When I push my code to start the build, I get this:
It stays stuck like this, and after an hour it times out.
I really don't know what to do about this problem, given that I can clone the project manually on my EC2 machine without issue.
Any help is much appreciated if you've ever encountered this problem. Thanks in advance.
...ANSWER
Answered 2022-Mar-22 at 08:28 Check your job configuration or your timeout settings.
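If the job is hitting GitLab's default job timeout, it can be raised per job in .gitlab-ci.yml. A minimal sketch, with a hypothetical job name and build step:

```yaml
build:
  script:
    - ./build.sh   # hypothetical build step
  timeout: 3h      # per-job timeout; overrides the project-level default
```

Note that the runner itself can also enforce a maximum job timeout in its own settings, which caps whatever the job requests.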
QUESTION
I'm trying to use Podman to build an image of a Spring Boot project in IntelliJ. Jetbrain's guide suggests to "Select TCP socket and specify the Podman API service URL in Engine API URL" within Build,Execution,Deployment > Docker (see https://www.jetbrains.com/help/idea/podman.html).
However, when giving the TCP socket found on Podman's documentation (see https://docs.podman.io/en/latest/markdown/podman-system-service.1.html), IntelliJ says it cannot connect.
Finally, here is the error that appears in the terminal:
...ANSWER
Answered 2022-Mar-17 at 16:22 I'm facing the same problem due to a Podman version upgrade.
It seems a version downgrade would be required to recover the containers, but I haven't tried it yet.
This issue suggests deleting the machine and creating it again, but the containers would be lost:
https://github.com/containers/podman/issues/13510
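For reference, the Podman documentation linked in the question starts the API service on a TCP socket like this (the port number here is an assumption; pick any free port and use the same value in IntelliJ's Engine API URL):

```shell
# Run the Podman API service on a TCP socket, with no inactivity timeout
podman system service --time=0 tcp://127.0.0.1:8888
```

IntelliJ's Engine API URL would then be tcp://127.0.0.1:8888. Be aware that the TCP socket is unauthenticated, so keep it bound to localhost.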
QUESTION
I was recently trying to create a docker container and connect it to my SQL Developer, but I started facing some strange issues. I downloaded the docker image using the pull command below:
...ANSWER
Answered 2021-Sep-19 at 21:17There are two issues here:
- Oracle Database is not supported on ARM processors, only Intel. See here: https://github.com/oracle/docker-images/issues/1814
- Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance
Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1)
The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.
Linux being what it is (flexible), lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and it's always when, not if in IT) things don't work as expected. You might not even be able to tell when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the supported hardware architecture and operating system version. Anything else is a complete gamble.
QUESTION
I would like to add to Gitlab pipeline a stage which verifies that the person approving the MR is different from the person doing the creation/merge (for this to work, I checked the setting in Gitlab that says: "Pipelines must succeed").
...ANSWER
Answered 2022-Feb-12 at 00:24 To avoid duplicate pipelines:
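The usual pattern for this, from GitLab's documentation on avoiding duplicate branch and merge request pipelines, is a workflow:rules block at the top of .gitlab-ci.yml; a sketch:

```yaml
workflow:
  rules:
    # Run a merge request pipeline when an MR is open for the branch...
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # ...and suppress the duplicate branch pipeline in that case
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise, run a normal branch pipeline
    - if: $CI_COMMIT_BRANCH
```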
QUESTION
I'm currently setting up a CI/CD pipeline in Azure DevOps to deploy a NodeJS app on a Linux-hosted App Service (not a VM).
My build and deploy both go smoothly, BUT I need to make sure some packages are installed in the environment after the app has been deployed.
The issue is: whatever apt-get script I create after the deploy, I have to run it manually for it to actually take effect, even though in the pipeline log it appears to have been executed.
Here is the part of my yaml code responsible for the deploy, did I miss something?
...ANSWER
Answered 2022-Jan-26 at 16:26 For now, I went with a startup.sh file which I run manually after each deploy. I'm going to look into Docker later, though.
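A minimal sketch of such a startup.sh, assuming a hypothetical package name and entry point; on a Linux App Service it can be set as the Startup Command so it runs on every container start instead of manually:

```shell
#!/bin/sh
# Install OS packages the app needs (package name is an example)
apt-get update
apt-get install -y ffmpeg

# Then launch the Node app (path assumed for a Linux App Service deployment)
node /home/site/wwwroot/server.js
```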
QUESTION
I'm trying to make a pretty basic GitLab CI job.
I want:
- When I push to develop, GitLab builds a docker image with the tag "develop".
- When I push to main, GitLab checks that the current commit has a tag and builds an image with it; otherwise the job is not triggered.
ANSWER
Answered 2022-Jan-24 at 19:45 GitLab CI/CD has multiple 'pipeline sources', and some of the Predefined Variables only exist for certain sources.
For example, if you simply push a new commit to the remote, the value of CI_PIPELINE_SOURCE will be push. For push pipelines, many of the Predefined Variables will not exist, such as CI_COMMIT_TAG, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME, CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME, etc.
However if you create a Git Tag either in the GitLab UI or from a git push --tags command, it will create a Tag pipeline, and variables like CI_COMMIT_TAG will exist, but CI_COMMIT_BRANCH will not.
One variable that will always be present regardless of what triggered the pipeline is CI_COMMIT_REF_NAME. For Push sources where the commit is tied to a branch, this variable will hold the branch name. If the commit isn't tied to a branch (i.e., there was once a branch for that commit but now it's been deleted) it will hold the full commit SHA. Or, if the pipeline is for a tag, it will hold the tag name.
For more information, read through the different Pipeline Sources (in the description of the CI_PIPELINE_SOURCE variable) and the other variables in the docs linked above.
What I would do is move this check to the script section so we can make it more complex for our benefit, and either immediately exit 0 so that the job doesn't run and doesn't fail, or run the rest of the script.
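Putting that together for the original question, a sketch of one job (the image name is an assumption): rules: limits the job to tag pipelines and pushes to develop, and the script derives the image tag from whichever ref triggered it.

```yaml
build-image:
  rules:
    - if: $CI_COMMIT_TAG                    # tag pipelines (e.g. tags pushed on main)
    - if: $CI_COMMIT_BRANCH == "develop"    # pushes to develop
  script:
    - |
      # Use the git tag when present, otherwise fall back to "develop"
      if [ -n "$CI_COMMIT_TAG" ]; then
        TAG="$CI_COMMIT_TAG"
      else
        TAG="develop"
      fi
      docker build -t "my-image:$TAG" .
```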
QUESTION
Since Tomcat just unzips the EAR/WAR to the filesystem to serve the app, what is the benefit of using an EAR/WAR, and what are the drawbacks of just pushing the files directly to Tomcat's webapps directory?
ANSWER
Answered 2021-Dec-25 at 11:32 Tomcat supports WAR but not EAR. Anyway, I think your question is about why we normally deploy an application packaged as a single WAR rather than as an exploded WAR (i.e. exploded deployment).
The main advantages for me are:
- It is easier to handle the deployment when you just need to deploy one file, versus deploying many files in an exploded WAR deployment.
- Because only one file gets deployed, we can always be sure which version of the application is running. If we allow individual files to be deployed and someone updates several files to another version, it is difficult to tell exactly what version the application is running.
There have already been some discussions about such topics before; you can refer to this and this for more information.
QUESTION
I have a CI setup using GitHub Actions/workflows to run Cypress automated tests every time a merge is done on the repo. The installation steps work fine; however, I run into an issue when executing the cypress command. Let me show you the code.
CI pipeline in .github/workflows
...ANSWER
Answered 2021-Dec-30 at 16:53 After searching for some time, it turned out I was using Cypress 8.7.0, which was causing the issue. I downgraded to Cypress 8.5.0 and it started working. Hope that helps anyone else having this issue.
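One way to keep the working version from being silently upgraded again is to pin the exact version in package.json (no caret, so npm won't move past 8.5.0):

```json
{
  "devDependencies": {
    "cypress": "8.5.0"
  }
}
```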
QUESTION
I just created a pipeline using the YAML file and I am always getting the error "/_Azure-Pipelines/templates/webpart.yml: (Line: 41, Col: 27, Idx: 1058) - (Line: 41, Col: 60, Idx: 1091): While parsing a block mapping, did not find expected key.". I already verified the indentation of my YAML file and that looks fine.
Below is my YAML file.
...ANSWER
Answered 2021-Dec-07 at 10:42 It was due to a missing quotation mark in the PublishBuildArtifacts@1 task, on the PathtoPublish input. I found this error by using a YAML extension provided by Red Hat.
Once you've enabled that extension and set it as the formatter for YAML (SHIFT + ALT + F), it should show you the errors in your YAML file.
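For reference, a correctly quoted PublishBuildArtifacts@1 step looks something like this (the input values here are assumptions):

```yaml
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```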
QUESTION
name: deploy-me
on: [push]
jobs:
  deploys-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm install
      - run: npm run dev
      # Next I want to copy some file from this repo and commit to a different repo and push it
...ANSWER
Answered 2021-Dec-05 at 09:57
name: deploy-me
'on':
  - push
jobs:
  deploy-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
        env:
          ACCESS_TOKEN: '${{ secrets.ACCESS_TOKEN }}'
      - run: npm install
      - run: npm run build
      - run: |
          cd lib
          git config --global user.email "xxx@gmail.com"
          git config --global user.name "spark"
          git config --global credential.helper cache
          git clone https://${{secrets.ACCESS_TOKEN}}@github.com/sparkdevv/xxxxxx
          cp index.js clonedFolder/ -f
          cd clonedFolder
          git add .
          git commit -m "$(date)"
          git push
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported