aker-elk-playbook | Ansible Playbook for setting up the ELK/EFK Stack
kandi X-RAY | aker-elk-playbook Summary
Requirements
- RHEL7 or CentOS7+ server/client with no modifications
- Fedora 23 or higher needs the ``yum python2 python2-dnf libselinux-python`` packages. You can install them on Fedora clients prior to running Ansible ELK:
  - ``ansible fedora-client-01 -u root -m shell -i hosts -a "dnf install yum python2 libsemanage-python python2-dnf -y"``
- Deployment tested on Ansible 1.9.4 and 2.0.2.

Notes
- Current ELK version is 5.2.1, but you can check out the 2.4 branch if you want that series
- Sets the nginx htpasswd to admin/admin initially
- nginx ports default to 80/8080 for Kibana and SSL cert retrieval (configurable)
- Uses OpenJDK for Java
- It's fairly quick: takes around 3 minutes on a test VM
- Filebeat templating is focused around OpenStack service logs
- Fluentd can be substituted for the default Logstash: set ``logging_backend: fluentd`` in ``group_vars/all.yml``
- Install curator by setting ``install_curator_tool: true`` in ``install/group_vars/all.yml``

ELK Server Instructions
- Clone the repo and set up your hosts file.

ELK Client Instructions
- Run the client playbook against the generated ``elk_server`` variable.
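The hosts file mentioned above might look like the following minimal sketch (the group names and hostnames here are illustrative assumptions, not taken from the repo):

```ini
; Hypothetical Ansible inventory sketch for an ELK server and one client.
; Group names and hostnames are placeholders, not the repo's actual layout.
[elk]
elk-server-01 ansible_user=root

[elk-client]
fedora-client-01 ansible_user=root
```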
Community Discussions
Trending Discussions on Devops
QUESTION
I'm using GitLab CI, and I've installed my gitlab-runner on an EC2 machine (Ubuntu Server 18.04 LTS, t2.micro). When I push my code to start the build, I get this:
But it stays stuck like this, and after 1 hour it times out.
I really don't know what to do about this problem, given that I can successfully clone the project manually on my EC2 machine.
Any help is much appreciated if you have ever encountered this problem. Thanks in advance.
...ANSWER
Answered 2022-Mar-22 at 08:28
Check your job config or your timeout.
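In GitLab CI, the job-level timeout the answer refers to can be raised directly in ``.gitlab-ci.yml`` (the job name and script here are illustrative placeholders):

```yaml
# Sketch: raising the per-job timeout above the default.
# "build-job" and the script line are placeholders.
build-job:
  timeout: 3h
  script:
    - ./build.sh
```

The effective timeout is also capped by the runner's own configured maximum, so the runner settings are worth checking as well.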
QUESTION
I'm trying to use Podman to build an image of a Spring Boot project in IntelliJ. Jetbrain's guide suggests to "Select TCP socket and specify the Podman API service URL in Engine API URL" within Build,Execution,Deployment > Docker (see https://www.jetbrains.com/help/idea/podman.html).
However, when giving the TCP socket found on Podman's documentation (see https://docs.podman.io/en/latest/markdown/podman-system-service.1.html), IntelliJ says it cannot connect.
Finally, here is the error that appears in the terminal:
...ANSWER
Answered 2022-Mar-17 at 16:22
I'm facing the same problem due to a Podman version upgrade.
It seems a version downgrade would be required to recover the containers, but I haven't tried it yet.
This issue suggests deleting the machine and creating it again, but the containers would be lost:
https://github.com/containers/podman/issues/13510
QUESTION
I was recently trying to create a Docker container and connect it with my SQL Developer, but I started facing some strange issues. I downloaded the Docker image using the pull command below:
...ANSWER
Answered 2021-Sep-19 at 21:17
There are two issues here:
- Oracle Database is not supported on ARM processors, only Intel. See here: https://github.com/oracle/docker-images/issues/1814
- Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance
Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1)
The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.
Linux being what it is (flexible), lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and it's always when, not if in IT) things don't work as expected. You might not even be able to tell when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the supported hardware architecture and operating system version. Anything else is a complete gamble.
QUESTION
I would like to add a stage to my GitLab pipeline that verifies that the person approving the MR is different from the person who created/merged it (for this to work, I checked the GitLab setting that says: "Pipelines must succeed").
...ANSWER
Answered 2022-Feb-12 at 00:24
To avoid duplicate pipelines:
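The answer's code was not captured here; the usual way to avoid duplicate branch/MR pipelines in GitLab (a sketch of the common documented pattern, not necessarily the original answer's exact code) is a top-level ``workflow:rules`` block:

```yaml
# Sketch: run MR pipelines, and skip the duplicate branch pipeline
# when an open merge request already exists for that branch.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS'
      when: never
    - if: '$CI_COMMIT_BRANCH'
```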
QUESTION
I'm currently setting up a CI/CD pipeline in Azure DevOps to deploy a NodeJS app on a Linux-hosted App Service (not a VM).
My build and deploy both go smoothly, BUT I need to make sure some packages are installed in the environment after the app has been deployed.
The issue is: whatever apt-get script I create after the deploy, I have to run it manually for it to actually take effect. In the pipeline log it seems to have been executed, though.
Here is the part of my yaml code responsible for the deploy, did I miss something?
...ANSWER
Answered 2022-Jan-26 at 16:26
For now, I went with a "startup.sh" file which I run manually after each deploy. I'm going to look into Docker later, though.
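One way to make such a script run automatically (an assumption about the setup; the subscription, app name, and package path below are placeholders) is to point the deploy task's startup command at it, so App Service runs it on every start:

```yaml
# Sketch: wiring startup.sh as the App Service startup command.
# azureSubscription, appName, and package values are placeholders.
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-subscription'
    appType: 'webAppLinux'
    appName: 'my-node-app'
    package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
    startUpCommand: 'bash startup.sh'
```

Since the Linux App Service container is reset on restart, packages installed by apt-get don't persist anyway, which is why a startup command (rather than a one-time post-deploy step) is the usual approach.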
QUESTION
I'm trying to make a pretty basic GitLab CI job.
I want:
- When I push to develop, GitLab builds a Docker image with the tag "develop".
- When I push to main, GitLab checks that the current commit has a tag and builds an image with it; otherwise the job is not triggered.
ANSWER
Answered 2022-Jan-24 at 19:45
GitLab CI/CD has multiple 'pipeline sources', and some of the Predefined Variables only exist for certain sources.
For example, if you simply push a new commit to the remote, the value of ``CI_PIPELINE_SOURCE`` will be ``push``. For ``push`` pipelines, many of the Predefined Variables will not exist, such as ``CI_COMMIT_TAG``, ``CI_MERGE_REQUEST_SOURCE_BRANCH_NAME``, ``CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME``, etc.
However, if you create a Git tag either in the GitLab UI or from a ``git push --tags`` command, it will create a tag pipeline, and variables like ``CI_COMMIT_TAG`` will exist, but ``CI_COMMIT_BRANCH`` will not.
One variable that will always be present regardless of what triggered the pipeline is ``CI_COMMIT_REF_NAME``. For push sources where the commit is tied to a branch, this variable will hold the branch name. If the commit isn't tied to a branch (i.e., there was once a branch for that commit but now it's been deleted) it will hold the full commit SHA. Or, if the pipeline is for a tag, it will hold the tag name.
For more information, read through the different pipeline sources (in the description of the ``CI_PIPELINE_SOURCE`` variable) and the other variables in the docs linked above.
What I would do is move this check to the ``script`` section so we can make it more complex for our benefit, and either immediately ``exit 0`` so that the job doesn't run anything and doesn't fail, or run the rest of the script.
QUESTION
Since Tomcat just unzips the EAR/WAR to the filesystem to serve the app, what is the benefit of using an EAR/WAR, and what are the drawbacks to just pushing a filesystem tree to the Tomcat webapps directory?
ANSWER
Answered 2021-Dec-25 at 11:32
Tomcat supports WAR but not EAR. Anyway, I think your question is about why we normally deploy the application packaged as a single WAR rather than as an exploded WAR (i.e., exploded deployment).
The main advantages for me are:
- It is easier to handle the deployment when you just need to deploy one file, versus deploying many files in an exploded WAR deployment.
- Because only one file gets deployed, we can always be sure the application is running a particular version. If individual files can be deployed and someone updates several files to another version, it is difficult to tell exactly which version the application is running.
There have already been some discussions about such topics; you can refer to this and this for more information.
QUESTION
I have a CI setup using GitHub Actions workflows to run Cypress automated tests every time a merge is done on the repo. The installation steps work fine; however, I run into an issue when executing the Cypress command. Let me show you the code.
CI pipeline in .github/workflows
...ANSWER
Answered 2021-Dec-30 at 16:53
After searching for some time, it turned out I was using Cypress 8.7.0, which was causing the issue. I downgraded to Cypress 8.5.0 and it started working. I hope that helps anyone else having this issue.
QUESTION
I just created a pipeline using the YAML file and I am always getting the error "/_Azure-Pipelines/templates/webpart.yml: (Line: 41, Col: 27, Idx: 1058) - (Line: 41, Col: 60, Idx: 1091): While parsing a block mapping, did not find expected key.". I already verified the indentation of my YAML file and that looks fine.
Below is my YAML file.
...ANSWER
Answered 2021-Dec-07 at 10:42
It was due to a missing quotation mark in the ``PublishBuildArtifacts@1`` task, for ``PathtoPublish``. I found this error by using a YAML extension provided by Red Hat.
Once you enable that extension and format the YAML (Shift + Alt + F), it should show you the errors in your YAML file.
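The kind of fix described looks like the following (the path and artifact name are illustrative placeholders, not the asker's actual values):

```yaml
# Sketch: the PathtoPublish value with its quotes intact.
# Path and artifact name are placeholders.
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/drop'
    ArtifactName: 'drop'
```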
QUESTION
name: deploy-me
on: [push]
jobs:
  deploys-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm install
      - run: npm run dev
      # Next I want to copy some files from this repo, commit them to a different repo, and push
...ANSWER
Answered 2021-Dec-05 at 09:57
name: deploy-me
'on':
  - push
jobs:
  deploy-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
        env:
          ACCESS_TOKEN: '${{ secrets.ACCESS_TOKEN }}'
      - run: npm install
      - run: npm run build
      - run: |
          cd lib
          git config --global user.email "xxx@gmail.com"
          git config --global user.name "spark"
          git config --global credential.helper cache
          git clone https://${{secrets.ACCESS_TOKEN}}@github.com/sparkdevv/xxxxxx
          cp index.js clonedFolder/ -f
          cd clonedFolder
          git add .
          git commit -m "$(date)"
          git push
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported