hardening | security research team | Security library
kandi X-RAY | hardening Summary
Playbook for system hardening maintained by the #! security research team.
Community Discussions
Trending Discussions on hardening
QUESTION
I'm trying to install this graphing library, but cabal-install is giving me this list of errors (only showing the bottom of the list, since everything above is very long and similar):
ANSWER
Answered 2021-May-23 at 13:30
Instead of editing the DEB_BUILD_HARDENING_PIE environment variable, I found another way to keep cabal-install from building a PIE, which seems to fix this issue:
cabal --ghc-option="-optl-no-pie" install chart-diagrams
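To sanity-check that the flag actually reaches the linker, one quick test (a minimal sketch, assuming ghc and the file utility are installed; Hello.hs is a throwaway example):
# Build a trivial program with the linker flag and inspect the result;
# file should report a plain "executable" rather than a PIE/shared object.
echo 'main = putStrLn "hello"' > Hello.hs
ghc -optl-no-pie Hello.hs -o hello-nopie
file hello-nopie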
QUESTION
TL;DR: I'm having trouble setting up CSP for a Next.js app that uses Material-UI (server-side rendering) and is served by Nginx (as a reverse proxy). Currently I have issues loading the Material-UI stylesheet and my own styles created with makeStyles from @material-ui/core/styles.
NOTE:
- I followed https://material-ui.com/styles/advanced/#next-js to enable SSR.
- I looked at https://material-ui.com/styles/advanced/#how-does-one-implement-csp but I'm not sure how I can get nginx to follow the nonce values, since nonces are generated as unpredictable strings.
default.conf (nginx)
ANSWER
Answered 2021-Jan-04 at 11:56
Yeah, in order to use CSP with Material-UI (and JSS), you need to use a nonce.
Since you have SSR, I see two options:
- You can publish the CSP header on the server side using the next-secure-headers package or even Helmet. I hope you find a way to pass the nonce from Next to Material-UI.
- You can publish the CSP header in the nginx config (as you do now) and have nginx generate the nonce, even though it works as a reverse proxy. You need ngx_http_sub_module or ngx_http_substitutions_filter_module in nginx (a quick way to check is shown below). For the details of how this works, see https://scotthelme.co.uk/csp-nonce-support-in-nginx/ (it's a little more involved than just using the $request_id nginx variable).
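The second option depends on the sub (or substitutions filter) module being compiled into nginx. A minimal check, assuming nginx is on your PATH:
# nginx prints its configure arguments on stderr, hence the 2>&1; look for
# --with-http_sub_module or a substitutions-filter --add-module entry.
nginx -V 2>&1 | tr ' ' '\n' | grep -Ei 'http_sub_module|substitutions'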
QUESTION
I would like to be able to use a Google Cloud Composer cluster to launch kubernetes pods from its DAGs onto a separate GKE Autopilot cluster instead of onto the GKE cluster of Cloud Composer.
I have created a GKE autopilot cluster with "control plane global access" set to disabled and only allowing certain authorised networks to connect to the control plane. (based on the recommended security best practices in the documentation)
My pods all fail to launch with the following error message:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='3X.XXX.XXX.XX6', port=443): Max retries exceeded with url: /api/v1/namespaces/sink/pods?labelSelector=dag_id%3Dtest_dag%2Cexecution_date%3D2021-03-17T212059.4745700000-f0b251c80%2Ctask_id%3Dtest_sync (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out',)
I am using the GKEStartPodOperator which previously was able to start pods on a GKE cluster that was self managed (not autopilot) and which did not have "control plane global access" disabled.
Is there any documentation about how to setup Composer to be able to connect to a GKE autopilot cluster that is not exposing global access to the control plane and launch pods?
ANSWER
Answered 2021-Mar-18 at 00:49
Even with GKE Autopilot, you can use the same set of operators that were originally written for normal GKE clusters, such as GKEStartPodOperator. Since the error you are seeing is a timeout to the Kubernetes control plane, it is most likely that your authorized networks setting does not include the addresses used by your Cloud Composer environment's workers.
If you are using a standard Composer environment (non-private IP), you will need to ensure that GCP ranges are included within your authorized networks (because your environment's nodes are assigned ephemeral, public addresses).
If you are using a private IP environment, then you can use private connectivity to reach the Kubernetes control plane, or alternatively, you can configure a Cloud NAT to allow your environment to reach network resources using a static IP address. In the latter case, the IP address of the NAT would need to be included within your authorized networks settings.
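As a rough sketch of that last adjustment (the cluster name, region, and CIDR below are placeholders, and the flag replaces the whole list, so include any ranges that are already authorized):
# Add the range your Composer workers egress from (for example the Cloud NAT IP)
# to the Autopilot cluster's master authorized networks.
gcloud container clusters update my-autopilot-cluster \
  --region europe-west1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32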
QUESTION
I am learning OpenID Connect implementation in ASP.NET Core with a Web API project. My client is currently Postman.
Context (XY problem): I want Sendgrid to report Webhook data with authentication. Sendgrid uses OAuth 2 flow. I have mocked a Sendgrid Webhook invocation on Postman to use.
I followed a few tutorials to set up an authorization server, i.e. the part that will issue you a token, in particular using a temporary in-memory store based on EF Core. For the moment this solution is sufficient for me, and I'll have to do more research and prototyping before it becomes production-grade for reuse in future projects.
I can successfully obtain a token with Postman using hardcoded credentials. Now I want the Controller APIs to validate tokens issued by the very same server. Let me show some code:
Startup.cs
ANSWER
Answered 2021-Jan-18 at 14:45
The key was to add the correct authentication scheme.
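One way to exercise the protected controller once the scheme is in place (a sketch only; the endpoint path and port are hypothetical placeholders):
# TOKEN is the access_token previously obtained via Postman.
TOKEN="eyJ..."
curl -i https://localhost:5001/api/webhook -H "Authorization: Bearer $TOKEN"
# Expect 200 with a valid token and 401 without one.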
QUESTION
I got the following error (debian-rules-is-dh_make-template) from lintian.
How should I fix it so that the check passes?
The message says that I didn't modify debian/rules, but I already modified it (I added an override_dh_auto_clean: target), so I guess my debian/rules is insufficient, but I can't figure out why.
ANSWER
Answered 2020-Nov-25 at 02:24
How about deleting the dh_make template comments in debian/rules?
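After removing the template comments, you can rebuild and re-check (a sketch; the package name is a placeholder):
# Rebuild the package unsigned and re-run lintian with extended info.
debuild -us -uc
lintian -i ../mypackage_*.changes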
QUESTION
I'm using a Raspberry Pi 4 Model B and i want to run the Openthread Border Router application on it as a docker container. I use the command docker run --sysctl "net.ipv6.conf.all.disable_ipv6=0 net.ipv4.conf.all.forwarding=1 net.ipv6.conf.all.forwarding=1" -p 8080:80 --dns=127.0.0.1 -dit --network test-driver-net --volume /dev/ttyACM0:/dev/ttyACM0 --name ot-br --privileged openthread/otbr --radio-url spinel+hdlc+uart:///dev/ttyACM0
to start the container. I have tried both the openthread/otbr:latest and the openthread/otbr:reference-device images (both pushed 10 Nov 2020), and both had the same problem:
The container starts successfully, but the web GUI is not available and no network operation takes place. Here is the logged output of the container when called with docker logs ot-br:
ANSWER
Answered 2020-Nov-19 at 17:06
This issue was recently fixed with openthread/ot-br-posix#614 and new Docker images have been pushed. Please try again.
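For example (a sketch of the retry), refresh the image and replace the stale container before re-running the original docker run command from the question:
# Pull the rebuilt image and remove the old container.
docker pull openthread/otbr:latest
docker rm -f ot-br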
QUESTION
Basic auth is deprecated: https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster
I'm authenticating like this (same as the docs: https://www.terraform.io/docs/providers/google/d/client_config.html):
ANSWER
Answered 2020-Oct-18 at 12:04
Without being an expert in K8S, I would say that
QUESTION
macOS now requires that all applications are hardened, signed, and notarized. How does one sign and notarize an application created outside of Xcode with a tool like PyInstaller?
I've sorted out the signing and notarization for .app files created outside of Xcode. There's a really helpful thread here that shows how to add an entitlements.plist, which fulfills the hardening requirement for PyInstaller .app files. I believe this also works for command line utilities, but I could be missing something. Submitting a .dmg containing a .app for notarization using altool will pass the tests and be notarized by Apple.
Submitting a single command line utility using the same process will also pass notarization, but it does not appear signed or notarized to the Gatekeeper check on other machines. I assume this has something to do with the fact that a valid Info.plist file is not included in the PyInstaller binary, as detailed in this blog post about building and delivering command line tools for Catalina.
Checking the signature of a signed file using codesign -dvv indicates that the Info.plist is "not bound".
ANSWER
Answered 2020-Nov-07 at 23:34
Apple requires that all distributed binaries are signed and notarized using a paid Apple Developer account. This can be done using command line tools for binaries created with tools such as PyInstaller, or compiled using gcc.
Automated Python Script for this Process
The script linked below allows you to automate this process using project-specific .ini files.
If you already have a developer account with Developer ID Application and Developer ID Installer certificates configured in Xcode, skip this step.
- Create a developer account with Apple: go to https://developer.apple.com and shell out $99 for a developer account. Thieves.
- Download and install Xcode from the Apple App Store.
- Open and run the Xcode app and install whatever extras it requires.
- Open the preferences pane (cmd+,) and choose Accounts.
- Click the + in the lower left corner and choose Apple ID.
- Enter your Apple ID and password.
- Previously created keys can be downloaded and installed from https://developer.apple.com.
- Select the developer account you wish to use and choose Manage Certificates...
- Click the + in the lower left corner and choose Developer ID Application.
- Click the + in the lower left corner and choose Developer ID Installer.
- Open Keychain Access and create a "New Password Item":
- Keychain Item Name: Developer-altool
- Account Name: your developer account email
- Password: the application-specific password you just created
NB! Additional args such as --add-data may be needed to build a functional binary.
- Create a onefile binary: pyinstaller --onefile myapp.py
- Add the entitlements.plist to the directory (see below).
- List the available keys and locate a Developer ID Application certificate: security find-identity -p basic -v
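Building on the steps above, a rough sketch of the signing and notarization commands themselves (this is not the linked script; the signing identity, bundle id, and file names are placeholders):
# Sign the PyInstaller output with the hardened runtime and the entitlements file.
codesign --force --options runtime --timestamp \
  --entitlements entitlements.plist \
  --sign "Developer ID Application: Your Name (TEAMID1234)" dist/myapp
# Zip the binary and submit it for notarization, reusing the Developer-altool
# keychain item created earlier.
ditto -c -k --keepParent dist/myapp myapp.zip
xcrun altool --notarize-app \
  --primary-bundle-id "com.example.myapp" \
  --username "you@example.com" \
  --password "@keychain:Developer-altool" \
  --file myapp.zip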
QUESTION
Hello fellow Overflowers,
I have 2 Nginx webservers in my OpenStack environment. I'm trying to set up load balancing with HAProxy right now. Ubuntu 18 is the OS on all servers.
I added the backend IPs to the default config. When I try to connect to my LB via browser I get:
"503 Service Unavailable"
What I know so far:
- Backends are available when I connect directly to them.
- I opened the correct ports in the OpenStack GUI
- I checked the HAProxy logs and found the following:
ANSWER
Answered 2020-Oct-21 at 09:50
If you're getting a "cannot bind socket" error message, then try running the command below:
setsebool -P haproxy_connect_any=1
Otherwise, kill the service that is running on the port you want to use and then restart HAProxy:
$ fuser -k /tcp
$ sudo systemctl restart haproxy
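If neither of those applies, a couple of generic checks (a sketch, assuming the stock Ubuntu paths) can help rule out config and socket problems behind the 503:
# Validate the HAProxy configuration and confirm the service is up and listening.
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl status haproxy
sudo ss -ltnp | grep haproxy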
QUESTION
In hardening our ADO projects for security, we found that an org-level user named "Azure Boards" has been granted access to all area paths. We haven't yet found documentation on this user, so we're assuming that this is a built-in user that should not be altered. However, as part of hardening we do need to understand more about this user.
The question is: Where is the documentation for the org-level ADO user named Azure Boards (if any)?
Update per comment request:
ANSWER
Answered 2020-Sep-01 at 03:40
I cannot find any documentation describing this service account. I have raised a new feedback ticket on GitHub to report it to the Microsoft Docs team; you can follow the ticket to get the latest news. I will continue to check the ticket, and if there are any developments, I will post them here.
Update 1
This account, Azure Boards, gets created when you connect Azure Boards to GitHub. It works in the background to support the features that the GitHub connection supports.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported