hacker | Hacker is a Jekyll theme for GitHub Pages | Theme library
kandi X-RAY | hacker Summary
Community Discussions
Trending Discussions on hacker
QUESTION
I need your help. I've programmed a custom cron function within WordPress, which uses the WooCommerce product importer class to automatically import products:
...ANSWER
Answered 2022-Apr-10 at 09:52
How about assigning the capability to user ID 0 instead of logging in as the admin (user ID 1 is usually an admin user) while importing, and removing the capability after the import is done?
QUESTION
I'm new to HackerRank and I'm currently solving problems in the Java track. I tried to solve this algorithm:
A string containing only parentheses is balanced if the following is true:
1. it is an empty string;
2. if A and B are correct, AB is correct;
3. if A is correct, (A) and {A} and [A] are also correct.
Examples of some correctly balanced strings are: "{}()", "[{()}]", "({()})"
Examples of some unbalanced strings are: "{}(", "({)}", "[[", "}{" etc.
Given a string, determine if it is balanced or not.
And I found the following one liner solution which I couldn't understand can someone explain it please?
...ANSWER
Answered 2022-Mar-31 at 14:02
The replaceAll method takes two parameters (regex, replacement), so you need to understand the regex. In it, \\( means the match must begin with a literal '(' and \\) means it must end with a literal ')', with whatever sits between them matched as well; the rest of the regex does the same for the other bracket types. The solution replaces every such match with the empty string.
So if your input is (1,2)3(, after the replacement it becomes 3(.
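The technique the one-liner relies on can be illustrated in Python (a sketch of the idea, not the original Java code): repeatedly delete adjacent bracket pairs until nothing changes; the string was balanced if and only if nothing is left.

```python
import re

def is_balanced(s):
    # Repeatedly delete adjacent bracket pairs; balanced iff nothing remains.
    prev = None
    while prev != s:
        prev = s
        s = re.sub(r"\(\)|\[\]|\{\}", "", s)
    return s == ""
```

For "[{()}]" the inner "()" is removed first, then "{}", then "[]", leaving the empty string; for "({)}" no pair is ever adjacent, so the string survives and the check fails.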
QUESTION
I have been looking at the Wordfence scan results on my site this morning and see 17 instances which seem to imply malware has been installed on the server. I would be surprised if this were the case, but wanted to be sure:
One example,
Filename: wp-admin/menu-header-cron.php File Type: Not a core, theme, or plugin file from wordpress.org. Details: This file appears to be installed or modified by a hacker to perform malicious activity. If you know about this file you can choose to ignore it to exclude it from future scans. The matched text in this file is:
The issue type is: Backdoor:PHP/numeric.rce.8527 Description: Remote code execution malware
Looking at the file in question, the content of this file is:
...ANSWER
Answered 2022-Mar-27 at 16:34
That snippet reads the limit parameter, then uses it as a URL to fetch a file, and the eval function simply executes whatever it fetched.
So it's pretty dangerous.
QUESTION
What I mean is attacks from hackers, like DoS.
I'm starting an internship at a startup company, where my job is to develop a food delivery application. Right now, I'm using Firebase with the flutterfire package to authenticate and send data. I allow authentication Anonymously (this method is where I'm concerned: can someone DoS by creating new UIDs rapidly?), and via Facebook, Gmail, and Apple ID.
I also need to send sensitive information like credit cards and addresses to store on the server. Are there any security risks I should be concerned about?
...ANSWER
Answered 2022-Mar-20 at 19:31
Firebase protects its services against abuse where reasonably possible. For example, a DoS by creating anonymous user accounts in rapid succession will quickly lead to a block on that IP address.
Firebase can be used to build secure applications, but (just like any technology) it can also be used in improperly secured applications. Covering the difference between those two cases is as complex a topic as how to build the application itself though, so quite a bit too broad to reasonably write in a single answer here on Stack Overflow.
QUESTION
Ubuntu 20.04: what are the security risks without firewall?
Installed Ubuntu 20.04, but forget to enable firewall using ufw.
SSH 22 port: use keys(2048 bit) for login, no password. Setting UsePAM=true, any risk?
Any other services that may have security holes without firewall, and hackers can break into the server?
...ANSWER
Answered 2022-Mar-07 at 17:40
Yes, you should enable the firewall. It's an important security layer.
Software has bugs. The firewall layer prevents some bugs or mistakes from causing harm.
Security is layered for the same reason airplanes have redundant systems. Even single engine airplanes are designed to glide when they lose thrust.
SSH and Services You Know About
While proper SSH configuration is another topic, it illustrates a reason firewalls are needed. Your config is on the right track, but without reading the entire man page you're still unsure whether it's secure.
If you're unsure about SSH, a firewall can limit access from source IPs that you define adding another layer.
SSH is but one of a handful of services you're running that might be accessible over the public internet. Sometimes services become open to the public unintentionally.
Third Party Software
One type of bug is a software update or install that inadvertently opens a service and exposes that service to the public internet.
I frequently see application installs that open a private service bound to 0.0.0.0 when it should be bound to 127.0.0.1. If you don't know the difference, you aren't alone. Binding to 0.0.0.0 (or *) means open to the public internet.
This isn't just a user-workstation problem. Package managers are susceptible to this too. NPM, Python PIP, and Apt all can run executables on your system.
Checking for Open Services
Run sudo netstat -n to show active internet connections.
For example, here's output:
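As a complement to netstat, you can probe a specific port to confirm whether anything is actually accepting connections there (a minimal Python sketch; the host and port are whatever you want to test):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    # connect_ex returns 0 when the TCP handshake succeeds,
    # i.e. some service is listening on that host:port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

Probing 127.0.0.1 only tells you a service is listening on loopback; to know whether it is exposed publicly (bound to 0.0.0.0), probe the machine's external address from another host.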
QUESTION
I'm trying to understand the security model in Github Actions (GHA). Let's say I have the following requirements on a public repo:
- Allow pull requests from forked repos to be opened
- GHA should run unit tests on pull requests
- GHA should post unit test results as a PR comment
In order for the third requirement to work, the pull request needs access to the GITHUB_TOKEN with repo write permissions. This will require the following two permissions:
- Run Workflows from fork pull requests.
- Send write tokens to Workflows from fork pull requests
Now if write tokens are sent to the Workflows in fork PRs, what's to prevent a hacker from changing the Workflow in the PR and using it for any number of malicious purposes (creating a malicious release in the original repo or exfiltrating repo secrets)? I understand you can limit the permissions of the token, but this is done within the workflow; the hacker can just as easily remove the limitations as part of the PR.
Is there any way to accomplish the three requirements without this security hole?
...ANSWER
Answered 2022-Mar-05 at 22:10
The must-read for this question is: Keeping your GitHub Actions and workflows secure Part 1: Preventing pwn requests
GH Action provides the event type "pull_request_target", which has write permissions and can comment the PR. Do not use this without being careful! The PR's code is untrusted - if you build it, it might inject malicious code and compromise your repository and might steal your secrets.
The proposed solution for this is:
Have a workflow triggered by "pull_request" event. This runs with read-only GITHUB_TOKEN. Here you can run the unit tests. At the end of this workflow, the unit test results are uploaded as a build artifact.
Have another workflow, that is triggered by the event "workflow_run". It runs when the PR-workflow has completed. This second workflow runs in the context of the base repository with write-access GITHUB_TOKEN and all other configured secrets. It can download the artifact from the first workflow run and use this build result to create a comment to the PR.
Important: The incoming artifact data from the first workflow run must still be considered untrusted. But:
When used in a safe manner, like reading PR numbers or reading a code coverage text to comment on the PR, it is safe to use such untrusted data in the privileged workflow context.
See https://securitylab.github.com/research/github-actions-preventing-pwn-requests/ for a complete usage example.
Note: Your screenshot about the configuration for "Fork pull request workflows" has meanwhile been renamed to "Fork pull request workflows in private repositories": With private repos, you are in control with whom you share your code. So you might decide to trust by default. But with public repository, anyone can fork the repo.
Update: All links to the github actions blog post series:
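The two-workflow pattern described above can be sketched roughly as follows (a hedged outline, not a drop-in configuration; the workflow names, artifact name, and test command are placeholders):

```yaml
# File 1: .github/workflows/pr-tests.yml
# Runs the untrusted PR code with a read-only GITHUB_TOKEN.
name: PR tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./run-unit-tests.sh > results.txt   # placeholder for your test command
      - uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: results.txt
---
# File 2: .github/workflows/pr-comment.yml
# Privileged context; never checks out or executes the PR's code.
name: PR comment
on:
  workflow_run:
    workflows: ["PR tests"]
    types: [completed]
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      # Download the artifact from the triggering run (cross-run downloads need
      # the REST API or a helper action), treat its contents as untrusted data,
      # and post results.txt as a PR comment using the write-access GITHUB_TOKEN.
      - run: echo "download artifact and comment on the PR here"
```

The key property is that the second workflow runs from the base repository's own workflow file, so a PR cannot edit it to escalate its permissions.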
QUESTION
In this section, the codecov documentation says:
The upload token is required for all uploads, except originating from public projects using Travis-CI, Circle CI, Azure, Github Actions.
What prevents "hackers" from uploading a fake codecov file and claiming the file was uploaded from a public repository with Codecov enabled?
What makes public projects special?
...ANSWER
Answered 2022-Feb-28 at 13:40
Codecov uses the status of the CI, the progress of the current job, and knowledge from the public API of both the repo and CI providers to determine whether or not a tokenless upload on a public repository should be successful.
Source: I work at Codecov at the time of this answer
QUESTION
I am new to the OAuth world and I am trying to understand the benefits of using PKCE over traditional Authorization code grant. (Many of my assumptions could be wrong, so I would thank for your corrections.)
I am a mobile app developer and according to OAuth documentation, client secrets can't be hardcoded in public clients' app code. The reason to avoid hardcoding the client secret is that a hacker could decompile my app and get my client secret.
The hacker with my client secret and my redirect_url, could develop a fake application. If a final user (User1) downloads the real application and the hacker's application (both), the fake application could listen to the real application callback and get the authorization code from it. With the authorization code (from the callback) and the client secret (stolen by decompiling my app), the hacker could get the authorization token and the refresh token and be able to get for example User1's data.
If other users download the real and the fake application, their data would also be in danger. Am I right? Would the hacker need both, or could he/she attack with only the authorization code? Does the fifth step of the image require the client secret and authorization code?
The attack is called interception attack.
To solve the problem of hardcoding client secrets in the public client app, and to make it impossible for hackers to get the client secret and steal tokens, PKCE was invented. With PKCE, the client app code doesn't need to have the client secret hardcoded, as PKCE doesn't need that information to get the tokens of the final users.
The PKCE flow creates a random string, transforms it with a SHA-256 hash, and encodes the result in Base64. In the second point of the image, that encoded string is sent to the authorization server with the client id. Then the authorization code is sent in the callback, and if any malicious app intercepts the code, it won't be able to get the tokens, since the fifth point of the image needs the original random string that was created by the legitimate app.
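That transformation (random verifier, SHA-256 hash, base64url encoding: the S256 method defined in RFC 7636) is small enough to sketch in Python:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: 43-char base64url string from 32 random bytes (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge: base64url(SHA-256(verifier)), also without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The app sends the challenge with the authorization request and keeps the verifier in memory; only the same process that started the flow can later present the verifier at the token endpoint.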
That is great, but if the client secret isn't needed anymore to get the tokens to access User1's data, how can I stop a hacker from developing a fake app which uses the PKCE flow with my client id and getting the tokens of users who think that app is the legitimate one?
As the fifth step of the image no longer needs the client secret to get the tokens, anyone could develop fake apps using my public client id, and if any user downloads the fake app and does the OAuth flow, the hacker could get their tokens and access that user's data!
Am I right?
...ANSWER
Answered 2022-Jan-29 at 21:21
if the client secret isn't needed anymore to get the tokens to access User1 data, how can I avoid a hacker developing a fake app which uses the PKCE flow with my client id and getting the tokens of the users who think that app is the legitimate one?
OAuth 2.0 or PKCE does not protect against "fake apps".
PKCE does protect against a malicious app on the device stealing a token that is intended for another app. E.g., think of a bank app: it is not good if another app on the device can get the token that the bank app is using. That is the case illustrated in your picture, and the one PKCE mitigates.
As the fifth step of the image no longer needs the client secret to get the tokens, anyone could develop fake apps using my public client id.
A mobile app cannot protect a client secret, similarly to JavaScript Single Page Applications. Therefore these clients are Public Clients rather than Confidential Clients according to OAuth 2.0. Only Confidential Clients can protect a client secret in a secure way, only those should use client secrets. PKCE is a good technique for Public Clients but might be used for Confidential Clients as well.
if any user downloads the fake app and does the OAuth flow, the hacker could get their tokens and access that user's data!
Contact the Apple App Store or Google Play Store about "fake apps", or use e.g. anti-malware applications. Those are the mitigations against "fake apps". PKCE only mitigates the case when another app on the same device tries to steal the token that was issued for another app (e.g. a bank app).
QUESTION
How can I test in Solana Anchor if a hacker can call or invoke certain program functions?
Is it by changing the first element inside the signers array:
...ANSWER
Answered 2022-Jan-30 at 08:01
For your Anchor tests, it will use the provider.wallet as the payer and thus automatically use the provider.wallet as the signer.
You can also add signers to your JavaScript calls through the signers array field, in case your program requires them to be signers.
Tutorial 1 is not a realistic example here, since anyone can come in and modify the accounts.
By default the Anchor tests use the provider.wallet as the payer and signer for transactions. If you want to use another wallet, you would have to create another Anchor program instance, following the function below.
QUESTION
I am developing a Sports Mobile App with Flutter (mobile client) that tracks its users' activity data. After tracking an activity (swimming, running, walking, ...), it calls a REST API developed by me (with Spring Boot), passing that activity data with a POST. Then, my user will be able to view the logs of his tracked activities by calling the REST API with a GET.
As I know that my own tracking development isn't as good as Strava, Garmin, Huawei and so on ones, I want to let my app users to connect with their Strava, Garmin and so on accounts to get their activities data, so I need users to authorize my app to get that data using OAuth.
In a first approach, I have managed to develop the whole OAuth flow in Flutter using the Authorization Code Grant. The authorization server login is launched by Flutter in a user agent (Chrome tab), and once the resource owner has logged in and authorized my Flutter app, my app takes the authorization code and then calls the authorization server to get the tokens. So I can say that my client is my Flutter app. When the OAuth flow is done, I send the tokens to my REST API in order to store them in a database.
My first idea was to send those tokens to my backend app in order to store them in a database, and to develop a process that takes those tokens, consults the resource servers, parses each resource server's JSON response activities into my REST API activity model, and stores them in my database. Then, if a resource owner consults his activities by calling my REST API, he would get a response with all the activities (the mobile app's tracked ones + the Strava, Garmin, etc. resource servers' ones stored in my db).
I have discarded the option of calling the resource servers directly from my client (and from my REST API when a user pushes a synchronize button) and mapping those responses directly in my client, because I need the data of those resource server responses in the backend in order to implement a medal functionality. Furthermore, Strava, Garmin, etc. have usage limits, and I don't want to give my resource owners the ability to push the button as many times as they want.
Here is the flow of my first idea:
Steps:
Client calls the authorization server, launching a user agent to an OAuth login, in order to make the resource owner log in and authorize. The URL and the params are hardcoded in my client.
Resource owner logins and authorize client.
Callback is sent with code.
Client captures the code from the callback and makes a POST to the authorization server to get the tokens. As some authorization servers accept PKCE, I am using PKCE when it's possible, to avoid attacks and hardcoding my client secret in my client. Others, like Strava's, don't allow PKCE, so I have to hardcode the client secret in my client in order to get the tokens.
Once the tokens are returned to my client, I send them to my rest api and store in a database identifying the tokens resource owner.
To call the resource server:
One periodic process takes the tokens of each resource owner and updates my database with the activities returned from each resource server.
The resource owner calls the rest api and obtains all the activities.
The problem with this first idea is that some of the authorization servers allow implementing PKCE (Fitbit) while others use the client secret to create the tokens (Strava). As I need the client secret to get the tokens from some of those authorization servers, I have hardcoded the secrets in the client, and that is not secure.
I know that it is dangerous to put the client secrets into the client, as a hacker can decompile my client and get the client secret. I can't figure out how to get the resource owner's Strava tokens without hardcoding the client secret when PKCE is not allowed by the authorization server.
As I don't want to hardcode my client secrets in my client, because it is unsafe, and I want to store the tokens in my db, I don't see my first approach as a good option. Furthermore, I am creating a POST request to my REST API in order to store the access token and refresh token in my database, and if I am not wrong, that process can be done directly from the backend.
I am in the situation that I have developed a public client (mobile app) with hardcoded client secrets, because I can't figure out how to avoid doing that when PKCE isn't allowed by the authorization server to get the tokens.
So after thinking on all those problems, my second idea is to take advantage of my REST API and do the call to the authorization server from there. So my client would be confidential and I would do the OAuth flow with a Server-side Application.
My idea is based on this image.
In order to avoid the client secret hardcoding in my mobile client, could the following code flow based on the image work and be safe to connect to Strava, Garmin, Polar....?
Strava connection example:
MOBILE CLIENT
Mobile public client calls my REST API to get as a result the URI of the Strava authorization server login, with the needed params such as: callback, redirect_uri, client_id, etc.
Mobile client Catches the Rest API GET response URI.
Mobile client launches a user agent (Chrome custom tab) and listen to the callback.
USER AGENT
The login prompt to strava is shown to the resource owner.
The resource owner inserts credentials and pushes authorize.
Callback is launched
MOBILE CLIENT
When my client detects the callback, control returns to the client, which extracts the code from the callback URI.
Send that code to my REST API with a post. (https://myrestapi with the code in the body)
REST API CLIENT
Now the client is my REST API, as it is the one that calls the authorization server with the code obtained by the mobile client. It will take that code and, with the client secret stored in it, call the authorization server. With this approach, the client secret is no longer in the mobile client, so it stays confidential.
The authorization server returns the tokens and I store them in a database.
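The backend exchange in this step can be sketched in Python. The endpoint URL and parameter names follow the standard authorization-code grant and Strava's documented token endpoint, but treat them as assumptions to verify against the provider's docs:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed token endpoint; check the provider's OAuth documentation.
TOKEN_URL = "https://www.strava.com/oauth/token"

def build_token_request(client_id, client_secret, code):
    # The secret never leaves the backend; the mobile client only supplies `code`.
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "grant_type": "authorization_code",
    }).encode("ascii")
    return Request(TOKEN_URL, data=body, method="POST")

# Sending the request with urllib.request.urlopen(...) would return the JSON
# response containing the access and refresh tokens to store in the database.
```

Because this POST originates from the REST API, the secret can also live in server-side configuration (environment variable or vault) rather than in any shipped binary.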
THE PROCESS
- Takes those tokens from my database and make calls to the resource servers of strava to get the activities. Then parses those activities to my model and stores them into the database.
Is this second approach a good way to handle the client secrets in order to avoid making them public? Or am I doing something wrong? What flow could I follow to do it the right way? I am really stuck on this case, and as I am new to the OAuth world, I am overwhelmed by all the information I have read.
...ANSWER
Answered 2022-Jan-25 at 12:54
From what I understand, the main concern here is that you want to avoid hardcoding the client secret.
I am taking Keycloak as an example for the authorization server, but this would be the same with other authorization servers as well, since the implementations have to follow the standards.
In authorization servers there are two types of clients; one is the:
1. Confidential client - these are the ones that require both client-id and client-secret to be passed in your REST API call.
The cURL would be like this, client secret required:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network