isowall | completely isolates a target device | Security library

by robertdavidgraham | C | Version: Current | License: Non-SPDX

kandi X-RAY | isowall Summary

isowall is a C library typically used in Security applications. isowall has no reported bugs or vulnerabilities and has low support. However, isowall has a Non-SPDX license. You can download it from GitHub.

This is a mini-firewall that completely isolates a target device from the local network. It is intended to give infected machines Internet access without endangering the local network.

Support

              isowall has a low active ecosystem.
It has 281 stars and 73 forks. There are 20 watchers for this library.
              It had no major release in the last 6 months.
There is 1 open issue and 0 closed issues. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of isowall is current.

Quality

              isowall has no bugs reported.

Security

              isowall has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              isowall has a Non-SPDX License.
Non-SPDX licenses can be open-source licenses that are not SPDX-compliant, or non-open-source licenses; you need to review them closely before use.

Reuse

              isowall releases are not available. You will need to build from source code and install.
Installation instructions are not available. Examples and code snippets are not available.

            isowall Key Features

            No Key Features are available at this moment for isowall.

            isowall Examples and Code Snippets

            No Code Snippets are available at this moment for isowall.

            Community Discussions

            QUESTION

            How are code-branch side channel attacks mitigated on Java?
            Asked 2022-Mar-10 at 18:18

When you are working with secret keys, if your code branches unequally it could reveal bits of the secret keys via side channels. So, for some algorithms, the code should branch uniformly, independently of the secret key.

In C/C++/Rust, you can use assembly to be sure that no compiler optimizations will interfere with the branching. On Java, however, the situation is difficult. It uses JIT on desktop and AOT on Android, so there are two ways the code can be optimized unpredictably, and JIT and AOT are always changing and can differ for each device. So, how are side-channel attacks that take advantage of branching prevented in Java?

            ...

            ANSWER

            Answered 2022-Mar-10 at 18:18

When performing side-channel attacks, one of the main approaches is to read the power consumption of the chip using differential power analysis (DPA). When you have a branch in the code, such as an if statement, it can affect the power draw in such a way that correlations can be made as to which choices are being made. To thwart this analysis, it is in your interest to have a "linear" power consumption. This can to some degree be mitigated in code, but it ultimately depends on the device itself. According to Brennan et al. [1], some chose to tackle the Java JIT issue by caching instructions. In code, the "best" you could do would be to program using canaries in order to confuse an attacker, as proposed by Brennan et al. [2] and demonstrated in their (very simplified) example code.

            Source https://stackoverflow.com/questions/71316831

            QUESTION

            Trusting individual invalid certs in mitmproxy
            Asked 2022-Mar-02 at 07:37

            I use mitmproxy to gather intel from outbound AS2 (HTTP) requests leaving our network. The schema goes like this:

            ...

            ANSWER

            Answered 2022-Mar-02 at 07:37

It's been a while; I've since tried to solve this using a custom addon and it seems to work fine, so I'll share it here:

            https://gist.github.com/jsmucr/24cf0859dd7c9bba8eb2817d7b0bf4b6

This approach has one disadvantage: it doesn't check whether the peer certificate changes.

            Source https://stackoverflow.com/questions/70515761

            QUESTION

            Ways to stop other android applications from identifying my application?
            Asked 2022-Feb-23 at 11:07

We have developed a payment application with native Android to compete in the local market. Our competitors have made it so that when their applications detect ours, theirs automatically disable themselves. The problem is that our users use their applications as well, so we want our application to be unidentifiable by the other apps. Our solutions for this so far have been distributing our app manually instead of via the Play Store and generating a unique bundle ID for each individual user.

            What else can we do to get around this?

            ...

            ANSWER

            Answered 2022-Feb-23 at 11:07

Until Android 10, any app can list ALL apps installed on the device. Starting with Android 11 there are some limitations, and by default an app can't list all other apps, so you are "safe". BUT a 3rd-party app may request the QUERY_ALL_PACKAGES permission and will then detect your app as well. Note that the Google Play Store has a special policy for such apps; not every app may be published with it.

Still, your app may be detected when it uses such a 3rd-party app's API/service (depending on the access method), and that app will then lock itself.

            Source https://stackoverflow.com/questions/71169384

            QUESTION

            Log4j vulnerability - Is Log4j 1.2.17 vulnerable (was unable to find any JNDI code in source)?
            Asked 2022-Feb-01 at 15:47

With regard to the Log4j JNDI remote code execution vulnerability that has been identified as CVE-2021-44228 (also see the references below), I wondered if Log4j v1.2 is also impacted, but the closest I got from a source code review is the JMS-Appender.

            The question is, while the posts on the Internet indicate that Log4j 1.2 is also vulnerable, I am not able to find the relevant source code for it.

            Am I missing something that others have identified?

Log4j 1.2 appears to have a vulnerability in the socket-server class, but my understanding is that it needs to be enabled in the first place for it to be applicable, and hence it is not a passive threat, unlike the JNDI-lookup vulnerability, which the identified one appears to be.

Is my understanding correct that Log4j v1.2 is not vulnerable to the JNDI remote-code-execution bug?

            References

This blog post from Cloudflare also indicates the same point as AKX: that it was introduced in Log4j 2!

Update #1 - A fork of the (now-retired) apache-log4j-1.2.x, with patches for a few vulnerabilities identified in the older library, is now available (from the original Log4j author). The site is https://reload4j.qos.ch/. As of 21-Jan-2022, version 1.2.18.2 has been released. Vulnerabilities addressed to date include those pertaining to JMSAppender, SocketServer and Chainsaw. Note that I am simply relaying this information and have not verified the fixes myself. Please refer to the link for additional details.

            ...

            ANSWER

            Answered 2022-Jan-01 at 18:43

            The JNDI feature was added into Log4j 2.0-beta9.

            Log4j 1.x thus does not have the vulnerable code.

            Source https://stackoverflow.com/questions/70310980

            QUESTION

            How to manage OAuth flow in mobile application with server
            Asked 2022-Jan-27 at 12:23

I am developing a Sports Mobile App with Flutter (mobile client) that tracks its users' activity data. After tracking an activity (swimming, running, walking, ...) it calls a REST API developed by me (with Spring Boot), passing that activity data with a POST. Then my users are able to view the logs of their tracked activities by calling the REST API with a GET.

As I know that my own tracking development isn't as good as Strava's, Garmin's, Huawei's and so on, I want to let my app users connect their Strava, Garmin, etc. accounts to get their activities data, so I need users to authorize my app to get that data using OAuth.

In a first approach, I managed to develop the whole OAuth flow in Flutter using the Authorization Code Grant. The authorization server login is launched by Flutter in a user agent (Chrome tab), and once the resource owner has logged in and authorized my Flutter app, the app takes the authorization code and then calls the authorization server to get the tokens. So I can say that my client is my Flutter app. When the OAuth flow is done, I send the tokens to my REST API in order to store them in a database.

My first idea was to send those tokens to my backend app in order to store them in a database, and to develop a process that takes those tokens, consults the resource servers, parses each resource server's JSON response activities into my REST API activity model, and stores them in my database. Then, if a resource owner consults his activities by calling my REST API, he would get a response with all the activities (the ones tracked by the mobile app + the Strava, Garmin, etc. ones stored in my DB).

I have discarded the option of calling the resource servers directly from my client (and then my REST API) when a user pushes a synchronize button and mapping those responses directly in my client, because I need the data from those resource server responses in the backend in order to implement a medal feature. Furthermore, Strava, Garmin, etc. have usage limits and I don't want to give my resource owners the ability to push the button as many times as they want.

            Here is the flow of my first idea:

            Steps:

1. The client calls the authorization server, launching a user agent for the OAuth login, so that the resource owner can log in and authorize. The URL and the params are hardcoded in my client.

2. The resource owner logs in and authorizes the client.

3. The callback is sent with the code.

4. The client captures the code from the callback and makes a POST to the authorization server to get the tokens. As some authorization servers accept PKCE, I use PKCE when it's possible, to avoid attacks and to avoid hardcoding my client secret in my client. Others, like Strava's, don't allow PKCE, so I have to hardcode the client secret in my client in order to get the tokens.

5. Once the tokens are returned to my client, I send them to my REST API and store them in a database, identifying each token's resource owner.

            To call the resource server:

            1. One periodic process takes the tokens of each resource owner and updates my database with the activities returned from each resource server.

2. The resource owner calls the REST API and obtains all the activities.

The problem with this first idea is that some of the authorization servers allow PKCE (Fitbit) while others require the client secret to create the tokens (Strava). As I need the client secret to get the tokens for some of those authorization servers, I have hardcoded the secrets in the client, and that is not secure.

I know that it is dangerous to put the client secrets into the client, as a hacker can decompile my client and extract them. I can't figure out how to get the resource owner's Strava tokens without hardcoding the client secret when PKCE is not allowed by the authorization server.

As I don't want to hardcode my client secrets in my client, because it is unsafe, and I want to store the tokens in my DB, I don't see my first approach as a good option. Furthermore, I am creating a POST request to my REST API in order to store the access token and refresh token in my database, and if I am not wrong, that process can be done directly from the backend.

I am in the situation that I have developed a public client (mobile app) with the client secrets hardcoded, because I can't figure out how to avoid doing that when PKCE isn't allowed by the authorization server.

So, after thinking about all those problems, my second idea is to take advantage of my REST API and do the call to the authorization server from there. My client would then be confidential and I would do the OAuth flow with a server-side application.

            My idea is based on this image.

In order to avoid hardcoding the client secret in my mobile client, could the following code flow, based on the image, work and be safe for connecting to Strava, Garmin, Polar, and so on?

            Strava connection example:

            MOBILE CLIENT

• The public mobile client calls my REST API and gets back the URI of the Strava authorization server login with the needed params, such as callback, redirect_uri, client_id, etc.

• The mobile client catches the REST API GET response URI.

• The mobile client launches a user agent (Chrome custom tab) and listens for the callback.

            USER AGENT

• The Strava login prompt is shown to the resource owner.

• The resource owner enters credentials and pushes Authorize.

• The callback is launched.

            MOBILE CLIENT

• When my client detects the callback, control returns to the client, which extracts the code from the callback URI.

• It sends that code to my REST API with a POST (https://myrestapi with the code in the body).

            REST API CLIENT

• Now the client is my REST API, as it is the one that calls the authorization server with the code obtained by the mobile client. The client takes that code and, with the client secret hardcoded in it, calls the authorization server. With this approach, the client secret is no longer in the mobile client, so it stays confidential.

            • The authorization server returns the tokens and I store them in a database.

            THE PROCESS

• It takes those tokens from my database and calls the Strava resource servers to get the activities, then parses those activities into my model and stores them in the database.

Is this second approach a good way to handle the client secrets in order to avoid making them public? Or am I doing something wrong? What flow could I follow to do it the right way? I am really stuck on this case, and as I am new to the OAuth world I am overwhelmed by all the information I have read.
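
A minimal sketch of the token-exchange step of that second approach (illustrative only: the question's backend is Spring Boot, but the flow is the same in any server-side stack; the Flask route, client credentials, and storage step below are placeholders, while the token endpoint and parameters follow Strava's standard authorization-code exchange):

    # Sketch of the "REST API CLIENT" step: the mobile app POSTs the authorization code
    # here, and the backend (the only place that holds the client secret) exchanges it
    # for tokens.
    import requests
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    STRAVA_TOKEN_URL = "https://www.strava.com/oauth/token"
    CLIENT_ID = "12345"                    # placeholder
    CLIENT_SECRET = "server-side-secret"   # stays on the server, never in the mobile app

    @app.route("/oauth/strava/exchange", methods=["POST"])
    def exchange_code():
        code = request.json["code"]
        resp = requests.post(STRAVA_TOKEN_URL, data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "code": code,
            "grant_type": "authorization_code",
        })
        resp.raise_for_status()
        tokens = resp.json()               # access_token, refresh_token, expires_at, ...
        # persist the tokens in the database, keyed by the resource owner, here
        return jsonify({"status": "connected"})

In this arrangement the mobile client never sees the client secret; it only forwards the short-lived authorization code.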

            ...

            ANSWER

            Answered 2022-Jan-25 at 12:54

From what I understand, the main concern here is that you want to avoid hardcoding the client secret.
I am taking Keycloak as an example authorization server, but this would be the same with any other authorization server, since the implementations have to follow the standards.
In authorization servers there are two types of clients; one is the
1. Confidential client - these are the ones that require both client-id and client-secret to be passed in your REST API call

The cURL call would be like this, client secret required

            Source https://stackoverflow.com/questions/70770137

            QUESTION

            Which are safe methods and practices for string formatting with user input in Python 3?
            Asked 2022-Jan-18 at 12:53
            My Understanding

            From various sources, I have come to the understanding that there are four main techniques of string formatting/interpolation in Python 3 (3.6+ for f-strings):

            1. Formatting with %, which is similar to C's printf
            2. The str.format() method
            3. Formatted string literals/f-strings
            4. Template strings from the standard library string module

            My knowledge of usage mainly comes from Python String Formatting Best Practices (source A):

            • str.format() was created as a better alternative to the %-style, so the latter is now obsolete
            • f-strings allow str.format()-like behavior only for string literals but are shorter to write and are actually somewhat-optimized syntactic sugar for concatenation
            • Template strings are safer than str.format() (demonstrated in the first source) and the other two methods (implied in the first source) when dealing with user input

            I understand that the aforementioned vulnerability in str.format() comes from the method being usable on any normal strings where the delimiting braces are part of the string data itself. Malicious user input containing brace-delimited replacement fields can be supplied to the method to access environment attributes. I believe this is unlike the other ways of formatting where the programmer is the only one that can supply variables to the pre-formatted string. For example, f-strings have similar syntax to str.format() but, because f-strings are literals and the inserted values are evaluated separately through concatenation-like behavior, they are not vulnerable to the same attack (source B). Both %-formatting and Template strings also seem to only be supplied variables for substitution by the programmer; the main difference pointed out is Template's more limited functionality.
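
As a concrete illustration of that difference (a minimal sketch; the class and attribute names are made up), str.format() will happily follow attribute lookups embedded in an attacker-supplied replacement field, while string.Template only substitutes plain $-names:

    from string import Template

    class Config:
        SECRET_KEY = "hunter2"          # hypothetical sensitive attribute

    cfg = Config()

    # Attacker-controlled "format string" reaching str.format():
    malicious = "{0.SECRET_KEY}"
    print(malicious.format(cfg))        # prints: hunter2 -- data leaked

    # The same string given to Template is inert: $-substitution only accepts
    # simple identifiers, so no attribute traversal happens.
    print(Template(malicious).safe_substitute(user="alice"))   # prints {0.SECRET_KEY} unchanged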

            My Confusion

            I have seen a lot of emphasis on the vulnerability of str.format() which leaves me with questions of what I should be wary of when using the other techniques. Source A describes Template strings as the safest of the above methods "due to their reduced complexity":

            The more complex formatting mini-languages of the other string formatting techniques might introduce security vulnerabilities to your programs.

            1. Yes, it seems like f-strings are not vulnerable in the same way str.format() is, but are there known concerns about f-string security as is implied by source A? Is the concern more like risk mitigation for unknown exploits and unintended interactions?

            I am not familiar with C and I don't plan on using the clunkier %/printf-style formatting, but I have heard that C's printf had its own potential vulnerabilities. In addition, both sources A and B seem to imply a lack of security with this method. The top answer in Source B says,

            String formatting may be dangerous when a format string depends on untrusted data. So, when using str.format() or %-formatting, it's important to use static format strings, or to sanitize untrusted parts before applying the formatter function.

2. Do %-style strings have known security concerns?
3. Lastly, which methods should be used and how can user input-based attacks be prevented (e.g. filtering input with regex)?
  • More specifically, are Template strings really the safer option? And can f-strings be used just as easily and safely while granting more functionality?
            ...

            ANSWER

            Answered 2022-Jan-18 at 12:53

It doesn't matter which format you choose; any format and library can have its own downsides and vulnerabilities. The bigger questions you need to ask yourself are what risk factor and scenario you are facing, and what you are going to do about it. First ask yourself: will there be a scenario where a user or an external entity of some kind (for example, an external system) sends you a format string? If the answer is no, there is no risk. If the answer is yes, you need to see whether this is really needed. If not, remove it to eliminate the risk. If you do need it, you can perform whitelist-based input validation and exclude all format-specific special characters from the list of permitted characters, in order to eliminate the risk. For example, no format string can pass the ^[a-zA-Z0-9\s]*$ generic regular expression.
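
A minimal sketch of that whitelist check (the function name is illustrative):

    import re

    # Mirrors the ^[a-zA-Z0-9\s]*$ pattern from the answer: only letters, digits and
    # whitespace are permitted, so format-specific characters such as {, }, % or $
    # can never reach a formatting call.
    _ALLOWED = re.compile(r"[a-zA-Z0-9\s]*")

    def is_safe_fragment(value: str) -> bool:
        # Pattern.fullmatch anchors the pattern to the whole string.
        return _ALLOWED.fullmatch(value) is not None

    print(is_safe_fragment("hello world 42"))     # True
    print(is_safe_fragment("{0.__class__}"))      # False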

            So the bottom line is: it doesn't matter which format string type you use, what's really important is what do you do with it and how can you reduce and eliminate the risk of it being tampered.

            Source https://stackoverflow.com/questions/70751249

            QUESTION

            Was slf4j affected with vulnerability issue in log4j
            Asked 2022-Jan-13 at 05:18

I have implemented a POC and used slf4j for logging. Did the zero-day vulnerability in Log4j also impact slf4j logs?

            ...

            ANSWER

            Answered 2022-Jan-03 at 22:16

It depends. SLF4J is just an API that can be used in front of any of its implementations, Log4j being just one of them. Check which implementation is used behind it, and if it is Log4j with a version between 2.0.0 and 2.15.0 (2.15.0 is the one with the fix; versions 1.x are not affected), you should update it (if it is exposed to users directly or indirectly).

            Source https://stackoverflow.com/questions/70353248

            QUESTION

            Which version of Django REST Framework is affected by IP Spoofing?
            Asked 2022-Jan-12 at 22:14

            REF: https://portswigger.net/daily-swig/ip-spoofing-bug-leaves-django-rest-applications-open-to-ddos-password-cracking-attacks Reported Date: Jan 11 2022

            • Other than providing captcha, what security measure should be taken?
            • Which version of Django and/or Python is affected by IP Spoofing?
            ...

            ANSWER

            Answered 2022-Jan-12 at 22:10

            I did some research into the link you shared, Django's source and Django REST Framework's source.

Bare-bones Django is not vulnerable to this, since it doesn't use X-Forwarded-For, and neither is Python.

            Virtually all versions of Django REST Framework are vulnerable, since this commit 9 years ago added the HTTP_X_FORWARDED_FOR check: https://github.com/encode/django-rest-framework/blob/d18d32669ac47178f26409f149160dc2c0c5359c/rest_framework/throttling.py#L155

As for measures you can take to avoid this, since a patch is not yet available, you could implement your own rate limiter and replace get_ident to use only REMOTE_ADDR.
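
A rough sketch of that idea (assuming Django REST Framework's SimpleRateThrottle-based classes, whose default get_ident() prefers HTTP_X_FORWARDED_FOR; the subclass name is a placeholder):

    from rest_framework.throttling import AnonRateThrottle

    class RemoteAddrAnonThrottle(AnonRateThrottle):
        """Rate-limit on the socket-level address and ignore client-supplied headers."""

        def get_ident(self, request):
            # REMOTE_ADDR is set by the WSGI server, not by the client,
            # so it cannot be spoofed with a forged X-Forwarded-For header.
            return request.META.get("REMOTE_ADDR")

You would then reference this class from DEFAULT_THROTTLE_CLASSES in the REST_FRAMEWORK settings. Note the trade-off the answer's closing remark points at: behind a reverse proxy, REMOTE_ADDR is the proxy's address, so all clients would share one throttle bucket.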

If your Django REST Framework application is behind a proxy, you might not be vulnerable to this.

            Source https://stackoverflow.com/questions/70688368

            QUESTION

            Can NPM show me the age of packages before installing them?
            Asked 2022-Jan-05 at 10:35

In light of recent malware in existing npm packages, I would like to have a mechanism that lets me do some basic checks before installing new packages or updating existing ones. My main concern is both the packages I install directly and the ones I install indirectly.

In general, I want to get a list of the package versions that npm would install, before installing them. More specifically, I want the age of the packages that would be installed, so I can generate a warning if any of them is less than a day old.

            If I could do that directly with npm, that would be neat, but I'm afraid I need to do some scripting around it.

            specific use case:

If I had executed npm install react-native-gesture-handler on 2021-10-22, it would have executed the post-install hook of a malicious version of ua-parser and my computer would have been compromised, which is something I would like to avoid.

            When I enter npm install react-native-gesture-handler --dry-run, it only tells me which version of react-native-gesture-handler it would have installed, but it would not tell me that it would install a version of ua-parser that was released on that day.

            additional notes:

• I know that npm i --dry-run exists, but it shows only the direct packages.
• I know that npm list exists, but it only shows packages after installing (and thus after install hooks have already done their harm).
• Both only show package versions and not their age.
• I do not know how I would get a list of packages that would come with an install hook before installing them.
• Pointers to alternative ways to deal with malicious npm packages are welcome.
• So far my best solution would be to use --ignore-scripts, but that would come with its own set of problems.
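
Purely as an illustration of the age-checking idea (my own sketch, not taken from the question or the answer; it assumes npm install --package-lock-only --ignore-scripts resolves the full tree into package-lock.json without running any hooks, and that npm view <name> time --json prints publish timestamps):

    # Resolve the full dependency tree without running install scripts, then warn
    # about every resolved version that was published less than a day ago.
    import json
    import subprocess
    from datetime import datetime, timedelta, timezone

    subprocess.run(["npm", "install", "--package-lock-only", "--ignore-scripts"], check=True)

    with open("package-lock.json") as fh:
        lock = json.load(fh)

    now = datetime.now(timezone.utc)
    for path, info in lock.get("packages", {}).items():
        if not path or "version" not in info:
            continue                                    # skip the root project entry
        name = path.split("node_modules/")[-1]
        out = subprocess.run(["npm", "view", name, "time", "--json"],
                             capture_output=True, text=True, check=True).stdout
        published_raw = json.loads(out).get(info["version"])
        if not published_raw:
            continue
        published = datetime.fromisoformat(published_raw.replace("Z", "+00:00"))
        if now - published < timedelta(days=1):
            print(f"WARNING: {name}@{info['version']} was published less than a day ago")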
            ...

            ANSWER

            Answered 2021-Dec-07 at 07:26

To find the malicious packages, you will need a script that checks your packages for vulnerabilities against the National Vulnerability Database.

The National Vulnerability Database includes databases of security checklist references, security-related software flaws, misconfigurations, product names, and impact metrics.

Most software companies use application security tools like Veracode, Snyk or Checkmarx that do this, usually in a stage before deployment in the CI/CD pipeline.

            If you're looking to achieve this locally, you can try

            Source https://stackoverflow.com/questions/69975960

            QUESTION

            Does the Log4j security violation vulnerability affect log4net?
            Asked 2022-Jan-04 at 23:11

            I have recently read about the zero-day issue in Log4J. I work with a few applications, written with .NET, that use the log4net logging library, which is based on Log4j.

            Does log4net have any similar security vulnerabilities as the CVE-2021-44228 vulnerability to Log4j?

            ...

            ANSWER

            Answered 2022-Jan-04 at 23:00

            Does log4net have any similar security vulnerabilities as the CVE-2021-44228 vulnerability to Log4j?

            I don't believe so. If they did, it would be a coincidence. I don't think they share code.

            Source https://stackoverflow.com/questions/70337145

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install isowall

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/robertdavidgraham/isowall.git

          • CLI

            gh repo clone robertdavidgraham/isowall

• SSH

            git@github.com:robertdavidgraham/isowall.git



            Try Top Libraries by robertdavidgraham

masscan by robertdavidgraham (C)

rdpscan by robertdavidgraham (C)

heartleech by robertdavidgraham (C)

robdns by robertdavidgraham (C)

pemcrack by robertdavidgraham (C)