faas | Failure as a Service | Testing library

 by   futuresimple Python Version: Current License: Apache-2.0

kandi X-RAY | faas Summary

faas is a Python library typically used in Testing applications. faas has no bugs, it has no vulnerabilities, it has build file available, it has a Permissive License and it has high support. You can download it from GitHub.

Provides many misbehavior cases as a Service.

            kandi-support Support

              faas has a highly active ecosystem.
              It has 15 star(s) with 0 fork(s). There are 5 watchers for this library.
              It had no major release in the last 6 months.
              faas has no issues reported. There is 1 open pull request and 0 closed requests.
              It has a positive sentiment in the developer community.
              The latest version of faas is current.

            kandi-Quality Quality

              faas has no bugs reported.

            kandi-Security Security

              faas has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              faas is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              faas releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.

            Top functions reviewed by kandi - BETA

            kandi has reviewed faas and identified the below as its top functions. This is intended to give you an instant insight into the functionality faas implements, and to help you decide if it suits your requirements.
            • Main entry point .
            • Generate FASTA logs .
            • Generate OOM output .
            • Read application token from config file .
            • Decorator that checks the application token .
            • Parse global arguments .
            • Get information about the running process .
            • Stop the listener .
            • Restart the application .
            • Check if a directory exists .

            faas Key Features

            No Key Features are available at this moment for faas.

            faas Examples and Code Snippets

            No Code Snippets are available at this moment for faas.

            Community Discussions

            QUESTION

            Nomad and consul setup
            Asked 2021-Mar-09 at 20:12

            Should I run consul slaves alongside nomad slaves or inside them? The latter might not make sense at all, but I'm asking just in case.

            I brought up my own nomad cluster with consul slaves running alongside nomad slaves (inside worker nodes); my deployable artifacts are docker containers (java spring applications). The issue with my current setup is that my applications can't reach the consul slaves to read configuration (none of 0.0.0.0, localhost, or the worker node IP worked).

            Let's say my service exposes 8080. I configured the docker part (in the hcl file) to use bridge as the network mode. Nomad maps 8080 to 43210. Everything is fine until my service tries to reach the consul slave to read configuration. Ideally, giving the nomad worker node IP as the consul host to Spring should suffice, but for some reason it doesn't.

            I'm using the latest version of nomad.

            I configured my nomad slaves like https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl

            And the link below shows how I configured/ran my consul slave: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml

            Note: if I use static port mapping and host as the network mode for docker (in nomad) I'll be fine, but then I can't deploy more than one instance of each application per worker node (due to port conflicts).

            ...

            ANSWER

            Answered 2021-Jan-05 at 18:28

            Nomad jobs listen on a specific host/port pair.

            You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.

            Source https://stackoverflow.com/questions/65578035

            QUESTION

            SAS Viya 3.4 Manipulation of a table in memory
            Asked 2021-Feb-27 at 01:19

            I will illustrate what I want to implement with an example below, followed by my two approaches. I am working on the SAS Viya 3.4 platform.

            EXAMPLE: I have a table (MYTABLE) and this table is promoted on a global caslib (CKCAS). This table contains 10 rows and 5 columns.

            MYTABLE

            column1  column2  column3  column4  date
            aaa      4567     gtt      44       20210201
            aa       5535     faas     44       20210202
            fd       23       axv      44       20210203
            sd       736      azxq     44       20210204
            ghy      9008     feet     44       20210205
            lk       3339     wqopp    44       20210206
            yj       112      poo      44       20210207
            trr      3634     piuy     44       20210208
            hrfthr   689      iuyt     44       20210209
            rt       2345     uio      44       20210210

            The client asked me to delete a few rows from the table. His goal is to retain the latest 5 days (by the 'date' column). Below is the desired table:

            column1  column2  column3  column4  date
            lk       339      wqopp    44       20210206
            yj       112      poo      44       20210207
            try      3634     piuy     44       20210208
            hrfthr   689      iuyt     44       20210209
            rt       2345     uio      44       20210210

            IMPORTANT! The table needs to be promoted and accessible from all sessions! Right now, there is a daily job that collects data for the client and appends it to MYTABLE. This implementation will not change!

            APPROACH 1

            ...

            ANSWER

            Answered 2021-Feb-27 at 01:19

            It's best to use CAS actions for this; however, the table.deleteRows action was not added until Viya 3.5. Promoted tables were originally meant to be essentially immutable: once a table is up and promoted in CAS for everyone, it should generally only be appended to with good data. Of course, bad data sometimes gets into production systems and needs to be modified.

            Since you need to delete rows, the safest way would be to create a copy of it in CASUSER, drop the old table, then promote the updated one. It's likely that their CAS cluster has more than enough memory to do this.

            Double-check if it's partitioned or ordered a specific way before doing this. You can add the partition and order statements to your dataset options. If you need to save the table to persistent storage, use the save statement in proc casutil as well.

            With this method, all changes are done only in CAS.
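            The row-retention logic itself (keep only the latest 5 days) can be sketched in plain Python. This is only an illustration of the filter, not of the CAS copy/drop/promote mechanics; the rows below are a stand-in for MYTABLE from the question.

```python
# Hedged sketch: the "retain the latest 5 days" filter from the question,
# shown in plain Python. In SAS Viya this would run as a data step or a
# CAS action; only the filtering idea carries over.

def keep_latest_days(rows, n_days=5):
    """Return only the rows whose 'date' is among the n_days latest dates."""
    latest = set(sorted({row["date"] for row in rows}, reverse=True)[:n_days])
    return [row for row in rows if row["date"] in latest]

# Stand-in for MYTABLE (only column1 and date shown for brevity):
mytable = [
    {"column1": "aaa",    "date": "20210201"},
    {"column1": "aa",     "date": "20210202"},
    {"column1": "fd",     "date": "20210203"},
    {"column1": "sd",     "date": "20210204"},
    {"column1": "ghy",    "date": "20210205"},
    {"column1": "lk",     "date": "20210206"},
    {"column1": "yj",     "date": "20210207"},
    {"column1": "trr",    "date": "20210208"},
    {"column1": "hrfthr", "date": "20210209"},
    {"column1": "rt",     "date": "20210210"},
]

kept = keep_latest_days(mytable)  # the last 5 dates survive
```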

            Source https://stackoverflow.com/questions/66393254

            QUESTION

            What is the difference between WCF and Azure Function?
            Asked 2021-Feb-18 at 01:51


            I cannot understand the difference between WCF (service oriented) and Azure Functions or AWS Lambda (FaaS). It seems to me both invoke remote functions, while WCF has a host. But what is the technical difference between them?

            ...

            ANSWER

            Answered 2021-Feb-18 at 01:51

            WCF, or the Windows Communication Foundation, is another framework, this time for writing and consuming services. These are either web services or other services, e.g. TCP-based or even MSMQ-based. This is, in my opinion, what you should be looking at for exposing your back-end. WCF lets you easily specify a contract and implementation, while leaving the hosting of the service and the instantiation to IIS (IIS being Microsoft's web server, which also runs under the covers on Azure).

            Azure, towards you, is a hosting provider. It helps you scale your application servers based on demand (e.g. number of mobile clients downloading & installing your application).

            A little marketing speak: Azure lowers your cost of ownership for your own solutions because it takes away the initial investment in first figuring out (guessing) the amount of hardware you need and then building/renting a data center and/or hardware. It also provides some middleware for your applications, like AppFabric, so that they can communicate in the "cloud" a bit better. You also get load balancing on Azure, distributed hosting (e.g. Europe data centers, USA data centers...), fail-safe mechanisms already in place (automatic instance instantiation if one were to fail) and, obviously, pay-as-you-go and pay-for-what-you-use benefits.

            Here is the reference: Introduction to Azure Functions, Azure and WCF

            Source https://stackoverflow.com/questions/66198798

            QUESTION

            openfaas deployment.kubernetes.io/max-replicas vs com.openfaas.scale.max
            Asked 2021-Feb-08 at 09:27

            I have a k8s cluster on which I have installed openfaas in the following way:

            ...

            ANSWER

            Answered 2021-Feb-08 at 09:27

            OpenFaaS comes equipped with its own autoscaler and its own alert manager:

            OpenFaaS ships with a single auto-scaling rule defined in the mounted configuration file for AlertManager. AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway.

            After some reading I found out that the OpenFaaS autoscaler/AlertManager is more focused on API hit rates, whereas the Kubernetes HPA is more focused on CPU and memory usage, so it all depends on what exactly you need.

            So you have two different annotations for two different scaling tools. The deployment.kubernetes.io/max-replicas=2 annotation is for the Kubernetes HPA, and com.openfaas.scale.max: 1 is for the OpenFaaS autoscaler.

            OpenFaaS has a great example of how to use the HPA instead of the built-in scaler. You can also use custom Prometheus metrics with the HPA, as described here.

            Source https://stackoverflow.com/questions/66079490

            QUESTION

            How to reuse AWS Lambda functions?
            Asked 2020-Dec-22 at 22:02

            I have an Angular ERP application that is designed to run in different local environments for different clients, each hosted using AWS S3. Each of these applications will have its own dedicated API Gateway, using Lambda functions pointing to its respective PostgreSQL database in Amazon Aurora RDS. The Lambda functions that I currently use point to a single db; an example is as follows:

            ...

            ANSWER

            Answered 2020-Dec-22 at 22:02

            Since you have different API Gateways for your different clients, you can use Stage Variables.

            For example, let's imagine you have two clients called Awesome Ales and Great Gins. For both you have two stages each, called Staging and Production. You want to use a different RDS database for each client and each stage.

            Client        Staging                 Production
            Awesome Ales  awesome_ales_staging    awesome_ales_production
            Great Gins    great_gins_staging      great_gins_production

            Now you need to set those database names as stage variables for their respective clients and stages, and then, in your Lambda, read the stage variable from the Lambda's event.

            If you use proxy integration for the Lambda and your stage variable is called dbname, it will be accessible as event.stageVariables.dbname:
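            The same lookup in a Python Lambda handler is a minimal sketch like the one below; with proxy integration, API Gateway delivers stage variables under event["stageVariables"]. The dbname variable and the database names are taken from the example above; the default fallback is an illustrative assumption.

```python
# Minimal sketch of a Python Lambda handler that picks its database from
# an API Gateway stage variable. With proxy integration, stage variables
# arrive in the event under the "stageVariables" key.

def handler(event, context):
    stage_vars = event.get("stageVariables") or {}
    dbname = stage_vars.get("dbname", "default_db")  # fallback is illustrative
    # ...connect to `dbname` here instead of a hard-coded database...
    return {"statusCode": 200, "body": f"using database {dbname}"}

# Example event as API Gateway would deliver it for the Awesome Ales
# production stage:
event = {"stageVariables": {"dbname": "awesome_ales_production"}}
response = handler(event, None)
```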

            Source https://stackoverflow.com/questions/65398890

            QUESTION

            Start container instance on web request to FQDN
            Asked 2020-Dec-17 at 23:09

            Let's say we have a (containerized) backend which is only sparely used. Maybe once every couple of days or so, a (static) web front-end calls an API endpoint of that backend.

            The backend conveniently happens to be stateless. No data store or anything.

            We want to minimize the hosting cost for it, and ideally would like per-second billing. It's only gonna be running for a few minutes every month, and we only want to be charged for that usage. Basically, we want Function as a Service (FaaS), but for a whole backend and not just a single function.

            Azure Container Instances appears to be a great fit for this scenario. It can spin up the backend in a container when needed, and the backend can then shut itself down again after a certain period of non-usage.

            So, let's create a container instance...

            ...

            ANSWER

            Answered 2020-Dec-17 at 20:36

            Azure Container Instances don't have a webhook or HTTP trigger that will start them. However, you could use an Azure Function or Logic App that effectively runs az container start for you, and then call THAT over HTTP. With either approach, you'd have to set up IAM permissions giving the Function or Logic App access to the ACI resource to start it.

            One approach would be to:

            1. Create an Azure Function with an HTTP trigger and a managed identity
            2. Give the Managed identity contributor access to ACI container group
            3. Run az container start or the equivalent REST call inside the function to start the ACI container
            4. Call the Azure function (using the function token) to start the container.
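            Step 3 can be sketched in Python by calling the ACI management REST endpoint directly. The subscription, resource group, and container group names below are placeholders, and the api-version is an assumption that may need adjusting to your environment; in a real Azure Function the bearer token would come from the managed identity.

```python
# Hedged sketch of step 3: issuing the ACI "start" REST call from inside
# an Azure Function. All names are placeholders.

def aci_start_url(subscription_id, resource_group, container_group,
                  api_version="2021-09-01"):
    """Build the management-plane URL for starting a container group."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.ContainerInstance"
        f"/containerGroups/{container_group}/start"
        f"?api-version={api_version}"
    )

def start_container_group(session, token, *, subscription_id,
                          resource_group, container_group):
    """POST the start request; `session` is e.g. a requests.Session()."""
    url = aci_start_url(subscription_id, resource_group, container_group)
    return session.post(url, headers={"Authorization": f"Bearer {token}"})
```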

            Source https://stackoverflow.com/questions/65342897

            QUESTION

            Serializing object with list of object arrays throws NullReferenceException using Json.NET
            Asked 2020-Dec-02 at 14:04

            So I have a lot of data that just needs to be stored and not configured, so I've decided to store it as JSON in my database.

            ...

            ANSWER

            Answered 2020-Dec-02 at 14:04

            There is no problem with a data structure like this (an array of arrays of objects) when working with JSON.

            Try this in your code:

            Source https://stackoverflow.com/questions/65108657

            QUESTION

            Is a WSGI container relevant on AWS Lambda?
            Asked 2020-Dec-01 at 19:53

            I've got a Flask based web application that deploys to AWS Lambda via Zappa. All is well and good.

            The Flask documentation says:

            While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well. Some of the options available for properly running Flask in production are documented here.

            On a stand-alone server, Python is single threaded (Global Interpreter Lock (GIL) etc) and therefore doesn't handle multiple requests well without due care and attention.

            On AWS Lambda (and presumably other FaaS infrastructure) each HTTP request gets a separate Python instance, so the GIL is not an issue, and Lambda takes care of scaling by using multiple function invocations.

            Therefore, is using a WSGI container (Gunicorn, uWSGI, etc.) so strongly recommended when running on AWS Lambda? Why or why not?

            Some factors I can guess might be relevant include:

            • cost
            • resources (e.g. database connections)
            • bugs
            • start up performance
            • per request overhead
            ...

            ANSWER

            Answered 2020-Dec-01 at 19:53

            When the documentation talks about "Flask's built-in server", it's talking about the server that you get when you run the command flask run (or in older applications running a command like python my_application.py with a line in the main function like app.run()).

            When you run flask on Lambda (using Zappa or another solution like aws-wsgi or serverless-wsgi), you're not using Flask's built-in server or any server at all; the wrapper code (in Zappa or anything else) is translating the lambda event to a call to your WSGI application.

            Since there is no actual WSGI server, it's not possible to use Gunicorn, uWSGI, etc. (well, it may be possible, but it would be very convoluted).
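            That "translating the lambda event to a call to your WSGI application" step can be sketched directly. The code below is a simplified stand-in for what Zappa or aws-wsgi do, not their actual implementation, and it omits headers, query strings, and binary bodies.

```python
# Simplified sketch of what wrappers like Zappa or aws-wsgi do: build a
# WSGI environ from an API Gateway event, call the app, and collect the
# response. Real wrappers handle far more of the WSGI spec than this.
import io

def lambda_event_to_wsgi(app, event):
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "QUERY_STRING": "",
        "SERVER_NAME": "lambda",
        "SERVER_PORT": "80",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO((event.get("body") or "").encode()),
        "wsgi.errors": io.StringIO(),
        "wsgi.multithread": False,
        "wsgi.multiprocess": False,
        "wsgi.run_once": True,
    }
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    body = b"".join(app(environ, start_response))
    return {"statusCode": int(captured["status"].split()[0]),
            "body": body.decode()}

# A tiny WSGI app standing in for the Flask application:
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from " + environ["PATH_INFO"].encode()]

response = lambda_event_to_wsgi(app, {"httpMethod": "GET", "path": "/ping"})
```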

            Source https://stackoverflow.com/questions/64759311

            QUESTION

            Creating a Node.js REST API using Firebase Cloud Functions, without Express?
            Asked 2020-Nov-04 at 15:47

            I am working to create a serverless REST API via Firebase Cloud Functions, which seems to work well but the examples and documentation all seem to use a monolithic solution, since they use the Express framework and essentially map the root http request to the Express app, then let it handle the routing. I understand that this is because the Firebase Hosting platform does not have the ability to handle http verbs.

            My expectation was that a serverless / FaaS approach would have a function for each endpoint, making for easy updates in the future since there's no need to update the whole app, just that single service, i.e. a more functional approach.

            What am I missing here? Why is the approach to use a single function to contain an express app? Doesn't this defeat the purpose of a serverless / Cloud Functions approach? And is there any other way of doing this?

            ...

            ANSWER

            Answered 2020-Nov-04 at 15:47

            The documentation shows how to create an endpoint without the help of an Express app, router, or middleware:

            Source https://stackoverflow.com/questions/64682930

            QUESTION

            Serverless Computing versus Function As A Service (FaaS)
            Asked 2020-Jun-10 at 02:11

            From Azure Docs:

            Serverless computing is a cloud-hosted execution environment that runs your code but completely abstracts the underlying hosting environment. You create an instance of the service, and you add your code; no infrastructure configuration or maintenance is required, or even allowed.

            They seem to give serverless computing its own category, different from PaaS, CaaS, or FaaS.

            My issue is that I don't quite understand the difference between it and FaaS.

            Where does serverless computing stand among IaaS, PaaS, CaaS, SaaS, and FaaS?

            ...

            ANSWER

            Answered 2020-Jun-10 at 02:11

            You're right, it may be a bit confusing if you're getting started with it. Initially, "serverless" was used to describe:

            • Backend as a Service
            • Functions as a Service

            More info: https://www.martinfowler.com/articles/serverless.html

            Now, many things have evolved toward a serverless approach. Take the recent announcements of SQL Database serverless, Cosmos DB serverless, etc. In summary, serverless means something triggered by events, billed according to the compute resources used, and where you don't handle/manage the underlying infrastructure.

            • IaaS is not serverless
            • PaaS is not serverless
            • SaaS is not serverless (but can be implemented using serverless)
            • CaaS can be serverless
            • FaaS is serverless

            Source https://stackoverflow.com/questions/62294619

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install faas

            You can download it from GitHub.
            You can use faas like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/futuresimple/faas.git

          • CLI

            gh repo clone futuresimple/faas

          • sshUrl

            git@github.com:futuresimple/faas.git
