faas | OpenFaaS - Serverless Functions | Function As A Service library

 by   openfaas Go Version: 0.26.4 License: MIT

kandi X-RAY | faas Summary


faas is a Go library typically used in Serverless, Function-as-a-Service, Docker, and Prometheus applications. faas has no known bugs or reported vulnerabilities, has a permissive license, and has medium support. You can download it from GitHub.

Conceptual architecture and stack, more detail available in the docs.

            kandi-support Support

              faas has a medium active ecosystem.
              It has 23,138 stars, 1,846 forks, and 496 watchers.
              It had no major release in the last 12 months.
              There are 27 open issues and 794 closed issues; on average, issues are closed in 49 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of faas is 0.26.4

            kandi-Quality Quality

              faas has 0 bugs and 0 code smells.

            kandi-Security Security

              faas has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              faas code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              faas is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              faas releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.


            faas Key Features

            No Key Features are available at this moment for faas.

            faas Examples and Code Snippets

            No Code Snippets are available at this moment for faas.

            Community Discussions


            Enable use of images from the local library on Kubernetes
            Asked 2022-Mar-20 at 13:23

            I'm following a tutorial: https://docs.openfaas.com/tutorials/first-python-function/

            Currently, I have the right image.



            Answered 2022-Mar-16 at 08:10

            If your image has a latest tag, the Pod's ImagePullPolicy will be automatically set to Always. Each time the pod is created, Kubernetes tries to pull the newest image.

            Try not tagging the image as latest, or manually set the Pod's ImagePullPolicy to Never. If you're using a static manifest to create the Pod, the setting will look like the following:
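            For example, a static Pod manifest along those lines might look like this (the names and tag are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-openfaas
spec:
  containers:
    - name: hello-openfaas
      image: hello-openfaas:local   # avoid the implicit :latest tag
      imagePullPolicy: Never        # only use an image already on the node
```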

            Source https://stackoverflow.com/questions/71493306


            Running eleventy build (via npm run) as AWS Lambda function
            Asked 2022-Jan-02 at 18:40

            I have an Eleventy Node project, which renders HTML from a JSON file.

            Currently, I run this locally using npm run (which runs the Eleventy CLI).

            Here's the workflow I have in my head:

            • put the JSON file in a S3 bucket
            • on each file change, run the HTML build
            • push the output to a different S3 bucket, which serves the web page

            Conceptually, I feel like this would be a standard FaaS use case.

            Practically, I stumble over the fact that the Node.js Lambda runtime always expects an explicit function handler to be invoked. Eleventy does not seem to provide a standard way to be invoked from code (or I have not discovered it yet).

            I found that I could build my package into a Docker container and run npm run as the entrypoint. That would surely work, but it seems unnecessary, since the Lambda-provided Node.js runtimes should be capable of running my npm build command if I include my packages in the deployment artifact.

            Do I have a knot in my brain? Anything I'm overlooking? Would be happy about any input.



            Answered 2022-Jan-02 at 18:40

            I'm not sure this is supported, as I don't see it documented, but I looked at the unit tests for Eleventy and saw many examples of this (https://github.com/11ty/eleventy/tree/master/test). I tried the following and it worked. Note that init and write are both async and I do NOT properly await them; I was just trying to get a simple example:

            Source https://stackoverflow.com/questions/70536908


            Nomad and consul setup
            Asked 2021-Mar-09 at 20:12

            Should I run Consul slaves alongside Nomad slaves or inside them? The latter might not make sense at all, but I'm asking just in case.

            I brought my own Nomad cluster up with Consul slaves running alongside Nomad slaves (inside the worker nodes); my deployable artifacts are Docker containers (Java Spring applications). The issue with my current setup is that my applications can't reach the Consul slaves to read configuration (none of localhost or the worker node IP worked).

            Let's say my service exposes 8080. I configured the Docker part (in the HCL file) to use bridge as the network mode; Nomad maps 8080 to 43210. Everything is fine until my service tries to reach the Consul slave to read configuration. Ideally, giving the Nomad worker node IP as the Consul host to Spring should suffice, but for some reason it doesn't.

            I'm using the latest version of Nomad.

            I configured my Nomad slaves as in https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl

            and the link below shows how I configured/ran my Consul slave: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml

            Note: if I use static port mapping and host as the network mode for Docker (in Nomad), I'll be fine, but then I can't deploy more than one instance of each application per worker node (due to port conflicts).



            Answered 2021-Jan-05 at 18:28

            Nomad jobs listen on a specific host/port pair.

            You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.
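            For instance (a generic invocation, not specific to this setup):

```shell
# On the Nomad client node: show each container's name and host port mappings
docker ps --format 'table {{.Names}}\t{{.Ports}}'
```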

            Source https://stackoverflow.com/questions/65578035


            SAS Viya 3.4 Manipulation of a table in memory
            Asked 2021-Feb-27 at 01:19

            I will explain what I want to implement with an example below, and then give my two approaches. I am working on a SAS Viya 3.4 platform.

            EXAMPLE: I have a table (MYTABLE), and this table is promoted in a global caslib (CKCAS). The table contains 10 rows and 5 columns.


            column1  column2  column3  column4  date
            aaa      4567     gtt      44       20210201
            aa       5535     faas     44       20210202
            fd       23       axv      44       20210203
            sd       736      azxq     44       20210204
            ghy      9008     feet     44       20210205
            lk       3339     wqopp    44       20210206
            yj       112      poo      44       20210207
            trr      3634     piuy     44       20210208
            hrfthr   689      iuyt     44       20210209
            rt       2345     uio      44       20210210

            The client asked me to delete a few rows from the table; the goal is to retain only the latest 5 days (by the 'date' column). Below is the desired table:

            column1  column2  column3  column4  date
            lk       339      wqopp    44       20210206
            yj       112      poo      44       20210207
            try      3634     piuy     44       20210208
            hrfthr   689      iuyt     44       20210209
            rt       2345     uio      44       20210210

            IMPORTANT! The table needs to stay promoted and accessible from all sessions! Right now, a job runs every day that collects data for the client and appends it to MYTABLE; this implementation will not change!

            APPROACH 1



            Answered 2021-Feb-27 at 01:19

            It's best to use CAS actions for this; however, the table.deleteRows action was not added until Viya 3.5. Promoted tables were originally meant to be essentially immutable: when a table is up and promoted in CAS for everyone, it should generally only be appended to with good data. Bad data, of course, gets into production systems sometimes, and then the table needs to be modified.

            Since you need to delete rows, the safest way is to create a copy of the table in CASUSER, drop the old table, then promote the updated one. It's likely that their CAS cluster has more than enough memory to do this.

            Double-check whether the table is partitioned or ordered a specific way before doing this; you can add the partition and order statements to your dataset options. If you need to save the table to persistent storage, use the save statement in proc casutil as well.

            With this method, all changes are done only in CAS.

            Source https://stackoverflow.com/questions/66393254


            What is the difference between WCF and Azure Function?
            Asked 2021-Feb-18 at 01:51

            I cannot understand the difference between WCF (service-oriented) and Azure Functions or AWS Lambda (FaaS). It seems to me both invoke remote functions, while WCF has a host; but what is the technical difference between them?



            Answered 2021-Feb-18 at 01:51

            WCF, or Windows Communication Foundation, is a framework for writing and consuming services. These are either web services or other services, e.g. TCP-based or even MSMQ-based. This is, in my opinion, what you should be looking at for exposing your back end. WCF lets you easily specify a contract and an implementation while leaving the hosting of the service and its instantiation to IIS (IIS being Microsoft's web server, which also runs under the covers on Azure).

            Azure, towards you, is a hosting provider. It helps you scale your application servers based on demand (e.g. number of mobile clients downloading & installing your application).

            A little marketing speak: Azure lowers your cost of ownership for your own solutions because it takes away the initial investment in firstly figuring out (guessing) the amount of hardware you need and then building/renting a data center and/or hardware. It also provides some middleware for your applications, like AppFabric, so that they can communicate in the "cloud" a bit better. You also get load balancing on Azure, distributed hosting (e.g. Europe data centers, USA data centers, ...), failsafe mechanisms already in place (automatic instance re-creation if one were to fail), and obviously pay-as-you-go, pay-for-what-you-use benefits.

            Here is the reference: Introduction to Azure Functions, Azure and WCF

            Source https://stackoverflow.com/questions/66198798


            openfaas deployment.kubernetes.io/max-replicas vs com.openfaas.scale.max
            Asked 2021-Feb-08 at 09:27

            I have a k8s cluster on which I have installed openfaas in the following way:



            Answered 2021-Feb-08 at 09:27

            OpenFaaS is equipped with its own autoscaler and its own alert manager:

            OpenFaaS ships with a single auto-scaling rule defined in the mounted configuration file for AlertManager. AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway.

            After some reading, I found that the OpenFaaS autoscaler/alertmanager is more focused on API hit rates, whereas the Kubernetes HPA is more focused on CPU and memory usage, so it all depends on what exactly you need.

            So you have two different annotations for two different scaling tools: deployment.kubernetes.io/max-replicas=2 is for the Kubernetes HPA, and com.openfaas.scale.max: 1 is for the OpenFaaS autoscaler.

            OpenFaaS has a great example of how to use the HPA instead of the built-in scaler. You can also use custom Prometheus metrics with the HPA, as described here.
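            As a sketch, the OpenFaaS scaling bounds are set per function as labels in the stack file (the function name and image here are illustrative):

```yaml
# stack.yml fragment: OpenFaaS autoscaler bounds for one function
functions:
  myfunction:
    image: myfunction:0.1.0
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "5"
```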

            Source https://stackoverflow.com/questions/66079490


            How to reuse AWS Lambda functions?
            Asked 2020-Dec-22 at 22:02

            I have an Angular ERP application that is designed to run in different local environments for different clients, each hosted using AWS S3. Each of these applications will have its own dedicated API Gateway, using Lambda functions pointing to its respective PostgreSQL database in Amazon Aurora RDS. The Lambda functions I currently use point to a single db; an example is as follows:



            Answered 2020-Dec-22 at 22:02

            Since you have different API Gateways for your different clients, you can use Stage Variables.

            For example, let's imagine you have two clients called Awesome Ales and Great Gins. For both you have two stages, called Staging and Production, and you want to use a different RDS database for each client and each stage.

            Client        Staging               Production
            Awesome Ales  awesome_ales_staging  awesome_ales_production
            Great Gins    great_gins_staging    great_gins_production

            Now you set those database names as stage variables for their respective clients and stages, and then in your Lambda you read the stage variable from the Lambda's event.

            If you use proxy integration for the Lambda and your stage variable is called dbname, it will be accessible as event.stageVariables.dbname:

            Source https://stackoverflow.com/questions/65398890


            Start container instance on web request to FQDN
            Asked 2020-Dec-17 at 23:09

            Let's say we have a (containerized) backend which is only sparely used. Maybe once every couple of days or so, a (static) web front-end calls an API endpoint of that backend.

            The backend conveniently happens to be stateless. No data store or anything.

            We want to minimize the hosting cost for it, and ideally would like per-second billing. It's only gonna be running for a few minutes every month, and we only want to be charged for that usage. Basically, we want Function as a Service (FaaS), but for a whole backend and not just a single function.

            Azure Container Instances appears to be a great fit for this scenario: it can spin up the backend in a container when needed, and the backend can then shut itself down again after a period of non-usage.

            So, let's create a container instance...



            Answered 2020-Dec-17 at 20:36

            Azure Container Instances don't have a webhook or HTTP trigger that will start them. However, you could use an Azure Function or Logic App that effectively runs az container start for you, and then call THAT over HTTP. With either approach, you'd have to set up IAM permissions giving the Function or Logic App access to the ACI resource so it can start it.

            One approach would be to:

            1. Create an Azure Function with an HTTP trigger and a managed identity
            2. Give the managed identity Contributor access to the ACI container group
            3. Run az container start (or the equivalent REST call) inside the function to start the ACI container
            4. Call the Azure Function (using the function token) to start the container
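            The call in step 3 corresponds to this CLI command (the resource group and container group names are placeholders):

```shell
# Start an existing, stopped ACI container group
az container start --resource-group my-rg --name my-backend
```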

            Source https://stackoverflow.com/questions/65342897


            Serializing object with list of object arrays throws NullReferenceException using Json.NET
            Asked 2020-Dec-02 at 14:04

            I have a lot of data that just needs to be stored, not configured, so I've decided to store it as JSON in my database.



            Answered 2020-Dec-02 at 14:04

            There is no problem with a data structure like this (an array of arrays of objects) when working with JSON.

            Try this in your code:

            Source https://stackoverflow.com/questions/65108657


            Is a WSGI container relevant on AWS Lambda?
            Asked 2020-Dec-01 at 19:53

            I've got a Flask based web application that deploys to AWS Lambda via Zappa. All is well and good.

            The Flask documentation says:

            While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well. Some of the options available for properly running Flask in production are documented here.

            On a stand-alone server, Python is single-threaded (Global Interpreter Lock (GIL), etc.) and therefore doesn't handle multiple requests well without due care and attention.

            On AWS Lambda (and presumably other FaaS infrastructure), each HTTP request gets a separate Python instance, so the GIL is not an issue, and Lambda takes care of scaling by using multiple function invocations.

            Therefore, is using a WSGI container (Gunicorn, uWSGI, etc.) still strongly recommended when running on AWS Lambda? Why or why not?

            Some factors I can guess might be relevant include:

            • cost
            • resources (e.g. database connections)
            • bugs
            • start up performance
            • per request overhead


            Answered 2020-Dec-01 at 19:53

            When the documentation talks about "Flask's built-in server", it's talking about the server that you get when you run the command flask run (or in older applications running a command like python my_application.py with a line in the main function like app.run()).

            When you run flask on Lambda (using Zappa or another solution like aws-wsgi or serverless-wsgi), you're not using Flask's built-in server or any server at all; the wrapper code (in Zappa or anything else) is translating the lambda event to a call to your WSGI application.

            Since there is no actual WSGI server, it's not possible to use Gunicorn, uWSGI, etc. (well, it may be possible, but it would be very convoluted).
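            To make the "wrapper code" idea concrete, here is a stripped-down sketch of what such an adapter does, using a plain WSGI callable in place of a Flask app. The environ keys shown are the bare minimum for illustration, not what Zappa actually builds:

```python
# Sketch of a Lambda-to-WSGI adapter: build a WSGI environ from the API
# Gateway event and call the application directly -- no server process.
import io
import sys

def wsgi_app(environ, start_response):
    # Stand-in for a Flask app's WSGI callable
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from " + environ["PATH_INFO"].encode()]

def lambda_handler(event, context):
    # Translate the API Gateway proxy event into a minimal WSGI environ
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "SERVER_NAME": "lambda", "SERVER_PORT": "80",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0), "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO(b""), "wsgi.errors": sys.stderr,
        "wsgi.multithread": False, "wsgi.multiprocess": False,
        "wsgi.run_once": True,
    }
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    body = b"".join(wsgi_app(environ, start_response))
    return {"statusCode": int(captured["status"].split()[0]),
            "body": body.decode()}
```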

            Source https://stackoverflow.com/questions/64759311

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.



            Install faas

            Here is a screenshot of the API gateway portal, designed for ease of use, showing the inception function. Deploy OpenFaaS using the Kubernetes, OpenShift, or faasd deployment guides.
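            The deployment guides cover the details; for Kubernetes, the commonly documented routes are arkade or the Helm chart (commands as given in the OpenFaaS docs; verify against the current guides):

```shell
# Option 1: arkade
arkade install openfaas

# Option 2: Helm
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas --create-namespace
```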


            Read the documentation at docs.openfaas.com. Read the latest news and tutorials on the official blog.
