amazon-elasticsearch-lambda-samples | Data ingestion for Amazon Elasticsearch Service | Cloud Functions library
kandi X-RAY | amazon-elasticsearch-lambda-samples Summary
While some detailed instructions are covered later in this file and elsewhere (in the Lambda documentation), this section aims to show the larger picture that the individual steps work to accomplish. We assume that the data source (an S3 bucket or a Kinesis stream, in this case) and an ES domain are already set up.
Community Discussions
Trending Discussions on Cloud Functions
QUESTION
I would like to know if there is a way to periodically delete logs from inside Cloud Logging. I have set up Firebase with Cloud Functions, and Cloud Logging entries are injected automatically for each function call. I don't especially want to stop sending logs to Cloud Logging, but I would like to be able to manage my costs by deleting older logs.
ANSWER
Answered 2022-Mar-23 at 16:41
You can set a retention policy on your Cloud Logging bucket to match your requirements; retention can be configured anywhere from 1 day to 10 years, after which logs are deleted automatically.
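For example, with the gcloud CLI (a sketch; the _Default bucket and the 30-day period are assumptions to adjust to your setup):

gcloud logging buckets update _Default --location=global --retention-days=30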
QUESTION
I'm trying to create a Firebase Function but I'm running into a deploy error, even when deploying the default helloWorld function.
The firebase-debug.log file mentions this:
Could not find image for function projects/picci-e030e/locations/us-central1/functions/helloWorld.
I have been trying to debug and so far have not been able to solve it...
firebase-debug.log
...ANSWER
Answered 2022-Feb-06 at 14:36
Could not find image for function projects/picci-e030e/locations/us-central1/functions/helloWorld.
The Firebase Function deployment failed because it cannot find the image built from your function app. There might be a problem building your app; it could be your dependencies or files.
I replicated your issue, received the same error, and solved it. There's a problem with the package.json and package-lock.json files. If you just add (without installing) your dependency in package.json, you should delete or remove the package-lock.json found in the functions directory before you deploy again using the deployment command:
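The deployment command itself is presumably the standard Firebase CLI deploy; as a sketch:

rm package-lock.json
firebase deploy --only functions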
QUESTION
The Firebase extension for a distributed counter can be installed directly in the cloud and works just fine. To develop new features for an app I need to do this on the emulator so as not to interrupt the running server.
As the Firebase extensions are simply Cloud Functions, I thought about implementing the cloud function in my emulator by getting the source code from the extension itself. This has worked fine for other extensions so far.
Error and dysfunction when implementing: When implementing the JavaScript version, I get the following error:
function ignored because the unknown emulator does not exist or is not running.
This problem can be fixed by rewriting the export line of the index.js functions, but it won't provide the expected functionality of the extension anyhow:
ANSWER
Answered 2022-Jan-24 at 17:55
firebaser here
Firebase Extensions normally declare their triggers in the extension.yaml file, instead of in the code itself. Therefore, in order to emulate an extension in this way, you'd need to move the triggers over to the code.
For your specific example of the 'worker' function, the extension declares what document to listen to here, so we'll copy the document over to the code:
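For illustration, a minimal sketch of moving such a trigger into code (the document path and the workerCore module are assumptions; copy the actual values from the extension's extension.yaml and its source):

const functions = require('firebase-functions');
// hypothetical local copy of the extension's worker logic
const workerCore = require('./distributed-counter-worker');

exports.worker = functions.firestore
  .document('_counter_shards_/{shardId}') // assumed path; copy from extension.yaml
  .onWrite((change, context) => workerCore.handler(change, context));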
QUESTION
Context: I am training a very similar model per BigQuery dataset in Google Vertex AI, but I want to have a custom training image for each existing dataset (in Google BigQuery). In that sense, I need to programmatically build a custom Docker image in the Container Registry on demand. My idea was to have a Google Cloud Function do it, triggered by a Pub/Sub topic with information regarding which dataset I want to build the training container for. So naturally, the function will write the Dockerfile and pertinent scripts to a /tmp folder within Cloud Functions (the only writable place, to my knowledge). However, when I try to actually build the container within this script, apparently it doesn't find the /tmp folder or its contents, even though they are there (checked with logging operations).
The troubling code so far:
...ANSWER
Answered 2021-Dec-21 at 11:07
I've locally tested building a container image using the Cloud Build Client Python library. It turns out to produce the same error even when the Dockerfile exists in the current directory:
error:
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
build steps:
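A key detail is that Cloud Build runs remotely in its own /workspace, so it cannot see files written to the function's /tmp; the build context has to be staged somewhere Cloud Build can read, such as a Cloud Storage bucket. A minimal sketch using the Node.js Cloud Build client (the answer above used the Python library; the bucket, object, and image names here are hypothetical):

const {CloudBuildClient} = require('@google-cloud/cloudbuild');
const client = new CloudBuildClient();

async function buildImage() {
  // context.zip contains the Dockerfile and scripts, uploaded from /tmp beforehand
  const [operation] = await client.createBuild({
    projectId: 'my-project',
    build: {
      source: {storageSource: {bucket: 'my-staging-bucket', object: 'context.zip'}},
      steps: [{
        name: 'gcr.io/cloud-builders/docker',
        args: ['build', '-t', 'gcr.io/my-project/trainer', '.'],
      }],
      images: ['gcr.io/my-project/trainer'],
    },
  });
  await operation.promise(); // wait for the remote build to finish
}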
QUESTION
I am trying to set up a Firebase Cloud Functions repo to run mocha tests. However, it throws the following error when I use import * as firebase from "firebase-functions-test"; or const firebase = require("firebase-functions-test")();. You can see in my code that I haven't even called the actual Firebase functions yet, so I think this is a setup issue.
Question: What change do I need to make to get mocha tests running for Firebase Functions testing using import syntax?
Working test code
...ANSWER
Answered 2021-Dec-02 at 09:53
This error should be resolved after specifying the latest version of the
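For reference, a minimal offline-mode initialization of firebase-functions-test under mocha (a sketch, not the answerer's exact code; the build output path is hypothetical):

const firebaseFunctionsTest = require('firebase-functions-test');
const test = firebaseFunctionsTest(); // offline mode: no project config needed

describe('my functions', () => {
  after(() => test.cleanup());
  it('loads without touching deployed resources', () => {
    // require the functions module only after initialization
    const myFunctions = require('../lib/index.js'); // hypothetical path
  });
});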
QUESTION
Right after my TypeScript project initialization in VSCode using firebase tools for composing Firebase Cloud Functions, following the official documentation, the very first line of the index.ts file displays an error:
Parsing error: Cannot read file '\tsconfig.json' eslint [1,1]
and the .eslintrc.js file displays an error:
File is a CommonJS module; it may be converted to an ES6 module.ts(80001)
Since all files are auto-generated these errors are a complete surprise and I want to get rid of them.
Versions: For the record, here are the versions installed:
...ANSWER
Answered 2021-Nov-16 at 16:17
Ok, I have solved the problem with the great help of this GitHub thread: "False positive Error - TS6133 error (declared but its value is never read) report".
I have changed the "noUnusedLocals" setting in the tsconfig.json file from the default true to false, so the file becomes:
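Assuming the default tsconfig.json generated by firebase-tools, the resulting file looks roughly like this (only noUnusedLocals changed):

{
  "compilerOptions": {
    "module": "commonjs",
    "noImplicitReturns": true,
    "noUnusedLocals": false,
    "outDir": "lib",
    "sourceMap": true,
    "strict": true,
    "target": "es2017"
  },
  "compileOnSave": true,
  "include": ["src"]
}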
QUESTION
I am new to Firebase Functions and am trying to use a function with the Realtime Database (Emulator Suite). But when I try to set a value in the database from the function, it gives an error and doesn't set the value.
Error:
...ANSWER
Answered 2021-Nov-05 at 13:59
I'm unsure as to the cause of that log message, but I do see that you are returning a response from your function before it completes all of its work. In a deployed function, as soon as the function returns, all further actions should be treated as if they will never be executed, as documented here. An "inactive" function might be terminated at any time, is severely throttled, and any network calls you make (like setting data in the RTDB) may never be executed.
I know you are new to this, but it's a good habit to get into now: don't assume the person calling your function is you. Check for problems like missing query parameters and dodgy data before you blindly action something. The Admin SDK bypasses your database's security rules, and if you are not careful a malicious user can cause some damage (e.g. a user that updates /users/$theirUid/roles/admin to true).
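For illustration, a minimal sketch of that habit in an HTTP function (the function name, path, and validation rule are hypothetical):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.recordVisit = functions.https.onRequest(async (req, res) => {
  const uid = req.query.uid;
  // reject missing or dodgy input before touching the database with the Admin SDK
  if (typeof uid !== 'string' || !/^[A-Za-z0-9]{1,64}$/.test(uid)) {
    res.status(400).send('invalid uid');
    return;
  }
  await admin.database().ref(`/users/${uid}/lastSeen`).set(Date.now());
  res.status(200).send('ok'); // respond only after the write completes
});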
QUESTION
Firebase announced in September 2021 that it is now possible to configure its Cloud Functions autoscaling so that a certain number of instances will always be running (https://firebase.google.com/docs/functions/manage-functions#min-max-instances).
I have tried to set this up, but I cannot get it to work. At first I set the number of minimum instances in the Google Cloud Console (see Cloud Console screenshot). After doing this I expected that one instance of that cloud function would be running at any time, but the metrics of that function indicate that its instances were still scaled down to 0 (see the Cloud Functions "Active Instances" metric).
So to me it looks a bit as if my setting is ignored here. Am I missing anything? The Google Cloud Console shows me that the number of minimum instances has been set to 1, so it seems to know about the setting but to ignore it. Is this feature only available in certain regions?
I have also tried to set the number of minimum instances using the Firebase SDK for Cloud Functions (https://www.npmjs.com/package/firebase-functions). This gave me the same result; my setting is still ignored.
...ANSWER
Answered 2021-Oct-30 at 06:35
According to the documentation, the Active Instances metric shows the number of instances that are currently handling requests.
As stated in the documentation:
Cloud Functions scales by creating new instances of your function. Each of these instances can handle only one request at a time, so large spikes in request volume often cause longer wait times as new instances are created to handle the demand.
Because functions are stateless, your function sometimes initializes the execution environment from scratch, which is called a cold start. Cold starts can take significant amounts of time to complete, so we recommend setting a minimum number of Cloud Functions instances if your application is latency-sensitive.
You can also refer to the Stack Overflow thread where it has been mentioned that:
Setting up minInstances does not mean that there will always be that many Active Instances. Minimum instances are kept running idle (without CPU allocated), so they are not counted in Active Instances.
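For completeness, declaring a minimum instance count with the Firebase SDK for Cloud Functions looks like this (a sketch; assumes a firebase-functions release recent enough to support minInstances, and a hypothetical function name):

const functions = require('firebase-functions');

exports.api = functions
  .runWith({ minInstances: 1 }) // keeps one instance warm (idle, not "active")
  .https.onRequest((req, res) => {
    res.send('ok');
  });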
QUESTION
I was learning the Go language and tested Google Cloud Functions with Go + Google Firestore as the database. While testing, I got inconsistent responses.
I used the JSON marshaller to convert Firestore data to a JSON object to return from the API; this API is hosted in Google Cloud Functions.
...ANSWER
Answered 2021-Oct-23 at 12:31
The solution I got: after a marshal and unmarshal round trip, it works as expected.
QUESTION
I'm trying to trigger an Airflow DAG inside a Composer environment with Cloud Functions. In order to do that I need to get the client id as described here. I've tried with a curl command but it doesn't return any value. With a Python script I keep getting this error:
...ANSWER
Answered 2021-Sep-28 at 13:00
Posting this Community Wiki for better visibility.
As mentioned in the comment section by @LEC, this configuration is compatible with Cloud Composer V1, which can be found in the GCP documentation: Triggering DAGs with Cloud Functions.
At the moment there are two tabs: Cloud Composer 1 Guides and Cloud Composer 2 Guides.
Under Cloud Composer 1 is the code used by the OP, but if you check Cloud Composer 2 under Manage DAGs > Triggering DAGs with Cloud Functions, you will find that there is no proper documentation yet:
This documentation page for Cloud Composer 2 is not yet available. Please use the page for Cloud Composer 1.
As a solution, please use Cloud Composer V1.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install amazon-elasticsearch-lambda-samples
Deployment Package: The "Deployment Package" is the event handler code files and their dependencies packaged as a zip file. The first step in creating a new Lambda function is to prepare and upload this zip file.
Lambda Configuration:
Handler: The name of the main code file in the deployment package, with the file extension replaced by a .handler suffix (for example, eslambda.handler for a handler exported from eslambda.js).
Memory: The memory limit, based on which the EC2 instance type to use is determined. For now, the default should do.
Timeout: The default timeout value (3 seconds) is quite low for our use case. 10 seconds might work better, but please adjust based on your testing.
Authorization: Since various AWS services need to make calls to each other here, appropriate authorization is required. This takes the form of an IAM role to which authorization policies are attached; the Lambda function assumes this role when running.
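For illustration, a sketch of a policy that could be attached to that role for the S3 case (the bucket name, domain name, account ID, and region are hypothetical; the exact actions depend on your data source):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-source-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["es:ESHttpPost"],
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}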
The AWS Console is simpler to use for configuration than other methods.
Lambda is currently available only in a few regions (us-east-1, us-west-2, eu-west-1, ap-northeast-1).
Once the setup is complete and tested, enable the data source in the Lambda console, so that data may start streaming in.
The code is kept simple for purposes of illustration. It doesn't batch documents when loading the ES domain, or (for S3 updates) handle eventual consistency cases.
On your development machine, download and install Node.js.
Anywhere, create a directory structure similar to the following:
eslambda            (place sample code here)
|
+-- node_modules    (dependencies will go here)
Modify the sample code with the correct ES endpoint, region, index and document type.
Install each dependency imported by the sample code (with a require() call), as follows:
npm install <dependency>
Verify that these are installed within the node_modules subdirectory.
Create a zip file to package the code and the node_modules subdirectory:
zip -r eslambda.zip *
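Put together, the packaging steps look like the following (assuming the handler file is named eslambda.js and, as an example, an aws-sdk dependency; substitute the modules the sample actually require()s):

cd eslambda
npm install aws-sdk
zip -r eslambda.zip *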