serverless | ⚡ Serverless Framework – Build web, mobile and IoT | Serverless library
kandi X-RAY | serverless Summary
- Add a custom resource to the service.
- Exclude node packages from the deployment artifact.
- Download a copy of the given template to a temporary directory.
- Send a request to an AWS service.
- Create a usage plan.
- Send the data to the cache.
- Configure how log messages are updated.
- Organize a resource.
- Download the template file from the repository directory.
- Handle the response.
serverless Key Features
serverless Examples and Code Snippets
SELECT @@version
SELECT
SERVERPROPERTY('EngineEdition') AS EngineEdition -- 5 = SQL Database, 6 = Microsoft Azure Synapse Analytics, 11 = Azure Synapse serverless SQL pool
GO
function Blog({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}
// This function gets called at build time on server-side.
// It may be called again, on a serverless function, if
// revalidation is enabled and a new request comes in
export async function getStaticProps() {
  const res = await fetch('https://.../posts')
  const posts = await res.json()

  return {
    props: {
      posts,
    },
    // Next.js will attempt to re-generate the page:
    // - When a request comes in
    // - At most once every 10 seconds
    revalidate: 10, // In seconds
  }
}

export default Blog
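A hedged variant of the same getStaticProps (not part of the original example): if the placeholder posts endpoint fails, return Next.js's notFound flag instead of caching a page with empty props, and retry on the next revalidation.
export async function getStaticProps() {
  const res = await fetch('https://.../posts')
  if (!res.ok) {
    // serve a 404 and try the fetch again after 10 seconds
    return { notFound: true, revalidate: 10 }
  }
  const posts = await res.json()
  return { props: { posts }, revalidate: 10 }
}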
// + denotes a closed (collapsed) folder
// - denotes an open (expanded) folder
// > denotes a file
main
- backend
  + models
  + routes
  > package.json
  > package-lock.json
  > server.js
- frontend
  + public
  - src
    + pages
    + components
  > package.json
  > package-lock.json
{
  // initially set "homepage" to whatever you expect your site's link to be;
  // after deploying to Vercel you get the exact link,
  // then replace this value with that Vercel link
  "homepage": "https://awesome-app.vercel.app",
  "name": "awesome-app",
  "version": "1.0.0",
  "private": true,
  ...rest of your frontend package.json file
}
// Wrap your app with either BrowserRouter or HashRouter and add the
// basename prop so it refers to your package.json "homepage":
// a route like '/awesome-route' is then served at
// https://awesome-app.vercel.app/awesome-route
<BrowserRouter basename={process.env.PUBLIC_URL}> {/* or HashRouter */}
  ...your routes
</BrowserRouter>
- frontend
  + build // build folder at the root of your frontend
  + public
  - src
    + pages
    + components
  > package.json
  > package-lock.json
// this uses the "dotenv" package;
// in this setup the .env file must be located at "main > backend > .env"
// (see the final file structure at the bottom if this is unclear)
if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config({ path: __dirname + '/.env' });
}

const express = require('express');
const mongoose = require('mongoose');
const path = require('path');

const app = express();
app.use(express.json());

const port = process.env.PORT || 5000;

mongoose.connect(process.env.mongoURI, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useCreateIndex: true,
  useFindAndModify: false,
  // remove poolSize or set it according to your needs;
  // read the docs before setting poolSize (it defaults to 5)
  poolSize: 1
})
.then(() => {
  app.listen(port);
});

// all your routes should go here
app.use('/some-route', require(path.join(__dirname, 'api', 'routes', 'route.js')));

// static files (the build of your frontend)
if (process.env.NODE_ENV === 'production') {
  app.use(express.static(path.join(__dirname, '../frontend', 'build')));
  app.get('/*', (req, res) => {
    res.sendFile(path.join(__dirname, '../frontend', 'build', 'index.html'));
  });
}
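For completeness, a minimal sketch of the route module the server mounts above; the file location and response body are assumptions based on the require() path in the snippet, not code from the original post.
// backend/api/routes/route.js (assumed location): a minimal Express router
const express = require('express');
const router = express.Router();

// answers GET /some-route/
router.get('/', (req, res) => {
  res.json({ ok: true });
});

module.exports = router;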
{
  "version": 2,
  "builds": [
    {
      "src": "./backend/server.js", // path to your server.js file
      "use": "@vercel/node"
    },
    {
      "src": "./frontend/build", // path to your build folder
      "use": "@vercel/static"
    }
  ],
  // rewrites every request to be handled by server.js,
  // so your "app.use('/some-route')" works the same as on localhost;
  // there is no need to rewrite your code the "serverless way".
  // Also remember that "/(.*)" here is not a regular JS regex;
  // it follows path-to-regexp syntax
  // playground link: https://regexr.com
  "rewrites": [
    {
      "source": "/(.*)",
      "destination": "/backend/server.js"
    }
  ]
}
// + denotes a closed (collapsed) folder
// - denotes an open (expanded) folder
// > denotes a file
main
- backend
  + models
  - routes
    > route.js
  > package.json
  > package-lock.json
  > .env
  > server.js
- frontend
  - build
    + static
    > manifest.json
    > index.html
  + public
  + src
  > package.json
  > package-lock.json
> vercel.json // in your main directory's root
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Production-Deployment
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{secrets.JWKS_URI}} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: ap-southeast-1
          role-to-assume: ${{secrets.ROLE_ARN}}
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Serverless Authentication
        run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}
      - name: Deploy to AWS
        run: serverless deploy --stage prod --verbose
        working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{secrets.CODECOV_SECRET_TOKEN}}
import qs from 'qs'

// Transforms the form data from the React Hook Form output to a format Netlify can read
const encode = (data) => {
  return qs.stringify(data)
}
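For example (illustrative values, not from the original form):
encode({ "form-name": "add-registration-form", name: "Ada" })
// => "form-name=add-registration-form&name=Ada"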
// Handles the POST to Netlify so we can access their serverless functions
const handlePost = (formData, event) => {
  event.preventDefault()

  fetch(`/`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: encode({ "form-name": 'add-registration-form', ...formData }),
  })
    .then((response) => {
      reset() // reset() comes from React Hook Form's useForm(), assumed in scope
      if (response.status === 200) {
        alert("SUCCESS!")
      } else {
        alert("ERROR!")
      }
      console.log(response)
    })
    .catch((error) => {
      console.log(error)
    })
}
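A hedged sketch of how handlePost and reset() would be wired up with React Hook Form; the form name comes from the snippet above, while the component and field names are illustrative assumptions.
import { useForm } from 'react-hook-form'

function AddRegistrationForm() {
  // handleSubmit passes (formData, event) into handlePost;
  // reset() is the same function the .then() branch above calls
  const { register, handleSubmit, reset } = useForm()
  return (
    <form onSubmit={handleSubmit(handlePost)} name="add-registration-form">
      <input {...register('name')} />
      <button type="submit">Register</button>
    </form>
  )
}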
const sanityClient = require("@sanity/client")
const qs = require('qs')
const { nanoid } = require('nanoid');

const client = sanityClient({
  projectId: process.env.GATSBY_SANITY_PROJECT_ID,
  dataset: process.env.GATSBY_SANITY_DATASET,
  token: process.env.SANITY_FORM_SUBMIT_TOKEN,
  useCDN: false,
})

exports.handler = async function (event, context, callback) {
  // Pull the payload out of the body
  const { payload } = JSON.parse(event.body)

  // Check which form has been submitted
  const isAddRegistrationForm = payload.data.formId === "add-registration-form"

  // Build the document JSON and submit it to Sanity
  if (isAddRegistrationForm) {
    const parsedData = qs.parse(payload.data)

    const schedule = parsedData.days.map(d => ({
      _key: nanoid(),
      _type: "classDayTime",
      day: d.day,
      time: {
        _type: "timeRange",
        start: d.start,
        end: d.end
      }
    }))

    const addRegistrationForm = {
      _type: "addRegistrationForm",
      submitDate: new Date().toISOString(),
      _studentId: parsedData._id,
      classType: parsedData.classType,
      schedule: schedule,
      language: parsedData.language,
      classSize: parsedData.size,
    }

    const result = await client.create(addRegistrationForm).catch((err) => console.log(err))
  }

  callback(null, {
    statusCode: 200,
  })
}
npm install -g serverless
npm install -g serverless-jest-plugin
sls invoke test
SELECT
SERVERPROPERTY('MachineName') AS ComputerName,
SERVERPROPERTY('ServerName') AS InstanceName,
SERVERPROPERTY('Edition') AS Edition, --SQL Azure
SERVERPROPERTY('EditionID') AS EditionID, -- 1674378470 = SQL Database or Azure Synapse Analytics
SERVERPROPERTY('EngineEdition') AS EngineEdition, -- 5 = SQL Database, 6 = Microsoft Azure Synapse Analytics, 11 = Azure Synapse serverless SQL pool
SERVERPROPERTY('ProductVersion') AS ProductVersion,
SERVERPROPERTY('ProductLevel') AS ProductLevel;
GO
// (using directives added for completeness)
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

namespace CSharp
{
    public static class Function
    {
        [FunctionName("broadcast")]
        public static async Task Broadcast(
            [TimerTrigger("*/5 * * * * *")] TimerInfo myTimer,
            [SignalR(HubName = "serverless")] IAsyncCollector<SignalRMessage> signalRMessages)
        {
            await signalRMessages.AddAsync(
                new SignalRMessage
                {
                    Target = "newMessage",
                    Arguments = new[] { "This is a test message!" }
                });
        }
    }
}
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "Endpoint=https://myDomain-signalr.service.signalr.net;AccessKey=youarenothavingmysecretkey=;Version=1.0;"
  },
  "ConnectionStrings": {}
}
"Azure": {
"SignalR": {
"ConnectionString": "Endpoint=https://myDomain-signalr.service.signalr.net;AccessKey=youarenothavingmysecretkey=;Version=1.0;"
}
}
public class Serverless : Hub
{
    public async Task NewMessage(string message)
    {
        await Clients.All.SendAsync("newMessage", message);
    }
}
// Add the Azure SignalR Service.
builder.Services.AddSignalR().AddAzureSignalR(builder.Configuration["Azure:SignalR:ConnectionString"]);

var app = builder.Build();
app.MapHub<Serverless>("/serverless");
const connection = new signalR.HubConnectionBuilder()
  .withUrl("/serverless")
  .withAutomaticReconnect()
  .build();

connection.on("newMessage", (message) => {
  alert(message)
});

// We need an async function in order to use await, but we want this code to run immediately,
// so we use an "immediately-executed async function"
(async () => {
  try {
    await connection.start();
  }
  catch (e) {
    console.error(e.toString());
  }
})();
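Once the connection has started, the client can also call the hub method defined in the Serverless class above. A small sketch (the method and event names come from the snippets; the message text is illustrative):
// invoke the server-side NewMessage(string) hub method; every connected
// client then receives the text back via the "newMessage" handler above
// (run this after connection.start(), e.g. inside the async IIFE above)
await connection.invoke("NewMessage", "Hello from the browser!");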
sls package
serverless package
Trending Discussions on serverless
QUESTION
Not able to figure out why upload.js in the code snippet below is throwing the error: app.get is not a function.
I have an index.js file where I have configured everything, exported my app via module.exports = app, and also called app.set("upload") in it. But when I try to import app in upload.js and use it, it gives the error: app.get is not a function.
Below is the code of index.js:
const express = require("express");
const app = express();
const multer = require("multer");
const path = require("path");
const uploadRoutes = require("./src/routes/apis/upload.js");

// multer config
const storageDir = path.join(__dirname, "..", "storage");
const storageConfig = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, storageDir);
  },
  filename: (req, file, cb) => {
    cb(null, Date.now() + path.extname(file.originalname));
  },
});
const upload = multer({ storage: storageConfig }); // local upload.

// set multer config
app.set("root", __dirname);
app.set("storageDir", storageDir);
app.set("upload", upload);

app.use("/api/upload", uploadRoutes);

const PORT = process.env.PORT || 5002;

if (process.env.NODE_ENV === "development") {
  app.listen(PORT, () => {
    console.log(`Server running in ${process.env.NODE_ENV} on port ${PORT}`);
  });
} else {
  module.exports.handler = serverless(app);
}

module.exports = app;
upload.js file
const express = require("express");
const router = express.Router();
const app = require("../../../index");
const uploadDir = app.get("storageDir");
const upload = app.get("upload");
router.post(
"/upload-new-file",
upload.array("photos"),
(req, res, next) => {
const files = req.files;
return res.status(200).json({
files,
});
}
);
module.exports = router;
ANSWER
Answered 2022-Mar-23 at 08:55
The problem is that you have a circular dependency.
App requires upload, upload requires app.
Try to pass app as a parameter and restructure upload.js to look like:
const upload = (app) => {
  // do things with app
}

module.exports = upload
Then import it in app and pass the reference there (avoid importing app in upload).
const upload = require('./path/to/upload')

const app = express();
// ...
upload(app)
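Putting the answer together with the original upload.js, a minimal sketch of the restructured module might look like this (route paths and setting names taken from the question):
// upload.js: export a function that receives app instead of requiring it
const express = require("express");

const upload = (app) => {
  const router = express.Router();
  const multerUpload = app.get("upload"); // set via app.set() in index.js

  router.post("/upload-new-file", multerUpload.array("photos"), (req, res) => {
    return res.status(200).json({ files: req.files });
  });

  app.use("/api/upload", router);
};

module.exports = upload;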
QUESTION
Edit: Changed title to reflect the problem properly.
I am trying to pick the exact type definition of a specific property inside a interface, but the property is a mapped type [key: string]:
. I tried accessing it using T[keyof T]
because it is the only property inside that type but it returns never
type instead.
is there a way to like Pick
or Interface[[key: string]]
to extract the type?
The interface I am trying to access is type { AWS } from '@serverless/typescript';
export interface AWS {
  configValidationMode?: "error" | "warn" | "off";
  deprecationNotificationMode?: "error" | "warn" | "warn:summary";
  disabledDeprecations?: "*" | ErrorCode[];
  frameworkVersion?: string;
  functions?: {
    [k: string]: { // <--- Trying to pick this property.
      name?: string;
      events?: (
        | {
            __schemaWorkaround__: null;
          }
        | {
            schedule:
              | string
              | {
                  rate: string[];
                  enabled?: boolean;
                  name?: string;
                  description?: string;
                  input?:
                    | string
        /// Didn't include it all; too long...
ANSWER
Answered 2022-Feb-27 at 19:04
You can use indexed access types here. If you have an object-like type T and a key-like type K which is a valid key type for T, then T[K] is the type of the value at that key. In other words, if you have a value t of type T and a value k of type K, then t[k] has the type T[K].
So the first step here is to get the type of the functions property from the AWS type:
type Funcs = AWS["functions"];
/* type Funcs = {
  [k: string]: {
    name?: string | undefined;
    events?: {
      __schemaWorkaround__: null;
    } | {
      schedule: string | {
        rate: string[];
        enabled?: boolean;
        name?: string;
        description?: string;
        input?: string;
      };
    } | undefined;
  };
} | undefined */
Here AWS corresponds to the T in T[K], and the string literal type "functions" corresponds to the K type.
Because functions is an optional property of AWS, the Funcs type is a union of the declared type of that property with undefined. That's because if you have a value aws of type AWS, then aws.functions might be undefined. You can't index into a possibly undefined value safely, so the compiler won't let you use an indexed access type to drill down into Funcs directly. Something like Funcs[string] will be an error.
So first we need to filter out the undefined type from Funcs. The easiest way to do this is with the NonNullable utility type, which filters out null and undefined from a union type T:
type DefinedFuncs = NonNullable<Funcs>;
/* type DefinedFuncs = {
  [k: string]: {
    name?: string | undefined;
    events?: {
      __schemaWorkaround__: null;
    } | {
      schedule: string | {
        rate: string[];
        enabled?: boolean;
        name?: string;
        description?: string;
        input?: string;
      };
    } | undefined;
  };
} */
Okay, now we have a defined type with a string index signature whose property type is the type we're looking for. Since any string-valued key can be used to get the property we're looking for, we can use an indexed access type with DefinedFuncs as the object type and string as the key type:
type DesiredProp = DefinedFuncs[string];
/* type DesiredProp = {
  name?: string | undefined;
  events?: {
    __schemaWorkaround__: null;
  } | {
    schedule: string | {
      rate: string[];
      enabled?: boolean;
      name?: string;
      description?: string;
      input?: string;
    };
  } | undefined;
} */
Looks good! And of course we can do this all as a one-liner:
type DesiredProp = NonNullable<AWS["functions"]>[string];
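As a usage sketch (the shape follows the AWS interface excerpt above; the function name and schedule value are illustrative):
import type { AWS } from '@serverless/typescript';

type FunctionDef = NonNullable<AWS['functions']>[string];

// a single serverless function definition typed with the extracted type
const hello: FunctionDef = {
  name: 'hello',
  events: [{ schedule: 'rate(10 minutes)' }],
};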
QUESTION
Based on the AWS documentation, the maximum timeout limit in API Gateway is less than 30 seconds, so hooking up a SageMaker endpoint to API Gateway wouldn't make sense if the request/response is going to take more than 30 seconds. Is there any workaround? Adding a Lambda between API Gateway and the SageMaker endpoint would add more time to process the request/response, which I would like to avoid. There would also be added time for Lambda cold starts, and SageMaker serverless endpoints are built on top of Lambda, so that adds cold-start time as well. Is there a way to invoke the serverless SageMaker endpoints without this overhead?
ANSWER
Answered 2022-Feb-25 at 08:19
You can connect SageMaker endpoints to API Gateway directly, without intermediary Lambdas, using mapping templates: https://aws.amazon.com/fr/blogs/machine-learning/creating-a-machine-learning-powered-rest-api-with-amazon-api-gateway-mapping-templates-and-amazon-sagemaker/
You can also invoke endpoints with AWS SDKs (e.g. the CLI or boto3); there is no need to go through API Gateway necessarily.
QUESTION
I am trying to submit a Google Dataproc batch job. Per the Batch Job documentation, we can pass subnetwork as a parameter, but when I use it, it gives me:
ERROR: (gcloud.dataproc.batches.submit.spark) unrecognized arguments: --subnetwork=
Here is the gcloud command I have used:
gcloud dataproc batches submit spark \
--region=us-east4 \
--jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
--class=org.apache.spark.examples.SparkPi \
--subnetwork="https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east4/subnetworks/network-svc" \
-- 1000
ANSWER
Answered 2022-Feb-01 at 11:28
According to the dataproc batches docs, the subnetwork URI needs to be specified using the --subnet argument.
Try:
gcloud dataproc batches submit spark \
--region=us-east4 \
--jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
--class=org.apache.spark.examples.SparkPi \
--subnet="https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east4/subnetworks/network-svc" \
-- 1000
QUESTION
I have created a SAM template with a function in it. After deploying the SAM template, the Lambda function gets added and is also displayed while adding a Lambda function trigger in Cognito, but when I save, it gives a 404 error.
SAM template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >-
  description
Globals:
  Function:
    CodeUri: .
    Runtime: nodejs14.x
Resources:
  function1:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: function1
      Handler: dist/handlers/fun1.handler
Error in Cognito while adding the trigger:
[404 Not Found] Allowing Cognito to invoke lambda function cannot be completed.
ResourceNotFoundException (Request ID: e963254b-8d2a-49fa-b012-xxxxxxxx)
Note: if I add a Cognito Sync trigger in the Lambda config dashboard and then try to configure a trigger in the user pool, it works.
ANSWER
Answered 2021-Dec-24 at 11:44
Switch to the old console, set the Lambda trigger there, and it works. Then you can switch back to the new console.
QUESTION
I'm using Serverless Framework to deploy a Docker image running R to an AWS Lambda.
service: r-lambda
provider:
  name: aws
  region: eu-west-1
  timeout: 60
  environment:
    stage: ${sls:stage}
    R_AWS_REGION: ${aws:region}
  ecr:
    images:
      r-lambda:
        path: ./
functions:
  r-lambda-hello:
    image:
      name: r-lambda
      command:
        - functions.hello
This works fine and I can log into AWS and invoke the Lambda function. But I also want to invoke it with a curl request, so I added an "events" property to the functions section:
functions:
  r-lambda-hello:
    image:
      name: r-lambda
      command:
        - functions.hello
    events:
      - http: GET r-lambda-hello
However, when I deploy with serverless, it does not output the API endpoint. And when I go to API Gateway in AWS, I don't see any APIs there. What am I doing wrong?
EDIT
As per Rovelcio Junior's answer, I went to AWS CloudFormation > Stacks > r-lambda-dev > Resources. But there is no API Gateway listed in the resources...
EDIT
Here's my Dockerfile:
FROM public.ecr.aws/lambda/provided:al2.2021.09.13.11
ENV R_VERSION=4.0.3
RUN yum -y install wget tar openssl-devel libxml2-devel
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
&& wget https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm \
&& yum -y install R-${R_VERSION}-1-1.x86_64.rpm \
&& rm R-${R_VERSION}-1-1.x86_64.rpm
ENV PATH="${PATH}:/opt/R/${R_VERSION}/bin/"
RUN Rscript -e "install.packages(c('httr', 'jsonlite', 'logger', 'paws.storage', 'paws.database', 'readr', 'BiocManager'), repos = 'https://cloud.r-project.org/')"
COPY runtime.R functions.R ${LAMBDA_TASK_ROOT}/
RUN chmod 755 -R ${LAMBDA_TASK_ROOT}/
RUN printf '#!/bin/sh\ncd $LAMBDA_TASK_ROOT\nRscript runtime.R' > /var/runtime/bootstrap \
&& chmod +x /var/runtime/bootstrap
And the output when I deploy:
Serverless: Packaging service...
#1 [internal] load build definition from Dockerfile
#1 sha256:730ec5a8380df019470bdbb6091e9a29cd62f4ef4443be0c14ec2c4979da26ea
#1 transferring dockerfile: 37B 0.0s done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:553479c1392984ccf98fd0cf873e2e2da149ff9a1bc98a0abee6b3e558545181
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for public.ecr.aws/lambda/provided:al2.2021.09.13.11
#3 sha256:8c254bed2a05020fafbb65f8dbd8b7925d24019ab56ee85272c4559290756324
#3 DONE 4.7s
#4 [ 1/8] FROM public.ecr.aws/lambda/provided:al2.2021.09.13.11@sha256:9628c6a5372a04289000f7cb9cb9aeb273d7381bdbe1283a07fb86981a06ac07
#4 sha256:2082eea955a6ae3398939e60fe10c5c7b34b262c2e5b82421ece4a9127883f58
#4 DONE 0.0s
#10 [internal] load build context
#10 sha256:8b61403d9fd75cf8a55c7294afa45fe717dc75c5783b7b749c304687556372c6
#10 transferring context: 108B done
#10 DONE 0.0s
#6 [ 3/8] RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && wget https://cdn.rstudio.com/r/centos-7/pkgs/R-4.0.3-1-1.x86_64.rpm && yum -y install R-4.0.3-1-1.x86_64.rpm && rm R-4.0.3-1-1.x86_64.rpm
#6 sha256:22644d17f1156ee8911a76c1f9af4c3894f22f41e347e611f4d382da3bf54356
#6 CACHED
#11 [ 4/8] COPY runtime.R functions.R /var/task/
#11 sha256:163032f10dc70da4ceb3d6a8824b7f81def9dda7d75e745074f7fdd2c639253e
#11 CACHED
#13 [ 5/8] RUN chmod 755 -R /var/task/
#13 sha256:606c9651f2ba1aadde5e6928c1fffa5e6a397762ef1abdf14aeea2940c16cfd8
#13 CACHED
#5 [ 6/8] RUN yum -y install wget tar openssl-devel libxml2-devel
#5 sha256:a5bb99c3107595ebcce135aec74510b7d5438acc6900e4bd5db1bec97f9c61b5
#5 CACHED
#7 [ 7/8] RUN Rscript -e "install.packages(c('httr', 'jsonlite', 'logger', 'paws.storage', 'paws.database', 'readr', 'BiocManager'), repos = 'https://cloud.r-project.org/')"
#7 sha256:465b4b4ff27a57cacb401f8b0c9335fadca31fa68081cd5f56f22c9b14e9c17a
#7 CACHED
#14 [8/8] RUN printf '#!/bin/sh\ncd $LAMBDA_TASK_ROOT\nRscript runtime.R' > /var/runtime/bootstrap && chmod +x /var/runtime/bootstrap
#14 sha256:74b7d704dc21ccab7da6fd953240a5331d75229af210def5351bd5c5bf943eed
#14 CACHED
#15 exporting to image
#15 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#15 exporting layers done
#15 writing image sha256:9fabde8e59e85c4ffe09ec70550b3baeba6dd422cd54f05e17e5fac6c9c9db32 done
#15 naming to docker.io/library/serverless-r-lambda-dev:r-lambda done
#15 DONE 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Serverless: Login to Docker succeeded!
ANSWER
Answered 2021-Dec-15 at 23:26
The way your events.http is configured looks wrong. Try replacing it with:
      - http:
          path: r-lambda-hello
          method: get
This might be helpful as well: https://github.com/serverless/examples
I also found this blog useful: Build a serverless API with Amazon Lambda and API Gateway
QUESTION
I'd like to use CockroachDB Serverless for my Ecto application. How do I specify the connection string?
I get an error like this when trying to connect.
[error] GenServer #PID<0.295.0> terminating
** (Postgrex.Error) FATAL 08004 (sqlserver_rejected_establishment_of_sqlconnection) codeParamsRoutingFailed: missing cluster name in connection string
(db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
CockroachDB Serverless says to connect by including the cluster name in the connection string, like this:
postgresql://username:@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full&sslrootcert=$HOME/.postgresql/root.crt&options=--cluster%3Dcluster-name-1234
but I'm not sure how to get Ecto to create this connection string via its configuration.
ANSWER
Answered 2021-Oct-28 at 00:48
This configuration allows Ecto to connect to CockroachDB Serverless correctly:
config :myapp, MyApp.Repo,
  username: "username",
  password: "xxxx",
  database: "defaultdb",
  hostname: "free-tier.gcp-us-central1.cockroachlabs.cloud",
  port: 26257,
  ssl: true,
  ssl_opts: [
    cert_pem: "foo.pem",
    key_pem: "bar.pem"
  ],
  show_sensitive_data_on_connection_error: true,
  pool_size: 10,
  parameters: [
    options: "--cluster=cluster-name-1234"
  ]
QUESTION
I have created an RDS cluster with 2 instances using Terraform. When I upgrade the RDS cluster from the AWS console, it modifies the cluster in place, but when I do the same using Terraform, it destroys the instance.
We tried create_before_destroy, and it gives an error.
We tried ignore_changes=engine but that didn't make any changes.
Is there any way to prevent this?
resource "aws_rds_cluster" "rds_mysql" {
cluster_identifier = var.cluster_identifier
engine = var.engine
engine_version = var.engine_version
engine_mode = var.engine_mode
availability_zones = var.availability_zones
database_name = var.database_name
port = var.db_port
master_username = var.master_username
master_password = var.master_password
backup_retention_period = var.backup_retention_period
preferred_backup_window = var.engine_mode == "serverless" ? null : var.preferred_backup_window
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
vpc_security_group_ids = var.vpc_security_group_ids
db_cluster_parameter_group_name = var.create_cluster_parameter_group == "true" ? aws_rds_cluster_parameter_group.rds_cluster_parameter_group[0].id : var.cluster_parameter_group
skip_final_snapshot = var.skip_final_snapshot
deletion_protection = var.deletion_protection
allow_major_version_upgrade = var.allow_major_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [availability_zones]
}
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
identifier = "${var.cluster_identifier}-${count.index}"
cluster_identifier = aws_rds_cluster.rds_mysql.id
instance_class = var.instance_class
engine = var.engine
engine_version = aws_rds_cluster.rds_mysql.engine_version
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
db_parameter_group_name = var.create_db_parameter_group == "true" ? aws_db_parameter_group.rds_instance_parameter_group[0].id : var.db_parameter_group
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [engine_version]
}
}
Error:
resource \"aws_rds_cluster_instance\" \"cluster_instances\" {\n\n\n\nError: error creating RDS Cluster (aurora-cluster-mysql) Instance: DBInstanceAlreadyExists: DB instance already exists\n\tstatus code: 400, request id: c6a063cc-4ffd-4710-aff2-eb0667b0774f\n\n on
Plan output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
+/- create replacement and then destroy
Terraform will perform the following actions:
# module.rds_aurora_create[0].aws_rds_cluster.rds_mysql will be updated in-place
~ resource "aws_rds_cluster" "rds_mysql" {
~ allow_major_version_upgrade = false -> true
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1"
id = "aurora-cluster-mysql"
tags = {}
# (33 unchanged attributes hidden)
}
# module.rds_aurora_create[0].aws_rds_cluster_instance.cluster_instances[0] must be replaced
+/- resource "aws_rds_cluster_instance" "cluster_instances" {
~ arn = "arn:aws:rds:us-east-1:account:db:aurora-cluster-mysql-0" -> (known after apply)
~ availability_zone = "us-east-1a" -> (known after apply)
~ ca_cert_identifier = "rds-ca-" -> (known after apply)
~ dbi_resource_id = "db-32432432SDF" -> (known after apply)
~ endpoint = "aurora-cluster-mysql-0.jkjk.us-east-1.rds.amazonaws.com" -> (known after apply)
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1" # forces replacement
~ id = "aurora-cluster-mysql-0" -> (known after apply)
+ identifier_prefix = (known after apply)
+ kms_key_id = (known after apply)
+ monitoring_role_arn = (known after apply)
~ performance_insights_enabled = false -> (known after apply)
+ performance_insights_kms_key_id = (known after apply)
~ port = 3306 -> (known after apply)
~ preferred_backup_window = "07:00-09:00" -> (known after apply)
~ preferred_maintenance_window = "thu:06:12-thu:06:42" -> (known after apply)
~ storage_encrypted = false -> (known after apply)
- tags = {} -> null
~ tags_all = {} -> (known after apply)
~ writer = true -> (known after apply)
# (12 unchanged attributes hidden)
}
Plan: 1 to add, 1 to change, 1 to destroy.
ANSWER
Answered 2021-Oct-30 at 13:04
Terraform is seeing the engine version change on the instances and is detecting this as an action that forces replacement.
Remove (or ignore changes to) the engine_version input for the aws_rds_cluster_instance resources. AWS RDS upgrades the engine version for cluster instances itself when you upgrade the engine version of the cluster (this is why you can do an in-place upgrade via the AWS console).
By excluding the engine_version input, Terraform will see no changes made to the aws_rds_cluster_instances and will do nothing. AWS will handle the engine upgrades for the instances internally.
If you decide to ignore changes, use the ignore_changes argument within a lifecycle block:
resource "aws_rds_cluster_instance" "cluster_instance" {
engine_version = aws_rds_cluster.main.engine_version
...
lifecycle {
ignore_changes = [engine_version]
}
}
QUESTION
I am trying to deploy a REST API in AWS using serverless. Node version 14.17.5.
My directory structure:
When I deploy the above successfully, I get the following error while trying to access the API.
2021-09-28T18:32:27.576Z undefined ERROR Uncaught Exception {
"errorType": "Error",
"errorMessage": "Must use import to load ES Module: /var/task/lambda.js\nrequire() of ES modules is not supported.\nrequire() of /var/task/lambda.js from /var/runtime/UserFunction.js is an ES module file as it is a .js file whose nearest parent package.json contains \"type\": \"module\" which defines all .js files in that package scope as ES modules.\nInstead rename lambda.js to end in .cjs, change the requiring code to use import(), or remove \"type\": \"module\" from /var/task/package.json.\n",
"code": "ERR_REQUIRE_ESM",
"stack": [
"Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /var/task/lambda.js",
"require() of ES modules is not supported.",
"require() of /var/task/lambda.js from /var/runtime/UserFunction.js is an ES module file as it is a .js file whose nearest parent package.json contains \"type\": \"module\" which defines all .js files in that package scope as ES modules.",
"Instead rename lambda.js to end in .cjs, change the requiring code to use import(), or remove \"type\": \"module\" from /var/task/package.json.",
"",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1089:13)",
" at Module.load (internal/modules/cjs/loader.js:937:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:778:12)",
" at Module.require (internal/modules/cjs/loader.js:961:19)",
" at require (internal/modules/cjs/helpers.js:92:18)",
" at _tryRequire (/var/runtime/UserFunction.js:75:12)",
" at _loadUserApp (/var/runtime/UserFunction.js:95:12)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
" at Object. (/var/runtime/index.js:43:30)",
" at Module._compile (internal/modules/cjs/loader.js:1072:14)"
]
}
As per the suggestion in the error, I tried changing lambda.js to lambda.cjs. Now I get the following error:
2021-09-28T17:32:36.970Z undefined ERROR Uncaught Exception {
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module 'lambda'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"stack": [
"Runtime.ImportModuleError: Error: Cannot find module 'lambda'",
"Require stack:",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
" at Object. (/var/runtime/index.js:43:30)",
" at Module._compile (internal/modules/cjs/loader.js:1072:14)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)",
" at Module.load (internal/modules/cjs/loader.js:937:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:778:12)",
" at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)",
" at internal/main/run_main_module.js:17:47"
]
}
serverless.yml
service: APINAME # name of your app
useDotenv: true
configValidationMode: error
provider:
  name: aws
  runtime: nodejs14.x # Node.js version
  memorySize: 512
  timeout: 15
  stage: dev
  region: us-east-1 # AWS region
  lambdaHashingVersion: 20201221
functions:
  api:
    handler: lambda.handler
    events:
      - http: ANY /{proxy+}
      - http: ANY /
lambda.js
import awsServerlessExpress from 'aws-serverless-express'
import app from './index.js'

const server = awsServerlessExpress.createServer(app)

export const handler = (event, context) => {
  awsServerlessExpress.proxy(server, event, context)
}
aws-cli commands
docker run --rm -it amazon/aws-cli --version
docker run --rm -it amazon/aws-cli configure
docker run --rm -it amazon/aws-cli serverless deploy
serverless commands:
docker run --rm -it amazon/aws-cli serverless deploy
serverless config credentials --provider aws --key --secret
node ./node_modules/serverless/bin/serverless config credentials --provider aws --key --secret
After reading a couple of answers, I have tried the following:
- Made sure package.json includes "type": "module"
- Deleted node_modules and package-lock.json and reinstalled all of them (since the version of node was updated during development)
What am I doing wrong?
ANSWER
Answered 2021-Nov-02 at 10:00
Converted all imports to require() and all exports to module.exports, and removed "type": "module" from package.json.
Everything works like a charm. It is not a solution to the question asked, but making things work became more important.
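Applied to the lambda.js shown in the question, the CommonJS version would look roughly like this (a sketch of the conversion; the answer itself does not include code):
// lambda.js rewritten with require() and module.exports, per the answer
const awsServerlessExpress = require('aws-serverless-express');
const app = require('./index.js');

const server = awsServerlessExpress.createServer(app);

exports.handler = (event, context) => {
  awsServerlessExpress.proxy(server, event, context);
};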
QUESTION
We are a team of 5 developers working on a video rendering implementation. This implementation consists of two parts.
- A live video preview in the browser using angular + konva.
- A node.js (node 14) serverless (AWS Lambda container) implementation using konva-node that pipes frames to ffmpeg for rendering an mp4 video in higher quality for later download.
Both ways are working for us. Now we extracted the parts of the animation that are the same for frontend and backend implementation to an internal library. We imported them in BE and FE. That also works nicely for most parts.
We noticed here that konva-node was deprecated a short time ago. The documentation says to use canvas + konva instead on node.js. But this just doesn't work: if we don't use konva-node, we cannot create a stage without a 'container' value. Also, we cannot create a raw image buffer anymore, because stage.toCanvas() actually returns an HTMLCanvas, which does not have this functionality.
- So what does konva-node actually do to the konva API?
- Is node.js still supported after the deprecation of konva-node?
- How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
import konvaNode = require('konva-node');

this.stage = new konvaNode.Stage({
  width: stageSize.width,
  height: stageSize.height
});

// [draw stuff on stage here]

// create raw frames to pipe to ffmpeg
const frame = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
import Konva from 'konva';

this.stage = new Konva.Stage({
  width: stageSize.width,
  height: stageSize.height,
  // connect stage to html element in browser
  container: 'container'
});

// [draw stuff on stage here]
Finally, in an ideal world (if we could use just Konva in frontend and backend without konva-node), the following should be possible for shared image-loading code:
public static loadKonvaImage(element, canvas): Promise<Konva.Image> {
  return new Promise(resolve => {
    let image;
    if (canvas) {
      // node.js canvas image
      image = new canvas.Image();
    } else {
      // html browser image
      image = new Image();
    }
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Many props to the developer for the good work. We would look forward to using the library for a long time, but how can we if some core functionality that we rely on is outdated shortly after we started the project?
Another Stack Overflow answer mentioned Konva.isBrowser = false;. Maybe this is used to differentiate between a browser and a node canvas?
ANSWER
Answered 2021-Sep-27 at 21:36
So what does konva-node actually do to the konva API?
It slightly patches the Konva code to use the canvas nodejs library for its 2d canvas API, so Konva will not use browser DOM APIs.
Is node.js still supported after deprecation of konva-node?
Yes. https://github.com/konvajs/konva#4-nodejs-env
How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
You can try to use this:
const canvas = layer.getNativeCanvasElement();
const buffer = canvas.toBuffer();
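Combined with the rendering loop from the question, the suggestion would be used roughly like this (a hedged sketch; assumes layer is a Konva.Layer running under Node.js with the canvas package installed):
// grab the underlying node-canvas element for the layer and turn the
// current frame into a raw pixel buffer to pipe to ffmpeg
const canvas = layer.getNativeCanvasElement();
const buffer = canvas.toBuffer('raw');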
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install serverless
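Per the snippet earlier on this page, the CLI installs globally via npm:
npm install -g serverless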