Popular New Releases in Terraform
terraform v1.1.9
backstage v1.1.1
salt v3004.1
pulumi v3.30.0
terraformer 0.8.19
Popular Libraries in Terraform
by hashicorp (Go) · 32174 stars · MPL-2.0
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
by bregman-arie (Python) · 22045 stars · NOASSERTION
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
by backstage (TypeScript) · 16116 stars · Apache-2.0
Backstage is an open platform for building developer portals
by saltstack (Python) · 12241 stars · Apache-2.0
Software to automate the management and configuration of any infrastructure or application at scale.
by pulumi (Go) · 12104 stars · Apache-2.0
Pulumi - Developer-First Infrastructure as Code. Your Cloud, Your Language, Your Way 🚀
by GoogleCloudPlatform (Go) · 7233 stars · Apache-2.0
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code
by hashicorp (Go) · 7174 stars · MPL-2.0
Terraform AWS provider
by chef (Ruby) · 6865 stars · Apache-2.0
Chef Infra, a powerful automation platform that transforms infrastructure into code, automating how infrastructure is configured, deployed, and managed across any environment, at any scale
by infracost (Go) · 6374 stars · Apache-2.0
Cloud cost estimates for Terraform in pull requests 💰📉 Love your cloud bill!
Trending New libraries in Terraform
by backstage (TypeScript) · 16116 stars · Apache-2.0
Backstage is an open platform for building developer portals
by infracost (Go) · 6374 stars · Apache-2.0
Cloud cost estimates for Terraform in pull requests 💰📉 Love your cloud bill!
by hashicorp (TypeScript) · 3425 stars · MPL-2.0
Define infrastructure resources using programming constructs and provision them using HashiCorp Terraform
by cloudquery (Go) · 2221 stars · MPL-2.0
The open-source cloud asset inventory powered by SQL.
by cloudskiff (Go) · 1603 stars · Apache-2.0
Detect, track and alert on infrastructure drift
by Qovery (Rust) · 1460 stars · GPL-3.0
The simplest way to deploy your apps on any cloud provider
by cycloidio (Go) · 805 stars · MIT
Read your tfstate or HCL to generate a graph specific for each provider, showing only the resources that are most important/relevant.
by milliHQ (TypeScript) · 796 stars · Apache-2.0
Terraform module for building and deploying Next.js apps to AWS. Supports SSR (Lambda), Static (S3) and API (Lambda) pages.
by aws-cloudformation (Rust) · 737 stars · Apache-2.0
Guard offers a policy-as-code domain-specific language (DSL) to write rules and validate JSON- and YAML-formatted data such as CloudFormation Templates, K8s configurations, and Terraform JSON plans/configurations against those rules.
Top Authors in Terraform
1. 90 Libraries · 58073 stars
2. 19 Libraries · 573 stars
3. 17 Libraries · 9272 stars
4. 14 Libraries · 1045 stars
5. 14 Libraries · 199 stars
6. 14 Libraries · 390 stars
7. 13 Libraries · 235 stars
8. 11 Libraries · 702 stars
9. 11 Libraries · 12588 stars
10. 10 Libraries · 425 stars
Trending Kits in Terraform
No Trending Kits are available at this moment for Terraform
Trending Discussions on Terraform
json.Marshal(): json: error calling MarshalJSON for type msgraph.Application
Web3js fails to import in Vue3 composition api project
how to connect an aws api gateway to a private lambda function inside a vpc
Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically
Programmatically Connecting a GitHub repo to a Google Cloud Project
Kubernetes NodePort is not available on all nodes - Oracle Cloud Infrastructure (OCI)
Can you pass blocks as variables in Terraform, referencing the type of a resource's nested block contents?
trigger lambda function from DynamoDB
Terraform: Inappropriate value for attribute "ingress" while creating SG
How to fix "Function not implemented - Failed to initialize inotify (Errno::ENOSYS)" in rails
QUESTION
json.Marshal(): json: error calling MarshalJSON for type msgraph.Application
Asked 2022-Mar-27 at 23:59
What specific syntax or configuration changes must be made in order to resolve the error below, in which terraform is failing to create an instance of azuread_application?
THE CODE:
The terraform code that is triggering the error when terraform apply is run is as follows:
variable "tenantId" { }
variable "clientId" { }
variable "clientSecret" { }
variable "instanceName" { }

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.5.0"
    }
  }
}

provider "azuread" {
  tenant_id     = var.tenantId
  client_id     = var.clientId
  client_secret = var.clientSecret
}

resource "azuread_application" "appRegistration" {
  display_name = var.instanceName
  app_role {
    allowed_member_types = ["User", "Application"]
    description          = "Admins can manage roles and perform all task actions"
    display_name         = "Admin"
    enabled              = true
    id                   = "1b19509b-32b1-4e9f-b71d-4992aa991967"
    value                = "admin"
  }
}
THE ERROR:
The error and log output that result from running the above code with terraform apply are:
2021/10/05 17:47:18 [DEBUG] module.ad-admin.azuread_application.appRegistration:
  apply errored, but we're indicating that via the Error pointer rather than returning it:
  Could not create application: json.Marshal():
  json: error calling MarshalJSON for type msgraph.Application:
  json: error calling MarshalJSON for type *msgraph.Owners: marshaling Owners: encountered DirectoryObject with nil ODataId

2021/10/05 17:47:18 [TRACE] EvalMaybeTainted: module.ad-admin.azuread_application.appRegistration encountered an error during creation, so it is now marked as tainted
2021/10/05 17:47:18 [TRACE] EvalWriteState: removing state object for module.ad-admin.azuread_application.appRegistration
2021/10/05 17:47:18 [TRACE] EvalApplyProvisioners: azuread_application.appRegistration has no state, so skipping provisioners
2021/10/05 17:47:18 [TRACE] EvalMaybeTainted: module.ad-admin.azuread_application.appRegistration encountered an error during creation, so it is now marked as tainted
2021/10/05 17:47:18 [TRACE] EvalWriteState: removing state object for module.ad-admin.azuread_application.appRegistration
2021/10/05 17:47:18 [TRACE] vertex "module.ad-admin.azuread_application.appRegistration": visit complete

2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.output.application_id (expand)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azuread_service_principal.appRegistrationSP" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "output.application_id" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.output.appId (expand)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azuread_service_principal_password.appRegistrationSP_pwd" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "output.appId" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azurerm_role_assignment.appRegistrationSP_role_assignment_vault" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azurerm_role_assignment.appRegistrationSP_role_assignment" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.provider[\"registry.terraform.io/hashicorp/azuread\"] (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.provider[\"registry.terraform.io/hashicorp/azurerm\"] (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: creating backup snapshot at terraform.tfstate.backup
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 391
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info

Error: Could not create application

  on ..\..\..\..\modules\ad-admin\active-directory.tf line 69, in resource "azuread_application" "appRegistration":
  69: resource "azuread_application" "appRegistration" {

json.Marshal(): json: error calling MarshalJSON for type msgraph.Application:
json: error calling MarshalJSON for type *msgraph.Owners: marshaling Owners:
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: unlocked by closing terraform.tfstate
encountered DirectoryObject with nil ODataId
terraform -version gives:
Terraform v1.0.8 on windows_amd64
ANSWER
Answered 2021-Oct-07 at 18:35
This was a bug, reported as a GitHub issue. The resolution to the problem in the OP is to upgrade the version from 2.5.0 to 2.6.0 in the required_providers block from the code in the OP above, as follows:
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.6.0"
    }
  }
}
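After changing the version constraint, run terraform init -upgrade so Terraform actually downloads the 2.6.0 build before the next terraform apply. As a side note (an addition of mine, not part of the original answer), a pessimistic constraint would pick up later bug-fix releases in the 2.x line automatically; a minimal sketch:

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.6"  # assumption: accept any 2.x release >= 2.6.0
    }
  }
}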
QUESTION
Web3js fails to import in Vue3 composition api project
Asked 2022-Mar-14 at 03:36
I've created a brand new project with npm init vite bar -- --template vue. I've done an npm install web3 and I can see my package-lock.json includes this package. My node_modules directory also includes the web3 modules.
So then I added this line to main.js:
import { createApp } from 'vue'
import App from './App.vue'
import Web3 from 'web3' // <-- This line

createApp(App).mount('#app')
And I get the following error:
I don't understand what is going on here. I'm fairly new to using npm, so I'm not super sure what to Google. The errors are coming from node_modules/web3/lib/index.js, node_modules/web3-core/lib/index.js, node_modules/web3-core-requestmanager/lib/index.js, and finally node_modules/util/util.js. I suspect it has to do with one of these:
- I'm using Vue 3
- I'm using the Vue 3 Composition API
- I'm using the Vue 3 Composition API SFC <script setup> tag (but I imported it in main.js, so I don't think it is this one)
- web3js is in TypeScript and my Vue 3 project is not configured for TypeScript
But as I am fairly new to JavaScript and Vue and Web3, I am not sure how to focus my Googling on this error. My background is Python, Go, Terraform. Basically the back end of the back end. Front-end JavaScript is new to me.
How do I go about resolving this issue?
ANSWER
Answered 2022-Mar-14 at 03:36
Option 1: Polyfill the Node globals and modules
Polyfilling the Node globals and modules enables the web3 import to run in the browser:
1. Install the ESBuild plugins that polyfill Node globals/modules:
npm i -D @esbuild-plugins/node-globals-polyfill
npm i -D @esbuild-plugins/node-modules-polyfill
2. Configure optimizeDeps.esbuildOptions to use these ESBuild plugins.
3. Configure define to replace global with globalThis (the browser equivalent).
// vite.config.js
import { defineConfig } from 'vite'
import GlobalsPolyfills from '@esbuild-plugins/node-globals-polyfill'
import NodeModulesPolyfills from '@esbuild-plugins/node-modules-polyfill'

export default defineConfig({
  ⋮
  optimizeDeps: {
    esbuildOptions: {
      // 2️⃣
      plugins: [
        NodeModulesPolyfills(),
        GlobalsPolyfills({
          process: true,
          buffer: true,
        }),
      ],
      // 3️⃣
      define: {
        global: 'globalThis',
      },
    },
  },
})
Note: The polyfills add considerable size to the build output.
Option 2: Use the pre-bundled script
web3 distributes a bundled script at web3/dist/web3.min.js, which can run in the browser without any configuration (listed as "pure js"). You could configure a resolve.alias to pull in that file:
// vite.config.js
import { defineConfig } from 'vite'

export default defineConfig({
  ⋮
  resolve: {
    alias: {
      web3: 'web3/dist/web3.min.js',
    },
  },
})
Note: This option produces 469.4 KiB smaller output than Option 1.
QUESTION
how to connect an aws api gateway to a private lambda function inside a vpc
Asked 2022-Feb-20 at 12:53
I am trying to connect an AWS API Gateway to a Lambda function residing in a VPC, then retrieve a secret from Secrets Manager to access a database, using Python code with boto3. The database and VPC endpoint were created in a private subnet.
Lambda function:

# imports added for completeness (implied by the code below)
import json
import boto3


def test_secret():

    secret = "mysecret"
    region = "MY-REGION"  # :)

    session = boto3.session.Session()
    client = session.client(
        service_name="secretsmanager",
        region_name=region
    )
    secret_value_response = client.get_secret_value(SecretId=secret)

    try:
        result = json.loads(secret_value_response["SecretString"])

    except Exception as e:
        result = "Error found: {}".format(e)
    return result


def handler(event, context):

    get_secrets = test_secret()  # THE CODE FAIL HERE IN CLOUDWATCH

    try:
        some_string = event["queryStringParameters"]["some_string"]

        response = {}

        response["statusCode"] = 200
        response["body"] = some_string + " " + get_secrets["name"]

        print("secrets: ", some_string + " " + get_secrets["name"])

    except Exception as e:
        response = "Error: {}".format(e)

    return response
Security group (Terraform):

resource "aws_security_group" "db" {
  name   = "db"
  vpc_id = aws_vpc.default.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Lambda function and IAM role (Terraform):

resource "aws_lambda_function" "lambda_test" {
  function_name = "lambda-test"
  ...

  # Attach Lambda to VPC
  vpc_config {
    subnet_ids         = [aws_subnet.private_subnet.id]
    security_group_ids = [aws_security_group.db.id]
  }
}

resource "aws_iam_policy" "lambda_test" {
  name = "lambda-test"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses",
        "autoscaling:CompleteLifecycleAction",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}",
        "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": [
        "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}",
        "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}/*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role" "lambda_test_role" {
  name = "lambda-test-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "secretsmanager.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "lambda_test" {
  policy_arn = aws_iam_policy.lambda_test.arn
  role       = aws_iam_role.lambda_test_role.name
}

resource "aws_iam_role_policy_attachment" "lambda_test_vpc_access" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
  role       = aws_iam_role.lambda_test_role.name
}
VPC endpoint (Terraform):

resource "aws_vpc_endpoint" "vpc_endpoint" {
  vpc_id            = aws_vpc.default.id
  service_name      = "com.amazonaws.${var.AWS_REGION}.secretsmanager"
  vpc_endpoint_type = "Interface"

  security_group_ids  = [aws_security_group.db.id]
  private_dns_enabled = true

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Principal": "*",
      "Resource": "*"
    }
  ]
}
EOF
}
Without trying to access secretsmanager, the Lambda itself works fine: I am able to access the URL endpoint, provide parameters, then see the result in the CloudWatch logs. But as soon as I try to call secretsmanager in the Lambda function endpoint, the page returns {"message": "Internal server error"}, and when I look at the logs it says {"errorMessage": "Could not connect to the endpoint URL: \"https://secretsmanager.REGIONHIDDEN.amazonaws.com/\"", "errorType": "EndpointConnectionError"}.
Is there anything that I am doing wrong above?
ANSWER
Answered 2022-Feb-19 at 21:44
If you can call the Lambda function from API Gateway, then your question title "how to connect an aws api gateway to a private lambda function inside a vpc" is already complete and working.
It appears that your actual problem is simply accessing Secrets Manager from inside a Lambda function running in a VPC.
It's also strange that you are assigning a "db" security group to the Lambda function. What are the inbound/outbound rules of this Security Group?
It is entirely unclear why you created a VPC endpoint. What are we supposed to make of service_name = "foo"? What is service "foo"? How is this VPC endpoint related to the Lambda function in any way? If this is supposed to be a VPC endpoint for Secrets Manager, then the service name should be "com.amazonaws.YOUR-REGION.secretsmanager".
If you need more help, edit your question to provide the following: the inbound and outbound rules of any relevant security groups, and the Lambda function code that is trying to call Secrets Manager.
Update: After clarifications in comments and the updated question, I think the problem is you are missing any subnet assignments for the VPC Endpoint. Also, since you are adding a VPC policy with full access, you can just leave that out entirely, as the default policy is full access. I suggest changing the VPC endpoint to the following:
resource "aws_vpc_endpoint" "vpc_endpoint" {
  vpc_id            = aws_vpc.default.id
  service_name      = "com.amazonaws.${var.AWS_REGION}.secretsmanager"
  vpc_endpoint_type = "Interface"

  subnet_ids          = [aws_subnet.private_subnet.id]
  security_group_ids  = [aws_security_group.db.id]
  private_dns_enabled = true
}
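One detail worth spelling out (my addition, not part of the original answer): an Interface endpoint places an elastic network interface in each subnet you list, so without subnet_ids there is no ENI for the Lambda to reach, which is consistent with the EndpointConnectionError in the question. The endpoint's security group must also allow inbound HTTPS (port 443) from the Lambda, which the shared "db" security group above already does.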
Update 2: This part of your Lambda function's IAM policy is wrong:
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": [
        "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}",
        "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}/*"
      ]
    }
That gives the Lambda access to a secret, with an ARN of a Lambda function, which is not a valid secret ARN. It should be the following:
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "${data.aws_secretsmanager_secret.my_secret.arn}"
    }
Also this part of your policy is messed up:
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses",
        "autoscaling:CompleteLifecycleAction",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}",
        "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}/*"
      ]
You are assigning this policy to a Lambda function. The resources you list in the policy are the resources the Lambda function should have access to; you don't list the Lambda function itself as the resource. I'm not sure how to fix that part of the policy: it needs to be split into multiple sections, or the resource list just replaced with "*".
Also, when you refer to a resource's .arn value in Terraform, you will get the full ARN, so you shouldn't be prefixing it with anything.
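Putting those two fixes together, one possible shape for the corrected policy is sketched below (my sketch, not code from the original answer: it keeps the log and ENI actions on Resource "*", mirroring the AWS-managed AWSLambdaVPCAccessExecutionRole policy, and scopes secretsmanager:GetSecretValue to the secret's own ARN):

resource "aws_iam_policy" "lambda_test" {
  name = "lambda-test"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "${data.aws_secretsmanager_secret.my_secret.arn}"
    }
  ]
}
EOF
}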
QUESTION
Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically
Asked 2022-Feb-15 at 13:50
Just today, whenever I run terraform apply, I see an error something like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. It was working yesterday.
Following is the command I run: terraform init && terraform apply
Following is the list of initialized provider plugins:
- Finding latest version of hashicorp/archive...
- Finding latest version of hashicorp/aws...
- Finding latest version of hashicorp/null...
- Installing hashicorp/null v3.1.0...
- Installed hashicorp/null v3.1.0 (signed by HashiCorp)
- Installing hashicorp/archive v2.2.0...
- Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.0.0...
- Installed hashicorp/aws v4.0.0 (signed by HashiCorp)
Following are the errors:
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│ 1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided
│ automatically based on the result of applying this configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│ 1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "server_side_encryption_configuration": its
│ value will be decided automatically based on the result of applying this
│ configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 3, in resource "aws_s3_bucket" "this":
│ 3: acl = "private"
│
│ Can't configure a value for "acl": its value will be decided automatically
│ based on the result of applying this configuration.
╵
ERRO[0012] 1 error occurred:
 * exit status 1
My code is as follows:
resource "aws_s3_bucket" "this" {
  bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = data.aws_kms_key.s3.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  lifecycle_rule {
    id      = "backups"
    enabled = true

    prefix = "backups/"

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 365
    }
  }

  tags = {
    Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
    Environment = var.environment
  }
}
ANSWER
Answered 2022-Feb-15 at 13:49
The Terraform AWS Provider was upgraded to version 4.0.0, published on 10 February 2022.
Major changes in the release include:
- Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource.
- Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details.
- Version 4.0.0 and later 4.x.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12-0.15.
The reason for this change, as given by HashiCorp: to help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource, and once updated, the new aws_s3_bucket_* resources should be imported into Terraform state.
So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor
The new working code looks like this:
resource "aws_s3_bucket" "this" {
  bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"

  tags = {
    Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
    Environment = var.environment
  }
}

resource "aws_s3_bucket_acl" "this" {
  bucket = aws_s3_bucket.this.id
  acl    = "private"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = data.aws_kms_key.s3.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    id     = "backups"
    status = "Enabled"

    filter {
      prefix = "backups/"
    }

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 365
    }
  }
}
If you don't want to upgrade your Terraform AWS Provider to version 4.0.0, you can pin the existing or an older version by specifying it explicitly in the code, as below:
terraform {
  required_version = "~> 1.0.11"
  required_providers {
    aws = "~> 3.73.0"
  }
}
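For reference, the same pin can also be written in the expanded required_providers form with an explicit source address, which the Terraform documentation recommends on Terraform 0.13 and later. A minimal equivalent sketch, reusing the versions from the snippet above:
terraform {
  required_version = "~> 1.0.11"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.73.0"
    }
  }
}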
QUESTION
Programmatically Connecting a GitHub repo to a Google Cloud Project
Asked 2022-Feb-12 at 16:16I'm working on a Terraform project that will set up all the GCP resources needed for a large project spanning multiple GitHub repos. My goal is to be able to recreate the cloud infrastructure from scratch completely with Terraform.
The issue I'm running into is that, in order to set up build triggers in GCP with Terraform, the GitHub repo that sets off the trigger first needs to be connected. Currently, I've only been able to do that manually via the Google Cloud Build dashboard. I'm not sure if this is possible via Terraform or with a script, but I'm looking for any solution I can automate this with. Once the projects are connected, updating everything with Terraform works fine.
TLDR; How can I programmatically connect a GitHub project with a GCP project instead of using the dashboard?
ANSWER
Answered 2022-Feb-12 at 16:16
Currently there is no way to programmatically connect a GitHub repo to a Google Cloud Project. This must be done manually via Google Cloud.
My workaround is to manually connect an "admin" project, build containers and save them to that project's artifact registry, and then deploy the containers from the registry in the programmatically generated project.
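For what it's worth, once a repository has been connected by hand, the build trigger itself can still be managed declaratively. A minimal sketch using google_cloudbuild_trigger, where my-org/my-repo is a hypothetical repository that is assumed to have already been connected through the Cloud Build dashboard:
resource "google_cloudbuild_trigger" "example" {
  name     = "example-trigger"
  filename = "cloudbuild.yaml"

  # This only works after my-org/my-repo has been connected
  # to the project manually in the Cloud Build console.
  github {
    owner = "my-org"
    name  = "my-repo"
    push {
      branch = "^main$"
    }
  }
}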
QUESTION
Kubernetes NodePort is not available on all nodes - Oracle Cloud Infrastructure (OCI)
Asked 2022-Jan-31 at 14:37I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.
I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.
The goal is:
- A running managed Kubernetes cluster (OKE)
- 2 nodes at least
- 1 service that's accessible for external parties
The infra looks the following:
- A VCN for the whole thing
- A private subnet on 10.0.1.0/24
- A public subnet on 10.0.0.0/24
- NAT gateway for the private subnet
- Internet gateway for the public subnet
- Service gateway
- The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
- A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled
- A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
- A namespace in the K8S cluster (call it staging for now)
- A deployment which refers to a custom NextJS application serving traffic on port 3000
And now it's the point where I want to expose the service running on port 3000.
I have 2 obvious choices:
- Create a LoadBalancer service in K8S, which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster, plus adjust the subnet security lists to make sure traffic can flow
- Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer
The first one works perfectly fine, but I want to run this cluster with minimal costs, so I decided to experiment with option 2, the NLB, since it's way cheaper (zero cost).
Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I can't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.
The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.
That's my problem and I couldn't figure out what could be the issue.
What I've tried so far:
- Switching from ARM machines to AMD ones - no change
- Created a bastion host in the public subnet to test which nodes are responding to requests. Turned out only the node that's running the pod responds.
- Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
- Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
- Ran the Node Doctor on the nodes, everything is fine
- Checked the logs of kube-proxy, kube-flannel, core-dns, no error
- Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
- Recreated the cluster from scratch
Edit: Some update. I've tried using a DaemonSet instead of a regular Deployment for the pod to ensure that, as a temporary solution, all nodes are running at least one instance of the pod, and surprise: the node that was previously not responding to requests on that specific port still does not, even though a pod is running on it.
Edit2: Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.
Edit3: Checked whether the NodePort is open on the node that's not responding, and it is; at least kube-proxy is listening on it.
tcp 0 0 0.0.0.0:31600 0.0.0.0:* LISTEN 16671/kube-proxy
Edit4: Tried adding whitelisting iptables rules, but it didn't change anything.
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P FORWARD ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P INPUT ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P OUTPUT ACCEPT
Edit5: Just as a trial, I created a LoadBalancer once more to verify whether I'd gone completely mental and just hadn't noticed this error before, or whether it really works. Funny thing: it works perfectly fine through the classic load balancer's IP. But when I try to send a request to the nodes directly on the port that was opened for the load balancer (it's 30679 for now), I get a response only from the node that's running the pod. From the other, still nothing; yet through the load balancer, I get 100% successful responses.
Bonus: here's the iptables output from the node that's not responding to requests; not too sure what to look for:
[opc@oke-cn44eyuqdoq-n3ewna4fqra-sx5p5dalkuq-1 ~]$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT all -- 10.244.0.0/16 anywhere
ACCEPT all -- anywhere 10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain KUBE-EXTERNAL-SERVICES (2 references)
target prot opt source destination

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !loopback/8 loopback/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination

Chain KUBE-NODEPORTS (1 references)
target prot opt source destination

Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination

Chain KUBE-SERVICES (2 references)
target prot opt source destination
Service spec (the running one since it was generated using Terraform):
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "creationTimestamp": "2022-01-28T09:13:33Z",
    "name": "web-staging-service",
    "namespace": "web-staging",
    "resourceVersion": "22542",
    "uid": "c092f99b-7c72-4c32-bf27-ccfa1fe92a79"
  },
  "spec": {
    "clusterIP": "10.96.99.112",
    "clusterIPs": [
      "10.96.99.112"
    ],
    "externalTrafficPolicy": "Cluster",
    "ipFamilies": [
      "IPv4"
    ],
    "ipFamilyPolicy": "SingleStack",
    "ports": [
      {
        "nodePort": 31600,
        "port": 3000,
        "protocol": "TCP",
        "targetPort": 3000
      }
    ],
    "selector": {
      "app": "frontend"
    },
    "sessionAffinity": "None",
    "type": "NodePort"
  },
  "status": {
    "loadBalancer": {}
  }
}
Any ideas are appreciated. Thanks guys.
ANSWER
Answered 2022-Jan-31 at 12:06
Might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local? This would prevent the health checks from failing on the nodes which don't run the application, so traffic will only be forwarded to the node where the application is running. Setting externalTrafficPolicy to Local is also a requirement for preserving the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB would change, and the same needs to be applied to the NLB.
Edit: Also note that you need a security list/network security group added to your node subnet/nodepool which allows traffic on all protocols from the worker node subnet.
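Since the Service in the question is generated with Terraform, the change can be sketched with the kubernetes provider's kubernetes_service resource. This is a minimal sketch rather than the asker's actual module; the names and ports are copied from the spec above, and external_traffic_policy is the HCL attribute corresponding to externalTrafficPolicy:
resource "kubernetes_service" "web_staging" {
  metadata {
    name      = "web-staging-service"
    namespace = "web-staging"
  }

  spec {
    type = "NodePort"

    # Local: kube-proxy only answers on nodes running a ready pod,
    # and the NLB health checks take the other nodes out of rotation.
    external_traffic_policy = "Local"

    selector = {
      app = "frontend"
    }

    port {
      port        = 3000
      target_port = 3000
      node_port   = 31600
      protocol    = "TCP"
    }
  }
}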
QUESTION
Can you pass blocks as variables in Terraform, referencing the type of a resource's nested block contents?
Asked 2021-Dec-20 at 02:40I am trying to build in Terraform a Web ACL resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl
This resource has the nested blocks rule->action->block and rule->action->count.
I would like to have a variable whose type allows me to set the action to either count {} or block {}, so that the two following configurations are possible:
With block:
resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action {
      block {}
    }

    ...
}
With count:
resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action {
      count {}
    }

    ...
}
So far I can achieve this result with a boolean variable and dynamic blocks, in a very non-declarative way.
My question is, can the type of a variable reference the type of a nested block, so that the content of the nested block can be passed in a variable?
What I am trying to achieve is something that would look similar to this (non-working syntax):
resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action = var.action_block

    ...
  }
}
variable "action_block" {
  description = "Action of the rule"
  type        = <whatever type is accepted by aws_wafv2_web_acl->rule->action>
}
so that it can be passed down in a similar manner to this:
module "my_waf" {
  source = "../modules/waf"
  action_block {
    block {}
  }
}
For reference, what I am trying to avoid:
  dynamic "action" {
    for_each = var.block ? [] : [1]
    content {
      count {}
    }
  }

  dynamic "action" {
    for_each = var.block ? [1] : []
    content {
      block {}
    }
  }
Thank you so much for your help!
ANSWER
Answered 2021-Dec-20 at 02:40
The only marginal improvement I can imagine is to move the dynamic blocks one level deeper, to perhaps make it clear to a reader that the action block will always be present, and it's the count or block blocks inside that have dynamic behavior:
  action {
    dynamic "count" {
      for_each = var.block ? [] : [1]
      content {}
    }
    dynamic "block" {
      for_each = var.block ? [1] : []
      content {}
    }
  }
There are some other ways you could formulate those two for_each expressions so that the input could have a different shape, but you'll need to write out a suitable type constraint for that variable yourself which matches whatever conditions you want to apply to it.
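For completeness, here is the toggle variable that those two for_each expressions rely on; its declaration is implied by the snippets above but never shown, so this is a minimal sketch assuming a plain boolean shape:
variable "block" {
  description = "If true, the rule action is block {}; otherwise it is count {}."
  type        = bool
  default     = true
}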
QUESTION
trigger lambda function from DynamoDB
Asked 2021-Nov-17 at 22:35Every time a new item arrives in my dynamo table, I want to run a lambda function trigger_lambda_function
. This is how I define my table and trigger. However, the trigger does not work as expected.
resource "aws_dynamodb_table" "filenames" {
  name           = local.dynamodb_table_filenames
  billing_mode   = "PROVISIONED"
  read_capacity  = 1000
  write_capacity = 1000
  hash_key       = "filename"

  #range_key = ""

  attribute {
    name = "filename"
    type = "S"
  }

  tags = var.tags
}


resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {
  event_source_arn  = aws_dynamodb_table.filenames.stream_arn
  function_name     = aws_lambda_function.trigger_stepfunction_lambda.arn
  starting_position = "LATEST"
}
Upon terraform apply, I get an error that:
│ Error: error creating Lambda Event Source Mapping (): InvalidParameterValueException: Unrecognized event source.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "5ae68da6-3f6d-4adb-b104-72ae584dbca7"
│   },
│   Message_: "Unrecognized event source.",
│   Type: "User"
│ }
│
│   with module.ingest_system["alpegatm"].aws_lambda_event_source_mapping.allow_dynamodb_table_to_trigger_lambda,
│   on ../../modules/ingest_system/dynamo.tf line 39, in resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda":
│   39: resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {
I also tried .arn instead of stream_arn, but that threw an error too. What else could I try?
I followed the documentation for the trigger: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_event_source_mapping
ANSWER
Answered 2021-Nov-17 at 22:35
From the aws_dynamodb_table docs, stream_arn is only available if stream_enabled is set to true. You might want to add stream_enabled = true to your DynamoDB table definition.
By default, stream_enabled is set to false. You can see all the default values in the aws_dynamodb_table docs.
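A sketch of the table definition with the stream turned on. The stream_view_type value here is an assumption (NEW_IMAGE is sufficient if the Lambda only needs the newly arrived item); everything else mirrors the question's configuration:
resource "aws_dynamodb_table" "filenames" {
  name           = local.dynamodb_table_filenames
  billing_mode   = "PROVISIONED"
  read_capacity  = 1000
  write_capacity = 1000
  hash_key       = "filename"

  # Enabling the stream is what makes stream_arn available
  # for aws_lambda_event_source_mapping to subscribe to.
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"

  attribute {
    name = "filename"
    type = "S"
  }

  tags = var.tags
}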
QUESTION
Terraform: Inappropriate value for attribute "ingress" while creating SG
Asked 2021-Nov-02 at 04:36I'm creating a Security group using terraform, and when I'm running terraform plan. It is giving me an error like some fields are required, and all those fields are optional.
Terraform Version: v1.0.5
AWS Provider version: v3.57.0
main.tf
resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  ingress = [
    {
      description = "HTTP"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      description = "HTTPS"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },

    {
      description = "SSH"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]


  egress = [
    {
      description      = "for all outgoing traffics"
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]

    }
  ]

  tags = {
    Name = "sg-for-subnet"
  }
}
error in console
│ Inappropriate value for attribute "ingress": element 0: attributes "ipv6_cidr_blocks", "prefix_list_ids", "security_groups", and "self" are required.

│ Inappropriate value for attribute "egress": element 0: attributes "prefix_list_ids", "security_groups", and "self" are required.
I'm following this doc: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group
Any help would be appreciated.
ANSWER
Answered 2021-Sep-06 at 21:28
Since you are using Attributes as Blocks, you have to provide values for all options:
resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  ingress = [
    {
      description      = "HTTP"
      from_port        = 80
      to_port          = 80
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    },
    {
      description      = "HTTPS"
      from_port        = 443
      to_port          = 443
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    },

    {
      description      = "SSH"
      from_port        = 22
      to_port          = 22
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    }
  ]


  egress = [
    {
      description      = "for all outgoing traffics"
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    }
  ]

  tags = {
    Name = "sg-for-subnet"
  }
}
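As an aside, if the repeated empty attributes feel noisy, the same rules can be written in ingress/egress block syntax (without the = sign), where omitted optional arguments are filled in automatically. A short sketch of one rule per direction in that style, reusing the names from above:
resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  # Block syntax: optional attributes may simply be omitted.
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "for all outgoing traffics"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}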
QUESTION
How to fix "Function not implemented - Failed to initialize inotify (Errno::ENOSYS)" in rails
Asked 2021-Oct-31 at 17:41So I'm running the new Apple M1 Pro chipset, and the original M1 chip on another machine, and when I attempt to create new RSpec tests in ruby I get the following error.
Function not implemented - Failed to initialize inotify (Errno::ENOSYS)
the full stack dump looks like this
/var/lib/gems/2.7.0/gems/rb-inotify-0.10.1/lib/rb-inotify/notifier.rb:69:in `initialize': Function not implemented - Failed to initialize inotify (Errno::ENOSYS)
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/linux.rb:31:in `new'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/linux.rb:31:in `_configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:45:in `block in configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:40:in `each'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:40:in `configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:63:in `start'
        from /usr/lib/ruby/2.7.0/forwardable.rb:235:in `start'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/listener.rb:68:in `block in <class:Listener>'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:121:in `instance_eval'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:121:in `call'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:91:in `transition_with_callbacks!'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:57:in `transition'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/listener.rb:91:in `start'
        from /var/lib/gems/2.7.0/gems/spring-watcher-listen-2.0.1/lib/spring/watcher/listen.rb:27:in `start'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:80:in `start_watcher'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:89:in `preload'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:157:in `serve'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:145:in `block in run'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:139:in `loop'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:139:in `run'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application/boot.rb:19:in `<top (required)>'
        from /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'
        from /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'
        from -e:1:in `<main>'
Rails is running from a Docker container, and I have tried following the solution listed below, but no such luck. I'm fairly new to Ruby and Rails, so any help would be greatly appreciated!
https://github.com/evilmartians/terraforming-rails/issues/34
ANSWER
Answered 2021-Oct-31 at 17:41
Update: to fix this issue I used the solution from @mahatmanich listed here:
https://stackoverflow.com/questions/31857365/rails-generate-commands-hang-when-trying-to-create-a-model
Essentially, we need to delete the bin directory and then re-create it using rake app:update:bin.
Since Rails 5, some 'rake' commands are encapsulated within the 'rails' command. However, when one deletes the 'bin/' directory, one is also removing the 'rails' command itself, so one needs to go back to 'rake' for the reset, since 'rails' is no longer available but 'rake' still is.
Community Discussions contain sources that include Stack Exchange Network
Tutorials and Learning Resources in Terraform
Tutorials and Learning Resources are not available at this moment for Terraform