Explore all Terraform open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Terraform

terraform

v1.1.9

backstage

v1.1.1

salt

v3004.1

pulumi

v3.30.0

terraformer

0.8.19

Popular Libraries in Terraform

terraform

by hashicorp · Go

32174 stars · MPL-2.0

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
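As a sketch of that declarative model, a minimal configuration might look like the following; the provider, region, and AMI ID are illustrative placeholders, not values taken from this page:

```hcl
# Hypothetical minimal configuration; the AMI ID and region are
# placeholders for illustration only.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t3.micro"
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` then drives the create/change/improve lifecycle the description refers to.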

devops-exercises

by bregman-arie · Python

22045 stars · NOASSERTION

Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions

backstage

by backstage · TypeScript

16116 stars · Apache-2.0

Backstage is an open platform for building developer portals

salt

by saltstack · Python

12241 stars · Apache-2.0

Software to automate the management and configuration of any infrastructure or application at scale.

pulumi

by pulumi · Go

12104 stars · Apache-2.0

Pulumi - Developer-First Infrastructure as Code. Your Cloud, Your Language, Your Way 🚀

terraformer

by GoogleCloudPlatform · Go

7233 stars · Apache-2.0

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code

terraform-provider-aws

by hashicorp · Go

7174 stars · MPL-2.0

Terraform AWS provider

chef

by chef · Ruby

6865 stars · Apache-2.0

Chef Infra is a powerful automation platform that transforms infrastructure into code, automating how infrastructure is configured, deployed, and managed across any environment, at any scale.

infracost

by infracost · Go

6374 stars · Apache-2.0

Cloud cost estimates for Terraform in pull requests💰📉 Love your cloud bill!

Trending New libraries in Terraform

backstage

by backstage · TypeScript

16116 stars · Apache-2.0

Backstage is an open platform for building developer portals

infracost

by infracost · Go

6374 stars · Apache-2.0

Cloud cost estimates for Terraform in pull requests💰📉 Love your cloud bill!

terraform-cdk

by hashicorp · TypeScript

3425 stars · MPL-2.0

Define infrastructure resources using programming constructs and provision them using HashiCorp Terraform

cloudquery

by cloudquery · Go

2221 stars · MPL-2.0

The open-source cloud asset inventory powered by SQL.

driftctl

by cloudskiff · Go

1603 stars · Apache-2.0

Detect, track and alert on infrastructure drift

engine

by Qovery · Rust

1460 stars · GPL-3.0

The simplest way to deploy your apps on any cloud provider

inframap

by cycloidio · Go

805 stars · MIT

Read your tfstate or HCL to generate a graph specific for each provider, showing only the resources that are most important/relevant.

terraform-aws-next-js

by milliHQ · TypeScript

796 stars · Apache-2.0

Terraform module for building and deploying Next.js apps to AWS. Supports SSR (Lambda), Static (S3) and API (Lambda) pages.
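As a hedged sketch of how a Terraform module like this is typically consumed, something like the block below would apply; note that the registry source path and the output name are assumptions for illustration, not verified against the module's documentation:

```hcl
# Hypothetical module usage; "milliHQ/next-js/aws" and the
# "cloudfront_domain_name" output are assumed names, not confirmed.
module "next_js" {
  source = "milliHQ/next-js/aws"
}

output "app_url" {
  value = module.next_js.cloudfront_domain_name
}
```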

cloudformation-guard

by aws-cloudformation · Rust

737 stars · Apache-2.0

Guard offers a policy-as-code domain-specific language (DSL) to write rules and validate JSON- and YAML-formatted data such as CloudFormation Templates, K8s configurations, and Terraform JSON plans/configurations against those rules.

Top Authors in Terraform

1. hashicorp · 90 Libraries · 58073 stars

2. aws-samples · 19 Libraries · 573 stars

3. GoogleCloudPlatform · 17 Libraries · 9272 stars

4. microsoft · 14 Libraries · 1045 stars

5. infrablocks · 14 Libraries · 199 stars

6. apparentlymart · 14 Libraries · 390 stars

7. claranet · 13 Libraries · 235 stars

8. Azure · 11 Libraries · 702 stars

9. pulumi · 11 Libraries · 12588 stars

10. paultyng · 10 Libraries · 425 stars


Trending Kits in Terraform

No Trending Kits are available at this moment for Terraform

Trending Discussions on Terraform

json.Marshal(): json: error calling MarshalJSON for type msgraph.Application

Web3js fails to import in Vue3 composition api project

how to connect an aws api gateway to a private lambda function inside a vpc

Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically

Programmatically Connecting a GitHub repo to a Google Cloud Project

Kubernetes NodePort is not available on all nodes - Oracle Cloud Infrastructure (OCI)

Can you pass blocks as variables in Terraform, referencing the type of a resource's nested block contents?

trigger lambda function from DynamoDB

Terraform: Inappropriate value for attribute "ingress" while creating SG

How to fix "Function not implemented - Failed to initialize inotify (Errno::ENOSYS)" in rails

QUESTION

json.Marshal(): json: error calling MarshalJSON for type msgraph.Application

Asked 2022-Mar-27 at 23:59

What specific syntax or configuration changes must be made to resolve the error below, in which Terraform fails to create an instance of azuread_application?

THE CODE:

The Terraform code that triggers the error when terraform apply is run is as follows:

variable "tenantId" { }
variable "clientId" { }
variable "clientSecret" { }
variable "instanceName" { }

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.5.0"
    }
  }
}

provider "azuread" {
  tenant_id     = var.tenantId
  client_id     = var.clientId
  client_secret = var.clientSecret
}

resource "azuread_application" "appRegistration" {
  display_name = var.instanceName
  app_role {
    allowed_member_types = ["User", "Application"]
    description          = "Admins can manage roles and perform all task actions"
    display_name         = "Admin"
    enabled              = true
    id                   = "1b19509b-32b1-4e9f-b71d-4992aa991967"
    value                = "admin"
  }
}

THE ERROR:

The error and log output that result from running the above code with terraform apply are:

2021/10/05 17:47:18 [DEBUG] module.ad-admin.azuread_application.appRegistration:
 apply errored, but we're indicating that via the Error pointer rather than returning it:
 Could not create application: json.Marshal():
 json: error calling MarshalJSON for type msgraph.Application:
 json: error calling MarshalJSON for type *msgraph.Owners: marshaling Owners: encountered DirectoryObject with nil ODataId

2021/10/05 17:47:18 [TRACE] EvalMaybeTainted: module.ad-admin.azuread_application.appRegistration encountered an error during creation, so it is now marked as tainted
2021/10/05 17:47:18 [TRACE] EvalWriteState: removing state object for module.ad-admin.azuread_application.appRegistration
2021/10/05 17:47:18 [TRACE] EvalApplyProvisioners: azuread_application.appRegistration has no state, so skipping provisioners
2021/10/05 17:47:18 [TRACE] EvalMaybeTainted: module.ad-admin.azuread_application.appRegistration encountered an error during creation, so it is now marked as tainted
2021/10/05 17:47:18 [TRACE] EvalWriteState: removing state object for module.ad-admin.azuread_application.appRegistration
2021/10/05 17:47:18 [TRACE] vertex "module.ad-admin.azuread_application.appRegistration": visit complete

2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.output.application_id (expand)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azuread_service_principal.appRegistrationSP" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "output.application_id" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.output.appId (expand)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azuread_service_principal_password.appRegistrationSP_pwd" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "output.appId" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azurerm_role_assignment.appRegistrationSP_role_assignment_vault" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.azurerm_role_assignment.appRegistrationSP_role_assignment" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.provider[\"registry.terraform.io/hashicorp/azuread\"] (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin.provider[\"registry.terraform.io/hashicorp/azurerm\"] (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "module.ad-admin (close)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021/10/05 17:47:18 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: creating backup snapshot at terraform.tfstate.backup
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 391
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info

Error: Could not create application

  on ..\..\..\..\modules\ad-admin\active-directory.tf line 69, in resource "azuread_application" "appRegistration":
  69: resource "azuread_application" "appRegistration" {

json.Marshal(): json: error calling MarshalJSON for type msgraph.Application:
json: error calling MarshalJSON for type *msgraph.Owners: marshaling Owners:
2021/10/05 17:47:18 [TRACE] statemgr.Filesystem: unlocked by closing terraform.tfstate
encountered DirectoryObject with nil ODataId

terraform -version gives:

Terraform v1.0.8 on windows_amd64

ANSWER

Answered 2021-Oct-07 at 18:35

This was a bug, reported as a GitHub issue.

The resolution is to upgrade the azuread provider from version 2.5.0 to 2.6.0 in the required_providers block of the code in the OP above, as follows:

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.6.0"
    }
  }
}

Source https://stackoverflow.com/questions/69459069
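Beyond pinning an exact release as the answer does, Terraform's version-constraint syntax also supports a pessimistic operator that tracks compatible future releases. A minimal sketch; the constraint value here is illustrative, not part of the answer above:

```hcl
terraform {
  required_providers {
    azuread = {
      source = "hashicorp/azuread"
      # "~> 2.6" allows any 2.x release from 2.6 onward (>= 2.6, < 3.0),
      # so later bug-fix releases are picked up by `terraform init -upgrade`.
      version = "~> 2.6"
    }
  }
}
```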

QUESTION

Web3js fails to import in Vue3 composition api project

Asked 2022-Mar-14 at 03:36

I've created a brand new project with npm init vite bar -- --template vue. I've done an npm install web3 and I can see my package-lock.json includes this package. My node_modules directory also includes the web3 modules.

So then I added this line to main.js:

import { createApp } from 'vue'
import App from './App.vue'
import Web3 from 'web3'   // <-- this line

createApp(App).mount('#app')

And I get the following error: Uncaught ReferenceError: process is not defined

I don't understand what is going on here. I'm fairly new to using npm so I'm not super sure what to Google. The errors are coming from node_modules/web3/lib/index.js, node_modules/web3-core/lib/index.js, node_modules/web3-core-requestmanager/lib/index.js, and finally node_modules/util/util.js. I suspect it has to do with one of these:

  1. I'm using Vue 3
  2. I'm using Vue 3 Composition API
  3. I'm using Vue 3 Composition API SFC <script setup> tag (but I imported it in main.js so I don't think it is this one)
  4. web3js is in Typescript and my Vue3 project is not configured for Typescript

But as I am fairly new to JavaScript and Vue and Web3 I am not sure how to focus my Googling on this error. My background is Python, Go, Terraform. Basically the back end of the back end. Front end JavaScript is new to me.

How do I go about resolving this issue?

ANSWER

Answered 2022-Mar-14 at 03:36
Option 1: Polyfill Node globals/modules

Polyfilling the Node globals and modules enables the web3 import to run in the browser:

  1. Install the ESBuild plugins that polyfill Node globals/modules:
npm i -D @esbuild-plugins/node-globals-polyfill
npm i -D @esbuild-plugins/node-modules-polyfill
  2. Configure optimizeDeps.esbuildOptions to use these ESBuild plugins.

  3. Configure define to replace global with globalThis (the browser equivalent).

// vite.config.js
import { defineConfig } from 'vite'
import GlobalsPolyfills from '@esbuild-plugins/node-globals-polyfill'
import NodeModulesPolyfills from '@esbuild-plugins/node-modules-polyfill'

export default defineConfig({
  optimizeDeps: {
    esbuildOptions: {
      // 2️⃣ apply the polyfill plugins
      plugins: [
        NodeModulesPolyfills(),
        GlobalsPolyfills({
          process: true,
          buffer: true,
        }),
      ],
      // 3️⃣ replace `global` with `globalThis`
      define: {
        global: 'globalThis',
      },
    },
  },
})

demo 1

Note: The polyfills add considerable size to the build output.

Option 2: Use pre-bundled script

web3 distributes a bundled script at web3/dist/web3.min.js, which can run in the browser without any configuration (listed as "pure js"). You could configure a resolve.alias to pull in that file:

// vite.config.js
import { defineConfig } from 'vite'

export default defineConfig({
  resolve: {
    alias: {
      web3: 'web3/dist/web3.min.js',
    },
  },
})

demo 2

Note: This option produces 469.4 KiB smaller output than Option 1.

Source https://stackoverflow.com/questions/68975837

QUESTION

how to connect an aws api gateway to a private lambda function inside a vpc

Asked 2022-Feb-20 at 12:53

I am trying to connect an AWS API Gateway to a Lambda function residing in a VPC, and then retrieve a secret from Secrets Manager in order to access a database, using Python code with boto3. The database and the VPC endpoint were created in a private subnet.

lambda function
import json

import boto3


def test_secret():
    secret = "mysecret"
    region = "MY-REGION"

    session = boto3.session.Session()
    client = session.client(
        service_name="secretsmanager",
        region_name=region
    )
    secret_value_response = client.get_secret_value(SecretId=secret)

    try:
        result = json.loads(secret_value_response["SecretString"])
    except Exception as e:
        result = "Error found: {}".format(e)
    return result


def handler(event, context):
    get_secrets = test_secret()  # THE CODE FAILS HERE IN CLOUDWATCH

    try:
        some_string = event["queryStringParameters"]["some_string"]

        response = {}
        response["statusCode"] = 200
        response["body"] = some_string + " " + get_secrets["name"]

        print("secrets: ", some_string + " " + get_secrets["name"])
    except Exception as e:
        response = "Error: {}".format(e)

    return response
TERRAFORM security group
resource "aws_security_group" "db" {
  name   = "db"
  vpc_id = aws_vpc.default.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
lambda
resource "aws_lambda_function" "lambda_test" {
  function_name = "lambda-test"
  ...

  # Attach Lambda to VPC
  vpc_config {
    subnet_ids         = [aws_subnet.private_subnet.id]
    security_group_ids = [aws_security_group.db.id]
  }
}

resource "aws_iam_policy" "lambda_test" {
  name = "lambda-test"

  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs",
                "ec2:DescribeNetworkInterfaces",
                "ec2:CreateNetworkInterface",
                "ec2:DeleteNetworkInterface",
                "ec2:AttachNetworkInterface",
                "ec2:AssignPrivateIpAddresses",
                "ec2:UnassignPrivateIpAddresses",
                "autoscaling:CompleteLifecycleAction",
                "secretsmanager:GetSecretValue"
            ],
            "Resource": [
              "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}",
              "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": [
              "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}",
              "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}/*"
            ]
        }
    ]
}
EOF
}

resource "aws_iam_role" "lambda_test_role" {
  name = "lambda-test-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "secretsmanager.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "lambda_test" {
  policy_arn = aws_iam_policy.lambda_test.arn
  role       = aws_iam_role.lambda_test_role.name
}

resource "aws_iam_role_policy_attachment" "lambda_test_vpc_access" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
  role       = aws_iam_role.lambda_test_role.name
}
vpc endpoint
resource "aws_vpc_endpoint" "vpc_endpoint" {
  vpc_id       = aws_vpc.default.id
  service_name = "com.amazonaws.${var.AWS_REGION}.secretsmanager"
155  vpc_endpoint_type           = &quot;Interface&quot;
156
157  security_group_ids          = [aws_security_group.db.id]
158  private_dns_enabled         = true
159
160  policy                      = &lt;&lt;EOF
161{
162    &quot;Version&quot;: &quot;2012-10-17&quot;,
163    &quot;Statement&quot;: [
164        {
165          &quot;Effect&quot;: &quot;Allow&quot;,
166          &quot;Action&quot;: &quot;*&quot;,
167          &quot;Principal&quot;: &quot;*&quot;,
168          &quot;Resource&quot;: &quot;*&quot;
169        }
170    ]
171}
172EOF
173}
174

Without trying to access Secrets Manager, the Lambda itself works fine: I can call the URL endpoint, pass parameters, and see the result in the CloudWatch logs. But as soon as the Lambda function calls Secrets Manager, the page returns {"message": "Internal server error"}, and the logs show {"errorMessage": "Could not connect to the endpoint URL: \"https://secretsmanager.REGIONHIDDEN.amazonaws.com/\"", "errorType": "EndpointConnectionError"}

Is there anything I am doing wrong above?

ANSWER

Answered 2022-Feb-19 at 21:44

If you can call the Lambda function from API Gateway, then your question title, "how to connect an aws api gateway to a private lambda function inside a vpc", is already answered: that part is working.

It appears that your actual problem is simply accessing Secrets Manager from inside a Lambda function running in a VPC.

It's also strange that you are assigning a "db" security group to the Lambda function. What are the inbound/outbound rules of this Security Group?

It is entirely unclear why you created a VPC endpoint. What are we supposed to make of service_name = "foo"? What is service "foo"? How is this VPC endpoint related to the Lambda function in any way? If this is supposed to be a VPC endpoint for Secrets Manager, then the service name should be "com.amazonaws.YOUR-REGION.secretsmanager".

If you need more help, please edit your question to provide the following: the inbound and outbound rules of any relevant security groups, and the Lambda function code that is trying to call Secrets Manager.


Update: After clarifications in comments and the updated question, I think the problem is that you are missing subnet assignments for the VPC endpoint. Also, since the VPC endpoint policy you are adding grants full access, you can leave it out entirely; the default policy is already full access. I suggest changing the VPC endpoint to the following:

resource "aws_vpc_endpoint" "vpc_endpoint" {
  vpc_id            = aws_vpc.default.id
  service_name      = "com.amazonaws.${var.AWS_REGION}.secretsmanager"
  vpc_endpoint_type = "Interface"

  subnet_ids          = [aws_subnet.private_subnet.id]
  security_group_ids  = [aws_security_group.db.id]
  private_dns_enabled = true
}
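After `terraform apply`, one way to check that the endpoint is actually wired into the VPC is to surface its DNS entries (an illustrative output block, not from the original answer, assuming the resource names above; `dns_entry` is an exported attribute of `aws_vpc_endpoint`):

```hcl
# Prints the endpoint's DNS names, which should resolve to private IPs
# from the Lambda's subnets when private_dns_enabled is true.
output "secretsmanager_endpoint_dns" {
  value = aws_vpc_endpoint.vpc_endpoint.dns_entry
}
```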

Update 2: This part of your Lambda function's IAM policy is wrong:

{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": [
      "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}",
      "arn:aws:lambda:::${data.aws_secretsmanager_secret.my_secret.arn}/*"
    ]
}

That is meant to give the Lambda access to a secret, but the resource is prefixed as if it were a Lambda function ARN, which is not a valid secret ARN. It should be the following:

{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "${data.aws_secretsmanager_secret.my_secret.arn}"
}
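As a side note (a sketch, not part of the original answer): building the policy with Terraform's `aws_iam_policy_document` data source instead of a raw JSON heredoc makes mistakes like the stray ARN prefix harder to make, because each statement is assembled from plain attributes:

```hcl
# Sketch: the same "read this one secret" statement expressed with the
# aws_iam_policy_document data source rather than an inline heredoc.
data "aws_iam_policy_document" "lambda_test" {
  statement {
    effect  = "Allow"
    actions = ["secretsmanager:GetSecretValue"]
    # .arn is already the full secret ARN; no prefix is needed.
    resources = [data.aws_secretsmanager_secret.my_secret.arn]
  }
}

resource "aws_iam_policy" "lambda_test" {
  name   = "lambda-test"
  policy = data.aws_iam_policy_document.lambda_test.json
}
```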

Also this part of your policy is messed up:

{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses",
        "autoscaling:CompleteLifecycleAction",
        "secretsmanager:GetSecretValue"
    ],
    "Resource": [
      "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}",
      "arn:aws:lambda:::${aws_lambda_function.lambda_test.arn}/*"
    ]
}

You are assigning this policy to a Lambda function. The resources you list in the policy are the resources the Lambda function should have access to; you don't list the Lambda function itself as a resource. I'm not sure of the best fix for that part of the policy: it either needs to be split into multiple statements, or the resource list replaced with "*".

Also, when you refer to a resource's .arn value in Terraform, you already get the full ARN, so you shouldn't be prefixing it with anything.
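As a sketch only, here is one way the statements could be split along those lines. The role and policy names below are hypothetical placeholders; the data source reference comes from the question, and the .arn values are used directly, with no "arn:aws:..." prefix:

```hcl
# Hedged sketch: "lambda_exec" and "lambda-secrets-access" are hypothetical
# names, not taken from the question's config.
resource "aws_iam_role_policy" "lambda_secrets" {
  name = "lambda-secrets-access"
  role = aws_iam_role.lambda_exec.id # hypothetical role reference

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # The secret the function reads: .arn is already a full ARN.
        Effect   = "Allow"
        Action   = "secretsmanager:GetSecretValue"
        Resource = data.aws_secretsmanager_secret.my_secret.arn
      },
      {
        # Logging and VPC networking permissions, which don't target the
        # function's own ARN; "*" as suggested in the note above.
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "ec2:DescribeNetworkInterfaces",
          "ec2:CreateNetworkInterface",
          "ec2:DeleteNetworkInterface"
        ]
        Resource = "*"
      }
    ]
  })
}
```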

Source https://stackoverflow.com/questions/71188858

QUESTION

Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically

Asked 2022-Feb-15 at 13:50

Just today, whenever I run terraform apply, I see an error something like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

It was working yesterday.

Following is the command I run: terraform init && terraform apply

Following is the list of initialized provider plugins:

- Finding latest version of hashicorp/archive...
- Finding latest version of hashicorp/aws...
- Finding latest version of hashicorp/null...
- Installing hashicorp/null v3.1.0...
- Installed hashicorp/null v3.1.0 (signed by HashiCorp)
- Installing hashicorp/archive v2.2.0...
- Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.0.0...
- Installed hashicorp/aws v4.0.0 (signed by HashiCorp)

Following are the errors:

Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...

│ Error: Value for unconfigurable attribute
│
│   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│    1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided
│ automatically based on the result of applying this configuration.

│ Error: Value for unconfigurable attribute
│
│   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│    1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "server_side_encryption_configuration": its
│ value will be decided automatically based on the result of applying this
│ configuration.

│ Error: Value for unconfigurable attribute
│
│   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 3, in resource "aws_s3_bucket" "this":
│    3:   acl    = "private"
│
│ Can't configure a value for "acl": its value will be decided automatically
│ based on the result of applying this configuration.

ERRO[0012] 1 error occurred:
        * exit status 1

My code is as follows:

resource "aws_s3_bucket" "this" {
  bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = data.aws_kms_key.s3.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  lifecycle_rule {
    id      = "backups"
    enabled = true

    prefix = "backups/"

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 365
    }
  }

  tags = {
    Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
    Environment = var.environment
  }
}

ANSWER

Answered 2022-Feb-15 at 13:49

The Terraform AWS Provider was upgraded to version 4.0.0, which was published on 10 February 2022.

Major changes in the release include:

  • Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource.
  • Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details.
  • The 4.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12 through 0.15.

The reason for this change by Terraform is as follows: To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource. Once updated, new aws_s3_bucket_* resources should be imported into Terraform state.

So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor

The new working code looks like this:

resource "aws_s3_bucket" "this" {
  bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"

  tags = {
    Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
    Environment = var.environment
  }
}

resource "aws_s3_bucket_acl" "this" {
  bucket = aws_s3_bucket.this.id
  acl    = "private"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = data.aws_kms_key.s3.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    id     = "backups"
    status = "Enabled"

    filter {
      prefix = "backups/"
    }

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 365
    }
  }
}
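As the release notes quoted above say, the new aws_s3_bucket_* resources then need to be imported into Terraform state. A hedged sketch of the import commands, with my-bucket standing in for the real bucket name (the exact import ID formats are described in the provider's upgrade guide):

```shell
# Placeholders only: replace my-bucket with the actual bucket name.
terraform import aws_s3_bucket_server_side_encryption_configuration.this my-bucket
terraform import aws_s3_bucket_lifecycle_configuration.this my-bucket
# For the ACL resource, the canned ACL is appended to the import ID.
terraform import aws_s3_bucket_acl.this my-bucket,private
```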

If you don't want to upgrade your Terraform AWS Provider version to 4.0.0, you can use the existing or older version by specifying it explicitly in the code as below:

terraform {
  required_version = "~> 1.0.11"
  required_providers {
    aws = "~> 3.73.0"
  }
}

Source https://stackoverflow.com/questions/71078462

QUESTION

Programmatically Connecting a GitHub repo to a Google Cloud Project

Asked 2022-Feb-12 at 16:16

I'm working on a Terraform project that will set up all the GCP resources needed for a large project spanning multiple GitHub repos. My goal is to be able to recreate the cloud infrastructure from scratch completely with Terraform.

The issue I'm running into is that in order to set up build triggers with Terraform within GCP, the GitHub repo that sets off the trigger first needs to be connected. Currently, I've only been able to do that manually via the Google Cloud Build dashboard. I'm not sure if this is possible via Terraform or with a script, but I'm looking for any solution I can automate this with. Once the projects are connected, updating everything with Terraform works fine.

TLDR; How can I programmatically connect a GitHub project with a GCP project instead of using the dashboard?

ANSWER

Answered 2022-Feb-12 at 16:16

Currently there is no way to programmatically connect a GitHub repo to a Google Cloud Project. This must be done manually via Google Cloud.

My workaround is to manually connect an "admin" project, build containers and save them to that project's artifact registry, and then deploy the containers from the registry in the programmatically generated project.
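Once a repo has been connected manually in the Cloud Build dashboard, triggers for it can be managed in Terraform, which is what "updating everything with Terraform is working fine" refers to in the question. A minimal sketch, assuming a manually connected repo; the owner, repo name, and file names are hypothetical placeholders:

```hcl
# Hedged sketch: works only after the GitHub repo has been connected
# manually; "my-org"/"my-repo" are placeholders.
resource "google_cloudbuild_trigger" "deploy" {
  name = "deploy-on-push"

  github {
    owner = "my-org"  # placeholder
    name  = "my-repo" # placeholder
    push {
      branch = "^main$"
    }
  }

  filename = "cloudbuild.yaml"
}
```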

Source https://stackoverflow.com/questions/69834735

QUESTION

Kubernetes NodePort is not available on all nodes - Oracle Cloud Infrastructure (OCI)

Asked 2022-Jan-31 at 14:37

I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.

I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.

The goal is:

  • A running managed Kubernetes cluster (OKE)
  • 2 nodes at least
  • 1 service that's accessible for external parties

The infra looks the following:

  • A VCN for the whole thing
  • A private subnet on 10.0.1.0/24
  • A public subnet on 10.0.0.0/24
  • NAT gateway for the private subnet
  • Internet gateway for the public subnet
  • Service gateway
  • The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
  • A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled
  • A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
  • A namespace in the K8S cluster (call it staging for now)
  • A deployment which refers to a custom NextJS application serving traffic on port 3000

And now it's the point where I want to expose the service running on port 3000.

I have 2 obvious choices:

  • Create a LoadBalancer service in K8S which will spawn a classic Load Balancer in OCI, set up its listener and set up the backendset referring to the 2 nodes in the cluster, plus it adjusts the subnet security lists to make sure traffic can flow
  • Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer

The first one works perfectly fine but I want to use this cluster with minimal costs so I decided to experiment with option 2, the NLB since it's way cheaper (zero cost).

Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I can't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.

The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.

That's my problem and I couldn't figure out what could be the issue.

What I've tried so far:

  • Switching from ARM machines to AMD ones - no change
  • Created a bastion host in the public subnet to test which nodes are responding to requests. It turned out that only the node running the pod responds.
  • Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
  • Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
  • Ran the Node Doctor on the nodes, everything is fine
  • Checked the logs of kube-proxy, kube-flannel, core-dns, no error
  • Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
  • Recreated the cluster from scratch

Edit: Some update. I've tried to use a DaemonSet instead of a regular Deployment for the pod to ensure that, as a temporary solution, all nodes are running at least one instance of the pod and, surprise: the node that was previously not responding to requests on that specific port still does not respond, even though a pod is now running on it.

Edit2: Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.

Edit3: Checked if the NodePort is open on the node that's not responding and it is, at least kube-proxy is listening on it.

tcp        0      0 0.0.0.0:31600           0.0.0.0:*               LISTEN      16671/kube-proxy

Edit4: Tried adding whitelisting iptables rules but it didn't change anything.

[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P FORWARD ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P INPUT ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P OUTPUT ACCEPT

Edit5: Just as a trial, I created a LoadBalancer once more to verify whether I'd gone completely mental and just didn't notice this error before, or whether it really works. Funny thing: it works perfectly fine through the classic load balancer's IP. But when I try to send a request to the nodes directly on the port that was opened for the load balancer (it's 30679 for now), I get a response only from the node that's running the pod. From the other, still nothing; yet through the load balancer, I get 100% successful responses.

Bonus, here's the iptables from the Node that's not responding to requests, not too sure what to look for:

[opc@oke-cn44eyuqdoq-n3ewna4fqra-sx5p5dalkuq-1 ~]$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT     all  --  10.244.0.0/16        anywhere
ACCEPT     all  --  anywhere             10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !loopback/8           loopback/8           /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination

Service spec (the running one since it was generated using Terraform):

{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2022-01-28T09:13:33Z",
        "name": "web-staging-service",
        "namespace": "web-staging",
        "resourceVersion": "22542",
        "uid": "c092f99b-7c72-4c32-bf27-ccfa1fe92a79"
    },
    "spec": {
        "clusterIP": "10.96.99.112",
        "clusterIPs": [
            "10.96.99.112"
        ],
        "externalTrafficPolicy": "Cluster",
        "ipFamilies": [
            "IPv4"
        ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
            {
                "nodePort": 31600,
                "port": 3000,
                "protocol": "TCP",
                "targetPort": 3000
            }
        ],
        "selector": {
            "app": "frontend"
        },
        "sessionAffinity": "None",
        "type": "NodePort"
    },
    "status": {
        "loadBalancer": {}
    }
}

Any ideas are appreciated. Thanks guys.

ANSWER

Answered 2022-Jan-31 at 12:06

Might not be the ideal fix, but can you try changing externalTrafficPolicy to Local? This makes the health check fail on nodes that aren't running the application, so traffic is only forwarded to the node where the application is running. Setting externalTrafficPolicy to Local is also a requirement for preserving the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change externalTrafficPolicy, note that the health check for the LB changes, and the same needs to be applied to the NLB.

Edit: Also note that you need a security list/ network security group added to your node subnet/nodepool, which allows traffic on all protocols from the worker node subnet.
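Since the Service in the question was generated with Terraform, the suggested change maps to the kubernetes provider's external_traffic_policy argument. A minimal sketch, mirroring the service spec shown above (the resource label is a hypothetical choice):

```hcl
# Hedged sketch: names follow the question's service spec; "web_staging"
# is a hypothetical Terraform resource label.
resource "kubernetes_service" "web_staging" {
  metadata {
    name      = "web-staging-service"
    namespace = "web-staging"
  }

  spec {
    type                    = "NodePort"
    external_traffic_policy = "Local" # only route to nodes running the pod

    selector = {
      app = "frontend"
    }

    port {
      port        = 3000
      target_port = 3000
      node_port   = 31600
    }
  }
}
```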

Source https://stackoverflow.com/questions/70893487

QUESTION

Can you pass blocks as variables in Terraform, referencing the type of a resource's nested block contents?

Asked 2021-Dec-20 at 02:40

I am trying to build in Terraform a Web ACL resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl

This resource has the nested blocks rule->action->block and rule-> action->count

I would like to have a variable whose type allows me to set the action to either count {} or block {}, so that the two following configurations are possible:

With block:

resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action {
      block {}
    }

    ...
  }
}

With count:

resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action {
      count {}
    }

    ...
  }
}

I can achieve this result with a boolean variable and dynamic blocks in a very non-declarative way so far.

My question is, can the type of a variable reference the type of a nested block, so that the content of the nested block can be passed in a variable?

What I am trying to achieve is something that would look similar to this (non working syntax):

resource "aws_wafv2_web_acl" "example" {
  ...

  rule {
    ...

    action = var.action_block

    ...
  }
}

variable "action_block" {
  description = "Action of the rule"
  type        = <whatever type is accepted by aws_wafv2_web_acl -> rule -> action>
}

so that it can be passed down in a similar manner to this

module "my_waf" {
  source = "../modules/waf"
  action_block {
    block {}
  }
}

For reference, what I am trying to avoid:

dynamic "action" {
  for_each = var.block ? [] : [1]
  content {
    count {}
  }
}

dynamic "action" {
  for_each = var.block ? [1] : []
  content {
    block {}
  }
}

Thank you so much for your help!

ANSWER

Answered 2021-Dec-20 at 02:40

The only marginal improvement I can imagine is to move the dynamic blocks one level deeper, to perhaps make it clear to a reader that the action block will always be present and it's the count or block blocks inside that have dynamic behavior:

action {
  dynamic "count" {
    for_each = var.block ? [] : [1]
    content {}
  }
  dynamic "block" {
    for_each = var.block ? [1] : []
    content {}
  }
}

There are some other ways you could formulate those two for_each expressions so that the input could have a different shape, but you'll need to write out a suitable type constraint for that variable yourself which matches whatever conditions you want to apply to it.
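For completeness, the var.block referenced in the snippets above can be declared as a plain boolean; this declaration is a sketch (the variable name matches the snippets, the description and default are assumptions):

```hcl
variable "block" {
  description = "If true, the rule action is block {}; otherwise count {}."
  type        = bool
  default     = true
}
```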

Source https://stackoverflow.com/questions/70382612

QUESTION

trigger lambda function from DynamoDB

Asked 2021-Nov-17 at 22:35

Every time a new item arrives in my dynamo table, I want to run a lambda function trigger_lambda_function. This is how I define my table and trigger. However, the trigger does not work as expected.

resource "aws_dynamodb_table" "filenames" {
  name           = local.dynamodb_table_filenames
  billing_mode   = "PROVISIONED"
  read_capacity  = 1000
  write_capacity = 1000
  hash_key       = "filename"

  #range_key      = ""

  attribute {
    name = "filename"
    type = "S"
  }

  tags = var.tags
}

resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {
  event_source_arn  = aws_dynamodb_table.filenames.stream_arn
  function_name     = aws_lambda_function.trigger_stepfunction_lambda.arn
  starting_position = "LATEST"
}

Upon terraform apply, I get an error that:

│ Error: error creating Lambda Event Source Mapping (): InvalidParameterValueException: Unrecognized event source.
│ {
│   RespMetadata: {
│     StatusCode: 400,
│     RequestID: "5ae68da6-3f6d-4adb-b104-72ae584dbca7"
│   },
│   Message_: "Unrecognized event source.",
│   Type: "User"
│ }
│
│   with module.ingest_system["alpegatm"].aws_lambda_event_source_mapping.allow_dynamodb_table_to_trigger_lambda,
│   on ../../modules/ingest_system/dynamo.tf line 39, in resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda":
│   39: resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {

I also tried .arn instead of stream_arn, but that threw an error too. What else could I try?

I followed the documentation for the trigger: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_event_source_mapping

ANSWER

Answered 2021-Nov-17 at 22:35

From the aws_dynamodb_table docs, stream_arn is only available if stream_enabled is set to true. You might want to add stream_enabled = true to your DynamoDB table definition.

By default stream_enabled is set to false. You can see all the default values here for aws_dynamodb_table.
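Putting that together, the table definition from the question would look like this sketch (stream_view_type is required once the stream is enabled; NEW_IMAGE is an assumption, pick the view type your Lambda needs):

```hcl
resource "aws_dynamodb_table" "filenames" {
  name           = local.dynamodb_table_filenames
  billing_mode   = "PROVISIONED"
  read_capacity  = 1000
  write_capacity = 1000
  hash_key       = "filename"

  # Enable the stream so that stream_arn is populated and can be used
  # by aws_lambda_event_source_mapping.
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE" # assumption: NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES, or KEYS_ONLY

  attribute {
    name = "filename"
    type = "S"
  }

  tags = var.tags
}
```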

Source https://stackoverflow.com/questions/70008141

QUESTION

Terraform: Inappropriate value for attribute "ingress" while creating SG

Asked 2021-Nov-02 at 04:36

I'm creating a security group using Terraform, and when I run terraform plan it gives me an error saying that some fields are required, even though all of those fields are optional.

Terraform Version: v1.0.5

AWS Provider version: v3.57.0

main.tf

resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  ingress = [
    {
      description = "HTTP"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      description = "HTTPS"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      description = "SSH"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]

  egress = [
    {
      description      = "for all outgoing traffics"
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  ]

  tags = {
    Name = "sg-for-subnet"
  }
}

error in console

│ Inappropriate value for attribute "ingress": element 0: attributes "ipv6_cidr_blocks", "prefix_list_ids", "security_groups", and "self" are required.

│ Inappropriate value for attribute "egress": element 0: attributes "prefix_list_ids", "security_groups", and "self" are required.

ANSWER

Answered 2021-Sep-06 at 21:28

Since you are using the Attributes as Blocks syntax (ingress = [...] rather than repeated ingress blocks), you have to provide values for all options:

resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  ingress = [
    {
      description      = "HTTP"
      from_port        = 80
      to_port          = 80
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    },
    {
      description      = "HTTPS"
      from_port        = 443
      to_port          = 443
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    },
    {
      description      = "SSH"
      from_port        = 22
      to_port          = 22
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    }
  ]

  egress = [
    {
      description      = "for all outgoing traffics"
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
      prefix_list_ids  = []
      security_groups  = []
      self             = false
    }
  ]

  tags = {
    Name = "sg-for-subnet"
  }
}
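As an aside not taken from the answer: writing ingress/egress as repeated blocks instead of attributes avoids this requirement entirely, because unset arguments in block syntax simply take their defaults. A minimal sketch of the same rules in block form:

```hcl
resource "aws_security_group" "sg_oregon" {
  name        = "tf-sg"
  description = "Allow web traffics"
  vpc_id      = aws_vpc.vpc_terraform.id

  # Block syntax: omitted arguments (self, prefix_list_ids, ...) default automatically.
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "for all outgoing traffics"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```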

Source https://stackoverflow.com/questions/69079945

QUESTION

How to fix "Function not implemented - Failed to initialize inotify (Errno::ENOSYS)" in rails

Asked 2021-Oct-31 at 17:41

So I'm running the new Apple M1 Pro chipset, and the original M1 chip on another machine, and when I attempt to create new RSpec tests in ruby I get the following error.

Function not implemented - Failed to initialize inotify (Errno::ENOSYS)

the full stack dump looks like this

/var/lib/gems/2.7.0/gems/rb-inotify-0.10.1/lib/rb-inotify/notifier.rb:69:in `initialize': Function not implemented - Failed to initialize inotify (Errno::ENOSYS)
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/linux.rb:31:in `new'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/linux.rb:31:in `_configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:45:in `block in configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:40:in `each'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:40:in `configure'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/adapter/base.rb:63:in `start'
        from /usr/lib/ruby/2.7.0/forwardable.rb:235:in `start'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/listener.rb:68:in `block in <class:Listener>'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:121:in `instance_eval'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:121:in `call'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:91:in `transition_with_callbacks!'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/fsm.rb:57:in `transition'
        from /var/lib/gems/2.7.0/gems/listen-3.1.5/lib/listen/listener.rb:91:in `start'
        from /var/lib/gems/2.7.0/gems/spring-watcher-listen-2.0.1/lib/spring/watcher/listen.rb:27:in `start'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:80:in `start_watcher'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:89:in `preload'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:157:in `serve'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:145:in `block in run'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:139:in `loop'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application.rb:139:in `run'
        from /var/lib/gems/2.7.0/gems/spring-2.1.1/lib/spring/application/boot.rb:19:in `<top (required)>'
        from /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'
        from /usr/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'
        from -e:1:in `<main>'

Rails is running from a Docker container, and I have tried following the solution linked below, but no such luck. I'm fairly new to Ruby and Rails, so any help would be greatly appreciated!

https://github.com/evilmartians/terraforming-rails/issues/34

ANSWER

Answered 2021-Oct-31 at 17:41

Update: To fix this issue I used the solution from @mahatmanich listed here: https://stackoverflow.com/questions/31857365/rails-generate-commands-hang-when-trying-to-create-a-model

Essentially, we need to delete the bin directory and then re-create it using rake app:update:bin

Since Rails 5, some 'rake' commands are encapsulated within the 'rails' command. However, deleting the 'bin/' directory also removes the 'rails' binstub itself, so one needs to fall back to 'rake' for the reset, since 'rails' is no longer available but 'rake' still is.
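As a sketch, the fix described above amounts to the following two commands, run from the Rails application root (assumes rake is on your PATH; note the first command deletes the binstubs directory):

```shell
# Remove the binstubs directory, then regenerate it with rake,
# since the `rails` binstub itself has just been deleted.
rm -rf bin
rake app:update:bin
```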

Source https://stackoverflow.com/questions/69773109

Community Discussions contain sources that include Stack Exchange Network
