terraform | Set up a cold, inhospitable system using Terraform | Infrastructure Automation library
kandi X-RAY | terraform Summary
Terraform is a small goal-oriented Ruby DSL for setting up a machine, similar in purpose to Chef and Puppet, but without the complexity. It's tailored for the kinds of tasks needed for deploying web apps and is designed to be refreshingly easy to understand and debug. You can read through the entire Terraform library in two minutes and know precisely what it will and won't do for you. Its design is inspired by Babushka.
Top functions reviewed by kandi - BETA
- Ensure the Ruby project is installed
- Ensure that the package is present
- Ensure that Ruby is installed.
- Check if file exists
- Checks for all dependencies.
- Ensure that the given block is run.
- Defines a dependency.
- Determine if the current request is present.
- Execute a shell command
- Check if package exists
terraform Key Features
terraform Examples and Code Snippets
Community Discussions
Trending Discussions on terraform
QUESTION
What specific syntax or configuration changes must be made in order to resolve the error below in which terraform is failing to create an instance of azuread_application?
THE CODE:
The terraform code that is triggering the error when terraform apply is run is as follows:
ANSWER
Answered 2021-Oct-07 at 18:35: This was a bug, reported as a GitHub issue.
The resolution to the problem in the OP is to upgrade the version from 2.5.0 to 2.6.0 in the required_providers block from the code in the OP above, as follows:
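For reference, a minimal sketch of a required_providers block pinned to the fixed version (the surrounding configuration is assumed, not taken from the OP):

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.6.0"  # 2.5.0 contained the bug; 2.6.0 includes the fix
    }
  }
}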
QUESTION
I've created a brand new project with npm init vite bar -- --template vue. I've done an npm install web3 and I can see my package-lock.json includes this package. My node_modules directory also includes the web3 modules.
So then I added this line to main.js:
ANSWER
Answered 2022-Mar-14 at 03:36: Polyfilling the Node globals and modules enables the web3 import to run in the browser:
- Install the ESBuild plugins that polyfill Node globals/modules:
QUESTION
I am trying to connect an aws api gateway to a lambda function residing in a VPC then retrieve the secret manager to access a database using python code with boto3. The database and vpc endpoint were created in a private subnet.
lambda function ...
ANSWER
Answered 2022-Feb-19 at 21:44: If you can call the Lambda function from API Gateway, then your question title "how to connect an aws api gateway to a private lambda function inside a vpc" is already complete and working.
It appears that your actual problem is simply accessing Secrets Manager from inside a Lambda function running in a VPC.
It's also strange that you are assigning a "db" security group to the Lambda function. What are the inbound/outbound rules of this Security Group?
It is entirely unclear why you created a VPC endpoint. What are we supposed to make of service_name = "foo"? What is service "foo"? How is this VPC endpoint related to the Lambda function in any way? If this is supposed to be a VPC endpoint for Secrets Manager, then the service name should be "com.amazonaws.YOUR-REGION.secretsmanager".
If you need more help you need to edit your question to provide the following: The inbound and outbound rules of any relevant security groups, and the Lambda function code that is trying to call SecretsManager.
Update: After clarifications in comments and the updated question, I think the problem is you are missing any subnet assignments for the VPC Endpoint. Also, since you are adding a VPC policy with full access, you can just leave that out entirely, as the default policy is full access. I suggest changing the VPC endpoint to the following:
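For illustration only (resource names and the region below are placeholders, not the OP's configuration), an interface endpoint for Secrets Manager with subnet assignments could look roughly like this:

resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = aws_vpc.main.id                           # placeholder VPC reference
  service_name        = "com.amazonaws.us-east-1.secretsmanager"  # use your own region
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private.id]                   # the missing subnet assignment
  security_group_ids  = [aws_security_group.endpoint.id]          # must allow HTTPS (443) from the Lambda
  private_dns_enabled = true
  # No policy argument: the default endpoint policy already grants full access.
}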
QUESTION
Just today, whenever I run terraform apply, I see an error something like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.
It was working yesterday.
Following is the command I run: terraform init && terraform apply
Following is the list of initialized provider plugins:
...
ANSWER
Answered 2022-Feb-15 at 13:49: The Terraform AWS Provider has been upgraded to version 4.0.0, which was published on 10 February 2022.
Major changes in the release include:
- Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource.
- Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details.
- Version 4.0.0 and 4.x.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12-0.15.
The reason for this change by Terraform is as follows: To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource. Once updated, new aws_s3_bucket_* resources should be imported into Terraform state.
So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor
The new working code looks like this:
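(The updated snippet itself is not reproduced here. As a rough sketch of that kind of refactor, with a placeholder bucket name and rule, lifecycle settings move out of aws_s3_bucket into the dedicated resource:)

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"   # placeholder name
  # lifecycle_rule is read-only on aws_s3_bucket in provider v4
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "expire-old-objects"   # placeholder rule
    status = "Enabled"

    filter {}   # apply to the whole bucket

    expiration {
      days = 30
    }
  }
}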
QUESTION
I'm working on a Terraform project that will set up all the GCP resources needed for a large project spanning multiple GitHub repos. My goal is to be able to recreate the cloud infrastructure from scratch completely with Terraform.
The issue I'm running into is in order to setup build triggers with Terraform within GCP, the GitHub repo that is setting off the trigger first needs to be connected. Currently, I've only been able to do that manually via the Google Cloud Build dashboard. I'm not sure if this is possible via Terraform or with a script but I'm looking for any solution I can automate this with. Once the projects are connected updating everything with Terraform is working fine.
TLDR; How can I programmatically connect a GitHub project with a GCP project instead of using the dashboard?
...
ANSWER
Answered 2022-Feb-12 at 16:16: Currently there is no way to programmatically connect a GitHub repo to a Google Cloud Project. This must be done manually via Google Cloud.
My workaround is to manually connect an "admin" project, build containers and save them to that project's artifact registry, and then deploy the containers from the registry in the programmatically generated project.
QUESTION
I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.
I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.
The goal is:
- A running managed Kubernetes cluster (OKE)
- 2 nodes at least
- 1 service that's accessible for external parties
The infra looks the following:
- A VCN for the whole thing
- A private subnet on 10.0.1.0/24
- A public subnet on 10.0.0.0/24
- NAT gateway for the private subnet
- Internet gateway for the public subnet
- Service gateway
- The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
- A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled
- A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
- A namespace in the K8S cluster (call it staging for now)
- A deployment which refers to a custom NextJS application serving traffic on port 3000
And now it's the point where I want to expose the service running on port 3000.
I have 2 obvious choices:
- Create a LoadBalancer service in K8S which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster, plus it adjusts the subnet security lists to make sure traffic can flow
- Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer
The first one works perfectly fine but I want to use this cluster with minimal costs so I decided to experiment with option 2, the NLB since it's way cheaper (zero cost).
Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I couldn't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.
The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.
That's my problem and I couldn't figure out what could be the issue.
What I've tried so far:
- Switching from ARM machines to AMD ones - no change
- Created a bastion host in the public subnet to test which nodes are responding to requests. Turned out only the node that's running the pod responds.
- Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
- Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
- Ran the Node Doctor on the nodes, everything is fine
- Checked the logs of kube-proxy, kube-flannel, core-dns, no error
- Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
- Recreated the cluster from scratch
Edit: Some update. I've tried to use a DaemonSet instead of a regular Deployment for the pod to ensure that, as a temporary solution, all nodes are running at least one instance of the pod, and surprise: the node that was previously not responding to requests on that specific port still does not, even though a pod is running on it.
Edit2: Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.
Edit3: Checked if the NodePort is open on the node that's not responding and it is, at least kube-proxy is listening on it.
...
ANSWER
Answered 2022-Jan-31 at 12:06: Might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local? This causes the health check to fail on the nodes which don't run the application, so traffic will only be forwarded to the node where the application is. Setting externalTrafficPolicy to Local is also a requirement to preserve the source IP of the connection. Also, can you share the health check config for both the NLB and LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB would change and the same needs to be applied to the NLB.
Edit: Also note that you need a security list/ network security group added to your node subnet/nodepool, which allows traffic on all protocols from the worker node subnet.
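Purely as a sketch (service name, labels, and the NodePort number are placeholders, and the same field can just as well be set in a plain YAML Service manifest), the setting could be expressed through the hashicorp/kubernetes Terraform provider like this:

resource "kubernetes_service" "nextjs" {
  metadata {
    name      = "nextjs"     # placeholder service name
    namespace = "staging"
  }

  spec {
    selector = {
      app = "nextjs"          # placeholder pod label
    }

    type                    = "NodePort"
    external_traffic_policy = "Local"   # only nodes running the pod pass the LB/NLB health check

    port {
      port        = 3000
      target_port = 3000
      node_port   = 30080     # placeholder NodePort
    }
  }
}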
QUESTION
I am trying to build a Web ACL resource in Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl
This resource has the nested blocks rule->action->block and rule->action->count.
I would like to have a variable whose type allows me to set the action to either count {} or block {} so that the two following configurations are possible:
With block:
...
ANSWER
Answered 2021-Dec-20 at 02:40: The only marginal improvement I can imagine is to move the dynamic blocks one level deeper, to perhaps make it clear to a reader that the action block will always be present and it is the count or block blocks inside that have dynamic behavior:
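As a sketch of that layout (assuming a hypothetical string variable var.rule_action and a placeholder rate-limit rule; the OP's actual rules and variable are not shown above):

variable "rule_action" {
  type    = string
  default = "block"   # or "count"
}

resource "aws_wafv2_web_acl" "example" {
  name  = "example-acl"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "rate-limit"
    priority = 1

    action {
      # The action block is always present; only its inner block varies.
      dynamic "count" {
        for_each = var.rule_action == "count" ? [1] : []
        content {}
      }

      dynamic "block" {
        for_each = var.rule_action == "block" ? [1] : []
        content {}
      }
    }

    statement {
      rate_based_statement {
        limit              = 1000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "rate-limit"
      sampled_requests_enabled   = false
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "example-acl"
    sampled_requests_enabled   = false
  }
}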
QUESTION
Every time a new item arrives in my dynamo table, I want to run a lambda function trigger_lambda_function. This is how I define my table and trigger. However, the trigger does not work as expected.
ANSWER
Answered 2021-Nov-17 at 22:35: From the aws_dynamodb_table docs, stream_arn is only available if stream_enabled is set to true. You might want to add stream_enabled = true to your DynamoDB table definition.
By default stream_enabled is set to false. You can see all the default values here for aws_dynamodb_table.
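A minimal sketch of the two pieces together (the table name, key, and Lambda resource name are placeholders, not the OP's definitions):

resource "aws_dynamodb_table" "example" {
  name         = "example-table"   # placeholder
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  stream_enabled   = true                  # without this, stream_arn is not populated
  stream_view_type = "NEW_AND_OLD_IMAGES"
}

resource "aws_lambda_event_source_mapping" "trigger" {
  event_source_arn  = aws_dynamodb_table.example.stream_arn
  function_name     = aws_lambda_function.trigger_lambda_function.arn   # assumed resource name
  starting_position = "LATEST"
}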
QUESTION
I'm creating a security group using Terraform, and when I run terraform plan, it gives me an error saying some fields are required, even though all those fields are optional.
Terraform Version: v1.0.5
AWS Provider version: v3.57.0
...main.tf
ANSWER
Answered 2021-Sep-06 at 21:28: Since you are using Attributes as Blocks, you have to provide values for all options:
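A sketch of what that looks like (the group name and references are placeholders): with attributes-as-blocks syntax, every argument in each ingress/egress object must be given a value, even if it is just an empty list or false:

resource "aws_security_group" "example" {
  name   = "example-sg"      # placeholder
  vpc_id = aws_vpc.main.id   # placeholder reference

  ingress {
    description      = "HTTPS from anywhere"
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = []       # unused options still need explicit values
    prefix_list_ids  = []
    security_groups  = []
    self             = false
  }

  egress {
    description      = "All outbound"
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = []
    prefix_list_ids  = []
    security_groups  = []
    self             = false
  }
}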
QUESTION
So I'm running the new Apple M1 Pro chipset, and the original M1 chip on another machine, and when I attempt to create new RSpec tests in Ruby I get the following error:
Function not implemented - Failed to initialize inotify (Errno::ENOSYS)
The full stack dump looks like this:
...
ANSWER
Answered 2021-Oct-31 at 17:41: Update:
To fix this issue I used the solution from @mahatmanich listed here:
https://stackoverflow.com/questions/31857365/rails-generate-commands-hang-when-trying-to-create-a-model
Essentially, we need to delete the bin directory and then re-create it using
rake app:update:bin
Since Rails 5, some 'rake' commands are encapsulated within the 'rails' command. However, when one deletes the 'bin/' directory one is also removing the 'rails' command itself, thus one needs to go back to 'rake' for the reset, since 'rails' is no longer available but 'rake' still is.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Install terraform
Install the Terraform gem on your local machine (the machine you're deploying from): gem install terraform
Write your system provisioning script using the Terraform DSL.
Copy your system provisioning script and the Terraform library (which is a single file) to your remote machine and run it. Do this as part of your deploy script.