terra-cli | To install the latest version | Infrastructure Automation library
kandi X-RAY | terra-cli Summary
The status command prints details about the current workspace and server. The version command prints the installed version string. The gcloud, git, gsutil, bq, and nextflow commands call third-party applications in the context of a Terra workspace. The resolve command is an alias for the terra resource resolve command.
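For example, a typical session might look like the sketch below, built only from the commands named above; the resource name passed to resolve is a hypothetical placeholder.

```bash
# Print details about the current workspace and server.
terra status

# Print the installed version string.
terra version

# "resolve" is an alias for "terra resource resolve"; the resource name is hypothetical.
terra resolve --name=my-bucket

# Call third-party tools in the context of the current Terra workspace.
terra gsutil ls
terra bq version
terra gcloud config list
```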
Support
Quality
Security
License
Reuse
Top functions reviewed by kandi - BETA
- Update a bucket in the workspace
- Adds a new GCS bucket to the workspace
- Add a controlled GCS bucket to the workspace
- Add a controlled dataset to the workspace
- Adds a referenced database to the workspace
- Update a BigQuery dataset in the workspace
- Add a controlled GCP Notebook instance to the workspace
- Updates a GCP notebook
- Adds a referenced GCS bucket object to the workspace
- Update a bucket object
- Demonstrates how to create a new SSH key
- Run the command
- Prints out information about this instance
- Update the environment
- Deserializes a resource
- Executes the command to build the command
- Executes the command
- Runs a tool command inside a Docker container
- Builds a shell command string for the gcloud configuration
- Resolve a GCS resource
- Runs a tool command in a local process
- Execute the generate Cromwell config
- Adds a referenced BigQuery DataTable to the workspace
- Print out this object
- Perform a list of resources
- Grants break-glass access to the current workspace
terra-cli Key Features
terra-cli Examples and Code Snippets
Community Discussions
Trending Discussions on Infrastructure Automation
QUESTION
I have an RDS DB instance (Aurora PostgreSQL) set up in my AWS account. This was created manually using the AWS Console. I now want to create a CloudFormation template (YAML) for that DB, which I can use to create the DB later if needed. That would also help me replicate the DB in another environment, and I would use it as part of my infrastructure automation.
...ANSWER
Answered 2020-Jun-05 at 00:59
Unfortunately, there is no such functionality provided by AWS.
However, you may hear about two options that people could wrongly recommend.
CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket.
Although it sounds good, the tool is no longer maintained and is not reliable (it has been in beta for years).
Importing Existing Resources Into a Stack
Often people mistakenly think that this "generates YAML" for you from existing resources. The truth is that it does not generate template files for you. You have to write your own template, matching your resource exactly, before you can bring any resource under the control of a CloudFormation stack.
Your only options are to manually write the template for the RDS instance and import it, or to look for external tools that can reverse-engineer YAML templates from existing resources.
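If you do take the import route, the flow looks roughly like the sketch below, assuming the AWS CLI. The stack name, logical ID, and cluster identifier are placeholders, and the hand-written template.yaml must describe the existing Aurora cluster exactly (CloudFormation also requires a DeletionPolicy on each imported resource).

```bash
# Sketch of importing an existing Aurora cluster into a new CloudFormation stack.
# template.yaml must be written by hand; "MyAuroraCluster" and
# "my-existing-cluster-id" are hypothetical placeholders.
aws cloudformation create-change-set \
  --stack-name my-db-stack \
  --change-set-name import-existing-db \
  --change-set-type IMPORT \
  --resources-to-import '[{
      "ResourceType": "AWS::RDS::DBCluster",
      "LogicalResourceId": "MyAuroraCluster",
      "ResourceIdentifier": { "DBClusterIdentifier": "my-existing-cluster-id" }
    }]' \
  --template-body file://template.yaml

# Review the change set, then execute it to bring the cluster under stack control.
aws cloudformation execute-change-set \
  --stack-name my-db-stack \
  --change-set-name import-existing-db
```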
QUESTION
I'm struggling to set up a CI process for a web application in Azure. I'm used to deploying built code directly into Web Apps in Azure but decided to use docker this time.
In the build pipeline, I build the Docker images and push them to an Azure Container Registry, tagged with the latest build number. In the release pipeline (which has DEV, TEST and PROD stages), I need to deploy those images to the Web Apps of each environment.
There are 2 relevant tasks available in Azure releases: "Azure App Service deploy" and "Azure Web App for Containers". Neither of these allows the image source for the Web App to be set to Azure Container Registry. Instead they take custom registry/repository names and set the image source in the Web App to Private Registry, which then requires a login and password. I'm also deploying all Azure resources using ARM templates, so I don't like the idea of configuring credentials when the 2 resources (the Registry and the Web App) are integrated already.
Ideally, I would be able to set the Web App to use the repository and tag in Azure Container Registry that I specify in the release. I even tried to manually configure the Web Apps first with specific repositories and tags, and then tried to change the tags used by the Web Apps with the release (with the tasks I mentioned), but it didn't work. The tags stay the same.
Another option I considered was to configure all Web Apps with specific and permanent repositories and tags (e.g. "dev-latest") from the start (which doesn't fit well with ARM deployments, since the containers need to exist in the Registry before the Web Apps can be configured, so my infrastructure automation would be incomplete), enable "Continuous Deployment" in the Web Apps, and then tag the latest pushed repositories accordingly in the release so they would be picked up by the Web Apps. I could not find a reasonable way to add tags to existing repositories in the Registry.
What is Azure best practice for CI with containerised web apps? How do people actually build their containers and then deploy them to each environment?
...ANSWER
Answered 2020-Mar-16 at 08:59
Just set up a CI pipeline for building an image and pushing it to a container registry.
You could then use both the Azure App Service deploy and Azure Web App for Containers tasks to handle the deploy.
The Azure Web App for Containers task, like other built-in Azure tasks, requires an Azure service connection as an input. The Azure service connection stores the credentials needed to connect from Azure Pipelines or Azure DevOps Server to Azure.
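For the build-and-push half, and as a CLI-based alternative to those release tasks (not the answer's exact method), a rough sketch might look like this; the registry, image, resource group, and app names are hypothetical placeholders.

```bash
# Log in to the (hypothetical) Azure Container Registry, then build and push an
# image tagged with the build number.
az acr login --name myregistry
docker build -t myregistry.azurecr.io/webapp:20200316.1 .
docker push myregistry.azurecr.io/webapp:20200316.1

# Point the Web App at the newly pushed tag. This uses the Azure CLI instead of
# the "Azure Web App for Containers" release task; all names are placeholders.
az webapp config container set \
  --resource-group my-rg \
  --name my-webapp \
  --docker-custom-image-name myregistry.azurecr.io/webapp:20200316.1 \
  --docker-registry-server-url https://myregistry.azurecr.io
```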
I'm also deploying all Azure resources using ARM templates so I don't like the idea of configuring credentials when the 2 resources (the Registry and the Web App) are integrated already.
You may also be able to deploy an Azure Web App for Containers with ARM templates and Azure DevOps.
How do people actually build their containers and then deploy them to each environment?
The blogs and official documentation linked in the original answer may also be helpful.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install terra-cli
The install supports two Docker modes:
- DOCKER_NOT_AVAILABLE (default): skip pulling the Docker image.
- DOCKER_AVAILABLE: pull the image (requires Docker to be installed and running).
Java 11
Docker 20.10.2 (Must be running if installing in DOCKER_AVAILABLE mode)
curl, tar, gcloud (For install only)
terra auth login launches an OAuth flow that pops out a browser window to complete the login.
If the machine where you're running the CLI does not have a browser available to it, then use the manual login flow by setting the browser flag terra config set browser MANUAL. See the Authentication section below for more details.
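On a browserless machine, the manual flow sketched below uses only the commands mentioned above; the CLI then completes the OAuth login without popping out a local browser window.

```bash
# Switch to the manual login flow (no local browser needed), then log in.
terra config set browser MANUAL
terra auth login
# See the Authentication section for the details of the manual flow.
```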
gcloud - Make sure you have Python installed, then download the .tar.gz archive file from the installation page. Run gcloud version to verify the installation.
gsutil - included in the gcloud CLI, or available separately here. Verify the installation with gsutil version (also printed as part of gcloud version)
bq - included with gcloud. More details are available here. Similarly, verify the installation with bq version.
nextflow - Install by downloading a bash script and running it locally. Create a nextflow directory somewhere convenient (e.g. $HOME/nextflow) and switch to it. Then do curl -s https://get.nextflow.io | bash. Finally, move the nextflow executable script to a location on the $PATH: sudo mv nextflow /usr/local/bin/. Verify the installation with nextflow -version.
git - Follow the instructions here for installing Git on your platform; a combined verification sketch follows this list.
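Putting the tool checks together, a quick sanity pass over the prerequisites might look like this sketch; the nextflow steps follow the description above, and output formats vary by version.

```bash
# Verify the core requirements.
java -version        # expect Java 11
docker --version     # only needed for DOCKER_AVAILABLE mode
gcloud version       # also prints the bundled gsutil and bq versions
gsutil version
bq version
git --version

# Install nextflow as described above: download the launcher script,
# then move it onto the PATH.
mkdir -p "$HOME/nextflow" && cd "$HOME/nextflow"
curl -s https://get.nextflow.io | bash
sudo mv nextflow /usr/local/bin/
nextflow -version
```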
Download the terra-cli.tar install package directly from the GitHub releases page.
Unarchive the tar file.
Run the install script from the unarchived directory: ./install.sh
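In shell terms, the three steps above look roughly like this sketch. The release asset URL is a placeholder for whatever the GitHub releases page currently offers, and the exact mechanism for choosing between the two Docker modes (flag, prompt, or environment variable) is not spelled out here.

```bash
# 1. Download the terra-cli.tar install package from the GitHub releases page.
#    The URL below is a placeholder; copy the real asset link from the release.
curl -L -o terra-cli.tar "https://github.com/<org>/terra-cli/releases/download/<version>/terra-cli.tar"

# 2. Unarchive the tar file.
mkdir -p terra-cli-install
tar -xf terra-cli.tar -C terra-cli-install

# 3. Run the install script from the unarchived directory.
cd terra-cli-install
./install.sh
```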