virtual-cluster | Vagrant-based virtual cluster with the Shifter ecosystem | Infrastructure Automation library
kandi X-RAY | virtual-cluster Summary
A vagrant-based virtual cluster with slurm + shifter on top of Ubuntu Trusty.
virtual-cluster Key Features
virtual-cluster Examples and Code Snippets
Community Discussions
Trending Discussions on Infrastructure Automation
QUESTION
I have an RDS DB instance (Aurora PostgreSQL) set up in my AWS account. It was created manually using the AWS Console. I now want to create a CloudFormation YAML template for that DB, which I can use to recreate it later if needed. That would also help me replicate the DB in another environment, and I would use it as part of my infrastructure automation.
...ANSWER
Answered 2020-Jun-05 at 00:59

Unfortunately, there is no such functionality provided by AWS.
However, you may hear about two options that people wrongly recommend.
CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket.
Although it sounds good, the tool is no longer maintained and is not reliable (it has been in beta for years).
Importing Existing Resources Into a Stack
People often mistakenly think that this "generates YAML" for you from existing resources. The truth is that it does not generate template files at all: you have to write your own template that matches your resource exactly before you can bring the resource under the control of a CloudFormation stack.
Your only options are to write the template for the RDS manually and import it, or to look for external tools that can reverse-engineer YAML templates from existing resources.
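If you do write the template by hand, a minimal sketch for an Aurora PostgreSQL cluster might look roughly like this. All identifiers, versions, and sizes below are assumptions for illustration, not values from the question; the properties must be adjusted to match the existing DB exactly before an import will succeed.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Hand-written sketch of an existing Aurora PostgreSQL cluster

Resources:
  # Every resource being imported must carry an explicit DeletionPolicy.
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    DeletionPolicy: Retain
    Properties:
      Engine: aurora-postgresql
      EngineVersion: '11.7'                      # hypothetical version
      DBClusterIdentifier: my-existing-cluster   # hypothetical identifier
      MasterUsername: postgres
      # Dynamic reference keeps the password out of the template
      MasterUserPassword: '{{resolve:secretsmanager:MyDbSecret:SecretString:password}}'

  AuroraInstance:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain
    Properties:
      Engine: aurora-postgresql
      DBInstanceClass: db.r5.large               # hypothetical instance size
      DBClusterIdentifier: !Ref AuroraCluster
```

With a template like this in place, the existing cluster and instance can be brought under stack control via the "Import resources into stack" operation.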
QUESTION
I'm struggling to set up a CI process for a web application in Azure. I'm used to deploying built code directly into Web Apps in Azure but decided to use docker this time.
In the build pipeline, I build the docker images and push them to an Azure Container Registry, tagged with the latest build number. In the release pipeline (which has DEV, TEST and PROD), I need to deploy those images to the Web Apps of each environment. There are 2 relevant tasks available in Azure releases: "Azure App Service deploy" and "Azure Web App for Containers". Neither of these allow the image source for the Web App to be set to Azure Conntainer Registry. Instead they take custom registry/repository names and set the image source in the Web App to Private Registry, which then requires login and password. I'm also deploying all Azure resources using ARM templates so I don't like the idea of configuring credentials when the 2 resources (the Registry and the Web App) are integrated already. Ideally, I would be able to set the Web App to use the repository and tag in Azure Container Registry that I specify in the release. I even tried to manually configure the Web Apps first with specific repositories and tags, and then tried to change the tags used by the Web Apps with the release (with the tasks I mentioned) but it didn't work. The tags stay the same.
Another option I considered was to configure all Web Apps with specific, permanent repositories and tags (e.g. "dev-latest") from the start (which doesn't fit well with ARM deployments, since the containers need to exist in the Registry before the Web Apps can be configured, so my infrastructure automation is incomplete), enable "Continuous Deployment" in the Web Apps, and then tag the latest pushed repositories accordingly in the release so they would be picked up by the Web Apps. However, I could not find a reasonable way to add tags to existing repositories in the Registry.
What is Azure best practice for CI with containerised web apps? How do people actually build their containers and then deploy them to each environment?
...ANSWER
Answered 2020-Mar-16 at 08:59

Just set up a CI pipeline for building an image and pushing it to a container registry.
You can then use either the "Azure App Service deploy" or the "Azure Web App for Containers" task to handle the deployment.
The "Azure Web App for Containers" task, like other built-in Azure tasks, requires an Azure service connection as an input. The Azure service connection stores the credentials needed to connect from Azure Pipelines or Azure DevOps Server to Azure.
I'm also deploying all Azure resources using ARM templates so I don't like the idea of configuring credentials when the 2 resources (the Registry and the Web App)
You can also deploy an Azure Web App for Containers with ARM templates and Azure DevOps.
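Putting the pieces above together, a build-and-deploy flow might be sketched as follows. The service connection names, repository name, and app name are assumptions for illustration, not values from the question; the Azure service connection supplies the registry credentials, so none need to be stored in the Web App.

```yaml
# azure-pipelines.yml (sketch; all names are placeholders)
trigger:
  - main

variables:
  imageTag: $(Build.BuildNumber)

steps:
  # Build the image and push it to Azure Container Registry
  - task: Docker@2
    inputs:
      containerRegistry: my-acr-connection    # Docker registry service connection (hypothetical)
      repository: myapp
      command: buildAndPush
      tags: $(imageTag)

  # Point the Web App at the freshly pushed tag
  - task: AzureWebAppContainer@1
    inputs:
      azureSubscription: my-azure-connection  # ARM service connection (hypothetical)
      appName: myapp-dev
      containers: myregistry.azurecr.io/myapp:$(imageTag)
```

In a multi-stage setup, the second task would be repeated per environment (DEV, TEST, PROD) with the environment's own app name.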
How do people actually build their containers and then deploy them to each environment?
Kindly take a look at the blogs and official documentation below, which may be helpful:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install virtual-cluster
Modify /etc/hosts so that "controller" will know the address of "server" and vice versa.
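For example, assuming the two VMs sit on a typical Vagrant private network (the addresses below are assumptions), the entries on both machines would look like:

```
# /etc/hosts additions on both VMs (addresses are assumptions)
192.168.33.10   controller
192.168.33.11   server
```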
Install various dependencies of slurm and shifter with APT.
Install and configure munge (install-munge.sh): Install munge with APT. Copy /shared-folder/installation/munge.key to /etc/munge and change its owner and permissions. WARNING: a munge.key is already provided in this repository; however, for any use more serious than this insecure toy virtual cluster, a new munge.key should be generated with /usr/sbin/create-munge-key.
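The key-handling part of that step can be sketched as the following shell commands (a sketch of what install-munge.sh does, not its verbatim contents; the round-trip check at the end is an added suggestion):

```shell
# Install munge and deploy the shared key (sketch; assumes Debian/Ubuntu)
sudo apt-get install -y munge
sudo install -o munge -g munge -m 0400 \
    /shared-folder/installation/munge.key /etc/munge/munge.key
sudo service munge restart
# Round-trip check: a credential can be encoded and decoded locally
munge -n | unmunge
```

The 0400 mode matters: munged refuses to start if the key is readable by other users.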
Install and configure slurm (install-slurm.sh): Download, build and install slurm from source. Set up a state save location for slurm in /var/lib/slurm. Set up startup scripts in /etc/init.d/slurm. Create a symlink /etc/slurm.conf to /shared-folder/installation/slurm.conf.
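The symlinked slurm.conf is provided by the repository; a minimal configuration for a two-node setup like this one might look roughly as follows (node names follow the /etc/hosts step; the CPU count and partition name are assumptions):

```
# slurm.conf sketch (not the repository's actual file)
ClusterName=virtual-cluster
ControlMachine=controller
SlurmUser=slurm
StateSaveLocation=/var/lib/slurm
NodeName=server CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=server Default=YES MaxTime=INFINITE State=UP
```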
Install and configure shifter (install-shifter.sh): Build and install udiRoot in /opt/shifter/udiRoot. The most important artifacts installed are the shifter and shifterimg executables, which provide the user interface of shifter, and the plugin for slurm. Copy /shared-folder/installation/udiRoot.conf to /etc/shifter/udiRoot.conf; this file provides most of the configuration details of shifter. Configure slurm to use the shifter plugin via the configuration file /etc/plugstack.conf.

Install the image gateway: Set up a python virtual environment in /opt/shifter/imagegw where the image gateway will run. Create a symlink /etc/shifter/imagemanager.json to /shared-folder/installation/imagemanager.json; this file provides the configuration details of the image gateway. Set up startup scripts in /etc/init.d.
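The slurm plugin hookup described above amounts to a single SPANK plugin line in /etc/plugstack.conf; the exact plugin path below is an assumption based on the /opt/shifter/udiRoot install prefix and may differ in the actual build:

```
# /etc/plugstack.conf — load the shifter SPANK plugin into slurm
required /opt/shifter/udiRoot/lib/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf
```

Once the cluster is up, images are pulled through the image gateway with shifterimg pull, and jobs request one via the plugin, e.g. srun --image=docker:ubuntu:latest.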