Terraform Quick Summary
Terraform is a popular open source Infrastructure as Code (IaC) tool that lets you build, change, and version cloud and on-prem resources safely and efficiently. It automates provisioning of your infrastructure and manages the full lifecycle of all deployed resources, which are defined in source code. Its resource-managing behavior is predictable and reproducible, so you can plan actions in advance and reuse your code configurations for similar infrastructure.
Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.
Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
How does Terraform work?
Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.
You can find all publicly available providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.
The core Terraform workflow consists of three stages:
- Write: You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
- Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
- Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.
Why Terraform?
Manage any infrastructure
Find providers for many of the platforms and services you already use in the Terraform Registry. You can also write your own. Terraform takes an immutable approach to infrastructure, reducing the complexity of upgrading or modifying your services and infrastructure.
Track your infrastructure
Terraform generates a plan and prompts you for your approval before modifying your infrastructure. It also keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.
Automate changes
Terraform configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.
Standardize configurations
Terraform supports reusable configuration components called modules that define configurable collections of infrastructure, saving time and encouraging best practices. You can use publicly available modules from the Terraform Registry, or write your own.
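For illustration, instantiating a module looks like the following sketch; the module path and the droplet_count and region inputs are hypothetical, not taken from any specific Registry module:

```hcl
# Hypothetical module call: "source" points at a local directory or a
# Registry address, and the remaining arguments are the module's inputs.
module "web_cluster" {
  source        = "./modules/web-cluster"
  droplet_count = 3
  region        = "fra1"
}
```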
Collaborate
Since your configuration is written in a file, you can commit it to a Version Control System (VCS) and use Terraform Cloud to efficiently manage Terraform workflows across teams. Terraform Cloud runs Terraform in a consistent, reliable environment and provides secure access to shared state and secret data, role-based access controls, a private registry for sharing both modules and providers, and more.
Prerequisites
- A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. You can find instructions in the DigitalOcean product documents, How to Create a Personal Access Token.
- A password-less SSH key added to your DigitalOcean account, which you can create by following How To Use SSH Keys with DigitalOcean Droplets.
- Terraform installed on your local machine. For instructions according to your operating system, see Step 1 of the How To Use Terraform with DigitalOcean tutorial.
- Python 3 installed on your local machine. You can complete Step 1 of How To Install and Set Up a Local Programming Environment for Python 3 for your OS.
- A fully registered domain name added to your DigitalOcean account. For instructions on how to do that, visit the official docs.
Understanding a Terraform Project’s Structure
A resource is an entity of a cloud service (such as a DigitalOcean Droplet) declared in Terraform code that is created according to specified and inferred properties. Multiple resources, together with their mutual connections, form the infrastructure.
Terraform uses a specialized programming language for defining infrastructure, called HashiCorp Configuration Language (HCL). HCL code is typically stored in files ending with the extension .tf. A Terraform project is any directory that contains .tf files and has been initialized using the init command, which sets up Terraform caches and default local state.
Terraform state is the mechanism by which Terraform keeps track of resources that are actually deployed in the cloud. State is stored in backends (locally on disk or remotely on a file storage cloud service or specialized state management software) for optimal redundancy and reliability.
Project workspaces allow you to have multiple states in the same backend, tied to the same configuration. This allows you to deploy multiple distinct instances of the same infrastructure. Each project starts with a workspace named default—this will be used if you do not explicitly create or switch to another one.
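In CLI terms, workspaces are managed along these lines (the staging name is just an example):

```shell
# Create and switch to a new workspace with its own separate state
terraform workspace new staging

# List workspaces; the active one is marked with an asterisk
terraform workspace list

# Switch back to the default workspace
terraform workspace select default
```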
Modules in Terraform (akin to libraries in other programming languages) are parametrized code containers enclosing multiple resource declarations. They allow you to abstract away a common part of your infrastructure and reuse it later with different inputs.
A Terraform project can also include external code files for use with dynamic data inputs, which can parse the JSON output of a CLI command and offer it for use in resource declarations. In this tutorial, you’ll do this with a Python script.
Simple Structure
A simple structure is suitable for small and testing projects with a few resources of varying types and variables. It has a few configuration files, usually one per resource type (or a few helper files together with a main one), and no custom modules, because most of the resources are unique and there aren't enough of them to generalize and reuse. Most of the code is therefore stored in the same directory, in files placed next to each other. These projects often have a few variables (such as an API key for accessing the cloud) and may use dynamic data inputs and other Terraform and HCL features, though not prominently.
As an example of the file structure of this approach, this is what the project you’ll build in this tutorial will look like in the end:
.
└── tf/
    ├── versions.tf
    ├── variables.tf
    ├── provider.tf
    ├── droplets.tf
    ├── dns.tf
    ├── data-sources.tf
    └── external/
        └── name-generator.py
As this project will deploy an Apache web server Droplet and set up DNS records, the definitions of project variables, the DigitalOcean Terraform provider, the Droplet, and DNS records will be stored in their respective files. The minimum required Terraform and DigitalOcean provider versions will be specified in versions.tf, while the Python script that will generate a name for the Droplet (and be used as a dynamic data source in data-sources.tf) will be stored in the external folder, to separate it from HCL code.
Complex Structure
In contrast to the simple structure, this approach is suitable for large projects, with clearly defined subdirectory structures containing multiple modules of varying levels of complexity, aside from the usual code. These modules can depend on each other. Coupled with version control systems, these projects can make extensive use of workspaces. This approach suits larger projects managing multiple apps while reusing code as much as possible.
Development, staging, quality assurance, and production infrastructure instances can also be housed under the same project in different directories by relying on common modules, thus eliminating duplicate code and making the project the central source of truth. Here is the file structure of an example project with a more complex structure, containing multiple deployment apps, Terraform modules, and target cloud environments:
.
└── tf/
    ├── modules/
    │   ├── network/
    │   │   ├── main.tf
    │   │   ├── dns.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── spaces/
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── applications/
        ├── backend-app/
        │   ├── env/
        │   │   ├── dev.tfvars
        │   │   ├── staging.tfvars
        │   │   ├── qa.tfvars
        │   │   └── production.tfvars
        │   └── main.tf
        └── frontend-app/
            ├── env/
            │   ├── dev.tfvars
            │   ├── staging.tfvars
            │   ├── qa.tfvars
            │   └── production.tfvars
            └── main.tf
In the next steps, you’ll create a project with a simple structure that will provision a Droplet with an Apache web server installed and DNS records set up for your domain. You’ll first initialize your project with the DigitalOcean provider and variables, and then proceed to define the Droplet, a dynamic data source to provide its name, and a DNS record for deployment.
Step 1 — Setting Up Your Initial Project
In this section, you'll add the DigitalOcean Terraform provider to your project, define the project variables, and declare a DigitalOcean provider instance, so that Terraform will be able to connect to your account.
Start off by creating a directory for your Terraform project with the following command:
mkdir ~/apache-droplet-terraform
Navigate to it:
cd ~/apache-droplet-terraform
Since this project will follow the simple structuring approach, you'll store the provider, variables, Droplet, and DNS record code in separate files, per the file structure from the previous section. First, you'll need to add the DigitalOcean Terraform provider to your project as a required provider.
Create a file named versions.tf and open it for editing by running:
nano versions.tf
Add the following lines:
~/apache-droplet-terraform/versions.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
In this terraform block, you list the required providers (DigitalOcean, version 2.x). When you are done, save and close the file.
Then, define the variables your project will expose in the variables.tf file, following the approach of storing different resource types in separate code files:
nano variables.tf
Add the following variables:
~/apache-droplet-terraform/variables.tf
variable "do_token" {}
variable "domain_name" {}
Save and close the file.
The do_token variable will hold your DigitalOcean Personal Access Token and domain_name will specify your desired domain name. The deployed Droplet will have your SSH key, identified by its name in your account, automatically installed.
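As a side note, Terraform can also read values for declared variables from environment variables prefixed with TF_VAR_, which spares you from passing -var flags on every run. A sketch with placeholder values:

```shell
# Terraform automatically picks up TF_VAR_<variable_name> from the
# environment; these values are placeholders for your own.
export TF_VAR_do_token="your_do_api_token"
export TF_VAR_domain_name="your_domain"
```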
Next, let’s define the DigitalOcean provider instance for this project. You’ll store it in a file named provider.tf. Create and open it for editing by running:
nano provider.tf
Add the provider:
~/apache-droplet-terraform/provider.tf
provider "digitalocean" {
  token = var.do_token
}
Save and exit when you're done. You've defined the digitalocean provider, which corresponds to the required provider you specified earlier in versions.tf, and set its token to the value of the variable, which will be supplied at runtime.
In this step, you have created a directory for your project, requested the DigitalOcean provider to be available, declared project variables, and set up the connection to a DigitalOcean provider instance to use an authentication token that will be provided later.
Step 2 — Creating a Python Script for Dynamic Data
Before continuing on to defining the Droplet, you’ll create a Python script that will generate the Droplet’s name dynamically and declare a data source resource to parse it. The name will be generated by concatenating a constant string (web) with the current time of the local machine, expressed in the UNIX epoch format. A naming script can be useful when multiple Droplets are generated according to a naming scheme, in order to easily differentiate between them.
You’ll store the script in a file named name-generator.py, in a directory named external. First, create the directory by running:
mkdir external
The external directory resides in the root of your project and will store non-HCL code files, like the Python script you’ll write.
Create name-generator.py under external and open it for editing:
nano external/name-generator.py
Add the following code:
external/name-generator.py
import json, time

fixed_name = "web"
result = {
    "name": f"{fixed_name}-{int(time.time())}",
}
print(json.dumps(result))
This Python script imports the json and time modules, declares a dictionary named result, and sets the value of the name key to an interpolated string that combines fixed_name with the current UNIX time of the machine it runs on. The result is then serialized to JSON and printed to stdout. The output will be different each time the script is run:
Output
{"name": "web-1597747959"}
When you’re done, save and close the file.
Note: Large and complex structured projects require more thought put into how external data sources are created and used, especially in terms of portability and error handling. Terraform expects the executed program to write a human-readable error message to stderr and gracefully exit with a non-zero status, which is something not shown in this step because of the simplicity of the task. Additionally, it expects the program to have no side effects, so that it can be re-run as many times as needed.
For more info on what Terraform expects, visit the official docs on data sources.
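A hedged sketch of what a more defensive version of such a script could look like; the generate_name helper and its validation are illustrative additions, not part of this tutorial's script:

```python
import json
import sys
import time


def generate_name(prefix):
    # Terraform's external data source protocol requires a JSON object
    # whose keys and values are all strings, printed to stdout.
    if not prefix:
        raise ValueError("prefix must be a non-empty string")
    return {"name": f"{prefix}-{int(time.time())}"}


if __name__ == "__main__":
    try:
        print(json.dumps(generate_name("web")))
    except ValueError as err:
        # On failure, Terraform expects a human-readable message on
        # stderr and a non-zero exit status.
        print(str(err), file=sys.stderr)
        sys.exit(1)
```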
Now that the script is ready, you can define the data source, which will pull the data from the script. You’ll store the data source in a file named data-sources.tf in the root of your project as per the simple structuring approach.
Create it for editing by running:
nano data-sources.tf
Add the following definition:
~/apache-droplet-terraform/data-sources.tf
data "external" "droplet_name" {
  program = ["python3", "${path.module}/external/name-generator.py"]
}
Save and close the file.
This data source, called droplet_name, uses Python 3 to execute the name-generator.py script that resides in the external directory you just created. Terraform automatically parses the script's output and provides the deserialized data under the data source's result attribute for use within other resource definitions.
With the data source now declared, you can define the Droplet that Apache will run on.
Step 3 — Defining the Droplet
In this step, you’ll write the definition of the Droplet resource and store it in a code file dedicated to Droplets, as per the simple structuring approach. Its name will come from the dynamic data source you have just created, and will be different each time it’s deployed.
Create and open the droplets.tf file for editing:
nano droplets.tf
Add the following Droplet resource definition:
~/apache-droplet-terraform/droplets.tf
data "digitalocean_ssh_key" "ssh_key" {
  name = "your_ssh_key_name"
}

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-20-04-x64"
  name   = data.external.droplet_name.result.name
  region = "fra1"
  size   = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.ssh_key.id
  ]
}
You first declare a DigitalOcean SSH key data source called ssh_key, which will fetch a key from your account by its name. Make sure to replace your_ssh_key_name with the name of your SSH key.
Then, you declare a Droplet resource called web. Its actual name in the cloud will be different, because it's being requested from the droplet_name external data source. To bootstrap the Droplet resource with an SSH key each time it's deployed, the ID of the ssh_key is passed into the ssh_keys parameter, so that DigitalOcean will know which key to apply.
For now, this is all you need to configure in droplets.tf, so save and close the file when you're done.
You’ll now write the configuration for the DNS record that will point your domain to the just declared Droplet.
Step 4 — Defining DNS Records
The last step in the process is to configure the DNS record pointing to the Droplet from your domain.
You’ll store the DNS config in a file named dns.tf, because it’s a separate resource type from the others you have created in the previous steps. Create and open it for editing:
nano dns.tf
Add the following lines:
~/apache-droplet-terraform/dns.tf
resource "digitalocean_record" "www" {
  domain = var.domain_name
  type   = "A"
  name   = "@"
  value  = digitalocean_droplet.web.ipv4_address
}
This code declares a DigitalOcean DNS record of type A at your domain name (passed in using the variable). The record has a name of @, a placeholder that routes to the domain itself, and the Droplet's IP address as its value. You can replace the name value with something else, which will result in a subdomain being created.
When you’re done, save and close the file.
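For illustration only (not part of this tutorial's configuration), a record with a different name would create a subdomain:

```hcl
# Hypothetical extra record: would create blog.your_domain pointing
# at the same Droplet. Not part of this tutorial's dns.tf.
resource "digitalocean_record" "blog" {
  domain = var.domain_name
  type   = "A"
  name   = "blog"
  value  = digitalocean_droplet.web.ipv4_address
}
```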
Now that you’ve configured the Droplet, the name generator data source, and a DNS record, you’ll move on to deploying the project in the cloud.
Step 5 — Planning and Applying the Configuration
In this section, you’ll initialize your Terraform project, deploy it to the cloud, and check that everything was provisioned correctly.
Now that the project infrastructure is defined completely, all that is left to do before deploying it is to initialize the Terraform project. Do so by running the following command:
terraform init
You’ll receive the following output:
Output
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Finding latest version of hashicorp/external...
- Installing digitalocean/digitalocean v2.10.1...
- Installed digitalocean/digitalocean v2.10.1 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
- Installing hashicorp/external v2.1.0...
- Installed hashicorp/external v2.1.0 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
You’ll now be able to deploy your Droplet with a dynamically generated name and an accompanying domain to your DigitalOcean account.
Start by defining the domain name and your personal access token as environment variables, so you won't have to copy the values each time you run Terraform. Run the following commands, replacing the placeholder values with your own:
export DO_PAT="your_do_api_token"
export DO_DOMAIN_NAME="your_domain"
You can find your API token in your DigitalOcean Control Panel.
Run the plan command with the variable values passed in to see what steps Terraform would take to deploy your project:
terraform plan -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"
The output will be similar to the following:
Output
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-20-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-1625908814"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = [
          + "...",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_record.www will be created
  + resource "digitalocean_record" "www" {
      + domain = "your_domain"
      + fqdn   = (known after apply)
      + id     = (known after apply)
      + name   = "@"
      + ttl    = (known after apply)
      + type   = "A"
      + value  = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.
...
The lines starting with a green + signify resources that Terraform will create, which is exactly what should happen, so you can apply the configuration:
terraform apply -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"
The output will be the same as before, except that this time you’ll be asked to confirm:
Output
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
Enter yes, and Terraform will provision your Droplet and the DNS record:
Output
digitalocean_droplet.web: Creating...
...
digitalocean_droplet.web: Creation complete after 33s [id=204432105]
digitalocean_record.www: Creating...
digitalocean_record.www: Creation complete after 1s [id=110657456]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Terraform has now recorded the deployed resources in its state. To confirm that the DNS records and the Droplet were connected successfully, you can extract the IP address of the Droplet from the local state and check if it matches public DNS records for your domain. Run the following command to get the IP address:
terraform show | grep "ipv4"
You’ll receive your Droplet’s IP address:
Output
ipv4_address = "your_Droplet_IP"
...
You can check the public A records by running:
nslookup -type=a your_domain | grep "Address" | tail -1
The output will show the IP address to which the A record points:
Output
Address: your_Droplet_IP
They are the same, as they should be, meaning that the Droplet and DNS record were provisioned successfully.
For the changes in the next step to take place, destroy the deployed resources by running:
terraform destroy -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}"
When prompted, enter yes to continue.
In this step, you have created your infrastructure and applied it to your DigitalOcean account. You'll now modify it to automatically install the Apache web server on the provisioned Droplet using Terraform provisioners.
Step 6 — Running Code Using Provisioners
Now you'll set up the installation of the Apache web server on your deployed Droplet by using the remote-exec provisioner to execute custom commands.
Terraform provisioners can be used to execute specific actions on created remote resources (the remote-exec provisioner) or on the local machine the code is executing on (the local-exec provisioner). If a provisioner fails, the node will be marked as tainted in the current state, which means that it will be deleted and recreated during the next run.
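If you do not want a failed provisioner to taint the resource, the provisioner's on_failure argument can be set to continue. A minimal sketch (not part of this tutorial's configuration):

```hcl
# Sketch: by default on_failure = fail, which taints the resource on
# error. Setting it to continue ignores the error and keeps the resource.
provisioner "remote-exec" {
  inline     = ["apt update"]
  on_failure = continue
}
```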
To connect to a provisioned Droplet, Terraform needs the private SSH key of the one set up on the Droplet. The best way to pass in the location of the private key is by using variables, so open variables.tf for editing:
nano variables.tf
Add the new private_key variable definition:
~/apache-droplet-terraform/variables.tf
variable "do_token" {}
variable "domain_name" {}
variable "private_key" {}
You have now added a new variable, called private_key, to your project. Save and close the file.
Next, you’ll add the connection data and remote provisioner declarations to your Droplet configuration. Open droplets.tf for editing by running:
nano droplets.tf
Extend the existing code with the new connection and provisioner blocks:
~/apache-droplet-terraform/droplets.tf
data "digitalocean_ssh_key" "ssh_key" {
  name = "your_ssh_key_name"
}

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-20-04-x64"
  name   = data.external.droplet_name.result.name
  region = "fra1"
  size   = "s-1vcpu-1gb"
  ssh_keys = [
    data.digitalocean_ssh_key.ssh_key.id
  ]

  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file(var.private_key)
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # Install Apache
      "apt update",
      "apt -y install apache2"
    ]
  }
}
The connection block specifies how Terraform should connect to the target Droplet. The provisioner block contains the array of commands, within the inline parameter, that it will execute after provisioning: updating the package manager cache and installing Apache. Save and exit when you're done.
You can create a temporary environment variable for the private key path as well:
export DO_PRIVATE_KEY="private_key_location"
Note: The private key, and any other file that you wish to load from within Terraform, must be placed within the project. You can see the How To Configure SSH Key-Based Authentication on a Linux Server tutorial for more info regarding SSH key set up on Ubuntu 20.04 or other distributions.
Try applying the configuration again:
terraform apply -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}" -var "private_key=${DO_PRIVATE_KEY}"
Enter yes when prompted. You'll receive output similar to before, followed by lengthy output from the remote-exec provisioner:
Output
digitalocean_droplet.web: Creating...
digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Provisioning with 'remote-exec'...
digitalocean_droplet.web (remote-exec): Connecting to remote host via SSH...
digitalocean_droplet.web (remote-exec): Host: ...
digitalocean_droplet.web (remote-exec): User: root
digitalocean_droplet.web (remote-exec): Password: false
digitalocean_droplet.web (remote-exec): Private key: true
digitalocean_droplet.web (remote-exec): Certificate: false
digitalocean_droplet.web (remote-exec): SSH Agent: false
digitalocean_droplet.web (remote-exec): Checking Host Key: false
digitalocean_droplet.web (remote-exec): Connected!
...
digitalocean_droplet.web: Creation complete after 1m5s [id=204442200]
digitalocean_record.www: Creating...
digitalocean_record.www: Creation complete after 1s [id=110666268]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
You can now navigate to your domain in a web browser. You will see the default Apache welcome page.
This means that Apache was installed successfully, and that Terraform provisioned everything correctly.
To destroy the deployed resources, run the following command and enter yes when prompted:
terraform destroy -var "do_token=${DO_PAT}" -var "domain_name=${DO_DOMAIN_NAME}" -var "private_key=${DO_PRIVATE_KEY}"
You have now completed a small Terraform project with a simple structure that deploys the Apache web server on a Droplet and sets up DNS records for the desired domain.
Conclusion
For reference, here is the file structure of the project you created in this tutorial:
.
└── tf/
    ├── versions.tf
    ├── variables.tf
    ├── provider.tf
    ├── droplets.tf
    ├── dns.tf
    ├── data-sources.tf
    └── external/
        └── name-generator.py
The resources you defined (the Droplet, the DNS record and dynamic data source, the DigitalOcean provider, and variables) are each stored in their own separate file, according to the simple project structure outlined in the first section of this tutorial.
Source: Savic, DigitalOcean, "How To Structure a Terraform Project"
How To Create Reusable Infrastructure with Terraform Modules and Templates
Introduction
One of the main benefits of Infrastructure as Code (IaC) is reusing parts of the defined infrastructure. In Terraform, you can use modules to encapsulate logically connected components into one entity and customize them using input variables you define. By using modules to define your infrastructure at a high level, you can separate development, staging, and production environments by passing in different values to the same modules, which minimizes code duplication and maximizes conciseness.
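As a sketch of that idea (the module path and the droplet_count and name_prefix inputs are hypothetical):

```hcl
# The same module instantiated once per environment, with only the
# inputs differing between the two calls.
module "dev" {
  source        = "./modules/droplet-lb"
  droplet_count = 1
  name_prefix   = "dev"
}

module "prod" {
  source        = "./modules/droplet-lb"
  droplet_count = 3
  name_prefix   = "prod"
}
```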
Terraform Registry is integrated into Terraform and lists modules and providers that you can incorporate in your project right away by defining them in the required_providers section. Referencing public modules can speed up your workflow and reduce code duplication. If you have a useful module and would like to share it with the world, you can look into publishing it on the Registry for other developers to use.
In this tutorial, you’ll explore some of the ways to define and reuse code in Terraform projects. You’ll reference modules from the Terraform Registry, separate development and production environments using modules, learn about templates and how they are used, and specify resource dependencies explicitly using the depends_on meta argument.
Prerequisites
- A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. You can find instructions in the DigitalOcean product documents, How to Create a Personal Access Token.
- Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial and be sure to name the project folder terraform-reusability, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.
- The droplet-lb module available under modules in terraform-reusability. Follow the How to Build a Custom Module tutorial and work through it until the droplet-lb module is functionally complete. (That is, until the cd ../.. command in the Creating a Module section.)
- Knowledge of Terraform project structuring approaches. For more information, see our tutorial How To Structure a Terraform Project.
- (Optional) Two separate domains whose nameservers are pointed to DigitalOcean at your registrar. Your domains must not yet be added to your DigitalOcean account. Refer to the How To Point to DigitalOcean Nameservers From Common Domain Registrars tutorial to set this up. Note that you don't need to do this if you don't plan on deploying the project you'll create through this tutorial.
Note: This tutorial has been tested using Terraform 1.0.2.
Separating Development and Production Environments
In this section, you’ll use modules to separate your target deployment environments. You’ll arrange these according to the structure of a more complex project. You’ll create a project with two modules: one will define the Droplets and Load Balancers, and the other will set up the DNS domain records. Afterward, you’ll write configuration for two different environments (dev and prod), which will call the same modules.
Creating the dns-records Module
As part of the prerequisites, you set up the initial project under terraform-reusability and created the droplet-lb module in its own subdirectory under modules. You’ll now set up the second module, called dns-records, containing variables, outputs, and resource definitions. From the terraform-reusability directory, create dns-records by running:
· mkdir modules/dns-records
·
Navigate to it:
· cd modules/dns-records
·
This module will contain the definitions for your domain and the DNS records that you’ll later point to the Load Balancers. You’ll first define the variables, which will become inputs that this module will expose. You’ll store them in a file called variables.tf. Create it for editing:
· nano variables.tf
·
Add the following variable definitions:
terraform-reusability/modules/dns-records/variables.tf
variable "domain_name" {}
variable "ipv4_address" {}
Save and close the file. You’ll now define the domain and the accompanying A and CNAME records in a file named records.tf. Create and open it for editing by running:
· nano records.tf
·
Add the following resource definitions:
terraform-reusability/modules/dns-records/records.tf
resource "digitalocean_domain" "domain" {
  name = var.domain_name
}

resource "digitalocean_record" "domain_A" {
  domain = digitalocean_domain.domain.name
  type   = "A"
  name   = "@"
  value  = var.ipv4_address
}

resource "digitalocean_record" "domain_CNAME" {
  domain = digitalocean_domain.domain.name
  type   = "CNAME"
  name   = "www"
  value  = "@"
}
First, you add the domain name to your DigitalOcean account. DigitalOcean will automatically add its three nameservers as NS records. The domain name you supply to Terraform must not already be present in your DigitalOcean account, or Terraform will show an error during infrastructure creation.
Then, you define an A record for your domain, routing it (the @ as the record name signifies the apex domain, without subdomains) to the IP address supplied via the ipv4_address variable. The actual IP address will be passed in when you instantiate the module. For the sake of completeness, the CNAME record that follows specifies that the www subdomain should also point to the same domain. Save and close the file when you’re done.
Next, you’ll define the outputs for this module. The outputs will show the FQDN (fully qualified domain name) of the created records. Create and open outputs.tf for editing:
· nano outputs.tf
·
Add the following lines:
terraform-reusability/modules/dns-records/outputs.tf
output "A_fqdn" {
  value = digitalocean_record.domain_A.fqdn
}

output "CNAME_fqdn" {
  value = digitalocean_record.domain_CNAME.fqdn
}
Save and close the file when you’re done.
With the variables, DNS records, and outputs defined, the last thing you’ll need to specify are the provider requirements for this module. You’ll specify that the dns-records module requires the digitalocean provider in a file called provider.tf. Create and open it for editing:
· nano provider.tf
·
Add the following lines:
terraform-reusability/modules/dns-records/provider.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
When you’re done, save and close the file. Now that the digitalocean provider has been defined, the dns-records module is functionally complete.
Creating Different Environments
The current structure of the terraform-reusability project will look similar to this:
terraform-reusability/
├─ modules/
│ ├─ dns-records/
│ │ ├─ outputs.tf
│ │ ├─ provider.tf
│ │ ├─ records.tf
│ │ ├─ variables.tf
│ ├─ droplet-lb/
│ │ ├─ droplets.tf
│ │ ├─ lb.tf
│ │ ├─ outputs.tf
│ │ ├─ provider.tf
│ │ ├─ variables.tf
├─ provider.tf
So far, you have two modules in your project: the one you just created (dns-records) and the one you created as part of the prerequisites (droplet-lb).
To facilitate different environments, you’ll store the dev and prod environment config files under a directory called environments, which will reside in the root of the project. Both environments will call the same two modules, but with different parameter values. The advantage of this is when the modules change internally in the future, you’ll only need to update the values you are passing in.
First, navigate to the root of the project by running:
· cd ../..
·
Then, create the dev and prod directories under environments at the same time:
· mkdir -p environments/dev && mkdir environments/prod
·
The -p argument instructs mkdir to create all directories in the given path, including any missing parent directories.
Navigate to the dev directory, as you’ll first configure that environment:
· cd environments/dev
·
You’ll store the code in a file named main.tf, so create it for editing:
· nano main.tf
·
Add the following lines:
terraform-reusability/environments/dev/main.tf
module "droplets" {
  source        = "../../modules/droplet-lb"
  droplet_count = 2
  group_name    = "dev"
}

module "dns" {
  source       = "../../modules/dns-records"
  domain_name  = "your_dev_domain"
  ipv4_address = module.droplets.lb_ip
}
Here you call and configure the two modules, droplet-lb and dns-records, which will together result in the creation of two Droplets. They’re fronted by a Load Balancer, and the DNS records for the supplied domain are set up to point to that Load Balancer. Remember to replace your_dev_domain with your desired domain name for the dev environment, then save and close the file.
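The dns module consumes the lb_ip output of the droplet-lb module. For reference, in the droplet-lb module from the prerequisite tutorial, that output is declared along these lines (a sketch; the www-lb resource name follows the prerequisite tutorial, so adjust it if yours differs):

```hcl
# In modules/droplet-lb/outputs.tf: expose the Load Balancer's
# IP address so calling configurations can reference it.
output "lb_ip" {
  value = digitalocean_loadbalancer.www-lb.ip
}
```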
Next, you’ll configure the DigitalOcean provider and create a variable for it to be able to accept the personal access token you’ve created as part of the prerequisites. Open a new file, called provider.tf, for editing:
· nano provider.tf
·
Add the following lines:
terraform-reusability/environments/dev/provider.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}
In this code, you require the digitalocean provider to be available and to pass in the do_token variable to its instance. Save and close the file.
Initialize the configuration by running:
· terraform init
·
You’ll receive the following output:
Output
Initializing modules...
- dns in ../../modules/dns-records
- droplets in ../../modules/droplet-lb
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.10.1...
- Installed digitalocean/digitalocean v2.10.1 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
The configuration for the prod environment is similar. Navigate to its directory by running:
· cd ../prod
·
Create and open main.tf for editing:
· nano main.tf
·
Add the following lines:
terraform-reusability/environments/prod/main.tf
module "droplets" {
  source        = "../../modules/droplet-lb"
  droplet_count = 5
  group_name    = "prod"
}

module "dns" {
  source       = "../../modules/dns-records"
  domain_name  = "your_prod_domain"
  ipv4_address = module.droplets.lb_ip
}
The difference between this and your dev code is that there will be five Droplets deployed. Furthermore, the domain name, which you should replace with your prod domain name, will be different. Save and close the file when you’re done.
Then, copy over the provider configuration from dev:
· cp ../dev/provider.tf .
·
Initialize this configuration as well:
· terraform init
·
The output of this command will be the same as the previous time you ran it.
You can try planning the configuration to see what resources Terraform would create by running:
· terraform plan -var "do_token=${DO_PAT}"
·
The output for prod will be the following:
Output
...
Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.dns.digitalocean_domain.domain will be created
+ resource "digitalocean_domain" "domain" {
+ id = (known after apply)
+ name = "your_prod_domain"
+ urn = (known after apply)
}
# module.dns.digitalocean_record.domain_A will be created
+ resource "digitalocean_record" "domain_A" {
+ domain = "your_prod_domain"
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "@"
+ ttl = (known after apply)
+ type = "A"
+ value = (known after apply)
}
# module.dns.digitalocean_record.domain_CNAME will be created
+ resource "digitalocean_record" "domain_CNAME" {
+ domain = "your_prod_domain"
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "www"
+ ttl = (known after apply)
+ type = "CNAME"
+ value = "@"
}
# module.droplets.digitalocean_droplet.droplets[0] will be created
+ resource "digitalocean_droplet" "droplets" {
...
+ name = "prod-0"
...
}
# module.droplets.digitalocean_droplet.droplets[1] will be created
+ resource "digitalocean_droplet" "droplets" {
...
+ name = "prod-1"
...
}
# module.droplets.digitalocean_droplet.droplets[2] will be created
+ resource "digitalocean_droplet" "droplets" {
...
+ name = "prod-2"
...
}
# module.droplets.digitalocean_droplet.droplets[3] will be created
+ resource "digitalocean_droplet" "droplets" {
...
+ name = "prod-3"
...
}
# module.droplets.digitalocean_droplet.droplets[4] will be created
+ resource "digitalocean_droplet" "droplets" {
...
+ name = "prod-4"
...
}
# module.droplets.digitalocean_loadbalancer.www-lb will be created
+ resource "digitalocean_loadbalancer" "www-lb" {
...
+ name = "lb-prod"
...
Plan: 9 to add, 0 to change, 0 to destroy.
...
This would deploy five Droplets with a Load Balancer. It would also create the prod domain you specified with the two DNS records pointing to the Load Balancer. You can try planning the configuration for the dev environment as well—you’ll note that two Droplets would be planned for deployment.
Note: You can apply this configuration for the dev and prod environments with the following command:
· terraform apply -var "do_token=${DO_PAT}"
·
To destroy it, run the following command and input yes when prompted:
· terraform destroy -var "do_token=${DO_PAT}"
·
The following demonstrates how you have structured this project:
terraform-reusability/
├─ environments/
│ ├─ dev/
│ │ ├─ main.tf
│ │ ├─ provider.tf
│ ├─ prod/
│ │ ├─ main.tf
│ │ ├─ provider.tf
├─ modules/
│ ├─ dns-records/
│ │ ├─ outputs.tf
│ │ ├─ provider.tf
│ │ ├─ records.tf
│ │ ├─ variables.tf
│ ├─ droplet-lb/
│ │ ├─ droplets.tf
│ │ ├─ lb.tf
│ │ ├─ outputs.tf
│ │ ├─ provider.tf
│ │ ├─ variables.tf
├─ provider.tf
The addition is the environments directory, which holds the code for the dev and prod environments.
The benefit of this approach is that further changes to modules automatically propagate to all areas of your project. Barring any possible customizations to module inputs, this approach is not repetitive and promotes reusability as much as possible, even across deployment environments. Overall, this reduces clutter and allows you to trace the modifications using a version-control system.
In the final two sections of this tutorial, you’ll review the depends_on meta argument and the templatefile function.
Declaring Dependencies to Build Infrastructure in Order
While planning actions, Terraform automatically tries to identify existing dependencies and builds them into its dependency graph. The main dependencies it can detect are clear references; for example, when an output value of a module is passed to a parameter on another resource. In this scenario, the module must first complete its deployment to provide the output value.
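The dev and prod configurations above contain exactly such a reference: passing module.droplets.lb_ip into the dns module tells Terraform to create the Load Balancer before the DNS records. The same mechanism applies between plain resources. The following is a sketch with hypothetical resource names, assuming the DigitalOcean provider:

```hcl
resource "digitalocean_droplet" "web" {
  name   = "web-1"
  image  = "ubuntu-20-04-x64"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}

# Implicit dependency: referencing the Droplet's id attribute is
# enough for Terraform to schedule the Droplet before the firewall.
resource "digitalocean_firewall" "web" {
  name        = "web-firewall"
  droplet_ids = [digitalocean_droplet.web.id]
}
```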
The dependencies that Terraform can’t detect are hidden: they involve side effects and mutual references that are not inferable from the code. An example is when an object depends not on the existence of another object, but on its behavior, without accessing its attributes in code. To overcome this, you can use depends_on to specify the dependencies explicitly. Since Terraform 0.13, you can also use depends_on on modules, forcing the listed resources to be fully deployed before the module itself is deployed. You can use the depends_on meta argument with every resource type, and it accepts a list of other resources on which the specified resource depends.
depends_on accepts a list of references to other resources. Its syntax looks like this:
resource "resource_type" "res" {
  depends_on = [...] # List of resources

  # Parameters...
}
Remember that you should only use depends_on as a last-resort option. If used, it should be kept well documented, because the behavior that the resources depend on may not be immediately obvious.
Terraform is able to detect the references made from the code you’ve written, and will schedule the resources for deployment accordingly.
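As a concrete sketch, the environment configurations above could force the dns module to wait for a hypothetical project resource, even though nothing inside the module references it (the digitalocean_project resource and its use here are illustrative assumptions):

```hcl
resource "digitalocean_project" "playground" {
  name = "playground"
}

module "dns" {
  source       = "../../modules/dns-records"
  domain_name  = "your_dev_domain"
  ipv4_address = module.droplets.lb_ip

  # Explicit dependency: the module never references the project,
  # so Terraform cannot infer this ordering on its own.
  depends_on = [digitalocean_project.playground]
}
```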
Using Templates for Customization
In Terraform, templating is substituting results of expressions in appropriate places, such as when setting attribute values on resources or constructing strings. You’ve used it in the previous steps and the tutorial prerequisites to dynamically generate Droplet names and other parameter values.
When substituting values in strings, the values are specified and surrounded by ${}. Template substitution is often used in loops to facilitate customization of the created resources. It also allows for module customization by substituting inputs in resource attributes.
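For instance, the droplet-lb module from the prerequisites relies on this substitution to generate distinct Droplet names. A simplified sketch, assuming the module's group_name and droplet_count inputs:

```hcl
resource "digitalocean_droplet" "droplets" {
  count  = var.droplet_count
  image  = "ubuntu-20-04-x64"
  region = "fra1"
  size   = "s-1vcpu-1gb"

  # ${} substitutes the group name and the Droplet's index into
  # the string, producing names like dev-0, dev-1, prod-0, and so on.
  name = "${var.group_name}-${count.index}"
}
```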
Terraform offers the templatefile function, which accepts two arguments: the file on disk to read and a map of variables paired with their values. The value it returns is the contents of the file rendered with the expressions substituted, just as Terraform would normally do when planning or applying the project. Because functions are not part of the dependency graph, the file cannot be dynamically generated from another part of the project.
Imagine that the contents of the template file called droplets.tmpl is as follows:
%{ for address in addresses ~}
${address}:80
%{ endfor ~}
Directives, such as the for and endfor declarations that mark the start and end of the for loop, are surrounded with %{}. The contents and type of the addresses variable are not known until the function is called and actual values are provided, like so:
templatefile("${path.module}/droplets.tmpl", { addresses = ["192.168.0.1", "192.168.1.1"] })
This templatefile call will return the following value:
Output
192.168.0.1:80
192.168.1.1:80
This function has its use cases, but they are uncommon. For example, you could use it when part of the configuration must exist in a proprietary format, but is dependent on the rest of the values and must be generated dynamically.
Source: Savic, DigitalOcean.com, "How To Create Reusable Infrastructure with Terraform Modules and Templates"
What are Terraform Templates? Basics and Use Cases
IaC has introduced consistency, improved speed of provisioning, reduced manual maintenance efforts and thus minimized risks.
Terraform Templates
Terraform provisions infrastructure resources. It helps create virtual machines, network components, databases, etc., to support the application architecture. Virtual resources often need additional configuration files – in various formats – to function.
Terraform templates provide a way to create these files in the desired format on the target resource. Virtual resources are purpose-driven. To accomplish a certain task, they are configured in a certain way. A slight difference in a given configuration may mean a world of difference in their purpose.
Managing the infrastructure as code also includes managing these core configurations on the resources. Otherwise, management of these configurations happens manually. When the infrastructure scales, it is desirable to rely on certain template files that help us configure the target resource correctly. Terraform templates implement just this.
If you need any help managing your Terraform infrastructure, building more complex workflows based on Terraform, and managing AWS credentials per run, instead of using a static pair on your local machine, Spacelift is a fantastic tool for this. It supports Git workflows, policy as code, programmatic configuration, context sharing, drift detection, and many more great features right out of the box.
Benefits
Terraform offers a way to package a group of Terraform scripts as modules. Modules are reusable infrastructure components based on which additional customized infrastructure components are built.
Modules offer a way to customize the included components using input variables. Input variables provide a way to control aspects like scale, range, type, etc., for infrastructure to be provisioned.
Terraform templates are a great way to further extend the flexibility provided by modules. Templates manage the configuration and data files, enabling granular control and making modules more flexible and reusable.
Like modules, templates use input variables to shape the actual config or data files on the target resource.
Key Concepts
Terraform templating combines a few features: the templatefile function, string literals, and the template files themselves.
templatefile function
The templatefile() function accepts a couple of inputs:
- A template file (*.tftpl)
- A map of variables, whose values may be strings, lists, or maps
The template file contains UTF-8 encoded text representing the desired output format. For example, configuration files required by applications come in different formats. These files support Terraform’s string literals, which substitute the actual values supplied in variables.
For the final configuration, variables provide a way to set values in these template files dynamically. Before runtime, make sure that the template files are available on disk.
File provisioner
File provisioners provide a way to copy the files from the machine where Terraform code executes to the target resource. The source attribute specifies the file’s source path on the host machine, and the destination attribute specifies the desired path where the file needs to be copied on target.
provisioner "file" {
source = "./app.conf"
destination = "/etc/app.conf"
}
jsonencode and yamlencode functions
If the file being generated from the template is JSON or YAML, formatting it by hand for validity can become quite tedious, and the chances of error are high if the files are not formatted properly.
Use the jsonencode and yamlencode functions in the template file to easily produce an output file in the respective format.
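For example, a template that renders YAML can wrap the whole structure in yamlencode instead of hand-formatting the indentation. A sketch, using a hypothetical app.yaml.tftpl with assumed variable names:

```hcl
${yamlencode({
  app = {
    name     = app_name
    replicas = replica_count
  }
})}
```

Rendering it with templatefile("app.yaml.tftpl", { app_name = "demo", replica_count = 2 }) yields valid YAML without any manual indentation.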
Create user_data with Terraform
AWS EC2 instances can run shell/bash scripts when they boot. These scripts perform essential tasks like updating packages, creating environment variables, and installing patches. They are supplied at EC2 creation in the form of user_data.
Given their nature, these scripts can get very complex. However, once a script is tested, it may be used multiple times with slightly different values. For example, we may need to provide a different set of environment variables for two different sets of EC2 instances.
Using Terraform, we can create a template for such a script, using string literals to supply variable values dynamically. In the example below, the script creates a directory, changes into it, and creates a file within it with a given name.
#!/bin/sh
sudo mkdir ${request_id}
cd ${request_id}
sudo touch ${name}.txt
Let the name of the above template file be script.tftpl. String literals (${...}) represent the variables. To add this script as user_data for an EC2 instance, we use the templatefile() function as below.
resource "aws_instance" "demo_vm" {
  ami           = var.ami
  instance_type = var.type
  user_data     = templatefile("script.tftpl", { request_id = "REQ000129834", name = "John" })
  key_name      = "tftemplate"

  tags = {
    name = "Demo VM"
    type = "Templated"
  }
}
We have passed the template file name (script.tftpl) as the first parameter, and a map object with request_id and name key-value pairs for substitution. Creating an EC2 instance with this user_data script runs as expected when the Terraform code is applied.
Please note that both the template and the Terraform code files are located in the same directory. Further, Terraform variables can be used to create larger map objects for easier management of the supplied values.
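As a hypothetical sketch of that pattern, the inline map can be replaced with a Terraform variable, so the substitution values are managed in one place:

```hcl
variable "user_data_vars" {
  type = map(string)
  default = {
    request_id = "REQ000129834"
    name       = "John"
  }
}

resource "aws_instance" "demo_vm" {
  ami           = var.ami
  instance_type = var.type
  key_name      = "tftemplate"

  # The whole map is passed to templatefile, so adding a new
  # substitution only requires editing the variable's default.
  user_data = templatefile("script.tftpl", var.user_data_vars)
}
```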
Template file with for loop
The template file used in the previous section is a bash script. Strings generated by shell or other scripts are typically non-repetitive in format. Certain file formats, however, are repetitive: the same line format recurs with different values.
As an example, DNS resolution configuration is maintained in the resolv.conf file, which lists name servers in the below format.
nameserver x.x.x.x
nameserver x.x.x.x
nameserver x.x.x.x
As we can see, the expected format of this file is repetitive. Use for loop expressions to create a template file for it, as below.
Filename: resolv.conf.tftpl
%{ for addr in ip_addrs ~}
nameserver ${addr}
%{ endfor ~}
The corresponding Terraform configuration uses a file provisioner, with a list-type variable as the second parameter of templatefile(). Because the provisioner receives a rendered string rather than a file path, the content attribute is used instead of source:
provisioner "file" {
  content     = templatefile("resolv.conf.tftpl", { ip_addrs = ["192.168.0.100", "8.8.8.8", "8.8.4.4"] })
  destination = "/etc/resolv.conf"
}
Applying the above Terraform config creates the expected resolv.conf file on the target EC2 instance:
nameserver 192.168.0.100
nameserver 8.8.8.8
nameserver 8.8.4.4
Creating JSON files using templatefile()
There may be times when applications depend on externally provided configuration files in JSON format. Since the application logic depends on JSON format, it becomes imperative for Terraform to format the configuration file accordingly.
Creating a valid JSON object using string operations can get tricky. A slight mistake in indentation, or a mistyped : or " in a template file, can cause errors. Even if we successfully craft a template that emits a valid JSON object, there is always a risk of unhandled escape sequences in the incoming data.
Terraform provides a function to create valid JSON files from a given template, without worrying about formatting. Suppose we want to host a product based on a microservice architecture, with the microservices developed in NodeJS.
For the sake of this example, the application nodes in our microservice architecture depend on slightly different versions of their dependencies. We create a template file as below, named dependencies.tftpl.
${jsonencode({
  dependencies = {
    "cradle"       = cradle_v,
    "jade"         = jade_v,
    "redis"        = redis_v,
    "socket.io"    = socket_v,
    "twitter-node" = twitter_v,
    "express"      = express_v
  }
})}
The corresponding Terraform code looks like below. The actual version values are passed via a variable, which generates the desired dependencies.json file in the target path.
variable "dep_vers" {
  default = {
    cradle_v  = "0.5.5"
    jade_v    = "0.10.4"
    redis_v   = "0.5.11"
    socket_v  = "0.6.16"
    twitter_v = "0.0.2"
    express_v = "2.2.0"
  }
}
provisioner "file" {
  content     = templatefile("dependencies.tftpl", var.dep_vers)
  destination = "/desired/path/dependencies.json"
}
Conditions in Templates
Extending the above example, let us assume certain applications do not need certain dependencies to be installed. In that case, the developers can supply an empty string for the corresponding version values (every variable referenced in a template must still be defined).
The current template file (dependencies.tftpl) would then render empty version fields. Conditions provide this flexibility and improve the reusability of a given template file. Make each dependency's inclusion conditional on whether the corresponding version is supplied, as below.
${jsonencode({
  dependencies = {
    for name, ver in {
      "cradle"       = cradle_v,
      "jade"         = jade_v,
      "redis"        = redis_v,
      "socket.io"    = socket_v,
      "twitter-node" = twitter_v,
      "express"      = express_v
    } : name => ver if ver != ""
  }
})}
To summarize the above file: if no version information is provided for a given dependency, that dependency is not included in the generated dependencies.json file. This makes the template reusable with any microservice application, controlled by the dep_vers variable values in the Terraform code.
Key Points
Terraform’s string literals are a great asset of the HCL language, and Terraform offers a whole range of functions to perform string templating. The combination of string literals, the templatefile() function, and the file provisioner can prove hugely advantageous when triggering configuration management workflows on Day 0. Terraform templates offer flexibility around file formats, without limiting you to specific ones. Additionally, filesystem functions can be used to perform validation tasks.
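As a hypothetical sketch of such a validation, a variable holding a template path can assert that the file exists before any rendering happens:

```hcl
variable "template_path" {
  type    = string
  default = "script.tftpl"

  # fileexists is one of Terraform's filesystem functions; the
  # validation fails the plan early if the template is missing.
  validation {
    condition     = fileexists(var.template_path)
    error_message = "The template file at template_path does not exist."
  }
}
```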
Source: Sumeet Ninawe, "Terraform Templates"