GCP - How to Provision GCP Resources with Terraform?
Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define and provision cloud and on-premises resources in human-readable configuration files. When it comes to Google Cloud Platform (GCP), Terraform enables you to manage your entire GCP infrastructure, from virtual machines and networks to databases and Kubernetes clusters, in a declarative and reproducible manner.
Here's a step-by-step guide on how to use Terraform to provision GCP resources:
1. Prerequisites
Before you start, ensure you have the following:
- Google Cloud Project: An active GCP project where you want to provision resources.
- GCP Project ID: You'll need the unique ID of your GCP project.
- gcloud CLI: The Google Cloud SDK (which includes the gcloud CLI) installed and configured. This is used for authentication.
- Terraform: Download and install Terraform from the official HashiCorp website. Ensure it's in your system's PATH.
2. Authentication to GCP
Terraform needs credentials to authenticate with your GCP project. The most common and recommended way for local development is to use your gcloud login credentials.
Log in to gcloud: Open your terminal and run:
gcloud auth application-default login
This command will open a browser window, asking you to log in with your Google account. It creates application default credentials that Terraform can automatically pick up.
Set your project (optional but good practice):
gcloud config set project YOUR_PROJECT_ID
Replace YOUR_PROJECT_ID with your actual GCP project ID.
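For CI/CD pipelines or servers where an interactive browser login is not possible, a common alternative is to point Terraform at a service account key file via an environment variable. The key path below is a placeholder; use the path to your own downloaded key:

```shell
# Point Terraform (and other Google client libraries) at a service
# account key file. The path is a placeholder - substitute your own.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/terraform-sa.json"
```

Terraform's Google provider automatically picks up this variable, the same way it picks up the application default credentials created by gcloud.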
3. Create Your Terraform Configuration Files
Terraform configurations are written in HashiCorp Configuration Language (HCL) files with a .tf extension. It's good practice to organize them.
Create a new directory for your Terraform project (e.g., my-gcp-infra) and navigate into it:
mkdir my-gcp-infra
cd my-gcp-infra
Now, let's create a few files:
main.tf (Main Configuration)
This file will contain the resource definitions.
# main.tf

# Define the Google Cloud provider
provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
  zone    = var.gcp_zone
}

# --- Resource 1: Create a GCP Virtual Network (VPC) ---
resource "google_compute_network" "custom_network" {
  name                    = "my-custom-vpc"
  auto_create_subnetworks = false # We'll create a custom subnetwork
}

# --- Resource 2: Create a Subnetwork within the VPC ---
resource "google_compute_subnetwork" "custom_subnetwork" {
  name          = "my-custom-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = var.gcp_region
  network       = google_compute_network.custom_network.name
}

# --- Resource 3: Create a Cloud Storage Bucket ---
# Note: Bucket names must be globally unique. Choose a unique name.
resource "google_storage_bucket" "my_bucket" {
  name          = "${var.gcp_project_id}-my-unique-bucket" # Using project ID for uniqueness
  location      = var.gcp_region
  force_destroy = false # Set to true to allow deletion of non-empty buckets
}

# --- Resource 4: Create a Compute Engine Instance (VM) ---
resource "google_compute_instance" "default" {
  name         = "my-terraform-vm"
  machine_type = "e2-medium"
  zone         = var.gcp_zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network    = google_compute_network.custom_network.name
    subnetwork = google_compute_subnetwork.custom_subnetwork.name

    access_config {
      // Ephemeral IP
    }
  }

  metadata_startup_script = <<-EOF
    #!/bin/bash
    sudo apt-get update
    sudo apt-get install -y apache2
    sudo systemctl enable apache2
    sudo systemctl start apache2
    echo "Hello from Terraform!" | sudo tee /var/www/html/index.html
  EOF

  tags = ["http-server", "terraform"]
}

# --- Resource 5: Create a Firewall Rule to allow HTTP traffic ---
resource "google_compute_firewall" "allow_http" {
  name    = "allow-http-80"
  network = google_compute_network.custom_network.name

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_ranges = ["0.0.0.0/0"] # Allow from anywhere (be cautious in production)
  target_tags   = ["http-server"]
}
variables.tf (Define Input Variables)
This file declares variables that allow you to parameterize your configurations.
# variables.tf

variable "gcp_project_id" {
  description = "The ID of the GCP project to provision resources in."
  type        = string
}

variable "gcp_region" {
  description = "The GCP region to deploy resources into."
  type        = string
  default     = "us-central1"
}

variable "gcp_zone" {
  description = "The GCP zone to deploy resources into."
  type        = string
  default     = "us-central1-c"
}
outputs.tf (Define Output Values)
This file defines values that will be displayed after Terraform applies the configuration, useful for getting information about created resources.
# outputs.tf

output "instance_external_ip" {
  description = "The external IP address of the Compute Engine instance."
  value       = google_compute_instance.default.network_interface[0].access_config[0].nat_ip
}

output "bucket_self_link" {
  description = "The self_link URL of the created storage bucket."
  value       = google_storage_bucket.my_bucket.self_link
}

output "vpc_name" {
  description = "The name of the created VPC network."
  value       = google_compute_network.custom_network.name
}
4. Initialize Terraform
In your my-gcp-infra directory, run:
terraform init
This command downloads the necessary Google Cloud provider plugin. You should see a message confirming that Terraform has been successfully initialized.
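It's also good practice to pin the provider version so that terraform init downloads a known-good release. A minimal sketch (the version constraints are examples, not requirements):

```hcl
# versions.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0" # example constraint - pick a version you've tested
    }
  }
}
```

With this in place, terraform init records the selected provider version in a .terraform.lock.hcl file so teammates and CI get the same build.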
5. Plan Your Infrastructure Changes
Before applying any changes, it's crucial to see what Terraform plans to do.
terraform plan -var="gcp_project_id=YOUR_PROJECT_ID"
Replace YOUR_PROJECT_ID with your actual GCP project ID. Terraform will analyze your configuration and the current state of your GCP project, then show you a detailed plan of resources it will create, modify, or destroy.
Example output snippet:
Terraform will perform the following actions:

  # google_compute_network.custom_network will be created
  + resource "google_compute_network" "custom_network" {
      + auto_create_subnetworks = false
      + name                    = "my-custom-vpc"
      + project                 = "your-project-id"
      + routing_mode            = "REGIONAL"
    }

  # google_storage_bucket.my_bucket will be created
  + resource "google_storage_bucket" "my_bucket" {
      + location = "US-CENTRAL1"
      + name     = "your-project-id-my-unique-bucket"
      + project  = "your-project-id"
      ...
    }

  ... (and so on for other resources)

Plan: 5 to add, 0 to change, 0 to destroy.
Review this plan carefully to ensure it aligns with your expectations.
6. Apply Your Infrastructure Changes
If the plan looks correct, apply the changes:
terraform apply -var="gcp_project_id=YOUR_PROJECT_ID"
Terraform will again show you the plan and prompt you to confirm by typing yes. Type yes and press Enter to proceed.
Terraform will now provision the resources in your GCP project. This process can take several minutes. Once complete, it will display the output values defined in outputs.tf.
Example output snippet:
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
bucket_self_link = "https://www.googleapis.com/storage/v1/b/your-project-id-my-unique-bucket"
instance_external_ip = "34.123.45.67"
vpc_name = "my-custom-vpc"
You can now use the instance_external_ip to access your web server (e.g., http://34.123.45.67 in your browser, assuming port 80 is allowed by the firewall and the web server is running).
7. View the Terraform State
Terraform stores the state of your managed infrastructure in a terraform.tfstate file in your working directory. This file maps the resources in your configuration to the real-world resources in GCP.
Do not manually edit terraform.tfstate!
You can list the resources Terraform manages and inspect any one of them with:
terraform state list
terraform state show google_compute_instance.default
For production environments, you should configure remote state (e.g., using a Cloud Storage bucket) to enable collaboration and prevent accidental loss of state.
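A minimal remote-state sketch using a Cloud Storage backend (the bucket name is a placeholder; the bucket must already exist before you run terraform init):

```hcl
# backend.tf
terraform {
  backend "gcs" {
    bucket = "your-tf-state-bucket" # placeholder - must be an existing bucket
    prefix = "terraform/state"      # folder-like path for the state object
  }
}
```

After adding this block, run terraform init again; Terraform will offer to migrate your local state file to the bucket.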
8. Destroy Your Infrastructure
When you're done with the resources, you can clean them up using Terraform.
terraform destroy -var="gcp_project_id=YOUR_PROJECT_ID"
Terraform will show you a plan of all the resources it will destroy. Type yes to confirm.
This command will deprovision all the resources defined in your main.tf file from your GCP project.
Key Concepts and Best Practices
- Idempotence: Terraform operations are idempotent, meaning applying the same configuration multiple times will result in the same infrastructure state without unintended side effects.
- Declarative Language: You declare the desired state of your infrastructure, and Terraform figures out how to achieve it.
- Modules: For larger, more complex infrastructures, break down your configurations into reusable modules.
- Remote State: Always use remote state (e.g., Google Cloud Storage backend) for team collaboration and to prevent loss of state information.
- Workspaces: Use Terraform workspaces to manage multiple distinct environments (e.g., dev, staging, prod) within the same configuration.
- Version Control: Store your Terraform configurations in a version control system like Git.
- terraform fmt: Use terraform fmt to automatically reformat your configuration files to a canonical style.
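As a sketch of the module pattern, the networking resources above could be moved into a local module and reused per environment. The ./modules/network path and the variable names here are hypothetical, chosen for illustration:

```hcl
# main.tf (root module) - hypothetical module layout
module "network" {
  source = "./modules/network" # hypothetical local module path

  # Hypothetical input variables the module would declare
  vpc_name    = "my-custom-vpc"
  subnet_cidr = "10.0.1.0/24"
  region      = var.gcp_region
}
```

Each environment (dev, staging, prod) can then call the same module with different inputs, keeping the resource definitions in one place.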
By following these steps, you can effectively use Terraform to provision and manage your GCP resources, bringing the benefits of Infrastructure as Code to your cloud deployments.