If you are working on a team, it's best to store the Terraform state file remotely so that many people can access it. To set up Terraform to store state remotely, you need two things: an S3 bucket to store the state file in and a Terraform S3 backend configuration.
If the state file is stored remotely so that many people can access it, you risk multiple people attempting to change the same file at the exact same time. So we need a mechanism that will "lock" the state if it is currently in use by another user. We can accomplish this by creating a DynamoDB table for Terraform to use.
The Terraform configuration in this directory creates the S3 bucket and DynamoDB table for storing and locking the Terraform state file remotely. This is known as the S3 backend.
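A minimal sketch of what such a configuration looks like is shown below (AWS provider v4+ syntax). The variable names `bucket_name`, `dynamodb_name` and `region` are assumptions standing in for whatever vars.tf actually defines:

```hcl
# Sketch of the backend-bootstrap configuration; variable names are assumptions
# and are expected to be declared in vars.tf.

provider "aws" {
  region = var.region
}

# Bucket that will hold the Terraform state file. Versioning is enabled so
# earlier state revisions can be recovered if something goes wrong.
resource "aws_s3_bucket" "terraform_state" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Table used for state locking. The S3 backend requires a primary key
# named "LockID" of type string.
resource "aws_dynamodb_table" "terraform_locks" {
  name         = var.dynamodb_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```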
The S3 bucket is created in a particular AWS region. The name of the S3 bucket must be globally unique. You can check its availability by requesting this URL:
https://<BUCKET NAME>.s3.amazonaws.com/
It should return an XML response with this content:
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
- Edit the vars.tf file to specify the region and bucket name.
- Optionally edit the `dynamodb_name` variable in the vars.tf file.
- Run `terraform init`.
- Run `terraform plan` to check whether the following command will succeed.
- Run `terraform apply` to create the S3 bucket and DynamoDB table.
- Rename the remote-state.sample file to remote-state.tf inside your project (see the sketch after this list). Make sure that the values for `bucket`, `dynamodb_table` and `region` are the same as those used in the vars.tf file.
- In your project directory, run `terraform init` again so Terraform picks up the new backend and migrates the state file.
- Run `terraform plan` to check whether the following command will succeed.
- Run `terraform apply`.
- Check whether you can run `terraform destroy` from another directory or machine.
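For reference, the resulting remote-state.tf ends up containing a backend block along these lines. This is only a sketch: the bucket name, state key, region and table name below are placeholders that must match what the configuration above actually created:

```hcl
# Sketch of remote-state.tf; the values are placeholders and must match vars.tf.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # same as the bucket name in vars.tf (placeholder)
    key            = "terraform.tfstate"         # object key for the state inside the bucket (assumed)
    region         = "us-east-1"                 # same region as in vars.tf (placeholder)
    dynamodb_table = "terraform-state-lock"      # same as dynamodb_name in vars.tf (placeholder)
    encrypt        = true
  }
}
```

Note that backend blocks cannot reference variables, so these values have to be written out literally rather than pulled from vars.tf.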