I’ve previously mentioned that Terraform is a good choice for AWS automation. Now I’ve had my first experiences with it, and I want to share them with you.
Separation of Environments
Deployment environments need no introduction, nor does the idea that non-production environments should mirror production as closely as possible. It therefore becomes crucial to have a single infrastructure codebase for all environments.
terraform env select dev
terraform apply -var-file=dev.tfvars -refresh=true
In this excerpt, the resource definitions use variables for everything that differs between environments. Separate states are maintained per environment (first line), but you need to apply the proper variable file for that environment (second line).
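Such a variable-driven resource definition might look like the following sketch. The variable names, AMI id, and instance types are illustrative placeholders, not taken from a real setup:

```hcl
# variables.tf — everything that differs between environments is a variable
variable "environment" {}
variable "instance_type" {}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef0"               # placeholder AMI id
  instance_type = "${var.instance_type}"

  tags {
    Name = "app-${var.environment}"
  }
}
```

The per-environment values then live only in the matching tfvars file, e.g. `dev.tfvars`:

```hcl
environment   = "dev"
instance_type = "t2.micro"
```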
To prevent mistakes such as applying against the wrong environment, it’s better to wrap these commands in predefined scripts.
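A minimal sketch of such a wrapper (the script name and environment list are assumptions): it refuses unknown environment names, and ties the selected state and the variable file to the same argument so they can never get out of sync.

```shell
#!/bin/sh
# tf-apply.sh — hypothetical wrapper; usage: ./tf-apply.sh <dev|staging|prod>
set -e

ENV="$1"
case "$ENV" in
  dev|staging|prod) ;;   # only known environments are allowed
  *) echo "usage: $0 <dev|staging|prod>" >&2; exit 1 ;;
esac

# Select the matching state, then apply with the matching variable file.
terraform env select "$ENV"
terraform apply -var-file="$ENV.tfvars" -refresh=true
```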
Modules are great for capturing a common set of resources you can reuse. But don’t start with modules. First try to understand how things work, and what belongs together and what doesn’t.
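Once the shape of your infrastructure has settled, a module can capture such a reusable group of resources. A minimal sketch (the module name, variables, and AMI id are illustrative):

```hcl
# modules/webserver/main.tf — the reusable resource group
variable "environment" {}
variable "instance_type" {}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef0"               # placeholder AMI id
  instance_type = "${var.instance_type}"

  tags {
    Name = "web-${var.environment}"
  }
}
```

```hcl
# main.tf — each environment instantiates the module with its own variables
module "webserver" {
  source        = "./modules/webserver"
  environment   = "${var.environment}"
  instance_type = "${var.instance_type}"
}
```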
Avoid nested modules, because they overcomplicate variable passing and refactoring.
And be aware that it’s not easy to define explicit dependencies between modules; it’s still a much-requested but unimplemented feature.
You don’t need to start with remote state, so I’m still checking the local state into Git. But as soon as several people begin to contribute, you need to think about it. Per-environment remote states are still very fresh; as of now¹ they are already supported with the AWS S3 backend, but not yet with Google Cloud Storage buckets.
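When that point comes, switching to an S3 backend is a small configuration change. A sketch (bucket name, key, and region are placeholders); after adding it, `terraform init` offers to migrate the existing local state:

```hcl
# backend.tf — remote state in S3; each environment's state is kept
# separately inside the same bucket under an env:/ prefix.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"   # placeholder bucket name
    key    = "terraform.tfstate"
    region = "eu-west-1"            # placeholder region
  }
}
```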
Be careful with state, and make sure you don’t end up with a broken one. I ran into a silly issue with the local state: I was busy with a big refactoring and didn’t notice that [Ctrl+C] can destroy the local state, which is a bug I reported.
As always feel free to comment ;)
At the time of writing, Terraform 0.9.4 is the most recently released version. ↩︎