6 changes: 6 additions & 0 deletions examples/deploy/terraform/infra.tfvars
@@ -110,6 +110,12 @@ storage = {
size_in_megabytes = 1099511
storage_efficiency_enabled = true
}
staging_volume = {
create = true
junction_path = "/trident_domino_staging_vol"
name = "trident_domino_staging_vol"
size_in_megabytes = 1099511
}
}
s3 = {
create = true
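For context, a minimal sketch of how the new block fits into a complete `infra.tfvars` (assuming `staging_volume` nests under `netapp` alongside `volume`, as the `variables.tf` hunk in this PR indicates; values are the ones from this hunk):

```hcl
storage = {
  filesystem_type = "netapp"
  netapp = {
    staging_volume = {
      create            = true
      name              = "trident_domino_staging_vol"  # HCL string values must be quoted
      junction_path     = "/trident_domino_staging_vol"
      size_in_megabytes = 1099511                       # ~1 TiB expressed in decimal MB
    }
  }
}
```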
3 changes: 1 addition & 2 deletions examples/deploy/terraform/infra/README.md
@@ -38,8 +38,7 @@ No resources.
| <a name="input_network"></a> [network](#input\_network) | vpc = {<br/> id = Existing vpc id, it will bypass creation by this module.<br/> subnets = {<br/> private = Existing private subnets.<br/> public = Existing public subnets.<br/> pod = Existing pod subnets.<br/> }), {})<br/> }), {})<br/> network\_bits = {<br/> public = Number of network bits to allocate to the public subnet. i.e /27 -> 32 IPs.<br/> private = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br/> pod = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br/> }<br/> cidrs = {<br/> vpc = The IPv4 CIDR block for the VPC.<br/> pod = The IPv4 CIDR block for the Pod subnets.<br/> }<br/> use\_pod\_cidr = Use additional pod CIDR range (ie 100.64.0.0/16) for pod networking. | <pre>object({<br/> vpc = optional(object({<br/> id = optional(string, null)<br/> subnets = optional(object({<br/> private = optional(list(string), [])<br/> public = optional(list(string), [])<br/> pod = optional(list(string), [])<br/> }), {})<br/> }), {})<br/> network_bits = optional(object({<br/> public = optional(number, 27)<br/> private = optional(number, 19)<br/> pod = optional(number, 19)<br/> }<br/> ), {})<br/> cidrs = optional(object({<br/> vpc = optional(string, "10.0.0.0/16")<br/> pod = optional(string, "100.64.0.0/16")<br/> }), {})<br/> use_pod_cidr = optional(bool, true)<br/> })</pre> | `{}` | no |
| <a name="input_region"></a> [region](#input\_region) | AWS region for the deployment | `string` | n/a | yes |
| <a name="input_ssh_pvt_key_path"></a> [ssh\_pvt\_key\_path](#input\_ssh\_pvt\_key\_path) | SSH private key filepath. | `string` | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br/> filesystem\_type = File system type(netapp\|efs\|none)<br/> efs = {<br/> access\_point\_path = Filesystem path for efs.<br/> backup\_vault = {<br/> create = Create backup vault for EFS toggle.<br/> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br/> backup = {<br/> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br/> cold\_storage\_after = Move backup data to cold storage after this many days.<br/> delete\_after = Delete backup data after this many days.<br/> }<br/> }<br/> }<br/> netapp = {<br/> migrate\_from\_efs = {<br/> enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.<br/> datasync = {<br/> enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.<br/> schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).<br/> verify\_mode = One of: POINT\_IN\_TIME\_CONSISTENT, ONLY\_FILES\_TRANSFERRED, NONE.<br/> }<br/> }<br/> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br/> storage\_capacity = Filesystem Storage capacity<br/> throughput\_capacity = Filesystem throughput capacity<br/> automatic\_backup\_retention\_days = How many days to keep backups<br/> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br/><br/> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br/> enabled = Enable automatic storage capacity increase.<br/> threshold = Used storage capacity threshold.<br/> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br/> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br/> notification\_email\_address = The email address for alarm notification.<br/> }<br/> volume = {<br/> create = Create a volume associated with the filesystem.<br/> name\_suffix = The suffix to name the volume<br/> storage\_efficiency\_enabled = Toggle storage\_efficiency\_enabled<br/> junction\_path = filesystem junction path<br/> size\_in\_megabytes = The size of the volume<br/> }<br/> s3 = {<br/> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br/> }<br/> ecr = {<br/> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br/> }<br/> enable\_remote\_backup = Enable tagging required for cross-account backups<br/> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br/> }<br/> } | <pre>object({<br/> filesystem_type = optional(string, "efs")<br/> efs = optional(object({<br/> access_point_path = optional(string, "/domino")<br/> backup_vault = optional(object({<br/> create = optional(bool, true)<br/> force_destroy = optional(bool, true)<br/> backup = optional(object({<br/> schedule = optional(string, "0 12 * * ? *")<br/> cold_storage_after = optional(number, 35)<br/> delete_after = optional(number, 125)<br/> }), {})<br/> }), {})<br/> }), {})<br/> netapp = optional(object({<br/> migrate_from_efs = optional(object({<br/> enabled = optional(bool, false)<br/> datasync = optional(object({<br/> enabled = optional(bool, false)<br/> target = optional(string, "netapp")<br/> schedule = optional(string, "cron(0 */4 * * ? *)")<br/> verify_mode = optional(string, "ONLY_FILES_TRANSFERRED")<br/> }), {})<br/> }), {})<br/> deployment_type = optional(string, "SINGLE_AZ_1")<br/> storage_capacity = optional(number, 1024)<br/> throughput_capacity = optional(number, 128)<br/> automatic_backup_retention_days = optional(number, 90)<br/> daily_automatic_backup_start_time = optional(string, "00:00")<br/> storage_capacity_autosizing = optional(object({<br/> enabled = optional(bool, false)<br/> threshold = optional(number, 70)<br/> percent_capacity_increase = optional(number, 30)<br/> notification_email_address = optional(string, "")<br/> }), {})<br/> volume = optional(object({<br/> create = optional(bool, true)<br/> name_suffix = optional(string, "domino_shared_storage")<br/> storage_efficiency_enabled = optional(bool, true)<br/> junction_path = optional(string, "/domino")<br/> size_in_megabytes = optional(number, 1099511)<br/> }), {})<br/> }), {})<br/> s3 = optional(object({<br/> create = optional(bool, true)<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {})<br/> ecr = optional(object({<br/> create = optional(bool, true)<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {}),<br/> enable_remote_backup = optional(bool, false)<br/> costs_enabled = optional(bool, false)<br/> })</pre> | `{}` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Deployment tags. | `map(string)` | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br/> filesystem\_type = File system type(netapp\|efs\|none)<br/> efs = {<br/> access\_point\_path = Filesystem path for efs.<br/> backup\_vault = {<br/> create = Create backup vault for EFS toggle.<br/> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br/> backup = {<br/> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br/> cold\_storage\_after = Move backup data to cold storage after this many days.<br/> delete\_after = Delete backup data after this many days.<br/> }<br/> }<br/> }<br/> netapp = {<br/> migrate\_from\_efs = {<br/> enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.<br/> datasync = {<br/> enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.<br/> schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).<br/> verify\_mode = One of: POINT\_IN\_TIME\_CONSISTENT, ONLY\_FILES\_TRANSFERRED, NONE.<br/> }<br/> }<br/> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br/> storage\_capacity = Filesystem Storage capacity<br/> throughput\_capacity = Filesystem throughput capacity<br/> automatic\_backup\_retention\_days = How many days to keep backups<br/> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br/><br/> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br/> enabled = Enable automatic storage capacity increase.<br/> threshold = Used storage capacity threshold.<br/> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br/> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br/> notification\_email\_address = The email address for alarm notification.<br/> }<br/> volume = {<br/> create = Create a volume associated with the filesystem.<br/> name\_suffix = The suffix to name the volume<br/> storage\_efficiency\_enabled = Toggle storage\_efficiency\_enabled<br/> junction\_path = filesystem junction path<br/> size\_in\_megabytes = The size of the volume<br/> }<br/> staging\_volume = {<br/> create = Create a staging volume associated with the filesystem<br/> name = The name of the staging volume<br/> junction\_path = filesystem junction path<br/> size\_in\_megabytes = The size of the staging volume<br/> }<br/> }<br/> s3 = {<br/> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br/> }<br/> ecr = {<br/> force\_destroy\_on\_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br/> }<br/> enable\_remote\_backup = Enable tagging required for cross-account backups<br/> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br/> }<br/> } | <pre>object({<br/> filesystem_type = optional(string, "efs")<br/> efs = optional(object({<br/> access_point_path = optional(string, "/domino")<br/> backup_vault = optional(object({<br/> create = optional(bool, true)<br/> force_destroy = optional(bool, true)<br/> backup = optional(object({<br/> schedule = optional(string, "0 12 * * ? *")<br/> cold_storage_after = optional(number, 35)<br/> delete_after = optional(number, 125)<br/> }), {})<br/> }), {})<br/> }), {})<br/> netapp = optional(object({<br/> migrate_from_efs = optional(object({<br/> enabled = optional(bool, false)<br/> datasync = optional(object({<br/> enabled = optional(bool, false)<br/> target = optional(string, "netapp")<br/> schedule = optional(string, "cron(0 */4 * * ? *)")<br/> verify_mode = optional(string, "ONLY_FILES_TRANSFERRED")<br/> }), {})<br/> }), {})<br/> deployment_type = optional(string, "SINGLE_AZ_1")<br/> storage_capacity = optional(number, 1024)<br/> throughput_capacity = optional(number, 128)<br/> automatic_backup_retention_days = optional(number, 90)<br/> daily_automatic_backup_start_time = optional(string, "00:00")<br/> storage_capacity_autosizing = optional(object({<br/> enabled = optional(bool, false)<br/> threshold = optional(number, 70)<br/> percent_capacity_increase = optional(number, 30)<br/> notification_email_address = optional(string, "")<br/> }), {})<br/> volume = optional(object({<br/> create = optional(bool, true)<br/> name_suffix = optional(string, "domino_shared_storage")<br/> storage_efficiency_enabled = optional(bool, true)<br/> junction_path = optional(string, "/domino")<br/> size_in_megabytes = optional(number, 1099511)<br/> }), {})<br/> staging_volume = optional(object({<br/> create = optional(bool, true)<br/> name = optional(string, "trident_domino_staging_vol")<br/> junction_path = optional(string, "/trident_domino_staging_vol")<br/> size_in_megabytes = optional(number, 1099511)<br/> }), {})<br/> }), {})<br/> s3 = optional(object({<br/> create = optional(bool, true)<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {})<br/> ecr = optional(object({<br/> create = optional(bool, true)<br/> force_destroy_on_deletion = optional(bool, true)<br/> }), {}),<br/> enable_remote_backup = optional(bool, false)<br/> costs_enabled = optional(bool, false)<br/> })</pre> | `{}` | no |
| <a name="input_use_fips_endpoint"></a> [use\_fips\_endpoint](#input\_use\_fips\_endpoint) | Use aws FIPS endpoints | `bool` | `false` | no |
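The EFS-to-NetApp migration settings documented in the storage row can be exercised with an override along these lines (a sketch built only from the defaults listed in the table; note the declared DataSync default schedule is every 4 hours):

```hcl
storage = {
  filesystem_type = "netapp"
  netapp = {
    migrate_from_efs = {
      enabled = true   # keep EFS and FSx for NetApp ONTAP provisioned side by side
      datasync = {
        enabled     = true
        schedule    = "cron(0 */4 * * ? *)"      # declared default: every 4 hours
        verify_mode = "ONLY_FILES_TRANSFERRED"   # or POINT_IN_TIME_CONSISTENT, NONE
      }
    }
  }
}
```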

## Outputs
13 changes: 13 additions & 0 deletions examples/deploy/terraform/infra/variables.tf
@@ -277,6 +277,13 @@ variable "storage" {
storage_efficiency_enabled = Toggle storage_efficiency_enabled
junction_path = filesystem junction path
size_in_megabytes = The size of the volume
}
staging_volume = {
create = Create a staging volume associated with the filesystem
name = The name of the staging volume
junction_path = filesystem junction path
size_in_megabytes = The size of the staging volume
}
}
s3 = {
force_destroy_on_deletion = Toggle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.
@@ -331,6 +338,12 @@ variable "storage" {
junction_path = optional(string, "/domino")
size_in_megabytes = optional(number, 1099511)
}), {})
staging_volume = optional(object({
create = optional(bool, true)
name = optional(string, "trident_domino_staging_vol")
junction_path = optional(string, "/trident_domino_staging_vol")
size_in_megabytes = optional(number, 1099511)
}), {})
}), {})
s3 = optional(object({
create = optional(bool, true)
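Because every attribute of `staging_volume` is wrapped in `optional(...)` with a default, callers who pass `storage = {}` get the staging volume created with its defaults, and disabling it is a one-attribute override (a sketch, assuming the variable is passed through to the module unchanged):

```hcl
storage = {
  netapp = {
    staging_volume = {
      create = false  # name, junction_path, size_in_megabytes keep their declared defaults
    }
  }
}
```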