- Terraform 0.12+
- An SSH key pair (the Terraform config reads it with private_key = file("~/.ssh/id_rsa"))
- Existing Atlas Organization with:
  - Existing Atlas Project with:
    - Existing Project API Key with Project Owner permissions
- Existing Azure subscription
locals {
  # Atlas cluster name
  cluster_name = "Sample"

  # Atlas setup
  atlas_org_id = "YOUR_ORG_ID"

  # Atlas cloud provider
  provider_name = "AZURE"

  # Azure subscription and service principal
  azure_subscription_id = "SUBSCRIPTION_ID_GOES_HERE"
  azure_client_id       = "AZ_CLIENT_ID_GOES_HERE"
  azure_client_secret   = "AZ_CLIENT_SECRET_GOES_HERE"
  azure_tenant_id       = "AZ_TENANT_ID_GOES_HERE"
}
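As a minimal sketch of how these locals might be wired into the providers (the azurerm provider accepts the service principal values directly; the mongodbatlas provider falls back to the MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY environment variables exported later in this guide when no keys are set in the block):

```hcl
# providers.tf — a sketch; Atlas API credentials are read from the
# MONGODB_ATLAS_PUBLIC_KEY / MONGODB_ATLAS_PRIVATE_KEY environment
# variables when they are omitted here.
provider "mongodbatlas" {}

provider "azurerm" {
  features {}

  subscription_id = local.azure_subscription_id
  client_id       = local.azure_client_id
  client_secret   = local.azure_client_secret
  tenant_id       = local.azure_tenant_id
}
```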
How exactly do Private Endpoints work for Network Access to MongoDB Atlas?
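In short, a private endpoint is a one-way connection from your VNet into a Private Link service that Atlas provisions. A minimal sketch of the three Terraform resources involved (the resource names, the subnet reference, and the YOUR_PROJECT_ID placeholder are hypothetical; the mongodbatlas_privatelink_endpoint* resources come from the mongodbatlas provider):

```hcl
# 1. Atlas provisions a Private Link service in the target region.
resource "mongodbatlas_privatelink_endpoint" "example" {
  project_id    = "YOUR_PROJECT_ID"
  provider_name = "AZURE"
  region        = "US_EAST_2"
}

# 2. Azure creates a private endpoint in your VNet pointing at that service.
resource "azurerm_private_endpoint" "example" {
  name                = "atlas-endpoint"
  location            = azurerm_resource_group.atlas-group1.location
  resource_group_name = azurerm_resource_group.atlas-group1.name
  subnet_id           = azurerm_subnet.example.id

  private_service_connection {
    name                           = "atlas-connection"
    private_connection_resource_id = mongodbatlas_privatelink_endpoint.example.private_link_service_resource_id
    is_manual_connection           = true
    request_message                = "Azure Private Link to Atlas"
  }
}

# 3. Atlas is told about the Azure endpoint so it can complete the handshake.
resource "mongodbatlas_privatelink_endpoint_service" "example" {
  project_id                  = mongodbatlas_privatelink_endpoint.example.project_id
  private_link_id             = mongodbatlas_privatelink_endpoint.example.private_link_id
  endpoint_service_id         = azurerm_private_endpoint.example.id
  private_endpoint_ip_address = azurerm_private_endpoint.example.private_service_connection[0].private_ip_address
  provider_name               = "AZURE"
}
```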
Transitive peering is how we solve for high availability on a multi-region cluster.
In Azure, peer-to-peer transitive routing describes network traffic between two virtual networks that is routed through an intermediate virtual network. For example, assume there are three virtual networks - A, B, and C. A is peered to B, B is peered to C, but A and C are not connected. Azure does not automatically forward traffic from A to C through B, so any two virtual networks that need to communicate must be peered directly, and each peering must be declared in both directions:
resource "azurerm_virtual_network_peering" "example-transitive-peering1" {
  name                         = "peer2to3"
  resource_group_name          = azurerm_resource_group.atlas-group2.name
  virtual_network_name         = azurerm_virtual_network.atlas-vn-group2.name
  remote_virtual_network_id    = azurerm_virtual_network.atlas-vn-group3.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
  allow_gateway_transit        = false
}

resource "azurerm_virtual_network_peering" "example-transitive-peering2" {
  name                         = "peer3to2"
  resource_group_name          = azurerm_resource_group.atlas-group3.name
  virtual_network_name         = azurerm_virtual_network.atlas-vn-group3.name
  remote_virtual_network_id    = azurerm_virtual_network.atlas-vn-group2.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
  allow_gateway_transit        = false
}

resource "azurerm_virtual_network_peering" "example-transitive-peering3" {
  name                         = "peer1to2"
  resource_group_name          = azurerm_resource_group.atlas-group1.name
  virtual_network_name         = azurerm_virtual_network.atlas-vn-group1.name
  remote_virtual_network_id    = azurerm_virtual_network.atlas-vn-group2.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
  allow_gateway_transit        = false
}

resource "azurerm_virtual_network_peering" "example-transitive-peering4" {
  name                         = "peer2to1"
  resource_group_name          = azurerm_resource_group.atlas-group2.name
  virtual_network_name         = azurerm_virtual_network.atlas-vn-group2.name
  remote_virtual_network_id    = azurerm_virtual_network.atlas-vn-group1.id
  allow_virtual_network_access = true
  allow_forwarded_traffic      = true
  allow_gateway_transit        = false
}
This particular setup is not included in the Terraform script; it is provided here for educational purposes.
Atlas lets you test a full region outage right from the UI!
The application will not be able to connect to the primary! It needs to route traffic through VNet peering on the Azure side to the region with the primary node!
export MONGODB_ATLAS_PUBLIC_KEY=yourkeyhere
export MONGODB_ATLAS_PRIVATE_KEY=yourkeyhere
Update locals.tf with your values
terraform init
terraform apply
Once Terraform finishes doing its magic, all you have to do is SSH into your VM and log in using mongosh:
ssh testuser@<Public IP of Azure VM>
mongosh <PrivateLinkConnectionString> --username testuser
(the password is "helloworld")