fortinet/azure-templates

Failover for VIP

rmc515 opened this issue · 9 comments

In single-FG deployments within Azure, we create an additional ipconfig on the external interface and associate it with a public IP resource. Then on the FG, we have a NAT (VIP) that ties that external-subnet IP to the actual IP of the server. When we fail over from one FG to the other, how do we get that ipconfig to move from FG-A's vNIC to FG-B's vNIC? We tested that the cluster public IP detaches from the A vNIC and re-attaches to the B vNIC with no problem. However, the public IP resource for this NAT external IP is still attached only to the vNIC on FG-A.
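For illustration, the DNAT described above is typically a FortiGate VIP; the object name, interface, and IP addresses below are placeholders rather than values from this deployment. The extip is the secondary ipconfig's private external-subnet IP (Azure translates the public IP to it), and the mappedip is the server's IP in the protected subnet:

config firewall vip
    edit "example-server-vip"
        set extintf "port1"
        set extip 10.0.1.10
        set mappedip "10.0.2.10"
    next
end

A firewall policy from the external interface to the protected interface with the VIP as the destination address is then what allows the traffic through.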

Hi,

Can you provide us with some more details here? In your first sentence you mention the use of a single FW, but later on you refer to two FortiGate instances in HA. Below is a link that will give you an idea of how to configure the FortiGate if you have the Active/Passive SDN connector setup deployed:
https://github.com/fortinet/azure-templates/blob/main/FortiGate/Active-Passive-SDN/doc/config-failover.md

In the SDN connector, you configure the ipconfig that needs to move from FGT-A to FGT-B. This is easier when using the load balancer setup:
https://github.com/fortinet/azure-templates/tree/main/FortiGate/Active-Passive-ELB-ILB

Regards,

Joeri
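For reference, the per-ipconfig public IP mapping described above lives inside the Azure SDN connector entry on the FortiGate. Below is a minimal sketch, showing only the parts relevant to that mapping; the connector name "AzureSDN" is assumed from the linked template documentation, and the NIC, ipconfig, and public IP names follow the sample configs later in this thread:

config system sdn-connector
    edit "AzureSDN"
        set type azure
        config nic
            edit "FortiGate-A-NIC1"
                config ip
                    edit "ipconfig1"
                        set public-ip "FGTAPClusterPublicIP"
                    next
                end
            next
        end
    next
end

On failover, the newly active unit uses these mappings to move the listed public IPs (and update the configured route tables) via the Azure API.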

Sorry, let me clarify. I was first explaining how we perform DNAT behind a single FW in Azure (attach an additional ipconfig with an IP in the external subnet and associate a public IP with it). Then on the FG, we create a Virtual IP (VIP) to DNAT that external IP to the internal IP (in the protected subnet, for example).

Now, on this deployment we used the SDN connector. We got failover working as expected, meaning the cluster public IP fails over from FG-A to FG-B, and the UDR next hop changes to the internal IP of whichever FG is active.

So my question is: how do we perform DNAT with the SDN deployment, and how would a public IP (other than the cluster public IP) fail over from FG-A to FG-B?

Hi,

If you have multiple public IPs on the front end, you need to configure them in the SDN connector as well. The connector will then also fail over this second public IP. A public IP is always assigned to one private IP, so you need to configure multiple ipconfigs on the FGT on both sides.

config nic
    edit "FortiGate-A-NIC1"
        config ip
            edit "ipconfig1"
                set public-ip "FGTAPClusterPublicIP"
            next
            edit "ipconfig2"
                set public-ip "FGTAPClusterPublicIP2"
            next
        end
    next
end

Regards,

Joeri

Thanks, almost there... but within Azure, what configuration is needed on the network interface resources? I went to the active virtual machine (the one that has the cluster public IP attached to it) and added ipconfig2, which includes the additional external IP address and the new public IP resource. Then when we fail over, the other VM does not have that configuration, so how does the SDN connector communicate with Azure to move that configuration (ipconfig2) from FortiGate-A-NIC1 to FortiGate-B-NIC1?

I see what you say above, but then there will be a different external-subnet IP and a different public IP? That's what we're trying to avoid.

Is this not doable? How are additional public IPs failed over between devices?

Hi,

Multiple public IPs are supported. You will need to configure both FGT VMs separately; on FortiGate B you will need to reference the port1 NIC of that VM. Below is a sample config for both FGT A and FGT B. The only difference is the NIC name, which is different for each VM:

FGT A:
config nic
    edit "FortiGate-A-NIC1"
        config ip
            edit "ipconfig1"
                set public-ip "FGTAPClusterPublicIP"
            next
            edit "ipconfig2"
                set public-ip "FGTAPClusterPublicIP2"
            next
        end
    next
end

FGT B:
config nic
    edit "FortiGate-B-NIC1"
        config ip
            edit "ipconfig1"
                set public-ip "FGTAPClusterPublicIP"
            next
            edit "ipconfig2"
                set public-ip "FGTAPClusterPublicIP2"
            next
        end
    next
end

You can't attach multiple public IPs to a single private IP (ipconfig) on a VM in Azure. If you want multiple inbound public IPs for different services published via the FortiGate, you can use the SDN connector. Alternatively, you can use a Load Balancer in front of the FortiGate devices; that will be easier and faster. Look at the architecture at the link below for more information:

https://github.com/fortinet/azure-templates/tree/main/FortiGate/Active-Passive-ELB-ILB

If you need additional support, we can connect you with an architect in your region.

Regards,

Joeri

Thanks for your help with this so far. We're trying to get this to work with the SDN connector and have made progress, but we're having problems with the outbound traffic. First we created ipconfig2 on both FG-A (10.250.0.11) and FG-B (10.250.0.12) and attached a new public IP resource (to FG-A only). Then on the FG, we created two new VIPs pointing both of these IPs to the same internal IP (10.250.4.10) of our server workload. After adjusting the SDN config as you mentioned, we are able to float that IP between FG-A and FG-B. Firewall rules allow ICMP, so we can ping this with no problem. The problem now is that when the FG is failed over to B, the return traffic is still trying to use 10.250.0.11 and not .12.
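For clarity, the pair of VIPs described above would look roughly like this on the FortiGate. The VIP names and the extintf are placeholders; only the IP addresses come from the post:

config firewall vip
    edit "vip-ext-11"
        set extintf "port1"
        set extip 10.250.0.11
        set mappedip "10.250.4.10"
    next
    edit "vip-ext-12"
        set extintf "port1"
        set extip 10.250.0.12
        set mappedip "10.250.4.10"
    next
end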

Hi,

Apologies for the delay. Have you also configured the route tables to be updated for your protected subnets? That is the other purpose of the SDN connector failover.

Joeri
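A minimal sketch of the route-table section Joeri refers to, as it sits inside the SDN connector entry. The connector, route table, and route names below are assumptions, and the next-hop value is a placeholder for this unit's own internal (port2) IP, which the connector writes into the Azure route on failover:

config system sdn-connector
    edit "AzureSDN"
        config route-table
            edit "ProtectedSubnetRouteTable"
                config route
                    edit "default-to-fgt"
                        set next-hop "10.250.1.5"
                    next
                end
            next
        end
    next
end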

No problem, we fixed it by removing the VIP settings from the cluster config, so we had to put one VIP (.11) on FG-A and another VIP (.12) on FG-B. In addition, this was using a fifth NIC on the Azure VM, so we needed to ensure that the virtual NIC resource allowed IP forwarding.