CFE declaration in Azure when BIG-IP is not the pool member GW
Description
In my environment, all route tables have a default route pointing to the Azure FW, which is located in a different virtual network (but in the same region and subscription as the F5 and the pool members). I am wondering which route table should be added in the declaration, because there is a lack of information regarding a declaration example for when the BIG-IP and pool members are behind an Azure FW.
Please advise what a declaration looks like in this scenario with regard to the route table, because CFE is not working as expected.
Environment information
For bugs, enter the following information:
- Cloud Failover Extension Version: 1.13.0
- BIG-IP version: BIG-IP 16.1.3.2 Build 0.0.4 Point Release 2
- Cloud provider: Azure
Severity Level
For bugs, enter the bug severity level. Do not set any labels.
Severity: 3
Hi @marpad20, just to clarify: your clients are routed to the Azure FW, and then you want the Azure FW to use the active BIG-IP as the next hop? If so, you need to add (either by tag or by scoping address in the CFE config) the route table where the egress interface of the Azure FW lives. If that is not working, can you share the output of this command on the device that became active: `tail -f /var/log/restnoded/restnoded.log | grep f5-cloud-failover`
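For reference, a declaration scoping that route table by tag and destination range might look something like the sketch below. The tag value, address ranges, and self IPs are placeholders; check the CFE documentation for the exact schema of your version:

```json
{
  "class": "Cloud_Failover",
  "environment": "azure",
  "externalStorage": {
    "scopingTags": { "f5_cloud_failover_label": "mydeployment" }
  },
  "failoverAddresses": {
    "scopingTags": { "f5_cloud_failover_label": "mydeployment" }
  },
  "failoverRoutes": {
    "scopingTags": { "f5_cloud_failover_label": "mydeployment" },
    "scopingAddressRanges": [
      { "range": "0.0.0.0/0" }
    ],
    "defaultNextHopAddresses": {
      "discoveryType": "static",
      "items": ["10.1.1.10", "10.1.1.11"]
    }
  }
}
```

With this shape, CFE discovers route tables carrying the `f5_cloud_failover_label` tag and, on failover, rewrites the next hop of any route matching `scopingAddressRanges` to the self IP of the unit that becomes active.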
thanks
Hi Mike,
Thanks for the prompt reply.
Yes, this is the traffic flow: Client -> Az FW <-> F5 <-> Az FW <-> Web servers.
We will update the declaration with your advice and let you know.
Ty.
Hi @mikeshimkus, the Azure FW egress subnet does not have a route table associated. Do you have any clue how to write the declaration in that case?
The Azure FW is in VNet A, the F5 VM is in VNet B, and there is a peering between them.
Please advise
It will need to have a route table associated.
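As a sketch of what that could look like with the Azure CLI (hypothetical resource names and addresses; the route is scoped to a VIP prefix rather than 0.0.0.0/0, since the firewall subnet generally needs direct Internet egress):

```bash
# Hypothetical names and addresses; adjust for your environment.
# Create a route table, tagged so CFE can discover it.
az network route-table create \
  --resource-group my-rg --name fw-egress-rt \
  --tags f5_cloud_failover_label=mydeployment

# Send traffic for the VIP prefix to the active BIG-IP self IP;
# CFE rewrites this next hop on failover.
az network route-table route create \
  --resource-group my-rg --route-table-name fw-egress-rt \
  --name to-active-bigip --address-prefix 10.2.10.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.10

# Associate the route table with the firewall's egress subnet.
az network vnet subnet update \
  --resource-group my-rg --vnet-name hub-vnet \
  --name AzureFirewallSubnet --route-table fw-egress-rt
```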
Hi @mikeshimkus, I wonder why I need to do this if the Azure managed FW is properly sending traffic to F5 VM1 without any route table associated with the FW egress subnet?
Please advise,
Thank you
MP
Sorry, I'm not clear. Previously you said that CFE was not working as expected. Are you saying that now it is using the active F5 virtual machine for the next hop when the HA pair fails over?
@mikeshimkus, what I am trying to say is that when VM1 is active, traffic properly hits the VIPs. But when I do a manual failover and force VM1 to standby, the VIP fails: for a short period I see traffic still hitting VM1 for that VIP with no reply back from the VIP, and then after 1 or 2 minutes VM1 suddenly fails back to active while VM2 also remains active, and both VMs send ARP requests for the VIP IP. VIP traffic recovers when I force VM2 to standby.
I know that both units being active is expected not to work, but why, if VM1 fails back to active, does VM2 remain active too?
And why, during the period VM2 is active, is traffic not forwarded to it (given that traffic hits VM1 without any UDR attached to the FW)?
I hope I explained it well.
Thanks in advance for your help.
Marlon P.
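(For anyone hitting a similar split-brain symptom, these standard BIG-IP commands can help confirm what each unit thinks its state is and what CFE did during the failover:)

```bash
# Run on each BIG-IP: both units reporting ACTIVE confirms a split brain.
tmsh show cm failover-status

# On the unit that went active, inspect what CFE did during failover.
grep f5-cloud-failover /var/log/restnoded/restnoded.log | tail -50

# Force the current unit to standby (the manual failover step above).
tmsh run sys failover standby
```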
It is unclear to me how you have this set up. Can you share your CFE logs (`cat /var/log/restnoded/restnoded.log | grep f5-cloud-failover`)? If you can open a support case so we can see the configuration, that would be helpful.
How are you routing traffic to the self IP address of the active VM, if not using an Azure route table?
> How are you routing traffic to the self IP address of the active VM, if not using an Azure route table?
Hi @mikeshimkus, you asked that last time, but so far I do not have a good answer.
In the meantime, does F5 have a design document/guide with the Azure FW as the GW for every subnet (F5 and nodes)?
The FW is in a hub virtual network and the F5 is in a different, "spoke" virtual network.
Thanks in advance
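For reference, in a hub-and-spoke layout like the one described, each spoke subnet usually carries a UDR defaulting to the firewall's private IP in the hub, along these lines (placeholder names; 10.0.0.4 stands in for the firewall's private IP):

```bash
# Hypothetical names; default-route a spoke subnet through the hub firewall.
az network route-table route create \
  --resource-group my-rg --route-table-name spoke-rt \
  --name default-to-fw --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4
```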
Closing. Please leave a message here if you would like additional assistance, and I will reopen the issue.