2024-11-29

Migrating away from BGP default-information originate

Background

I recently had yet another unplanned nbn outage. I have a GL-MT300N-V2 with a basic config, plus a floating static route on my central/downstream multilayer switch (with a worse administrative distance than BGP) as a backup route, so that I can share my mobile phone's Mobile Broadband whenever my Fortigate (FGT) can't forward default-route traffic. For some reason, though, it was not working as expected/intended.
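
For context, the floating static route on the switch is nothing more than a default route pointing at the GL-MT300N-V2 with a high administrative distance (254 here, versus 20 for eBGP), so it only takes effect when the BGP-learned default disappears. Roughly:

ip route 0.0.0.0 0.0.0.0 192.168.81.1 254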

Problem #1 - IPTABLES default reject on the FORWARD chain

I did not capture the issue in detail, but it turned out that the GL-MT300N-V2 was rejecting traffic in the FORWARD chain by default; changing that setting is what allowed forwarded traffic to pass out to the MBB tether.
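
For the record, the GL-MT300N-V2 runs an OpenWrt-based firmware, so the default forward policy lives in the firewall defaults section. The commands below are a rough sketch of that kind of change rather than my exact fix (zone and section names can differ between builds, so verify before applying):

# Relax the default policy for the FORWARD chain on the OpenWrt firewall
uci set firewall.@defaults[0].forward='ACCEPT'
uci commit firewall
/etc/init.d/firewall restart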



Problem #2 - default-information originate

The upstream BGP default route from my FGT persisted even during an outage, when it should have disappeared so that the floating static route could take over internet forwarding. (The Fortinet article linked herein explains that this is normal BGP behaviour for the feature, but it was overlooked at the time of implementation. Whoops!) The culprit was the Fortinet option `set capability default-information-originate` in the BGP configuration, which advertises a default route to the neighbour regardless of whether the FGT actually holds one. I ended up tuning the BGP configuration and making the default route more dynamic, as follows:

The solution

  1. Created a DEFAULT route prefix list
  2. Created a Route-map that uses the prefix list
  3. Redistributed static routes into the BGP table using the route-map
It now looks something like this:

config router prefix-list
    edit "PL_DEFAULT"
        config rule
            edit 1
                set prefix 0.0.0.0 0.0.0.0
                unset ge
                unset le
            next
        end
    next
end
config router route-map
    edit "RM_DEFAULT"
        config rule
            edit 1
                set match-ip-address "PL_DEFAULT"
            next
        end
    next
end
config router bgp
    config redistribute "static"
        set status enable
        set route-map "RM_DEFAULT"
    end
end
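
To sanity-check the change on the FGT, commands along these lines show whether the static default is present and whether 0.0.0.0/0 is actually in the BGP table and advertised downstream (the neighbour address is a placeholder; output format varies between FortiOS versions):

get router info routing-table static
get router info bgp network
get router info bgp neighbors <neighbour-ip> advertised-routes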

I then disconnected the nbn and enabled `debug ip routing` on my switch to test the solution.

During testing, while the nbn was offline, the floating static route was in place, exactly as expected:

SWITCH#show ip route | incl 0\.0\.0\.0\/0
S*    0.0.0.0/0 [254/0] via 192.168.81.1
SWITCH#

Once the nbn service was back and the upstream FGT regained its static default (now redistributed into BGP), it wasn't long before I saw the resulting debug messages:

1w0d: RT: updating bgp 0.0.0.0/0 (0x0):
    via 10.8.18.1

1w0d: RT: closer admin distance for 0.0.0.0, flushing 1 routes
1w0d: RT: add 0.0.0.0/0 via 10.8.18.1, bgp metric [20/0]
1w0d: RT: default path is now 0.0.0.0 via 10.8.18.1

I followed this up with a check on the routing table, and here is the dynamic default route from an upstream ppp(oe) link in all its glory.

SWITCH#show ip route bgp | incl 0\.0\.0\.0\/0
B*    0.0.0.0/0 [20/0] via 10.8.18.1, 00:27:41
SWITCH#

Conclusion


This method is a more elegant approach: the backup internet link can now be leveraged with almost no manual intervention.

In case you're wondering why I use a floating static route: the GL-MT300N-V2 has extremely limited flash storage, which makes it difficult to install and operate Quagga/FRR, and I am tired of resetting the device, as it has a tendency to fall over after a while, which I suspect is due to the lack of space.

The only improvement I could make right now is improving security through policy by putting the GL-MT300N-V2 behind the firewall itself, but that is a project for another day (not to mention it runs OpenWRT under the hood and has its own IPTABLES firewall anyway). I also plan to swap out the FGT for a dedicated OPNsense appliance hosted on an SBC.


I hope this has been informative and I'd like to thank you for reading!

Stay tuned for more...


2024-11-23

Reflections on Cisco ENARSI Study

One LAB to rule them all

While studying for my Cisco Certified Network Professional (CCNP) Enterprise Advanced Routing (ENARSI) specialisation, I eventually figured out a strategy to help me focus more on learning and less on constantly creating LABs.

I decided to build a reusable (flexible) lab by simply specifying the L2 VLAN at the router with a sub-interface, so that I could potentially attach any router to any other router in a P2MP broadcast setup.

It is impractical for a production network, as each router shares the bandwidth of a single link for all VLANs on the trunk, but it is a very simple, elegant and flexible design that allows for less lab building and more hands-on time across a variety of scenarios.
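
As an illustration of the idea (interface numbers, VLAN IDs and addressing below are made up for the example, not my actual lab), attaching a router to any broadcast segment is just a matter of adding a dot1Q sub-interface for that VLAN on the shared trunk:

interface GigabitEthernet0/1
 description Trunk towards the shared IOSvL2 switch
 no ip address
 no shutdown
!
interface GigabitEthernet0/1.10
 description P2MP segment shared with R2 and R3 (VLAN 10)
 encapsulation dot1Q 10
 ip address 10.0.10.1 255.255.255.0
!
interface GigabitEthernet0/1.20
 description Segment shared with R4 (VLAN 20)
 encapsulation dot1Q 20
 ip address 10.0.20.1 255.255.255.0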



Interestingly, both IOSvL2 switches refuse to provision certain VLAN ranges in the device config I built, but that's probably a bug with either CML or the image itself.

EXAM

So far, the exam has been nothing short of frustrating. Just when I think I've nailed a lot of the concepts in EIGRP, OSPF, redistribution and many other L3 topics and begin feeling more confident, I find that the exam's pool of questions is completely disjointed. For example, in my last attempt - just before the time of writing this - less than 50% of the questions were actual advanced routing questions, with the rest centred on services, device access and extremely low-level, corner-case things like really nuanced MPLS. It's not helping me overcome imposter syndrome, and it makes me feel like I'm just part of Cisco's additional revenue stream and business strategy rather than part of a valuable learning and certification process.

It also frustrates me that I seem to have to retain and recall an insane, almost inhuman amount of low-level information on EVERYTHING, no matter how loosely relevant or related to advanced routing it may be.

I may just pivot across to other vendors and technology because it seems as though my brain is incompatible with rote learning.

2024-11-06

Dealing with old Cisco gear and SSH

I've spent enough time dealing with old Cisco gear to know that its old, outdated ciphers and key exchanges can be tricky to deal with. Unfortunately, we can't just run the latest and greatest in a lab, and a lab is generally considered isolated, so we have to live with this to a certain degree even if it is insecure.

I'm documenting how to force an SSH client (Linux) to use the right KEX, cipher and host-key algorithms, so people (including myself) don't have to piece the solution together from different sources every single time.

First off, the command-line parameters required can be discovered by turning on debug output in the client itself.

ssh -vvv $host

This spits out a lot of information, which I could not seem to filter through egrep; nonetheless, the key items are listed here for reference:

debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: KEX algorithms: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa
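
As an aside, the likely reason egrep appeared to do nothing is that ssh writes its debug output to stderr rather than stdout, so it has to be redirected before filtering. A rough example that matches the debug2 lines above (Ctrl+C once they appear):

ssh -vvv $host 2>&1 | egrep -i 'kex algorithms|ciphers (ctos|stoc)|host key algorithms'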

With that information gleaned, I was able to construct the parameters required to successfully connect to an SSH session in a lab.

ssh -oStrictHostKeyChecking=no -oKexAlgorithms=+diffie-hellman-group1-sha1,diffie-hellman-group14-sha1 -oCiphers=aes128-ctr -oHostkeyAlgorithms=+ssh-rsa $host
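
To avoid retyping all of that, the same options can also live in ~/.ssh/config. A sketch with a placeholder host pattern (adjust it to match your lab naming or addressing):

Host lab-* 192.0.2.*
    KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
    Ciphers aes128-ctr
    HostKeyAlgorithms +ssh-rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null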


 