2024-11-06

Dealing with old Cisco gear and SSH

I've spent enough time dealing with old Cisco gear to know that outdated ciphers and key exchange algorithms can be tricky to deal with. Unfortunately, we can't always run the latest and greatest in a lab, and since a lab is generally considered isolated, we have to live with this to a certain degree, insecure as it is.

I'm documenting the process of forcing the Linux SSH client to use the right KEX, cipher, etc., so people (including myself) don't have to piece the solution together from different sources every single time.

First off, the answers to which command-line parameters are required lie in the debug output of the client application itself:

ssh -vvv $host

This spits out a lot of information, which I could not seem to filter through egrep; nonetheless, the key items are listed here for reference:

debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: KEX algorithms: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa

With that information gleaned, I was able to construct the parameters required to successfully connect to an SSH session in a lab.

ssh -oStrictHostKeyChecking=no -oKexAlgorithms=+diffie-hellman-group1-sha1,diffie-hellman-group14-sha1 -oCiphers=aes128-ctr -oHostkeyAlgorithms=+ssh-rsa $host
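Rather than typing these options every single time, they can also be persisted in the client configuration. A minimal sketch for ~/.ssh/config, assuming the lab devices are named with a hypothetical lab- prefix (keep the Host pattern tightly scoped, since these algorithms are weak):

```
# ~/.ssh/config - legacy Cisco lab devices only; do not apply
# these weakened settings globally.
Host lab-*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
    Ciphers aes128-ctr
    HostKeyAlgorithms +ssh-rsa
```

With this in place, a plain `ssh lab-r1` picks up the legacy algorithms automatically.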


2024-10-22

CML2 - Some Thoughts and Comparison to EVE-ng

Since I'm back in study mode, I thought I'd get hold of a Cisco Modeling Labs (CML) 2 licence so that I can gain some efficiency, and therefore more focus on the labs at hand, rather than troubleshooting and working around the various kinks and nuances of the lab environment, which I found I was doing a lot of in EVE-ng (prior to 6.0.x).

Installation

Once the purchase over at the learningnetworkstore was complete, installation was quite straightforward, except for one extra step, which I will explain later.

Here are the high-level steps I took to accomplish the task:

  1. Downloaded the OVA and the refplat bundle from the Cisco software center
  2. Copied the OVA to the hypervisor
  3. Converted the vmdk from the OVA to qcow2
  4. Imported the qcow2 image into the hypervisor

Step 3 was only required because I'm using QEMU/KVM+libvirt as my hypervisor, but a quick search online guided me to the solution, which allowed me to import it almost seamlessly.

I imported the qcow2 image and started the VM, but it would not boot properly; it turns out UEFI firmware is required. Easy fix.
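For reference, the conversion and import in steps 3-4 can be sketched roughly as follows (file names, memory and vCPU values are hypothetical; this assumes the qemu-img and virt-install tools are available):

```shell
# An OVA is just a tar archive - extract it to get at the vmdk
tar -xvf cml2_controller.ova

# Convert the extracted vmdk to qcow2 for QEMU/KVM
qemu-img convert -f vmdk -O qcow2 cml2_controller-disk1.vmdk cml2.qcow2

# Import into libvirt; the VM would not boot for me without UEFI
# firmware, hence --boot uefi
virt-install --name cml2 --memory 16384 --vcpus 8 \
  --disk path=/var/lib/libvirt/images/cml2.qcow2,bus=virtio \
  --import --boot uefi --osinfo generic
```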

Initial Setup

Initial setup required access to the VM console, but it was very straightforward: it offered to use DHCP, expand the disk to its biggest possible extent and set the passwords. All quite lacklustre, painless and somewhat anticlimactic.

Once booted, the console informs you that you can log into the CML application and also the Cockpit web interface for sysadmin tasks (cue the sysadmin credentials).

Licence and Registration

At first login to CML, the application setup continues a bit further; you can then register the instance by inputting the licence key (provided you're not distracted by the setup options into navigating away from the wizard).

LAB Time

Once CML was configured, I seem to remember it immediately creating a new lab and leaving me in the driving seat at that point. I found it trivial to navigate, add nodes, access consoles, create links, etc.

So far so good. Less than a couple of hours to set up, as opposed to EVE-ng, which took me several hours to install from scratch (ISO) in a VM. And that's not including the time it took to copy and set up each image (converting the few Cisco "qcow2" images that were actually raw), test, and then start labbing and figure out all the strange and weird behaviours, like interface state configuration not being saved in exported configs.

I have no idea yet how to add custom (non-Cisco) nodes to CML. I know that qcow2 images (only) can be added, but they demand a lot of hypervisor options in the node definitions, which I don't want to have to worry myself over. And then there's the quirk of default node definitions being read-only. I want to edit them, but CML scares you out of doing so, warning that it could break labs, and there doesn't seem to be an option to revert them back to default.

Another quirk I found is that CML (out of the box) doesn't give you the same sort of naming construct as EVE-ng does with bulk node numbering. While you can prefix nodes, it appends a '-' and then a number which starts from 0, not 1. So I end up with R-0, R-1, R-2 and so on. Not a big deal, as renaming is fairly straightforward, but renaming 5-10 devices isn't something I want to have to spend time doing.

The last thing I'd like to mention is that I'm noticing a lot of EVE-ng-like similarities with regard to lab IDs and exported configs (or, as CML puts it, "Fetch config", which I discovered needs to be done individually on each node in order to include the config in the exported lab YAML).

Final Opinion

CML is very polished. It's a breath of fresh air for Cisco-centric stuff out of the box. Where it is lacking, though, is the system performance readout in the bottom bar, which is quite distracting. When starting a lab, while it's settling, or whenever a router reloads or just decides it needs more CPU, you, the user, see it. This is a distraction, and if there isn't an option to toggle it off or hide it completely, there should be. I want to focus on the lab at hand, not sysadmin tasks.

I've never been a fan of EVE-ng's UNL (UNetLab) file format, but the ability to easily export and import labs in a standard file format (YAML) is fantastic.

Licensing is something I don't like. While CML does come with an eval licence, you still have to purchase the product just to download it. CML Personal could be free/accessible for personal/eval use, either with a perpetual licence included or with registration required to obtain a free licence. Cisco are definitely bringing in a revenue stream across the entire CML product line, but then again, they probably poured a lot of resources into developing the KVM+Cockpit-based hypervisor and web GUI into quite a polished product, which also has a rich API for automation that can be leveraged for things like CI/CD.

The disk capacity of the OVA seems rather small, so I'm going to consider using libvirt's guestfs tools to expand the qcow2 image and then figure out how to expand the PV/LV within the OS/Cockpit.
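The expansion I have in mind would look roughly like this (device, VG and LV names are hypothetical; growpart comes from cloud-utils, and the guest-side steps assume the root filesystem sits on LVM):

```shell
# On the hypervisor: grow the backing image (VM shut down first)
qemu-img resize /var/lib/libvirt/images/cml2.qcow2 +100G

# Inside the guest (or via Cockpit): grow the partition, PV and LV
growpart /dev/vda 3                           # extend partition 3
pvresize /dev/vda3                            # let LVM see the new space
lvextend -l +100%FREE -r /dev/mapper/vg0-root # grow LV and filesystem (-r)
```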

CML is now my go-to network modelling lab tool for my next CCNP ENARSI exam, since it offers fewer quirks and more polish, allowing me to more easily create, manage and operate my labs and focus on what matters: learning.

2024-08-24

HomeLab Mk.3 - Project Closeout

From a project-methodology standpoint, I'm missing some updates since the last post, but this is because I had since been made redundant, had immediate funding as a result and, not to mention, limited time to kick off, execute and deploy before securing new employment.

The whole project is now complete with a 4RU AMD Ryzen-based custom-built server running Debian GNU/Linux.

Some of the improvements that have been made so far are as follows (in no particular order):

  1. Employed cryptsetup on top of software RAID
  2. Purchased and installed the 4RU system into an 18RU rack
  3. Installed Cockpit for easier host/hypervisor management
  4. Migrated the VMs from the previous HomeLab hypervisor to the new one
  5. Built a functioning EVE-ng instance as a VM using nested virtualisation for network modelling

One key compromise was that I decided to reduce memory costs, so the hypervisor host is outfitted with 64 GB instead of the maximum 192 GB of RAM. This was due to the higher-than-expected motherboard cost, not to mention that my requirements are fairly low at this stage, so that sort of outlay isn't justified.

In addition to the above, I've also embarked on a more secure and virtualised infrastructure by using OPNsense for the PROD, DEV, (NET)LAB and DMZ networks. It pretty much just stitches together and firewalls multiple isolated virtual networks inside libvirt, peers with the multilayer switch over a P2P L3 interface via a dot1q trunk, advertises a summary route and accepts only a default route from upstream.

I think it's a fairly elegant design given my constraints and requirements, but more importantly, it is a much more manageable setup now, which reduces some technical debt for me. There are now very few improvements to make, even for the next iteration of the HomeLab, which will mostly be a hardware refresh - that, and re-racking everything, since the rack's mounting rails need adjusting to accommodate the 4RU server depth, which unfortunately couldn't be done in time.

While I would love to share the overall design itself, it unfortunately has far too much information that is now considered somewhat confidential, but those who I trust and those who know me are always welcome to take a read (preferably onscreen) as I'm not in a position to re-write it for public consumption.

Debugging Cisco Access Lists

I want to share something specific I learned that seems to be outside the official CCNP curriculum.

Despite the fact that I've done some (L2) traffic separation for untrusted devices, there are still, unfortunately, some that need to be on my internal L3 network for now (Google-based devices like a Google TV-based TV and an old Google Home - Nest products don't interest me), so I decided to do something about restricting vertical traffic and potential attacks from old, unsupported or not-so-trusted hosts.

While I could separate the traffic at L2 and forward it to a virtual firewall or my FGT internet firewall appliance, that, in my opinion, causes sub-optimal traffic flows due to network limitations/design, since the budget won't allow better gear for my needs (like a VXLAN/VPNv4 overlay with route leaking, etc.).

So, all I have to work with is an old, unsupported Cisco IOS 15 (classic) multilayer switch at the centre of my home network.

I thought this would be pretty easy: just allow host services like DHCP/netboot, intra-VLAN traffic, etc., block RFC1918 and allow everything else. Ez Pz. Except netboot to my netboot.xyz server didn't work initially, and I couldn't easily figure out why.

ip access-list extended RESTRICTED_ACCESS
 remark NETWORK_SERVICES
 permit udp any eq bootpc any eq bootps
 permit udp any any eq domain
 remark ALLOW_PING
 permit icmp any any echo
 permit icmp any any echo-reply
 remark ALLOW_PXE_SERVER
 permit udp any host 192.168.56.3 eq tftp
 permit tcp any host 192.168.56.3 eq www
 remark PERMIT_INTRA-VLAN
 permit ip 192.168.0.0 0.0.0.255 192.168.0.0 0.0.0.255 log
 remark DENY_RFC1918
 deny   ip any 10.0.0.0 0.255.255.255
 deny   ip any 172.16.0.0 0.15.255.255
 deny   ip any 192.168.0.0 0.0.255.255
 remark ALLOW_EVERYTHING_ELSE
 permit ip any any log

I needed some visibility on the ports and protocols like a firewall log... Cisco conditional debugging to the rescue!

The specific Cisco debug I used was `debug ip packet detail`.

Unfortunately, the detail was overwhelming, showed far too much information for any human to interpret, and nearly brought down the switch, so I had to constrain the output with a debug condition similar to the following:

`debug condition ip 192.168.0.4`

This produced the information I required and allowed me to pinpoint the missing port and protocol required!

21w3d: IP: s=192.168.0.1 (local), d=192.168.0.4 (Vlan666), len 56, sending

21w3d:     ICMP type=3, code=13
21w3d: IP: s=192.168.0.1 (local), d=192.168.0.4 (Vlan666), len 56, output feature
21w3d:     ICMP type=3, code=13, Check hwidb(88), rtype 1, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
21w3d: IP: s=192.168.0.1 (local), d=192.168.0.4 (Vlan666), len 56, sending full packet
21w3d:     ICMP type=3, code=13pak 599DB6C consumed in input feature , packet consumed, Access List(31), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
21w3d: IP: s=192.168.0.4 (Vlan666), d=192.168.56.3, len 32, access denied
21w3d:     UDP src=62557, dst=30002
21w3d: FIBipv4-packet-proc: route packet from Vlan666 src 192.168.0.4 dst 192.168.56.3
21w3d: FIBfwd-proc: packet routed by adj to Vlan56 192.168.56.3
21w3d: FIBipv4-packet-proc: packet routing succeeded
21w3d: IP: s=192.168.0.1 (local), d=192.168.0.4, len 56, local feature
21w3d:     ICMP type=3, code=13, CASA(4), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
21w3d: IP: s=192.168.0.1 (local), d=192.168.0.4, len 56, local feature

As you can see in the above output, UDP traffic to port 30002 was denied by the access list (caught by the DENY_RFC1918 entries, since 192.168.56.3 falls within 192.168.0.0/16), so adding a permit for it before the RFC1918 denies resolved this for me. Happy days.

So here's the final ACL that worked a treat.

ip access-list extended RESTRICTED_ACCESS
 remark NETWORK_SERVICES
 permit udp any eq bootpc any eq bootps
 permit udp any any eq domain
 remark ALLOW_PING
 permit icmp any any echo
 permit icmp any any echo-reply
 remark ALLOW_PXE_SERVER
 permit udp any host 192.168.56.3 eq tftp
 permit udp any host 192.168.56.3 eq 30002
 permit tcp any host 192.168.56.3 eq www
 remark PERMIT_INTRA-VLAN
 permit ip 192.168.0.0 0.0.0.255 192.168.0.0 0.0.0.255 log
 remark DENY_RFC1918
 deny   ip any 10.0.0.0 0.255.255.255
 deny   ip any 172.16.0.0 0.15.255.255
 deny   ip any 192.168.0.0 0.0.255.255
 remark ALLOW_EVERYTHING_ELSE
 permit ip any any log

Yes, I know I can (and probably will) tighten it some more and make DNS more specific (or remove it entirely to enforce Quad9 DNS and prevent poisoning), but I wanted an ACL that is as simple as possible so I can easily model it and apply it to other interfaces and SVIs - which, I might add, is being done, and so far it is working well.

2023-11-03

Git

I'm not sure why git is called 'the stupid content tracker' (according to the man page that is), but I've discovered that - despite many tutorials overcomplicating the setup by adding the creation of a git user account and SSH key-based authentication - it is stupidly trivial to set up a remote repository.


By stupid I mean that git does not reference any of the object files in a way that you would expect or as you are used to working with them in your locally checked-out repository or IDE.

This method of file storage threw me off and caught me off guard, but I eventually managed to get the initial commit added to the remote.

I also learned that git appears to work locally, meaning you can clone on the same system that's hosting the repository using directory paths without a transport protocol!
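By "stupidly trivial" I mean something like the following sketch, using local paths only (the paths and identity here are hypothetical; the same bare repository would equally be reachable over SSH as user@host:/path/project.git):

```shell
BASE=$(mktemp -d)

# Create a bare repository to act as the "remote"
git init --bare "$BASE/project.git"

# Create a working repository, commit, and push to it
git init "$BASE/work"
cd "$BASE/work"
git config user.email "you@example.com"   # hypothetical identity
git config user.name  "You"
echo "hello" > README
git add README
git commit -m "initial commit"
git remote add origin "$BASE/project.git"
git push origin HEAD

# Cloning locally works with a plain path - no transport protocol
git clone "$BASE/project.git" "$BASE/clone"
```

No git user account or SSH keys required for the local case; the bare repository is just a directory of git objects.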

I'm now armed with information on how private git repo hosting works, which is especially useful for interim SCM or when private hosting is required for whatever reason.

2023-10-29

Libvirt virtio Networking

Delving deeper into libvirt has me trying to find ways to improve the previous build through lab testing.

The latest testing is virtio networking with an isolated network, in order to mitigate libvirt not being able to snapshot guests unless the volumes they use are all qcow2.

With this limitation in mind, I employed NFS to a common datastore for the guests that require access to it; however, the path taken in the current configuration is suboptimal, going via the host's management interface.

The virtio model provides much better throughput while at the same time allowing guests to communicate with the host, but not outside the host.
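For reference, an isolated network of this sort can be defined in libvirt roughly as follows (names and addressing are hypothetical; omitting the <forward> element is what makes the network isolated):

```shell
# Define an isolated network: no <forward> element means no NAT or
# routing beyond the host - guests reach the host and each other only
cat > /tmp/isolated-san.xml <<'EOF'
<network>
  <name>isolated-san</name>
  <bridge name="virbr-san"/>
  <ip address="10.10.10.1" netmask="255.255.255.0"/>
</network>
EOF
virsh net-define /tmp/isolated-san.xml
virsh net-start isolated-san
virsh net-autostart isolated-san
```

Guests then attach a NIC with `<model type="virtio"/>` and `<source network="isolated-san"/>` in their interface definition to get the high-throughput path.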

In my testing with the virtio model, I was able to achieve over 10 Gbps with no tuning whatsoever, as follows:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.3 GBytes  14.0 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.3 GBytes  14.0 Gbits/sec                  receiver

The current suboptimal path is not only limited by the hardware NIC/switch; we can also observe quite a lot of retries, indicating TCP retransmits are likely occurring, which would introduce latency with NFS.

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec  315             sender
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec                  receiver

I now have yet another defined improvement concept ready for implementation on the new server build.

2023-10-26

Libvirt pool storage management

I was really looking forward to improving on my previous homelab by building a new server and defining succinct, well-thought-out pools that leverage and manage LVM, mounts, etc., in order to abstract away some of the sysadmin tasks.


In my limited testing, I've found that libvirt storage management is flexible yet limited. I could potentially have done away with the complexities of mdadm, the manual definition of PVs, VGs and LVs, formatting, creating mountpoints and then adding the mounted filesystem(s) to libvirt (or letting libvirt mount them for me). But since I'm using crypto to mitigate potential data breaches during hard drive disposal, I can't leverage the RAID functionality within LVM itself, as I require simplified encryption with a single key on a single volume - in my case, an md array.

If I didn't require crypto, I might have been able to skip the manual mdadm RAID configuration and carve out nicer storage management; however, this is unfortunately not the case.

It seems as though you can't easily carve up an LV as if it were a PV from libvirt's perspective when defining a pool (that is, without the headaches that come with partitioning LVs, or overcomplicating the solution with pools defined from libvirt volumes). Libvirt pools also seem flat in nature, and I can't figure out how to define a volume under a directory without defining separate (dir-based) pools to overcome this.

So for now my solution is to handle most of the storage manually with one single mount point based on a single md and crypto device along with a single LVM PV, VG and LV with dir-based pools defined to manage volumes.
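The resulting layering, bottom to top, can be sketched as follows (device and volume names are hypothetical; these commands need root, and luksFormat is destructive):

```shell
# 1. Software RAID across the member disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# 2. Encryption on top of the array: one key, one volume
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptstore

# 3. LVM on the opened crypto device
pvcreate /dev/mapper/cryptstore
vgcreate vg_store /dev/mapper/cryptstore
lvcreate -n lv_vmstore -l 100%FREE vg_store

# 4. Filesystem and mount, then a dir-based libvirt pool on top
mkfs.xfs /dev/vg_store/lv_vmstore
mkdir -p /srv/vmstore && mount /dev/vg_store/lv_vmstore /srv/vmstore
virsh pool-define-as vmstore dir --target /srv/vmstore
virsh pool-start vmstore && virsh pool-autostart vmstore
```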

It doesn't seem ideal nor efficient, but right now I need a solution to move the project forward to completion.

I will further test and refine (and possibly even automate) the solution on the new hypervisor host at some point. Who knows, there may be better tools or newly discovered ways of doing this in the future.

The next step in the overall solution is to test a virtiofs share and/or a virtio-based high-speed (10 Gbps) isolated SAN solution.
