2023-09-28

Regular Expressions - Examples and Use Cases

Background

This post should serve as a repository of selected use-case regular expressions, sorted by utility/name. It is predominantly centred around Linux and user-space utilities (with a certain number of Cisco IOS-based examples as well, under their own headings and subheadings). It will hopefully be continually updated, as I intend to keep adding to it as I build more regular expression use cases.

MDADM

The following was useful for gathering mdadm information when I had an issue with a missing block device in a RAID array (which turned out to be SATA cables that were accidentally swapped while performing maintenance/cleaning, causing unexpected device renaming that ultimately bumped a device off the array - sdb in my case). The examples here use simple patterns to show the Linux block devices in an array and to look for log entries.

user@host:~$ sudo mdadm --detail /dev/md0 | egrep '/dev/sd.'
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

user@host:~$ cat /etc/mdadm/mdadm.conf | egrep '/dev/sd.'
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
user@host:~$
user@host:~$ sudo dmesg | grep md0
[    2.701684] md/raid:md0: device sdc operational as raid disk 1
[    2.701686] md/raid:md0: device sdd operational as raid disk 2
[    2.701687] md/raid:md0: device sde operational as raid disk 0
[    2.702549] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
[    2.702574] md0: detected capacity change from 0 to 8001304920064
user@host:~$ 
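The device pattern can be made more explicit with a character class, which avoids matching stray /dev/s prefixes. A minimal sketch, piping stand-in text through grep -E (the sample lines are illustrative, not real mdadm output):

```shell
# A character class matches whole /dev/sdX device names explicitly;
# the sample lines below stand in for mdadm/config output
printf '%s\n' 'DEVICE /dev/sdb /dev/sdc' 'ARRAY /dev/md0 metadata=1.2' \
  | grep -E '/dev/sd[a-z]'
```

Only the line naming block devices survives the filter.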

HDPARM

For similar reasons to the MDADM case, I initially suspected that a disk was faulty and wanted to extract the serial number of each disk for a warranty lookup. This is how I achieved that outcome (sans actual serial numbers).

user@host:~$ sudo hdparm -I /dev/sd? | egrep '(/dev/sd.|Serial Number)'
/dev/sda:
        Serial Number:      *** REDACTED ***
/dev/sdb:
        Serial Number:      *** REDACTED ***
/dev/sdc:
        Serial Number:      *** REDACTED ***
/dev/sdd:
        Serial Number:      *** REDACTED ***
/dev/sde:
        Serial Number:      *** REDACTED ***
user@host:~$

SCREEN

So, sometimes a screen is killed or exited (often accidentally), and rather than opening up the local user's screenrc file, looking for the screen/entry/command and then executing the screen command manually to restore it, I simply execute a bash command substitution directly with the help of grep. Here are a couple of examples:

$(grep virsh ~/.screenrc)
$(grep /var/log/messages ~/.screenrc)
$(grep virt_snapshot ~/.screenrc)
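The substitution works because grep prints the matching screenrc line and the shell then word-splits and executes it. A minimal self-contained sketch (the file path and command are stand-ins; note this only suits simple commands without quoting, since the shell word-splits the grep output):

```shell
# Stand-in for a ~/.screenrc entry; the matched line is executed verbatim
cat > /tmp/demo_screenrc <<'EOF'
echo restored-virsh-session
EOF
# grep emits the matching line, and $( ) hands it back to the shell to run
$(grep virsh /tmp/demo_screenrc)
```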

LVM

At some point, we might need to review LVM volumes to see where we can scale, resize, etc. The following allowed me to quickly see everything at a glance in order to formulate a resizing plan.

user@host:~$ sudo lvdisplay | egrep "LV (Name|Size)"
[sudo] password for user:
  LV Name                video
  LV Size                <4.02 TiB
  LV Name                audio
  LV Size                750.00 GiB
  LV Name                hdimg
  LV Size                <2.51 TiB
  LV Name                swap
  LV Size                16.00 GiB
  LV Name                var-tmp
  LV Size                8.00 GiB
user@host:~$
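To tighten the output further, the Name/Size pairs can be joined onto single rows with paste. A sketch against stand-in lvdisplay lines (sample names and sizes are illustrative):

```shell
# Stand-in lvdisplay output; paste - - joins each consecutive pair of
# lines (Name, then Size) onto one tab-separated row
printf '%s\n' \
  '  LV Name                video' \
  '  LV Size                <4.02 TiB' \
  '  LV Name                audio' \
  '  LV Size                750.00 GiB' \
  | grep -E 'LV (Name|Size)' | paste - -
```

Each volume then occupies exactly one line, which is handy for eyeballing or further awk processing.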

Cisco IOS

A collection of various Cisco IOS commands demonstrating the very limited IOS regular expression engine on an IOS device (or IOS-XE's IOSD).

show version

Show a consolidated view of uptime, firmware and software version, and reason for reload (minus all the Cisco copyright and releng information):

SWITCH#show ver | incl Cisco IOS Software|(ROM|BOOTLDR)|uptime|System (returned|restarted|image)
Cisco IOS Software, C3750 Software (C3750-IPSERVICESK9-M), Version 15.0(2)SE11, RELEASE SOFTWARE (fc3)
ROM: Bootstrap program is C3750 boot loader
BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.2(44)SE5, RELEASE SOFTWARE (fc1)
SWITCH uptime is 1 week, 3 days, 22 hours, 29 minutes
System returned to ROM by power-on
System restarted at 12:28:16 WST Sun Sep 17 2023
System image file is "flash:/c3750-ipservicesk9-mz.150-2.SE11.bin"
SWITCH#
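The same alternation works verbatim in GNU grep -E against a captured copy of the output. A minimal sketch (the file name and sample lines are illustrative stand-ins, not a real capture):

```shell
# Stand-in for a saved "show version" capture
printf '%s\n' 'SWITCH uptime is 1 week, 3 days' 'Compiled Wed 01-Jan-20' \
  > /tmp/showver.txt
# grep -E accepts the same alternation the IOS "| include" filter used
grep -E 'Cisco IOS Software|(ROM|BOOTLDR)|uptime|System (returned|restarted|image)' /tmp/showver.txt
```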

show etherchannel

Show port-channel member state times - this is particularly useful for correlating events with a possible cause without having to rely on syslog:

SWITCH#show etherchannel 1 detail | incl ^(Port: |Age of the port)
Port: Gi1/0/15
Age of the port in the current state: 10d:22h:41m:32s
Port: Gi1/0/16
Age of the port in the current state: 10d:22h:41m:31s
Port: Gi1/0/17
Age of the port in the current state: 10d:22h:41m:30s
Port: Gi1/0/18
Age of the port in the current state: 10d:22h:41m:30s
SWITCH#

2023-09-21

Cisco IOS IPv6 observations

I wanted to delve deeper into some of the intricacies of IPv6, specifically Neighbour Discovery and directly attached static routes, as well as OSPFv3 using the legacy configuration. I recently discovered two odd Cisco behaviours with the following topics, possibly related to virtual lab devices, so not tested on real equipment.

  1. IPv6 Directly Attached Static Routes
  2. OSPF IPv6

IPv6 Directly Attached Static Route

This doesn't seem to work as described (at least not in a lab). Only a fully-specified or a next-hop static route works. This could be due to either;
  • No MAC address to IPv6 neighbour binding - since IPv6 doesn't use the concept of ARP like IPv4 does, it instead relies on Neighbour Discovery, which doesn't seem to work here - more testing/research is required.
  • Limitation with the way Layer 2 forwarding is handled in an Emulated/Virtual environment.
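For reference, the two forms that did work for me. A hedged sketch only - the prefix, interface and addresses below are illustrative (documentation prefix), not from my lab:

```
! Fully-specified: exit interface plus next-hop address
ipv6 route 2001:DB8:2::/64 GigabitEthernet0/0 2001:DB8:1::1
! Next-hop only: resolved recursively via the routing table
ipv6 route 2001:DB8:2::/64 2001:DB8:1::1
```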

OSPF IPv6

The protocol, according to an old Cisco Press article I dug up[1], appears to "leverage" the OSPFv3 engine; however, it can be configured/started using the legacy IPv6 OSPF configuration, similar to IPv4, as per the following:

ipv6 ospf process-id

Now, if there's an existing legacy OSPF IPv4 configuration using the same process-id, it appears to silently fail when entering the configuration (except perhaps if you enable debugging). No neighbours will establish at all, despite documentation claiming that it migrates to the OSPFv3 configuration (it most likely does this internally, though, as I observed that the configuration stays pretty much as you entered it in both the running and start-up configuration).

The lesson I learned here is to identify whether multiple OSPF address families share the same process in legacy configuration mode and either;

  1. Update your configuration so that one of the "conflicting" address families uses a unique process-id, or
  2. Migrate the conflicting processes/address families to the new OSPFv3 configuration, consolidating the address families under the one process.

Further to the above, when removing an OSPF process with no ipv6 ospf process-id, any interface-specific IPv6 process/area configuration is also removed without warning.
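As a sketch of option 2, the consolidated OSPFv3-style configuration keeps both address families under the one process. The process-id and interface name here are illustrative, not from my lab:

```
router ospfv3 1
 address-family ipv4 unicast
 exit-address-family
 address-family ipv6 unicast
 exit-address-family
!
interface GigabitEthernet0/0
 ospfv3 1 ipv4 area 0
 ospfv3 1 ipv6 area 0
```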

2023-09-20

HomeLab Mk.3 - Planning Phase

Background

I kicked off my homelab refresh project not long ago, dubbed "HomeLab Mk.3" as it's the third iteration since circa 2010. I'm now well into the planning phase, but I've found that I'm also overlapping into the procurement phase (as described herein).

So far, I've decided to replace my pre-Ryzen AMD-based full-tower hyperconverged system with another hyperconverged system, but this time it will be housed in an 18RU rack. This provides a small amount of noise management, but also neatens up the office a little, which will have the added benefit of assisting with home improvement (flooring) later.

Key requirements;

  1. Costs must be kept as low as possible
  2. Software RAID (due to #1)
  3. Hyperconverged system (due to item #1 and power constraints)
  4. Nested virtualisation for EVE-NG/network modelling

Therefore, based on these requirements, the system (excluding the rack) will comprise the following;

  • One SSD for the hypervisor stack/host OS
  • Up to six (6) 8 TB CMR disks for the storage of guests etc.
  • 4RU ATX rackmount case (including rails of course) ✅
  • As much memory as the mainboard allows, which relates to key requirement #4

Challenges

The current challenges surrounding the build are;

  1. Choice of Hypervisor (oVirt, libvirt, OpenStack, EVE-NG)
  2. Choice of CPU architecture (due to key requirement #4 and likely challenge #1)
  3. Possible network re-architecture required to support the system, including possible infrastructure re-IP addressing.

Choice of Hypervisor

For item #1, the choices don't look that great, and I will probably stick with libvirt and the various virt toolsets, if only because;

  • oVirt appears to no longer be supported downstream by Red Hat, which means contributions to the upstream project (oVirt) will likely dry up and eventually kill the project
  • OpenStack is a pain to set up, even the all-in-one "packstack", which could also impact scalability in future if required
  • EVE-NG appears to be an inappropriate choice. While it supports KVM/QEMU/qcow2 images, I'm not sure I want this as the underlying HomeLab hypervisor (unless certain constraints can be overcome - these are considered out of scope for this post).

Choice of CPU architecture

For item #2, the CPU architecture is important only because network vendor (QEMU/qcow2) images list strict CPU architecture requirements as being Intel-based, and AFAIK nested virtualisation requires that the guest architecture matches that of the host.

Possible Network re-architecture

Item #3 is not insurmountable, but it is still a challenge nonetheless, as I'm not sure whether I will change the hypervisor guest networks (dev, prod, lab etc.) to connect back upstream at L2 or L3.

Procurement

As I mentioned already, the project planning phase somewhat overlaps with the procurement phase. The reason for this is so that I can not only procure certain less tech-depreciating items over time to allow project budget flexibility, but also allow a certain level of reduced operational risk:

Case in point: HDDs - I never risk buying them from the same batch, in case of multiple catastrophic failures.

I've already purchased three HDDs, the 4RU rackmount case and rails, and an 18RU rack to house the new gear along with the existing kit (switch, router and UPS).

I'll continue to procure HDDs until I have enough to build the system; then all that's left is to purchase the key parts for the rackmount case/system (CPU, mainboard, memory & PSU) once CPU architecture/hypervisor testing (see Hardware selection below) and the design are complete.

Hardware selection (CPU architecture)

Whether the new system will be Intel or AMD will depend on the testing performed on my AMD Ryzen-based desktop. If EVE-NG and the required images work with nested virtualisation (and/or bare-metal) on that CPU architecture, then I will be in a good position to stick with AMD for this iteration (and likely future iterations) of the HomeLab. After all, AMD-based systems appear to have a good price point, which relates back to key requirement #1.

2023-09-19

EVE-NG and IOL copy run unix:

Lately, I've found myself working more on EVE-NG than the Cisco Learning Labs (CLL), which has allowed me to go beyond the constraints of the traditional learnings and key topics and to tinker more than I probably should.

A long time ago, I thought that EVE (possibly pre-NG) allowed the user to literally download the running config as a text file, instead of having to rely on term len 0 and show run, screen-scraping the contents and then offloading the resulting clipboard to a file and saving it *yawn*

Today I discovered that you can save a config straight to a file on the EVE-NG Linux filesystem (at least you can with IOL).

The way to do this is to simply use the copy command with unix:file as the destination, replacing file with the name of the file;

R1#copy start unix:r1.txt 
Destination filename [r1.txt]? 
1683 bytes copied in 0.011 secs (153000 bytes/sec)

R1#

It is literally that simple.

You can then find the file under the EVE-NG staging area and work on it as a plain-text file;

root@eve-ng:~# ls -alh /opt/unetlab/tmp/1/e6eadfea-e000-41d7-abe9-98f8004bb23f/1 | egrep "r.\.txt$"
-rw-rw-r-- 1 unl1 unl 1.7K Sep 19 16:40 r1.txt
root@eve-ng:~#
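The trailing pattern is worth a note: the unanchored r.\.txt$ matches exactly one character between the r and the literal .txt, so a hypothetical r10.txt would be skipped. A quick stand-in demonstration:

```shell
# '.' matches exactly one character, '\.' a literal dot, '$' end of line,
# so only single-character router names survive (file names are stand-ins)
printf '%s\n' 'r1.txt' 'r10.txt' 'notes.txt' | grep -E 'r.\.txt$'
```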

I can only imagine how useful this could be in reverse - merging config snips onto the emulated nodes straight from the host filesystem, and perhaps even generating templates for labs etc.

More testing is required.
