2023-11-03

Git

I'm not sure why git calls itself 'the stupid content tracker' (according to its man page, that is), but I've discovered that - despite many tutorials overcomplicating the setup with a dedicated git user account and SSH key-based authentication - it is stupidly trivial to set up a remote repository.


By stupid I mean that git stores the repository content as object files that don't resemble the files you're used to working with in a locally checked-out repository or IDE.

This method of file storage threw me off at first, but I eventually managed to get the initial commit added to the remote.

I also learned that git appears to work locally, meaning you can clone on the same system that's hosting the repository using directory paths without a transport protocol!
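
For my own future reference, the whole thing boils down to something like the following (paths, hostnames and branch names are placeholders):

# on the "server" - any box reachable over SSH - create a bare repository
mkdir -p /srv/git/project.git
git init --bare /srv/git/project.git

# from an existing local checkout, add the remote and push the initial commit
git remote add origin ssh://user@server/srv/git/project.git
git push -u origin master        # or main, depending on your default branch

# and on the server itself, a clone needs nothing more than the directory path
git clone /srv/git/project.git /tmp/project-working-copy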

I'm now armed with information on how private git repo hosting works, which is especially useful for interim SCM or when private hosting is required for whatever reason.

2023-10-29

Libvirt virtio Networking

Delving deeper into libvirt has me trying to find ways to improve on the previous build through lab testing.

The latest round of testing is virtio networking on an isolated network, as a way of working around libvirt being unable to snapshot guests unless every volume they use is qcow2.

With this limitation in mind, guests that need access to a common datastore use NFS instead; however, in the current configuration that traffic takes a suboptimal path via the host's management interface.

The virtio model provides much better throughput, while the isolated network still allows guests to communicate with the host but not with anything beyond it.
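
For the record, the isolated network was defined along these lines (names and addressing are illustrative only). Omitting the <forward> element is what makes a libvirt network isolated, and the guest NIC simply uses the virtio model:

<!-- isolated-net.xml -->
<network>
  <name>isolated0</name>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='10.10.10.1' netmask='255.255.255.0'/>
</network>

virsh net-define isolated-net.xml
virsh net-autostart isolated0
virsh net-start isolated0

<!-- and in the guest definition -->
<interface type='network'>
  <source network='isolated0'/>
  <model type='virtio'/>
</interface>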

In my testing with the virtio model I was able to achieve over 10Gbps with no tuning whatsoever, as follows:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.3 GBytes  14.0 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.3 GBytes  14.0 Gbits/sec                  receiver
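
For completeness, those numbers came from a stock iperf3 run over the isolated virtio network, roughly as follows (addresses are placeholders for the isolated-network addressing):

# on the host (or one guest), start the server side
iperf3 -s

# on the guest, point the client at the host's isolated-network address
iperf3 -c 10.10.10.1 -t 10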

The current, suboptimal path is not only limited by the physical NIC/switch, but also shows quite a lot of retries, indicating that TCP retransmits are likely occurring and introducing latency for NFS.

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec  315             sender
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec                  receiver

I now have yet another defined improvement concept ready for implementation on the new server build.

2023-10-26

Libvirt pool storage management

I was really looking forward to improving on my previous homelab by building a new server and defining succinct, well-thought-out storage pools that leverage and manage LVM, mounts etc. in order to abstract away some of the sysadmin tasks.


In my limited testing, I've found that libvirt storage management is flexible yet limited. I could potentially have done away with the complexities of mdadm, the manual definition of a PV, VG and LVs, formatting, creating mountpoints and then adding the mounted filesystem(s) to libvirt (or letting libvirt mount them for me). However, because I'm using crypto to mitigate potential data breaches during hard drive disposal, I can't lean on the RAID functionality within LVM itself, as I want simple encryption with a single key on a single volume - in my case, an md array.
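
To make the layering concrete, the manual stack I'm describing ends up looking roughly like this (device names, RAID level and sizes are illustrative - it's the order of operations rather than a copy-paste recipe):

# RAID first, across the member disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde

# a single crypto layer with a single key over the whole array
sudo cryptsetup luksFormat /dev/md0
sudo cryptsetup open /dev/md0 md0crypt

# LVM on top of the opened crypto device
sudo pvcreate /dev/mapper/md0crypt
sudo vgcreate vg_data /dev/mapper/md0crypt
sudo lvcreate -n lv_guests -l 100%FREE vg_data

# filesystem and the single mount point that libvirt will consume
sudo mkfs.ext4 /dev/vg_data/lv_guests
sudo mkdir -p /data/guests
sudo mount /dev/vg_data/lv_guests /data/guests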

If I didn't require crypto, I might have been able to skip the manual mdadm RAID configuration and carve out nicer storage management, but unfortunately that's not the case.

It also seems that you can't easily carve up an LV as if it were a PV from libvirt's perspective when defining a pool (at least not without the headaches that come with partitioning LVs, or overcomplicating the solution with pools defined from libvirt volumes). Libvirt pools also appear flat in nature, and I can't figure out how to define a volume under a subdirectory without defining separate (e.g. dir-based) pools to overcome this.

So for now my solution is to handle most of the storage manually: a single mount point backed by a single md device and crypto layer, with a single LVM PV, VG and LV on top, and dir-based pools defined within it to manage volumes.
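
The dir-based pools themselves are then trivial to define and use, e.g. (pool, path and volume names are placeholders):

virsh pool-define-as guest_images dir --target /data/guests/images
virsh pool-build guest_images
virsh pool-autostart guest_images
virsh pool-start guest_images

# volumes then live as files under that directory
virsh vol-create-as guest_images vm01.qcow2 50G --format qcow2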

It seems neither ideal nor efficient, but right now I need a solution that moves the project forward to completion.

I will further test and refine (and possibly even automate) the solution on the new hypervisor host at some point. Who knows, there may be better tools or newly discovered ways of doing this in the future.

The next step in the overall solution is to test a virtiofs share and/or a virtio-based high-speed (10Gbps) isolated SAN solution.

2023-09-28

Regular Expressions - Examples and Use Cases

Background

This post should serve as a repository of selected use-case regular expressions, sorted by utility/name. It is predominantly centered around Linux and user-space utilities (with a number of Cisco IOS-based examples under their own headings as well). It will hopefully be continually updated, as I intend to keep adding to it as I build more regular expression use cases.

MDADM

The following was useful for gathering mdadm information when I had an issue with a missing block device in a RAID array (which turned out to be SATA cables accidentally swapped while performing maintenance/cleaning, causing unexpected device renaming which ultimately bumped a device - sdb in my case - off the array). The examples here use simple patterns to show the Linux block devices in an array and to look for related log entries.

user@host:~$ sudo mdadm --detail /dev/md0 | egrep '\/dev\/sd?'
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

user@host:~$ cat /etc/mdadm/mdadm.conf | egrep '\/dev\/sd?'
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
user@host:~$
user@host:~$ sudo dmesg | grep md0
[    2.701684] md/raid:md0: device sdc operational as raid disk 1
[    2.701686] md/raid:md0: device sdd operational as raid disk 2
[    2.701687] md/raid:md0: device sde operational as raid disk 0
[    2.702549] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
[    2.702574] md0: detected capacity change from 0 to 8001304920064
user@host:~$ 

HDPARM

For reasons similar to the mdadm case above, I initially suspected that a disk was faulty and wanted to extract the serial number of each drive for a warranty lookup. This is how I achieved that outcome (sans actual serial numbers).

user@host:~$ sudo hdparm -I /dev/sd? | egrep '(\/dev\/sd?|Serial\ Number)'
/dev/sda:
        Serial Number:      *** REDACTED ***
/dev/sdb:
        Serial Number:      *** REDACTED ***
/dev/sdc:
        Serial Number:      *** REDACTED ***
/dev/sdd:
        Serial Number:      *** REDACTED ***
/dev/sde:
        Serial Number:      *** REDACTED ***
user@host:~$

SCREEN

Sometimes a screen window is killed or exited (often accidentally). Rather than opening up the local user's screenrc file, finding the relevant screen/entry/command and then executing the screen command manually to restore it, I simply execute it directly with the help of grep and bash command substitution. Here are a couple of examples:

$(grep virsh ~/.screenrc)
$(grep /var/log/messages ~/.screenrc)
$(grep virt_snapshot ~/.screenrc)
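
For context, each substitution just pulls a full screen invocation out of the rc file and hands it to bash to execute. A hypothetical ~/.screenrc entry that the first example would match might look like the following; run from a shell inside the existing session, the expanded command simply re-creates that window:

# ~/.screenrc (excerpt) - hypothetical entry for illustration
screen -t virsh 1 watch -n 5 virsh list --all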

LVM

At some point, we might need to review LVM volumes to see where we can scale and resize etc. The following allowed me to quickly see everything at a glance in order to formulate a plan for resizing.

user@host:~$ sudo lvdisplay | egrep "LV (Name|Size)"

[sudo] password for user:
  LV Name                video
  LV Size                <4.02 TiB
  LV Name                audio
  LV Size                750.00 GiB
  LV Name                hdimg
  LV Size                <2.51 TiB
  LV Name                swap
  LV Size                16.00 GiB
  LV Name                var-tmp
  LV Size                8.00 GiB
user@host:~$

Cisco IOS

A collection of various Cisco IOS commands making use of the very limited regular expression engine on an IOS device (or IOS-XE's IOSD).

show version

Show a consolidated view of uptime, firmware and software version & reason for reload (minus all the Cisco copyright and releng information):

SWITCH#show ver | incl Cisco IOS Software|(ROM|BOOTLDR)|uptime|System (returned|restarted|image)
Cisco IOS Software, C3750 Software (C3750-IPSERVICESK9-M), Version 15.0(2)SE11, RELEASE SOFTWARE (fc3)
ROM: Bootstrap program is C3750 boot loader
BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.2(44)SE5, RELEASE SOFTWARE (fc1)
SWITCH uptime is 1 week, 3 days, 22 hours, 29 minutes
System returned to ROM by power-on
System restarted at 12:28:16 WST Sun Sep 17 2023
System image file is "flash:/c3750-ipservicesk9-mz.150-2.SE11.bin"
SWITCH#

show etherchannel

Show port-channel member state times - particularly useful for correlating events with a possible cause without having to rely on syslog:

SWITCH#show etherchannel 1 detail | incl ^(Port: |Age of the port)
Port: Gi1/0/15
Age of the port in the current state: 10d:22h:41m:32s
Port: Gi1/0/16
Age of the port in the current state: 10d:22h:41m:31s
Port: Gi1/0/17
Age of the port in the current state: 10d:22h:41m:30s
Port: Gi1/0/18
Age of the port in the current state: 10d:22h:41m:30s
SWITCH#

2023-09-21

Cisco IOS IPv6 observations

I wanted to delve deeper into some of the intricacies of IPv6, specifically neighbour discovery and directly attached static routes, as well as OSPFv3 using the legacy configuration. I recently discovered two odd Cisco behaviours relating to the following topics; both were observed on virtual lab devices and have not been tested on real equipment.

  1. IPv6 Directly Attached Static Routes
  2. OSPF IPv6

IPv6 Directly Attached Static Route

This doesn't seem to work as described (at least not in a lab); only a fully-specified or next-hop static route works (see the sketch after the list below). This could be due to either:
  • No MAC-address-to-IPv6-neighbour binding - since IPv6 doesn't use ARP the way IPv4 does and instead relies on Neighbour Discovery, which doesn't appear to be working here - more testing/research is required.
  • A limitation in the way Layer 2 forwarding is handled in an emulated/virtual environment.
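
For reference, these are the three forms in question (prefixes and interface are illustrative, using the IPv6 documentation range); the directly attached form is the one that fails for me, while the other two work:

! directly attached - resolves via the interface only; not working in my lab
ipv6 route 2001:db8:10::/64 GigabitEthernet0/0
! next-hop (recursive) - works
ipv6 route 2001:db8:10::/64 2001:db8:12::2
! fully specified (interface + next-hop) - works
ipv6 route 2001:db8:10::/64 GigabitEthernet0/0 2001:db8:12::2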

OSPF IPv6

According to an old Cisco Press article I dug up[1], the protocol appears to "leverage" the OSPFv3 engine; however, it can be configured/started using the legacy IPv6 OSPF configuration, similar to IPv4, as per the following:

ipv6 router ospf process-id

Now, if there's an existing legacy OSPF IPv4 configuration using the same process-id, the IPv6 side appears to fail silently when you enter the configuration (except perhaps if you enable debugging). No neighbours establish at all, despite documentation claiming that it migrates the configuration to OSPFv3 (it most likely does this internally, as I observed that the configuration stays pretty much as you entered it in both the running and start-up configuration).

The lesson I learned here is to identify whether multiple OSPF address families share the same process ID in legacy configuration mode and either:

  1. Update your configuration so that one of the "conflicting" address families uses a unique process ID, or
  2. Migrate the conflicting processes/address families to the new OSPFv3 configuration, consolidating the address families under the one process (see the sketch below).
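
As a rough sketch of option 2 (process ID, router-id, interface and areas are illustrative, and the syntax assumes a platform/release that supports the OSPFv3 address-family configuration):

router ospfv3 1
 router-id 192.0.2.1
 address-family ipv4 unicast
 exit-address-family
 address-family ipv6 unicast
 exit-address-family
!
interface GigabitEthernet0/0
 ospfv3 1 ipv4 area 0
 ospfv3 1 ipv6 area 0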

Further to the above, when removing an OSPF process with no ipv6 router ospf process-id, any interface-specific IPv6 process/area configuration is also removed without warning.

2023-09-20

HomeLab Mk.3 - Planning Phase

Background

I kicked off my homelab refresh project not long ago, dubbed "HomeLab Mk.3" as it's the third iteration since circa 2010. I'm now well into the planning phase, but I've found that I'm also overlapping into the procurement phase (as described herein).

So far, I've decided to replace my pre-Ryzen AMD-based full-tower hyperconverged system with another hyperconverged system, but this time housed in an 18RU rack to provide a small amount of noise management and to neaten up the office a little, which will have the added benefit of assisting with home improvement (flooring) later.

Key requirements;

  1. Costs must be kept as low as possible
  2. Software RAID (due to #1)
  3. Hyperconverged system (due to item #1 and power constraints)
  4. Nested virtualisation for EVE-NG/network modelling

Therefore, based on these requirements, the system (excluding the rack) will comprise the following:

  • One SSD for the hypervisor stack/host OS
  • Up to six (6) 8TB CMR disks for the storage of guests etc.
  • 4RU ATX rackmount case (including rails of course) ✅
  • As much memory as the mainboard allows which relates to key requirement #4

Challenges

The current challenges surrounding the build are;

  1. Choice of Hypervisor (oVirt, libvirt, OpenStack, EVE-NG)
  2. Choice of CPU architecture (due to key requirement #4 and likely challenge #1)
  3. Possible Network re-architecture required to support the system including possible infrastructure Re-IP addressing.

Choice of Hypervisor

For item #1 the choices don't look that great, and I will probably stick with libvirt and the various virt toolsets, if only because:

  • oVirt no longer appears to have a downstream product supported by Red Hat, which means contributions to the upstream project (oVirt) will likely dwindle and eventually kill the project
  • OpenStack is a pain to set up, even the all-in-one "packstack", which suggests that scaling it out in future (if required) could also be painful
  • EVE-NG appears to be an inappropriate choice. While it supports KVM/QEMU/qcow2 images, I'm not sure I want this as the underlying HomeLab hypervisor (unless certain constraints can be overcome - these are considered not in scope of this post).

Choice of CPU architecture

For item #2 the CPU architecture is important only because the network vendor images (QEMU/qcow2) list strict, Intel-based CPU architecture requirements, and AFAIK nested virtualisation requires that the guest architecture matches that of the host.

Possible Network re-architecture

Item #3 is not insurmountable, but it is a challenge nonetheless, as I'm not sure whether I will change the hypervisor guest networks (dev, prod, lab etc.) to connect back upstream at L2 or L3.

Procurement

As I mentioned already, the project planning phase somewhat overlaps with the procurement phase. The reason is that spreading purchases out lets me not only buy the slower-depreciating items over time for project budget flexibility, but also reduce some operational risk in the finished system:

Case in point: HDDs - I never risk buying them all from the same batch, in case of multiple catastrophic failures.

I've already purchased three HDDs, the 4RU rackmount case and rails, and an 18RU rack to house the new gear along with the existing kit (switch, router and UPS).

I'll continue to procure HDDs until I have enough to build the system; then all that's left is to purchase the key parts for the rackmount case/system (CPU, mainboard, memory & PSU) once the CPU architecture/hypervisor testing (see Hardware selection below) and the design are complete.

Hardware selection (CPU architecture)

Whether the new system will be Intel or AMD will depend on the testing performed on my AMD Ryzen-based desktop. If EVE-NG and the required images work under nested virtualisation (and/or bare metal) on that CPU architecture, then I will be in a good position to stick with AMD for this iteration (and likely future iterations) of the HomeLab. After all, AMD-based systems appear to have a good price point, which relates back to key requirement #1.
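
As a starting point for that testing, a quick sanity check on the Ryzen desktop should confirm that the CPU exposes AMD-V and that the kvm_amd module has nested virtualisation enabled - something along these lines (a sketch; parameter values can differ between kernel versions):

# look for the AMD-V flag (svm); an Intel box would show vmx instead
egrep -m1 -o 'svm|vmx' /proc/cpuinfo

# check whether nested virtualisation is enabled for kvm_amd ('1' or 'Y' means enabled)
cat /sys/module/kvm_amd/parameters/nested

# if it isn't, reload the module with nesting turned on
sudo modprobe -r kvm_amd
sudo modprobe kvm_amd nested=1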

2023-09-19

EVE-NG and IOL copy run unix:

Lately, I've found myself working more in EVE-NG than the Cisco Learning Labs (CLL), which has allowed me to go beyond the constraints of the traditional learning materials and key topics, and to tinker more than I probably should.

A long time ago I thought that EVE (possibly pre-NG) allowed the user to literally download the running config as a text file, instead of having to rely on term len 0, show run, screen-scraping the contents and then dumping the resulting clipboard into a file and saving it *yawn*.

Today I discovered that you can save a config straight to a file on the Linux filesystem in EVE-NG (at least you can with IOL).

The way to do this is to simply use the copy command with unix:file as the destination, replacing file with the desired filename:

R1#copy start unix:r1.txt 
Destination filename [r1.txt]? 
1683 bytes copied in 0.011 secs (153000 bytes/sec)

R1#

It is literally that simple.

You can then find the file under the EVE-NG staging area and work on it as a plain-text file:

root@eve-ng:~# ls -alh /opt/unetlab/tmp/1/e6eadfea-e000-41d7-abe9-98f8004bb23f/1 | egrep "r.\.txt$"
-rw-rw-r-- 1 unl1 unl 1.7K Sep 19 16:40 r1.txt
root@eve-ng:~#

I can only imagine how useful this could be in reverse: merging config snippets from the host filesystem straight onto the emulated nodes, and perhaps even generating templates for labs etc.
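
I haven't tested the reverse yet, but if unix: behaves like any other IOS filesystem it should presumably just be the same command with source and destination swapped (with the usual caveat that copying onto running-config merges rather than replaces):

R1#copy unix:snippet.txt running-config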

More testing is required.

2023-02-08

X11 Forwarding woes

Here is a very quick post. It covers a very common/annoying Linux X11 forwarding quirk whose solution I'd previously forgotten, so I decided to add it to my blog as a reference.

Background:

When working on my development laptop remotely with a LiveUSB (either for ripping audio or video content or using it as a dev environment for my recent study), I found that having to set up/install packages locally with the laptop keyboard/trackpad was annoying, since it meant constantly switching (physically) between the main desktop and the laptop. Thanks to SSH, most setup and interaction with the machine can be done from another machine.

The Problem:

Having SSH access is nice, but having the laptop's locally installed browser and other apps displayed on my desktop (via X11 forwarding) is even better (purely so I can avoid having to transfer downloaded content).

Some applications will just work without changing anything; others (read: Firefox) require a little more persuasion.

This is kind of what it looks like out of the box on the host (with X11 Forwarding enabled in sshd and configured on the client SSH application):

kubuntu@kubuntu:~$ firefox &
[2] 17790
kubuntu@kubuntu:~$ PuTTY X11 proxy: Unsupported authorisation protocol
Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0

[2]+  Exit 1                  firefox
kubuntu@kubuntu:~$


The solution:

For me it was quite simple (not sure if this will work for others though so YMMV).

Update the $DISPLAY environment variable:

export DISPLAY={{ x11_server }}:0.0


Where {{ x11_server }} is the hostname or IP address of the X11 server (XLaunch or whatever you use).
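
Two quick checks before launching anything heavier (assuming a small X client such as xclock or xeyes is installed on the remote machine): confirm the variable took, then fire up a lightweight client. Worth noting that pointing DISPLAY straight at an external X server bypasses the SSH X11 tunnel entirely, so that server needs to accept connections from the remote host.

echo $DISPLAY     # should show {{ x11_server }}:0.0
xclock &          # any small X client will do as a test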

There, now I shouldn't forget this ever again :-D


Additional information:

  1. This solution also seems to work inside individual screen sessions.
  2. Source of the solution (#2 in the list) was rediscovered at https://stackoverflow.com/questions/61221498/x11-forwarding-works-on-ubuntu-using-windows-10-cmd-line-ssh-only-after-first-us

 