Showing posts with label debian. Show all posts

2024-12-11

Emulated/Virtual Test Network

Today I finally managed to get the foundations for my test network working at L2 within a virtual environment.

What I'm trying to achieve will let me simulate various aspects of my home network and hyper-converged homelab from within the homelab itself!

Over on LinkedIn, I posted that I got L2 port-channelling/bonding working, but as you can see in the snip below, Po2 doesn't show LACP as its protocol. This is because I cheated with the config and used `channel-group 2 mode on` instead of `channel-group 2 mode active`, which brought the port-channel interface up on the switch, but the bond on the Debian GNU/Linux host would still not form.

This post serves as a correction to that article/post.


The cause of the behaviour I was experiencing was that the libvirt VirtIO-based network adapters don't seem to report a link speed to the guest. I believe they operate at 10Gbps by default, which would make the bond interfaces incompatible with the IOS-based peer's port-channel interfaces, which are limited to 1Gbps (and with LACP in general).

Changing the speed and duplex with nmcli solved this for me [1].

for i in 3 4 5 6; do sudo nmcli conn mod ens$i 802-3-ethernet.speed 1000 802-3-ethernet.duplex full; done

As soon as the speed and duplex were applied, the port-channel came up straight away. Marvellous.

Switch#show etherchan 2 summ | beg Port-

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
2      Po2(SU)         LACP      Gi1/0(P)    Gi1/1(P)    Gi1/2(P)
                                 Gi1/3(P)
Switch#
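On the Debian side, the result can be checked as well. This is a sketch: the bond name `bond0` and the member interface `ens3` are assumptions from my setup, not something shown above.

```shell
# Kernel view of the bond: aggregator IDs, partner details and per-member
# LACP state ('bond0' is a hypothetical bond interface name).
cat /proc/net/bonding/bond0

# Confirm the speed/duplex override stuck on one of the member connections:
nmcli -f 802-3-ethernet.speed,802-3-ethernet.duplex connection show ens3
```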

Now I can proceed with further network-related components similar to those in my 'production' network.

[1]https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-802-3-link-settings_configuring-and-managing-networking#proc_configuring-802-3-link-settings-using-the-nmcli-utility_configuring-802-3-link-settings


 

2024-08-24

HomeLab Mk.3 - Project Closeout

From a project-methodology standpoint, I'm missing some updates since the last post, but this is because I had since been made redundant, which gave me immediate funding but also limited time to kick off, execute and deploy before securing new employment.

The whole project is now complete, with a 4RU AMD Ryzen-based custom-built server running Debian GNU/Linux.

Some of the improvements that have been made so far are as follows (in no particular order):

  1. Employed cryptsetup on top of software RAID
  2. Purchased and installed the 4RU system into an 18RU rack
  3. Installed Cockpit for easier host/hypervisor management
  4. Migrated the VMs from the previous HomeLab hypervisor to the new one
  5. Built a functioning eve-ng instance as a VM using nested virtualisation for network modelling

One key compromise was that I decided to reduce costs on memory, so the hypervisor host is outfitted with 64GB of RAM instead of the maximum 192GB. This was due to the higher-than-expected motherboard cost; besides, my requirements are fairly low at this stage, so the cost of that sort of outlay isn't justified.
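The cryptsetup-on-RAID layering from item 1 above can be sketched roughly as follows. The device names, RAID level and mount point are all hypothetical placeholders, not the actual build:

```shell
# Assemble a software RAID-1 array from two hypothetical member partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Layer LUKS encryption on top of the md device, then open a mapping.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 crypt-md0

# Create and mount a filesystem on the decrypted mapping.
mkfs.ext4 /dev/mapper/crypt-md0
mount /dev/mapper/crypt-md0 /srv
```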

In addition to the above, I've also embarked on a more secure and virtualised infrastructure by using OPNSense for the PROD, DEV, (NET)LAB and DMZ networks. It pretty much just stitches together and firewalls multiple isolated virtual networks inside libvirt, peers with the multi-layer switch over a P2P L3 interface via a dot1q trunk, advertises a summary route, and accepts only a default route from upstream.

I think it's a fairly elegant design given my constraints and requirements, but more importantly, it is a much more manageable setup now, which reduces some technical debt for me. Now there are very few improvements to make even in the next iteration of the HomeLab, which will mostly be a hardware refresh - that and re-racking everything, since the rack's mounting rails need adjusting to accommodate the 4RU server depth, which unfortunately couldn't be done in time.

While I would love to share the overall design itself, it unfortunately has far too much information that is now considered somewhat confidential, but those who I trust and those who know me are always welcome to take a read (preferably onscreen) as I'm not in a position to re-write it for public consumption.

2019-05-15

Adventures in Automation: Part 1

Ever since I first heard about automation (and then orchestration), I've meant to get into it, and I have finally taken some measures to not only learn it but also implement some of my own.

I realised that a fair amount of unwanted technical debt has been building up while maintaining my 'homelab' in its current state, and some of the daily hassles can be automated away.

Two such tasks I have just undertaken and completed are:

  1. Ansible automation to update, upgrade and clean orphaned/unused packages
  2. Script to cleanup snapshots taken prior to ansible playbooks being run
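Task 1 above can be sketched as an ad-hoc Ansible run against Debian hosts. This is an assumption of what such a run might look like; the inventory group `homelab` is hypothetical and the post doesn't show the actual playbook:

```shell
# Refresh the package cache, dist-upgrade, then remove orphaned/unused
# packages via Ansible's apt module, against a hypothetical 'homelab' group.
ansible homelab -b -m ansible.builtin.apt \
    -a "update_cache=yes upgrade=dist autoremove=yes autoclean=yes"
```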


I have a very long way to go before I can get other things automated, but it's a start.

I'm also in the process of defining a Docker environment where most of my services will operate from within one or more virtual machines.

I will also have to look into a GitHub repository for my code, and a complete cloud backup solution with onsite encryption (encrypted using my own privately owned and stored keys).

2019-05-04

Free Range Routing


Since I discovered Docker, I have been busy designing my homelab to be as Cloud Native as possible. In doing so, I realised that the default Docker network (aka bridge), along with the bridge-type networks defined for other containers by docker-compose, isn't known to the upstream collapsed-core/access-layer network.

In the past I have been adding static routes upstream and a default route on the Docker host, but this was not ideal (read: not scalable) given the dynamic nature of Docker networks created with docker-compose.

I quickly realised that, since I've developed significant experience with BGP (in service-provider environments), I could just peer the Docker host with the upstream access layer, but until now I didn't know how to do this on Linux.

I have always known about Quagga, but I had been a bit concerned about the learning curve required to get it working. Then I remembered its fork, Free Range Routing. So I decided to take a leap of faith, and I have no regrets whatsoever.

I was surprised at how easy it was to install, configure and get working, which comprised the following:

On the Docker host:

  1. Set up the repo
  2. Installed FRR and configured services for vtysh as per the official documentation
  3. Connected to the vtysh interface
  4. Configured BGP and redistributed only connected routes using a route-map/prefix-list

On the upstream switch/access layer:

  1. Configured peering to FRR and squelched all but a summary route with a prefix-list/route-map.
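The FRR side of the steps above can be sketched non-interactively from the shell with vtysh. The AS numbers, neighbour address, prefix and policy names here are hypothetical placeholders, not my actual policy:

```shell
# Configure BGP in FRR via vtysh batch commands: permit only the Docker
# summary range, then redistribute connected routes through that policy.
vtysh \
  -c 'configure terminal' \
  -c 'ip prefix-list CONNECTED-OUT seq 5 permit 172.18.0.0/16 le 24' \
  -c 'route-map DOCKER-OUT permit 10' \
  -c 'match ip address prefix-list CONNECTED-OUT' \
  -c 'exit' \
  -c 'router bgp 65001' \
  -c 'neighbor 192.0.2.1 remote-as 65000' \
  -c 'address-family ipv4 unicast' \
  -c 'redistribute connected route-map DOCKER-OUT' \
  -c 'end' \
  -c 'write memory'
```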
Not only did I define the policies perfectly (woot!), but the neighbor came up without any issues at all!!!

My switch can now route traffic to any docker/docker-compose containers/services that are created, with far less effort required to define routes to the containers, or to delete them when they are no longer in use.

I am very surprised to learn that not only is it extremely easy to get FRR up and running, but it is also very easy to configure its routing daemons, especially if you have a grounding in Cisco, as vtysh is very closely matched to the Cisco CLI. There are some slight and obvious differences from Cisco's, not to mention the fact that you need to connect to the vtysh interface, kind of like the root user does on a Juniper platform (cli).

Next up I need to figure out how to leverage Linux VRF namespaces using FRR vtysh, then I can migrate my infrastructure to MPLS!


2015-09-03

BIND (named) server remediation [part 2]

Following up from my previous post (BIND (named) server remediation), I spent a good couple of hours further developing and testing the configuration, but I failed to get a bind9 reverse lookup zone to load, only to find out that I had a slight typo in the reverse lookup zone definition.

named-checkzone was returning OK, but named itself was failing to load the zone file with the error:

zone X.X.X.in.addr.arpa/IN: has 0 SOA records
zone X.X.X.in.addr.arpa/IN: has no NS records
zone X.X.X.in.addr.arpa/IN: not loaded due to errors.

It wasn't until I had a friend take a closer look that the problem became clear:

I defined the zone as .in.addr.arpa instead of .in-addr.arpa in the named.conf include file which references the zone file.

Some things I have learned are:

  • Check the logs (in my case, on a default Debian/bind9 install this was /var/log/syslog) when things don't work.
  • Always check your config with the BIND DNS tools before reloading.
  • Always check your zone files with the BIND DNS tools before reloading.
  • Keep zone files neat and group similar resource record types together.
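The "check before reloading" lessons above map to two tools shipped with BIND. The zone name and file path below are illustrative, not my actual dev zone:

```shell
# Validate named.conf (and everything it includes) without touching the server.
named-checkconf /etc/bind/named.conf

# Validate a zone file against its origin; this catches missing SOA/NS records,
# though it cannot catch a typo in the zone *name* inside named.conf itself.
named-checkzone 1.168.192.in-addr.arpa /etc/bind/db.192.168.1

# Only reload once both checks pass.
rndc reload
```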

Now that I have the dev domain DNS working, I just need to look at setting up DHCP and testing dynamic DNS.

I also considered moving different resource records for each zone into a separate file, but this is not necessary, due to the (current) size of the network.

Once this is all done, tested and implemented in 'production', I will also consider keeping a similar configuration in dev, either as a slave for all zones from the primary DNS, or just as it is, for testing.

2015-08-26

BIND (named) server remediation

Since I virtualised my old failing physical server into a VM, I have found it less and less easy to administer and maintain (read: configuration files).

So, I am looking at spinning up new Debian servers for more specific tasks: network services, games servers, file services etc.

The first, and most important, thing I need to migrate is DNS. That way I can have it running in parallel with the old one, ready for me to simply stop the old service (after making sure DHCP serves out the new DNS IP address as well, of course!).

Now, here comes the "clever" part or the goals of this approach (or so I thought):

  1. Install named.
  2. Configure it to be a slave for the existing zones.
  3. Re-configure it to be a master (complete with zone files).

Pretty simple, right? Not so much. Well, thanks to the 'Debian' way of doing things, it was very quick and easy to have the zones slaved, but when I went to look at the files I was expecting, they were still empty, since I had created empty zone files to begin with.

Some poking around later, and I discovered that it was transferring the zones fine, but there was an issue with permissions for the zone files, or more specifically, the directory where they lived. A quick chmod -R 0777 /zone/file/directory later and a restart of the service, voila! Except... something was not right...

The zone files seemed to be in a binary format as file would have me believe they were of type: data

I could have converted them back to plain text using the BIND tool named-compilezone(8), but I couldn't commit my time to learning the correct syntax for one small job. Besides, I learned that this crazy default exists in order to get a performance increase, however minuscule that would be given such a small DNS server implementation (for now).
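For the record, the conversion I skipped is a one-liner; the zone origin and file names here are purely illustrative:

```shell
# Convert a raw-format slave zone file back to the standard text representation.
named-compilezone -f raw -F text -o db.example.com.text example.com db.example.com.raw
```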

So, as per the article "Bind 9.9 – Binary DNS Slave file format" (linked above), or more authoritatively, as per the "Chapter 6. BIND 9 Configuration Reference" section of the BIND 9.9 Administrator Reference Manual (ARM), which describes this (questionable) default:

masterfile-format
Specifies the file format of zone files (see the section called “Additional File Formats”). The default value is text, which is the standard textual representation, except for slave zones, in which the default value is raw. Files in other formats than text are typically expected to be generated by the named-compilezone tool, or dumped by named.

So, knowing this I edited /etc/bind/named.conf.options to include the following:

masterfile-format text;

Perfect. (Just like me ;-) I now have a duplicate of the zones served on the master server, which can, and soon will, be decommissioned, not to mention the new server's zones getting a makeover with many, many more zones as well as a dynamic-update zone - more to come on this soon.

2006-11-06

Gentoo business viability

After having a heated discussion (argument) about it on the weekend, my friends don't believe that Gentoo could be used as, or in, a production server environment, sticking to their belief that Debian is a better choice because of its (default) usage of binary packages as opposed to a source-based approach, and their idea that because Gentoo is ultimately more customizable, it is therefore more susceptible to "breaking".

So can any Linux distribution. An unskilled person with little or no knowledge should not maintain/administer a Gentoo system unless they are prepared to continually break their machine due to lack of knowledge of the system.

I can honestly say that my own (personal) Gentoo server has been "borked" before, but that was in my early initiation period of learning what NOT to do in a Gentoo system (installing from the wrong stage3 arch, and accidentally forgetting to remove the ACCEPT_KEYWORDS="~x86" flag when doing a world update). Because of this, my friends have only given examples based on my early mistakes, but haven't provided any real evidence to back up their theory as to why Gentoo should NOT be used (as a production system).

If Sun has certified Gentoo for use on Sun Fire T1000 and T2000 machines, doesn't this mean that Gentoo is worthy of being heralded as viable for production use?

Some of the reasons that I have abandoned Debian in favor of Gentoo are customization, ease of package management using Portage, having complete control of my system, the ability to tune applications and the system itself for specific hardware optimization, AND the availability of excellent howtos and official documentation.

I believe that a Gentoo system utilising the "hardened" profile and administered by a knowledgeable admin can provide high availability/uptime and be just as stable as any other Linux system if not more stable (especially with nice hardware such as that mentioned above).


"Do not judge me, until you have tried my way of life for yourselves". -- Bender (Futurama [episode 3ACV18 - Anthology of interest, Part 2])

2006-08-18

Debian Misconceptions

As a follow-up to my previous post, Is Gentoo becoming more like Debian?, I unfairly treated Debian as an outdated distribution without providing the full facts.

A friend of mine (who knows the OS much better than I do), provided me with a more realistic insight into the misconceptions of Debian being an outdated distribution.

He writes:


"Debian isn't really outdated. This is a really bad misconception.

And the misconception stems from the fact that those who don't know Debian believe Debian is just one GNU/Linux distribution. It's not.

Debian is in fact several different distributions.

The main ones are Debian GNU/Linux Stable, Debian GNU/Linux Testing and Debian GNU/Linux Unstable (for the purposes of this article, I'll from here on call them Stable, Testing and Unstable respectively). Other Debian distributions include GNU/Linux Experimental, GNU/Linux Frozen and even GNU/Hurd, but they are not as widely used by Debian users and are not central to my point.

Which one of the main three you choose depends on what you require from your software distribution.

Unstable is a developer's playground. Unstable is where new packages are introduced into the system by the Debian Developers. It is considered bleeding-edge, as it receives new functionality (new software versions) daily. While the quality of Debian software is generally very good, sometimes Unstable breaks in bad ways (e.g. loss of data, or requiring you to rebuild the machine). If you must have the latest and greatest of every application version on your computer, regardless of the fact that your machine might get hosed every now and again, use Unstable and be prepared to fix your machine if it breaks.


Testing is a good trade-off between the latest applications and better quality than Unstable. Packages are only introduced into Testing after 12 days of no one reporting a bug in the Unstable package. This means that when Testing breaks, it's usually a trivial part of the system rather than something debilitating the whole system. It's not an absolute guarantee, but the Debian Developers and users are usually pretty good about noticing problems in Unstable before they get moved into Testing. If you want a reasonable amount of quality and mostly up-to-date application versions, Testing balances this trade-off quite well.

Finally, let me dispel a final myth about Debian software. Stable is indeed updated frequently, but with a catch: only for bug fixes. Once a new version of Stable is released, the only reason it will receive an update is to correct security flaws that are discovered in its software. And while this means that no new software functionality is added, it also means you get really good quality software that is frequently updated for security problems. If you require really good quality software with as little downtime for breakages as possible (say, on production servers that run 24x7), Stable is what you want.

So yes, while Debian Stable has fewer functionality upgrades than Gentoo, it is actually desirable to be so. Stable means to be (like the name says), stable. If you want more up to date software, you may wish to consider Testing or Unstable depending on your proficiency or willingness to fix breakages.

And now you know that Debian is updated constantly - just with different caveats attached depending on which of the Debian distributions you choose.

P.S. More information on Debian release cycles and Debian distribution goals can be found at Debian’s website (http://www.debian.org/releases/). If you wish to know how Testing becomes Stable, follow the links on that page to the Debian FAQ."


Thanks go out to Spods for providing an insight into this issue.
Till next time.

2006-08-12

Is Gentoo becoming more like Debian?

As it seems that the Gentoo stage3 tarball is fairly outdated, I decided to find out why and/or find an estimated release date for a newer one. I have been poring through (almost) all of the Gentoo documentation to find out what I can, and there seems to be no information about a newer stage3 release (namely 2006.1).

I have found it increasingly difficult to build from an outdated stage3 tarball, because newer profiles are being merged and there is a massive list of updates to get to current from the base 2006.0.

Since 2004 I have seen diminishing releases each year, with 3 releases in 2004, 2.5 (if you count 2005.1-r1) in 2005 and 1 so far in 2006.

What is happening to this brilliant OS?
Is it going the way of Debian (its supposed beginnings) by being constantly delayed and outdated?

Perhaps I should deal with it by being more patient instead of ranting (whining) about it.



NOTE: Although Debian may be outdated, it is by far one of the best Linux distributions around, due to its stable branch being... well, very stable!

 