I recently made a major upgrade to my Linux desktop. It's one of several computers that I use on a regular basis; I've had it almost 10 years and it's been a solid workhorse for various tasks. As most people don't upgrade computers very often I thought I would share some notes about what it's like in 2024.

A bit of background - my preferred flavour of Linux is Debian Buster with KDE. I had started playing with Debian Bookworm but had not yet fully moved over. The reason for this upgrade came about because I was having difficulty setting up the CUDA SDK with a legacy NVidia driver. It seemed like poor value to spend my time getting it running on old hardware when it would be using an older version of CUDA. The decision to upgrade the graphics card turned into "well, I may as well get a new CPU, motherboard and more memory".

Hardware

I call this a major upgrade because changing CPU and motherboard means this is effectively a new computer, just using some old components like drives, case and power supply. Before we get into the software side, let's discuss hardware selection. Final specs are unimportant but in summary this machine went from an OK spec for 2015 to a quite good spec for 2024. The performance jump is huge but this is not a workstation or gaming machine. I'm going to be using it sometimes as a server and sometimes as a seat to do work on; therefore power usage and efficiency are important.

Graphics card choice was the easiest. When looking at my budget and the various choices, I prioritised more VRAM because I want to use this computer for Machine Learning.

CPU choice was less easy. Budget largely dictated what was available to me. I had to decide whether integrated graphics was important - in the end I chose yes, because I might need a fallback display while doing several rounds of settings tweaks to get the discrete card working with the motherboard. Usable core count was very important, but there are still a lot of single-threaded applications, so single-core speed mattered too. I could have got a slightly older generation CPU with more cores but ultimately chose a newer CPU with good power efficiency. Electricity prices are going up and up and I'm going to be doing GPGPU anyway, so no worries here.

For RAM I went for DDR5 and the fastest transfer speed that the CPU supports. The size came down to the budget and I can add more sticks later. There was still a chance that the motherboard would reject whatever I chose.

Motherboard choice was probably the most challenging. I wanted enough SATA connectors for my drives, fast USB, a 2.5Gb network adapter and support for the memory. Almost all the reviews are unclear e.g., "the BIOS sucks" (I don't care too much about overclocking and I'm not expecting to spend much time in BIOS settings after setting it up), "really strong MB" (why?) or "Really good" (again, why?). I ended up going with the same brand as the previous system because at least I have used a similar BIOS. I also wanted ATX because it's got more slots and more support for ancillaries than the smaller boards.

CPU

More Cores
Yes, it's a Ryzen 5.

Pre-Build

I hoped that having changed the core hardware I could still boot into my system and then change drivers. In advance of this I backed up almost everything (as usual) and made sure everything was as up to date as possible.

Something I anticipated was that I might go several weeks before having a working, reliable machine, so any work I might need access to had to be copied to another location for the duration.

The Build

Putting everything together was mostly straightforward. I tried to keep the same order when re-attaching the SATA connectors, but this turned out to be impossible because the new board had fewer connectors and, on the old board, I had used some higher-numbered ones for convenience.

I bought a Noctua CPU cooler for those processing jobs that make things run hotter. This isn't a gaming machine so I didn't bother with a larger cooler. Plugging in the case fans required some rearranging but was not too difficult.

Due to the different ports on the new graphics card I had to work out which ports I could use with which inputs on my monitors as I have a multi computer, multi monitor setup. This was not something I thought about in advance, but it turned out I had all the cables I needed for the new system.

First Boot

With much trepidation I went to power up for the first time: black screen. I found that the flashing red light on the motherboard was a status light and further checking revealed that I had plugged the memory into the wrong slots - of course you use the second slots in each channel if you only have 2 sticks...

Once I finally got into the BIOS, the CPU was recognised, so a firmware update was unnecessary. However, none of my drives were recognised as bootable. I happened to have the Debian net installer on a USB thumb drive, so I booted that, which showed that there was nothing fundamentally wrong with the hardware.
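
Having an installer on a USB stick is a handy diagnostic to keep around. A minimal sketch of preparing one (the ISO filename and `/dev/sdX` are placeholders; check the real device name with `lsblk` first, since dd overwrites the target wholesale):

```shell
# Identify the USB stick before writing anything - dd will destroy
# whatever device you point it at
lsblk

# Write the Debian netinst image to the stick (replace /dev/sdX with
# the actual device, e.g. /dev/sdc, NOT a partition like /dev/sdc1)
sudo dd if=debian-netinst-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```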

After a lot of searching and experimentation I came to the conclusion that the previous boot setup was legacy (BIOS) and not UEFI (I'm not very knowledgeable about these things). This was the cause of most of my troubles: I did not have an EFI System Partition (ESP) on any of my drives. I decided the best thing to do was to reinstall Bookworm and repartition the drive it was on so I could add an ESP. This was less than ideal, but it got me in, and I would get to move to a newer Debian.
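
For anyone in the same position, you can check both things from a running system before an upgrade like this:

```shell
# If this directory exists, the running system booted via UEFI;
# if it is absent, you booted in legacy/CSM mode
if [ -d /sys/firmware/efi ]; then echo "UEFI boot"; else echo "legacy boot"; fi

# List partitions with their types; an ESP shows up as a small
# FAT partition whose partition type is "EFI System"
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME
```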

I further noticed that the previous install included a network interface driver for the old motherboard that would not have worked with the network adapter on my new motherboard. I would have needed to install a suitable network driver from local storage before updating from an online software repository.
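
The offline-install dance would look roughly like this. This is a sketch with assumptions: many 2.5Gb adapters are Realtek parts whose firmware lives in Debian's real firmware-realtek package, but your adapter, driver module and package names may differ.

```shell
# On a machine that does have network access, download the package
# without installing it (firmware-realtek is an example; substitute
# whatever your adapter actually needs)
apt-get download firmware-realtek

# Carry the .deb over on a USB stick, then install from local storage
sudo dpkg -i firmware-realtek_*.deb

# Reload the driver module so it picks up the new firmware
# (r8169 is the common in-kernel Realtek driver; yours may differ)
sudo modprobe -r r8169 && sudo modprobe r8169
```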

Configuration and Running the System

Finally booting into Debian and able to set up the system, I found that boot sometimes failed. Bookworm has moved over to using journald, so system logs had moved there. It seems that, probably due to the motherboard, drives were not enumerated in a consistent order; for example, /dev/sda3 would be the third partition on one drive on one boot and on a different drive on another. Whenever mounting a disk failed, it halted booting. I edited fstab to use UUIDs for each filesystem and added the nofail option. Booting has since been reliable.
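
In concrete terms, the fix looks something like this (the UUID and mount point below are made-up examples; blkid shows the real values):

```shell
# Find the stable filesystem UUID for each partition
sudo blkid /dev/sda3
# e.g. /dev/sda3: UUID="1b2c3d4e-aaaa-bbbb-cccc-000000000000" TYPE="ext4"

# Then in /etc/fstab, reference the UUID instead of the device name,
# and add nofail so a missing drive no longer halts the boot:
#
#   UUID=1b2c3d4e-aaaa-bbbb-cccc-000000000000  /mnt/data  ext4  defaults,nofail  0  2
```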

At this stage I could access /etc and all the configuration files from my home directory from the old Buster install.

I wanted to set up as much as possible before installing the external GPU. Some of the AMD firmware for the integrated graphics seemed to be missing, so I tried to install the missing files. However, that only seemed to make the machine freeze, so I gave up: the integrated graphics worked well enough to start a text console, and I was planning to run an external GPU anyway. I may return to this later, for completeness.

The external graphics card went in easily and nouveau worked straightaway. After installing the NVidia driver it seemed that only one monitor was detected. I needed to set a kernel parameter in GRUB to fix it.
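
I won't claim this is the exact parameter everyone will need, but a commonly required one for NVidia multi-monitor problems is nvidia-drm.modeset=1, which enables kernel mode setting for the proprietary driver. The change goes in /etc/default/grub:

```shell
# In /etc/default/grub, append the parameter to the existing line, e.g.:
#
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia-drm.modeset=1"
#
# then regenerate the GRUB config and reboot
sudo update-grub
```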

I run Bugzilla on this machine; luckily I had a backup of my databases otherwise things could have got awkward. What I didn't realise is that there is Bugzilla documentation on moving between machines. At least I know this now and will use it when moving to the cluster. I didn't look at it closely but I guess it switches Bugzilla into a read only state to stop other users editing while you move to the new system.
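
For the database side, Bugzilla typically sits on MySQL or MariaDB with a database named "bugs" by default; a backup/restore sketch under those assumptions (adjust the user and database name to your installation):

```shell
# On the old machine: dump the Bugzilla database to a single SQL file
mysqldump -u root -p --databases bugs > bugzilla-backup.sql

# On the new machine: recreate it from the dump
mysql -u root -p < bugzilla-backup.sql
```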

I could simply have copied all my dot/rc files over to the new system, but I was worried that they might not be compatible with newer software versions, so I had to be careful to test things before transferring config; also, config that contains local paths could become invalid.
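
A cautious way to do this is one file at a time with a dry run first (paths here are examples; /mnt/olddisk stands in for wherever the old home directory is mounted):

```shell
# Preview what would be copied before actually doing it
rsync -av --dry-run /mnt/olddisk/home/user/.bashrc ~/
rsync -av /mnt/olddisk/home/user/.bashrc ~/

# Hunt for absolute paths that may no longer exist on the new install
grep -n "/home/user\|/mnt/" ~/.bashrc
```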

Copying the configuration for Firefox and Thunderbird was very straightforward, and a lot of things like history moved over too, which was nice.

Once I had the configuration closer to my preferred settings, one of the first things I did was build FFmpeg with hardware acceleration, which seems to be working nicely.
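
For NVidia hardware the rough shape of that build is below. This is a sketch, not a recipe: exact configure flags depend on your driver and CUDA versions, and the nonfree flag is required by the CUDA-based options.

```shell
# Install NVidia's codec headers, which FFmpeg's NVENC/NVDEC support needs
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
make -C nv-codec-headers && sudo make -C nv-codec-headers install

# From the FFmpeg source tree: enable the hardware encode/decode paths
./configure --enable-nonfree --enable-cuda-nvcc --enable-nvenc --enable-cuvid
make -j"$(nproc)"

# Quick check that the hardware encoders were actually built in
./ffmpeg -encoders 2>/dev/null | grep nvenc
```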

Final Thoughts

You really do need to do your homework on component selection and also software transfer. The more services you can move to Docker or the cluster in advance, the easier your upgrade will be.

If you want to run your upgraded system off your existing software then you probably need a better knowledge of UEFI than I have. In the end I did have to do a new install in order to get things running and you should be prepared for that possibility too.

Make sure you have backed up all your databases.