Hi everyone, I’ve been working on my homelab for a year and a half now, and I’ve tested several approaches to managing NAS storage and self-hosted applications. My current setup is an old desktop computer that boots into Proxmox, which runs two VMs:
- TrueNAS Scale: manages storage, shares and replication.
- Debian 12 w/ Docker: runs all of my self-hosted applications.
The applications connect to the TrueNAS storage via NFS. I have two identical HDDs in a mirror, a third with no redundancy (which is fine, because the data it holds is non-critical), and an external HDD that I want to use for replication, or some other purpose I haven’t decided on yet.
Now, the issue is the following. TrueNAS reports the HDDs as Unhealthy and has been complaining about checksum errors. It also turns out that it can’t run S.M.A.R.T. checks, because instead of using an HBA, I’m passing the entire HDDs by ID directly to the VM. I’ve read recently that passing virtualized disks to TrueNAS is discouraged, as data corruption can occur. Lately I’ve also had trouble with a self-hosted Gitea instance, where data (apparently) got corrupted and Git threw errors on fetch or pull. I don’t know whether this is related or not.
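As a side note, since the physical disks are still visible to the Proxmox host, one way to at least get health data in the meantime is to run smartctl on the host itself rather than inside the TrueNAS VM. A rough sketch (assuming smartmontools is installed on the host; the device paths are placeholders for the actual passed-through drives):

```python
#!/usr/bin/env python3
"""Rough sketch: query SMART health from the Proxmox host, where the
physical disks are visible, since the TrueNAS VM only sees virtual
block devices. Assumes smartmontools is installed; the device paths
are placeholders, not taken from the setup described above."""

import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # placeholder device nodes

for disk in DISKS:
    # `smartctl -H` prints the overall health self-assessment;
    # add `-A` to dump the full attribute table for more detail.
    result = subprocess.run(
        ["smartctl", "-H", disk],
        capture_output=True,
        text=True,
    )
    print(f"--- {disk} ---")
    print(result.stdout.strip())
```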
Now the thing is, I have a very limited budget, so I’m not keen on buying a dedicated HBA on a hunch. Is it really needed?
I mean, I know I could run TrueNAS on bare metal instead of under Proxmox, but I’ve found TrueNAS to be a pretty crappy hypervisor (IMHO) in the past.
My main goal is to be able to manage the data used by my self-hosted applications separately. For example, I want to be able to access Nextcloud’s files even if the Docker instance is broken. But maybe this is just an irrational fear, and I should instead back up the entire Docker instances and hope for the best, or maybe I’m just misunderstanding how this works.
In any case, I have some data that I want to store and reliably archive, and I don’t want the Docker apps to have too much control over it. That’s why I went with the current approach, and it has allowed for very granular control. But it’s also a bit more cumbersome, because every time I want to self-host a new app, I need to configure datasets, permissions and NFS share mounts.
Is there a simpler approach to all this? Or should I just buy an HBA and continue with things as they are? If so, which one should I buy (considering a very limited budget)?
I’m thankful for any advice you can give and for your time. Have a nice day!
Are there any specific limitations/requirements? Any recommended models or things to look out for? I looked on Amazon and they range from around $30 to $200; I have no real criteria beyond wanting to spend as little as possible.
If it’s an LSI card then make sure it’s either been flashed into IT mode, is capable of being flashed into IT mode, or is relatively modern and has that option built in.
What you really want is an HBA, but HBAs can be expensive; a RAID card flashed to act as an HBA is typically much cheaper. A 6 Gbit SAS card will run SATA drives at 3-6 Gbit, and no spinning hard drive will come close to saturating even 3 Gbit. If you want to do SSDs, then find a relatively more modern 12 Gbit SAS card, which will do 6 Gbit SATA.
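To put rough numbers on that, here is a small sketch with assumed figures (roughly 200 MB/s sequential for a typical 7200 RPM HDD, and 8b/10b line coding on the SATA link), not specs from any particular card:

```python
# Back-of-the-envelope check that a spinning disk can't saturate a SATA
# link. The ~200 MB/s figure is an assumed typical sequential rate for a
# 7200 RPM HDD, not a measurement from this thread.

HDD_MB_S = 200  # assumed peak sequential throughput of one HDD

def usable_mb_s(link_gbit: float) -> float:
    # SATA uses 8b/10b line coding, so usable payload is ~80% of the raw
    # link rate: 10 bits on the wire per byte of data.
    return link_gbit * 1000 / 10

for name, gbit in [("SATA II (3 Gbit/s)", 3.0), ("SATA III (6 Gbit/s)", 6.0)]:
    cap = usable_mb_s(gbit)
    print(f"{name}: ~{cap:.0f} MB/s usable, one HDD needs about {HDD_MB_S / cap:.0%} of that")
```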
I guess also look out for the REALLY old ones that won’t handle drives over about 3 TB. But I bought one of those for $20 almost 10 years ago, so that shouldn’t be a concern. Those are probably all in the trash by now.
Thank you very much for the info. In the case of RAID cards that can be flashed, is there something I need to look out for besides the speed? Only HDDs will be used, so speed isn’t a priority.
I don’t want to speak to your specific use case, as it’s outside of my wheelhouse. My main point was that SATA cards are a problem.
As for LSI SAS cards, there are a lot of details that probably don’t (but could) matter to you: PCIe generation, connectors, lanes, etc. There are threads on various homelab forums, TrueNAS, Unraid, etc. Some models (like the 9212-4i4e, meaning 4 internal and 4 external lanes) have native SATA ports, which is convenient, but most will have a SAS connector or two. You’d need a matching (forward) breakout cable to connect to SATA. Note that there are several common connectors, with internal and external versions of each.
You can use the external connectors (e.g. SFF-8088) as long as you have a matching (e.g. SFF-8088 SAS-SATA) breakout cable, and are willing to route the cable accordingly. Internal connectors are simpler, but might be in lower supply.
If you just need a simple controller card to handle a few drives without major speed concerns, and you won’t be booting from it, the things to watch for are the ones already covered: IT-mode (or IT-flashable) firmware, a PCIe slot it fits in, connectors that match your (forward) breakout cables, and support for modern large-capacity drives.
Also, make sure you can point a fan at it. They’re designed for rackmount server chassis, so desktop-style cases don’t usually have the airflow needed.
Thank you very much for the detailed information. I’ll look on eBay again; maybe I can find a good offer that works. I’m unsure how to choose the number of lanes. Does that relate to the number of drives it supports? Also, in terms of cooling, would any PC case fan be enough if strapped onto the HBA?
Kind of. Lanes come in multiples of 4. Let’s say you got a gigantic 8i8e card, unlikely as that is. It would (probably) have 2 internal and 2 external SAS connectors. Standard breakout cables will split each connector into 4 SATA cables (up to 16 SATA ports if you used all 4 SAS connectors with breakout cables), with each drive getting a dedicated lane at full speed.
But what if you were running an enterprise file server with a hundred drives, which is where many of these cards originally came from? You can’t cram dozens of cards into a server; there aren’t enough PCIe slots/lanes. Well, there are SAS expanders, which basically act as splitters. The attached drives share those 4 lanes, potentially creating a bottleneck. But this is where SAS and SATA speeds differ: these are SAS lanes, which are (probably) double what SATA can do. So with expanders, you could attach 8 SATA drives to every 4 SAS lanes and still run at full speed. And if you need capacity more than speed, expanders let you split those 4 lanes across 24 drives. They are typically built into the drive backplane/DAS.
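Putting numbers on that, here is a sketch using assumed link rates of 6 Gbit/s per SAS-2 lane and 3 Gbit/s per SATA II drive; the 8i8e card and the 24-bay expander are just the hypothetical cases from above:

```python
# Worked example of the lane math above. Link rates are assumptions:
# 6 Gbit/s per SAS-2 lane, 3 Gbit/s per SATA II drive.

SAS_LANE_GBIT = 6.0
SATA_DRIVE_GBIT = 3.0

# Direct attach: each 4-lane SAS connector breaks out into 4 SATA ports.
connectors = 4            # 2 internal + 2 external on the imagined 8i8e card
drives_direct = connectors * 4
print(f"Direct breakout: {drives_direct} drives, each on its own dedicated lane")

# Through an expander: many drives share one 4-lane (wide) SAS port.
lanes = 4
uplink_gbit = lanes * SAS_LANE_GBIT           # 24 Gbit/s of shared bandwidth
full_speed = uplink_gbit / SATA_DRIVE_GBIT    # drives that still get full SATA II speed
print(f"Expander on {lanes} SAS lanes: {full_speed:.0f} SATA II drives at full speed")

# Oversubscribing for capacity: 24 drives behind the same 4 lanes.
drives = 24
print(f"{drives} drives share {uplink_gbit:.0f} Gbit/s -> "
      f"~{uplink_gbit / drives:.1f} Gbit/s each if all are busy at once")
```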
As for the fan, just about anything will do. The chip/heatsink gets hot, but is limited to the ~75 watts provided by the PCIe bus. I just have an old 80 or 90mm fan pointing at it.