I guess I shouldn’t have answered. I do have experience with multiple storage classes, but none of the ones you mention (so I don’t really know anything about them). I envisioned you dealing with pod-level storage issues and thought that’d be something most programs would have a lot of difficulty handling, whereas a more service-oriented approach would expect remote failures (hence the recommendation).
None of the things you mentioned seem to have provisioners, so maybe you mean your individual nodes would have these remote filesystems mounted. At that point I don’t think kubelet cares: you just mount them on the machines and tell kubelet about it via a host mount.
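For illustration, a minimal sketch of what I mean (untested; the /mnt/remote-fs path and the names are made up, substitute whatever your nodes actually mount):

```yaml
# Assumes the remote filesystem is already mounted at /mnt/remote-fs
# on every node (hypothetical path); kubelet just bind-mounts it in.
apiVersion: v1
kind: Pod
metadata:
  name: uses-host-mount
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: remote-fs
          mountPath: /data      # where the app sees the files
  volumes:
    - name: remote-fs
      hostPath:
        path: /mnt/remote-fs    # the node-level mount point
        type: Directory
```

The obvious caveat is that you’re then responsible for the mount being present and healthy on every node the pod can schedule onto.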
Oh shit, look, there’s a CSI driver for JuiceFS: https://juicefs.com/docs/csi/introduction/. Though they kind of start out recommending the host mount: https://juicefs.com/docs/cloud/use_juicefs_in_kubernetes/.
We make some use of PVs, but I find people on my team often tend to avoid them.
I probably should have shut my mouth from the start!
For this kind of thing I usually go by popularity (an active, popular repo), mostly so you have the most other people in your boat. It doesn’t always work, but generally if other users have to migrate, at least you can ask them questions.
On the face of it I’d go with the CSI driver version, only because we use alternative CSI drivers ourselves and haven’t seen any issues (ours are pretty vanilla AWS ones, though).
We use storage classes (for our drivers); see the “Dynamic provisioning” section of https://juicefs.com/docs/csi/guide/pv. You’ll need to make one of those, then create a StatefulSet and mount the PV in there, roughly like the sketch below.
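Untested sketch of the shape, going off that dynamic provisioning doc (assumes the JuiceFS CSI driver is installed and a juicefs-secret already exists; all the names here are placeholders, the linked page has the authoritative parameters):

```yaml
# StorageClass backed by the JuiceFS CSI driver; PVCs that reference
# it get volumes provisioned on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
provisioner: csi.juicefs.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
---
# StatefulSet whose volumeClaimTemplates create one PVC per replica
# against that StorageClass.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data   # provisioned JuiceFS volume lands here
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: juicefs-sc
        resources:
          requests:
            storage: 10Gi
```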
I do find StatefulSets to be a somewhat less well supported corner of Kubernetes, but generally they work well enough.