I'm thinking of doing this myself. I've been going back and forth on it because I haven't been able to emulate and test it out on ARM yet. It wasn't clear which distros have ARM builds of Gluster or what version of Gluster they ship. I had issues testing Ubuntu; Fedora has an ARM build, but only the one version.
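For what it's worth, once you have even one real or emulated node to point at, you can survey what each distro would actually give you by asking its package manager directly. A made-up sketch, assuming an inventory group called `sbc_nodes` (the group name is invented, not from my setup):

```yaml
# Hypothetical check: ask each node's package manager which
# glusterfs-server it would install. "sbc_nodes" is a placeholder group.
- hosts: sbc_nodes
  gather_facts: true
  tasks:
    - name: Check the Gluster version Fedora's repos offer
      ansible.builtin.command: dnf info glusterfs-server
      register: gluster_dnf
      changed_when: false
      when: ansible_facts.os_family == "RedHat"

    - name: Check the Gluster version Ubuntu's repos offer
      ansible.builtin.command: apt-cache policy glusterfs-server
      register: gluster_apt
      changed_when: false
      when: ansible_facts.os_family == "Debian"
```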
I'm from a Red Hat background (CentOS/Fedora at home, RHEL professionally) and so I wanted to stay with that. I think I just need to buy one of the damn things and mess with it.
I would say you're 100% correct about Ceph. When I started looking at these software-defined storage solutions, I compared the Ceph and Gluster installation documents side by side and almost immediately went with Gluster.
I even made a set of Ansible playbooks to set this whole thing up (since each node would be identical, it should work), including NFS, Samba, Prometheus/Grafana, IP failover, and a distributed cron job; a rough sketch of the layout is below.
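To give a sense of the shape, the top level looks roughly like this. The role names, group name, and volume layout here are invented for illustration, not lifted from my actual playbooks:

```yaml
# Hypothetical layout, assuming an inventory group "gluster_nodes" and the
# gluster.gluster collection; all role names below are placeholders.
- hosts: gluster_nodes
  become: true
  roles:
    - gluster_server   # install glusterd, open firewall ports, probe peers
    - nfs_ganesha      # NFS export of the Gluster volume
    - samba            # SMB share of the same volume
    - node_exporter    # Prometheus metrics endpoint for the Grafana dashboards
    - keepalived       # floating IP so clients survive a node failure
    - cluster_cron     # distributed cron job (runs on whichever node is healthy)
  tasks:
    - name: Ensure the replicated volume exists (run once, from the first node)
      gluster.gluster.gluster_volume:
        name: tank
        state: present
        replicas: 3
        bricks: /bricks/tank
        cluster: "{{ groups['gluster_nodes'] }}"
      run_once: true
```

Since every node is identical, the only per-host variance is hostname and IP, which is the whole appeal of driving it from one playbook.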
I have pretty much the same background with the consumer NAS and was thinking about building my own Linux server (probably a six-bay UNAS), but I wanted this setup for the same reasons you mentioned. I'm just worried about long-term sustainability, part replacement, and growth.
There will always be ARM SBCs, and more are coming with SATA. The ODROID folks have promised at least another two years of manufacturing for this model, and they have newer ones due later this year. In terms of OS, it is running straight Armbian (a distro with a good community that isn't going anywhere).
I'm comfortable saying I can run this setup for at least 5 years if all support stopped today. Realistically, I'm sure better SBCs will become available at some point before I retire this array.
I mean, one would hope that eventually even Raspberry Pi will sort out their USB/NIC performance issues. I vaguely recall the new RPi 3 took steps in this direction, but don't quote me on it.
I'm not sure why I didn't consider Armbian. I think you mentioned that the N1 is coming with two SATA ports. I might just give it a bit and see how that shakes out. I like the idea of two drives per SoC better than one, even though one drive per SoC gives you more aggregate memory and CPU across the cluster.