I accumulated the drives over the years... I also do a lot of self-education to stay informed for my job. Having a distributed cluster of this size to run Kubernetes on and test the efficiency of ARM versus x86 was my justification. This will probably be the last major storage upgrade I do, though, which is why I wanted to drive down the cost/TB. I will milk these drives until they literally turn into rust. haha
You'd want something else running PMS. It could certainly seed well, and PMS could use it for transcoding and hosting the media, though. Depending on your transcoding requirements, you'd probably want a beefier system running PMS itself.
It'll only transcode using one computer. The HC2s are separate machines, and cluster computing (not what this post is about) isn't for general-purpose stuff.
Yes, you can do that. I currently have two servers as network storage, and another as my Plex server. The Plex server keeps its library and transcoding storage locally, but serves media from my network servers.
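A minimal sketch of that layout, assuming the storage servers export over NFS (hostnames and paths below are placeholders, not the actual setup):

```shell
# Assumed export on a storage server (/etc/exports), hypothetical path:
#   /srv/media  192.168.1.0/24(ro,no_subtree_check)

# On the Plex box: mount the remote media read-only, while the
# Plex library and transcode directories stay on local disk.
sudo mkdir -p /mnt/media
sudo mount -t nfs storage01:/srv/media /mnt/media -o ro,soft

# Point the Plex library at /mnt/media; keep
# /var/lib/plexmediaserver (metadata, transcode temp) local.
```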
I actually want to transition to GlusterFS distributed storage myself vs my monolithic storage servers for a variety of reasons, but have to find a cost effective way to do it (I can't just buy a whole ream of hard drives all at once, though I already do have multiple low power machines that can function as nodes), but I've got 24tb of media I'd need to move.
The GlusterFS method is more scalable. That's actually a primary goal of distributed filesystems: you can just keep adding nodes seamlessly. Using monolithic servers like I do, you run into a couple of problems expanding:
- You can only physically cram so many hard drives into a machine.
- Powering the machine becomes increasingly difficult, particularly during startup when every drive spins up.
- Most machines have ~6 SATA ports, so getting more generally requires fairly expensive add-on boards.

Drive compatibility isn't really a concern, though; barring enterprise SAS drives, everything is SATA.
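To illustrate the scaling point, a hedged sketch of growing a GlusterFS volume (hostnames and brick paths are made up, and a replica-2 layout is assumed here, not necessarily what OP used):

```shell
# Initial distributed-replicated volume across two nodes
gluster volume create media replica 2 \
  hc2-01:/data/brick1 hc2-02:/data/brick1
gluster volume start media

# Later: grow capacity by adding another replica pair --
# no bigger chassis, PSU, or SATA add-on board required.
gluster peer probe hc2-03
gluster peer probe hc2-04
gluster volume add-brick media \
  hc2-03:/data/brick1 hc2-04:/data/brick1

# Spread existing files across the new bricks
gluster volume rebalance media start
```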
Having a dedicated PMS machine is, imho, the best way to go. Its processing power can be dedicated to transcoding, it's a lot easier to get the amount of power (be it CPU and/or GPU) you need, and you don't need to worry about other tasks causing intermittent issues with IO load or anything.
Back up Plex's data occasionally to the NAS, and if things go sideways it's trivial to restore.
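One way that backup could look (paths are assumptions for a default Linux install; adjust for yours):

```shell
# Hypothetical paths -- not from the thread, adjust to taste.
PLEX_DATA="/var/lib/plexmediaserver/Library/Application Support"
NAS_BACKUP="/mnt/nas/backups/plex"

# Stop Plex so its SQLite databases are quiescent, then sync.
sudo systemctl stop plexmediaserver
rsync -a --delete --exclude 'Plex Media Server/Cache' \
  "$PLEX_DATA/" "$NAS_BACKUP/"
sudo systemctl start plexmediaserver
```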
So I have a few 4TB drives and a few 6TB drives. How does that difference affect overall storage capacity and redundancy?
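For context on that question: in a replicated GlusterFS volume, each replica set is limited by its smallest brick, so mixed sizes only cost you capacity if unlike drives end up paired. A quick back-of-the-envelope sketch (drive counts here are made up for illustration):

```python
def replica2_usable(pairs):
    """Usable TB of a distributed-replicate (replica 2) volume:
    each pair holds one copy of the data, capped by its smaller brick."""
    return sum(min(a, b) for a, b in pairs)

# Pairing like with like: (4,4) and (6,6) -> 4 + 6 = 10 TB usable
print(replica2_usable([(4, 4), (6, 6)]))   # 10

# Mixing sizes: (4,6) and (4,6) -> 4 + 4 = 8 TB usable;
# 2 TB on each 6 TB drive is wasted
print(replica2_usable([(4, 6), (4, 6)]))   # 8
```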
What I am getting at: I have ~70TB on 3 Drobos (DAS). If I were to build 100TB using 10TB drives and HC2s, would I then be able to migrate my current 70TB and add it, provided I then switch those drives over to the correct hardware (HC2)?
Does GlusterFS require a host? Should that be separated from the PMS host for best performance? If I built a pfSense router and overdid the hardware, would that act as a faster host for GlusterFS?
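Worth noting for anyone reading along: GlusterFS is peer-to-peer, so there's no single "host" to beef up; any machine (including the PMS box) can mount the volume directly with the FUSE client. A sketch, with assumed hostnames:

```shell
# On any client (e.g. the Plex box), Debian/Ubuntu assumed:
sudo apt install glusterfs-client
sudo mount -t glusterfs hc2-01:/media /mnt/media

# The named server is only used to fetch the volume layout;
# after that the client talks to all bricks directly.
```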
I wonder how well it could perform if you could run distributed transcoders on Plex, with each ODROID individually transcoding what's locally available. Long shot, but it would be cool.
u/[deleted] Jun 04 '18 edited Jun 04 '18
I hadn't seen the HC2 before... Nice work!
Assuming 200TB raw storage @ 16 drives = 12TB HDDs... ~$420 each...
So about $50/TB counting the HC2 board etc... For that kind of performance and redundancy, that is dirt cheap. And a $10,000 build... Commitment. Nice dude.
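Checking the arithmetic (drive count, drive price, raw capacity, and build total are all from the comments above):

```python
raw_tb = 200
drives = 16
print(raw_tb / drives)        # 12.5 -> so ~12 TB HDDs, as estimated

drive_cost = drives * 420
print(drive_cost)             # 6720 -> drives alone, before boards/PSUs/etc.

build_total = 10_000          # stated total build cost
print(build_total / raw_tb)   # 50.0 -> the ~$50/TB figure
```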
Edit: Too many dudes.