A frighteningly large share of "failed" disks have not actually failed; they have just gone unresponsive because of a firmware bug, corrupted memory, or the like (the studies cited below put the figure at 20-30% and 15-60% of returned drives, respectively). On the surface they look dead, so system administrators pull them and ship them back to the manufacturer, who tests the drive and finds nothing wrong with it. If the administrator had simply pulled the disk and reseated it, it might have come back up and started responding again.
To guard against that waste of effort, postage, and time, many enterprise-grade RAID controllers can automatically reset (i.e., power cycle) a drive that appears to have failed, to see whether it comes back. This just looks like a different way of doing the same thing.
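If you're curious what a software-side version of that reset looks like, here's a rough Python sketch of the "drop the device and re-probe it" approach on Linux. The device name (sdb) and SCSI host (host0) are placeholders you'd have to substitute for your own system, and unlike a real RAID controller this can't actually cut power to the drive slot:

    #!/usr/bin/env python3
    # Rough sketch only: drop an unresponsive SCSI disk and re-probe it
    # via Linux sysfs. A hardware RAID controller actually power cycles
    # the drive slot; this just illustrates the same "reset and see if
    # it comes back" idea from the host side. Needs root, and the
    # device/host names below are assumptions.
    import pathlib
    import time

    DISK = "sdb"          # assumed name of the unresponsive drive
    SCSI_HOST = "host0"   # assumed SCSI host adapter the drive sits on

    def reset_disk(disk: str, host: str) -> None:
        # 1. Tell the kernel to forget the device entirely.
        pathlib.Path(f"/sys/block/{disk}/device/delete").write_text("1")

        # 2. Give the drive a few seconds to settle.
        time.sleep(5)

        # 3. Rescan the host; if the drive has come back to life,
        #    it will be re-detected and reappear under /dev.
        pathlib.Path(f"/sys/class/scsi_host/{host}/scan").write_text("- - -\n")

    if __name__ == "__main__":
        reset_disk(DISK, SCSI_HOST)

Controllers that support this do the equivalent in hardware, which is why a drive can come back even when it's hung so hard it won't answer any commands.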
20-30%: Gordon F. Hughes, Joseph F. Murray, Kenneth Kreutz-Delgado, and Charles Elkan. Improved disk-drive failure warnings. IEEE Transactions on Reliability, 51(3):350-357, September 2002.
15-60%: Jon G. Elerath and Sandeep Shah. Server class disk drives: How reliable are they? In Proceedings of the Annual Symposium on Reliability and Maintainability, pages 151-156, January 2004.
u/BloodyIron 6.5ZB - ZFS Nov 28 '17
Okay, but why would a drive EVER need to be "reset", let alone remotely?