HPE SSDs will fail after 32,768 hours of operation

If you have an HP server, you will want to check whether any of its SSDs are affected by this bug, because after 32,768 hours of use (roughly 3 years, 9 months) they turn into a brick.

A similar problem has happened again since posting this article; you can read about it here.

This is not like a normal failure, where one particular drive happens to die; this bug will likely kill all your drives at once. The HPE support document specifically advises: “SSDs which were put into service at the same time will likely fail nearly simultaneously.”

This is the worst possible scenario for your data storage. Heck, if you bought your offsite backup servers at the same time, your backup location could fail at the same time too.

Now that number is a big hint – the failure point is not random. I’m willing to bet some dunce coded some critical counter as a signed 16-bit integer, which can only store numbers up to 32,767; add one more and it wraps around to -32,768. This is integer overflow, stuff we learn about in university – hell, they even cover it in high school classes nowadays. If it helps you to understand, this is similar in spirit to the Y2K bug, and it’s the same class of problem as the Y2038 bug, which you’ll hear about again in another couple of decades.

There is a rumour they messed up the time bomb: it was SUPPOSED to trigger a random amount of time after the three-year warranty expired, but whoever programmed it stuffed it up.

How to mitigate? Turn off your server, or install the updated firmware ASAP. HPE advises updating to SSD firmware version HPD8.

As usual, our customers didn’t have to worry about this, because we have already identified any affected drives and are rolling out the fix for them.
