Quote Originally Posted by Circuits
we're setting ourselves up for this

I love my SSDs - wouldn't want to live without them after making the switch, but they're mostly doomed to fail through write-cycle fatigue after five or six years (depending on usage). Always keep multiple backups, and always be looking to upgrade or replace your storage.

On the plus side, storage is cheaper, faster, and better than it has ever been.
Quote Originally Posted by Gman
I have SSDs that are about 5 years old and they have plenty of life left in them. Mechanical drives are cheaper per GB, but SSDs are pretty tough to wear out unless you're doing a ridiculous amount of writes. If you're really concerned about losing data, set up regular backups to an on-prem NAS or to 'the cloud'.
Run SMART monitoring utilities to keep an eye on wear rates - but of course backups are essential.
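For NVMe drives, smartmontools reports a 'Percentage Used' endurance estimate that makes this easy to script. Here's a minimal Python sketch of that kind of check - the device path and alert threshold are assumptions, adjust them for your own boxes:

import re
import subprocess

DEVICE = "/dev/nvme0"   # assumed device path; adjust for your system
ALERT_THRESHOLD = 80    # arbitrary percent-used level at which to warn

def percentage_used(device: str) -> int:
    """Read the NVMe 'Percentage Used' endurance estimate via smartctl.

    Requires smartmontools and sufficient privileges (typically root).
    SATA SSDs report wear through vendor attributes instead (e.g.
    Wear_Leveling_Count or Media_Wearout_Indicator), so parse those there.
    """
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Percentage Used:\s*(\d+)%", out)
    if match is None:
        raise RuntimeError(f"no 'Percentage Used' field found for {device}")
    return int(match.group(1))

if __name__ == "__main__":
    used = percentage_used(DEVICE)
    print(f"{DEVICE}: {used}% of rated endurance consumed")
    if used >= ALERT_THRESHOLD:
        print("WARNING: drive is approaching its rated write endurance")

Drop something like that into a daily cron job and you'll get plenty of warning before a drive wears out.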

We're putting read-intensive (= cheap) enterprise SSDs with 1 DWPD endurance ratings into moderately high transactional server environments and into secondary storage behind traditional backup systems (Commvault, NetBackup, IBM's ProtecTIER [which is EOL], etc.), and we're not seeing wear rates that are cause for concern - projected wear-out falls well outside typical replacement cycles. These aren't used for mission-critical/healthcare/Fortune 100 workloads, of course; those get high-end (IBM) FlashCore Modules anyway (no drive interface to slow things down) in petabyte-scale systems with <100µs latency.
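For anyone wondering why 1 DWPD is plenty here, the endurance math is simple: rated total bytes written = capacity x DWPD x days of warranty. A quick back-of-envelope sketch in Python (the 3.84 TB capacity and 5-year warranty term are illustrative assumptions, not our actual fleet specs):

# Back-of-envelope SSD endurance: rated TBW = capacity * DWPD * warranty days.
CAPACITY_TB = 3.84       # assumed drive capacity in TB
DWPD = 1.0               # drive writes per day (read-intensive class)
WARRANTY_YEARS = 5       # typical enterprise warranty term

rated_tbw = CAPACITY_TB * DWPD * WARRANTY_YEARS * 365
print(f"Rated endurance: {rated_tbw:,.0f} TBW")   # ~7,008 TBW

# Sustained write rate needed to burn through that within the warranty period:
seconds = WARRANTY_YEARS * 365 * 24 * 3600
mb_per_s = rated_tbw * 1e6 / seconds              # TB written -> MB/s
print(f"That takes ~{mb_per_s:.0f} MB/s of writes, 24/7, for {WARRANTY_YEARS} years")

That works out to roughly 44 MB/s of sustained writes around the clock for five years - moderately busy transactional servers don't come close, which matches the wear rates we're seeing.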

That said, nothing is unbreakable. We had a higher-end system go sideways due to crap Samsung RI drive firmware and some half-assed DRAID6 code, and spent 2 weeks making the server environment whole again. DR site to the rescue.