Something similar happened here. Our Tech Services/Infrastructure/Ops manager let his team put all three of the company's Domain Controllers on only one of our twin EMC VNX clusters, and only one of the twin EMC SANs that served them.
This was despite some of his team warning him it wasn't a good idea and that they should be spread out.
Apparently a particular run of HGST enterprise-class drives had a problem where gas would "stick" to the platters and screw up the heads' ability to read and write. A few drives went down and spares came online, except drives that had been sitting idle had the problem even worse.
And the HGST-supplied firmware "fix" to clean off the drives just wrecked them all en masse. Oops.
Prod users didn't notice much: a few things ran slow as everything flipped over to the mirrored disaster recovery datacenter, and along with the DCs, DNS got hosed for a bit... but IT sure knew. We lost drives well past what the RAID could rebuild from, and with the virtual DCs damaged or missing, we lost the backup index too. So everything was backed up ten ways to Sunday, but it was unsearchable.
Fortunately our team here is top-notch despite oversights like that, and within 24 hours 90% of the environment was restored. Within 72 hours, 100% was. That included some 250 virtual servers for Stage, Test, and Dev, all rebuilt manually and fed their data from backups.
EMC and HGST were onsite by the next day working with us, and while I'm not privy to the details, we got some even newer hardware out of whatever happened, and I don't think we paid, or paid much, for it.
That guy was "soft fired", given 3 months to find something else and move on.
The team member who was most vocal about spreading the DCs and other critical infrastructure across both VNXs and both SANs has his job now.