eighnuss wrote on Aug 25th, 2016 at 8:40am:
i think the best part is that instead of automating it, they do it when the office opens
i picture like a bunch of idiots in scarves gathering every wednesday with their coffees and standing around a monitor while one guy logs into each vm shutting everything down manually. literally like 6 people getting paid 15 an hour and a free bagel to spend 3 hours in front of a monitor while the tech guy presses restart over and over every wednesday, which will end up costing them tens of thousands in man hours very fast, as opposed to just having it all scheduled, scripted, and automatic during the lowest server population time.
Or doing it like a professional company does: schedule it for the time frame when user volume is low, and have someone come in during a maintenance window to carry out the work. Or even just to be there to clean up if any automation fouls up. I've done that for telecoms where the number of 'servers' involved was in the thousands. I'd write the maintenance plan during the day and have it peer reviewed. Then I'd come in during the maint window, fire off the automation, and do cleanup on the inevitable handful of machines that didn't take kindly to the process.
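The "automation" part really is about this simple. Here's a rough Python sketch of the kind of thing I mean; the host names, service name, and SSH setup are all made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch of a scripted maintenance run: restart a service across a fleet
and report the stragglers for manual cleanup. Hostnames, the service name,
and the SSH setup are hypothetical."""

import subprocess

HOSTS = [f"vm{n:03d}.example.internal" for n in range(1, 21)]  # hypothetical fleet
RESTART_CMD = "sudo systemctl restart gameserver"              # hypothetical service
TIMEOUT_S = 120

def restart(host: str) -> bool:
    """Fire the restart over SSH; treat non-zero exit or a timeout as failure."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, RESTART_CMD],
            capture_output=True, timeout=TIMEOUT_S,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def main() -> None:
    failed = [h for h in HOSTS if not restart(h)]
    # The script handles the bulk; the on-call human cleans up the rest.
    print(f"{len(HOSTS) - len(failed)}/{len(HOSTS)} hosts restarted cleanly")
    for host in failed:
        print(f"NEEDS MANUAL CLEANUP: {host}")

if __name__ == "__main__":
    main()
```

Kick that off from cron (or whatever scheduler you like) at the start of the window, and the only human work left is the handful of machines that didn't come back.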
It's how any professional company would handle things. But I guess their salaried people are all special snowflakes and can't be bothered to come in for a maint window once per update.
I suppose when you've thrown your hands up at the possibility of actually solving lag via code changes and have decided to just reboot everything once a week, this would be more of a chore for the technical staff. But then, if they could actually fix the problem instead of band-aiding it once a week, they'd be able to keep a more regular sleep cycle.
Or they could, you know, just do shit during the day when the customers are most impacted. Because fuck the customers, what the hell do they do for our business anyway?