Is Reimaging the New Rebooting?


Photo by rpongsaj

I was recently reading through the comments on a Slashdot.org story and ran into something interesting. I forget what the original article was about, but the discussion had turned to troubleshooting computers. One of the commenters mentioned that his company doesn’t waste time troubleshooting software problems: if a computer acts up, they just reimage it and get the user going again. A number of others jumped in to tell him what a bad policy this is, that it was a sign of laziness or stupidity. Of course you want to find the problem and fix it; what self-respecting computer technician wouldn’t?

This got me thinking about the issue, and it occurred to me that there are some very compelling reasons to follow the “just reimage it” policy. As an amateur student (is there such a thing?) of economics, I understand that we humans are constantly making tradeoffs based on various cost/benefit analyses. At some point the cost of troubleshooting a software problem just can’t be justified by the benefits.
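To make that tradeoff concrete, here’s a back-of-the-envelope sketch of the comparison. Every number in it is a hypothetical placeholder, not data from any real shop; the point is just that once you price out technician time and user downtime, reimaging can win by a wide margin.

```python
# Back-of-the-envelope comparison: troubleshoot vs. reimage.
# All figures below are hypothetical placeholders -- plug in your own.

TECH_RATE = 40.0  # technician cost per hour ($), assumed
USER_RATE = 25.0  # user's lost productivity per hour ($), assumed

def incident_cost(tech_hours: float, user_downtime_hours: float) -> float:
    """Total cost of one incident: technician time plus user downtime."""
    return tech_hours * TECH_RATE + user_downtime_hours * USER_RATE

# Troubleshooting: open-ended diagnosis, user is down the whole time.
troubleshoot = incident_cost(tech_hours=3.0, user_downtime_hours=3.0)

# Reimaging: mostly unattended deploy, brief tech involvement.
reimage = incident_cost(tech_hours=0.5, user_downtime_hours=1.5)

print(f"troubleshoot: ${troubleshoot:.2f}")  # $195.00
print(f"reimage:      ${reimage:.2f}")       # $57.50
```

With those made-up numbers the reimage costs roughly a third as much, and the gap only widens as diagnosis drags on.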

It struck me as funny that the commenters who jumped on this individual for being lazy and/or stupid also draw a line where it’s better to replace than repair; they just draw it in a different place. I doubt any of them would repair one bad platter in a hard disk rather than throwing out the entire drive. There is some level of abstraction below which we treat everything as a black box that either works or fails as a single unit.

In a world where data and configuration are stored centrally and workstations act less like independent computers and more like really powerful terminals, it can make sense to just reimage when there’s a problem. That isn’t always the case, of course, but when it is, a reimage can be thought of as a more vigorous reboot. A three-finger salute on steroids.

What’s your opinion? Are you in a situation where workstations are just interchangeable widgets?