July 2004  

TCO Should Include Value as Well as Cost

By Stephen Lawton

Conventional wisdom treats total cost of ownership (TCO) analysis as determining the direct and indirect costs of purchasing a specific IT component. Direct costs include hardware and software acquisition, power consumption, maintenance, and floor space. Indirect costs comprise expenses such as staffing, training, and a variety of items that might not be immediately associated with the IT product being priced.

According to Forrester Research, indirect costs can run as much as seven times the direct costs themselves. Being able to determine both is what separates value from technology and a wise investment from a crapshoot.

TCO of an individual component or component family, such as storage, has to be viewed in a broader perspective. There is little value in storage without data, and having stored data without a working computing environment to access it can be downright infuriating, not to mention cost-ineffective.

Despite the sharply lower price of storage on a cost-per-gigabyte basis, storage devices and networks remain among the most significant expenses in an enterprise. Reducing the TCO of your storage investment will show real and immediate results on the bottom line.

A well-organized IT department will address its storage requirements by having the right hardware and software in place. Depending on your enterprise's requirements, this could include investing in uninterruptible and redundant power supplies, external power generators, redundant storage subsystems, and the appropriate software to allow you to recover from a disaster. These are all capital expenses that can be cost-justified using traditional TCO and return on investment (ROI) metrics. But what about the non-capital, indirect expenses? What can you do to reduce those costs?

Disk Management
Disk management is a broad-based term that encompasses quite a few applications and disciplines. Some of the key components of disk management are disaster recovery, data backup and disk organization.

One way to significantly reduce the TCO of your storage investment is to optimize your applications so that one action can accomplish multiple tasks. Such is the case with disaster recovery and data backup. A well-devised disaster recovery plan will not only secure your system, but also protect and back up your data. There are two popular backup strategies, disk imaging and file-based backups, that take very different approaches to disaster recovery. We'll look at these issues later. For now, however, let's look at a more global issue: how a company implements its disaster recovery plan.

Plans will vary based on the size of a company. The disaster recovery plan of a home business will differ significantly from that of a 50-person architectural firm or a four-branch credit union with 100 employees. Likewise, the credit union's plan will differ greatly from the disaster recovery plan of a Fortune 500 company. For the small to mid-size business, real money can be saved by developing and implementing a plan that takes advantage of the company's size and flexibility.

It's not enough to have a plan written down in a binder no one reads; you have to practice the plan, making sure that everyone knows what to do and how to do it. Potentially one of the most significant losses an enterprise faces is downtime. If your computing environment fails, chances are you aren't making any money. Here's how one company made sure it wasn't caught off guard.

Hudson Valley Federal Credit Union in Poughkeepsie, NY, employs a program that includes a variety of components, ranging from working with a business continuity partner (IBM) to offsite storage of backup tape with Iron Mountain to internal disaster recovery and disk imaging software from Acronis Inc.

Every year, the credit union's IT staff conducts a full-scale, off-site, disaster test that entails restoring systems that are identified as critical to the institution's operations. These systems include Internet banking services, key file and print servers, Windows domain controller functionality, and, of course, the core processing system. Remote credit union offices connect to the recovery site and then test their ability to perform "normal" transactions.

Testing a disaster recovery plan plays directly into TCO. If a plan fails because equipment was not tested properly, people did not know what to do, or expectations for systems recovery could not be met, the cost of storage ownership goes up.

For example, let's assume that a company had to do a bare-metal restoration of a critical system that was damaged in a flood or fire. Assuming that all of the requisite software to restore the system was kept off site and not damaged in the disaster, what would it take to get the system back up and running? Just finding all of the applications, including all the patches and upgrades, could literally take hours, if not days. (Do you know where all of your installation disks, activation codes and registration numbers are?) Reinstalling all of the software and reconfiguring all of the system and application files is also time-consuming. In such a case, the quickest part of the recovery would be restoring individual data files that had been backed up; restoring the system itself would be a nightmare.

Now let's assume that the IT department made an exact image of the hard disk while the system was working properly, along with nightly incremental images. Restoring the damaged system could now take a matter of minutes. This restoration would include all system and configuration files, application files, data, and anything else that was on the disk.

The time savings alone for restoring an image versus doing a bare-metal installation can be counted in hours if not days. From a financial standpoint, that means that not only is the computer more productive, but so is the IT staff that administers the system and all the employees who use the system.
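The financial argument above can be made concrete with a little arithmetic. The sketch below is purely illustrative: the head counts, loaded hourly rates, and recovery times are hypothetical assumptions, not figures from any real company, but they show how quickly idle-employee hours dominate the comparison between a bare-metal rebuild and an image restore.

```python
def downtime_cost(employees, loaded_hourly_rate, hours_down, it_hours, it_rate):
    """Rough downtime cost: idle-employee time plus IT labor. Illustrative only."""
    return employees * loaded_hourly_rate * hours_down + it_hours * it_rate

# Hypothetical 50-person firm: a 16-hour bare-metal rebuild vs. a 30-minute
# image restore, with the IT technician billed at a separate rate.
bare_metal = downtime_cost(50, 40.0, 16.0, it_hours=16.0, it_rate=75.0)
image = downtime_cost(50, 40.0, 0.5, it_hours=0.5, it_rate=75.0)

print(f"Bare-metal rebuild: ${bare_metal:,.2f}")
print(f"Image restore:      ${image:,.2f}")
```

Even with these made-up numbers, the rebuild costs roughly thirty times as much as the restore, and nearly all of that gap is lost employee productivity rather than IT labor.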

File-Based Backups vs. Images
Now let's consider another common scenario: a file-based backup. File-based backups are probably the most common type today, although they are far from adequate for most applications. For desktop users, it means simply copying the My Documents folder from their computer to a remote or removable drive. That's easy enough, assuming you don't have data stored elsewhere on the system. For example, Microsoft Outlook by default stores e-mail in the C:\Documents and Settings\<User Name>\Local Settings\Application Data\Microsoft\Outlook\ folder. Eudora by default saves mail in the C:\Program Files\Qualcomm\Eudora folder. Either way, if you use a simple file backup strategy of saving only your My Documents folder, you won't be saving your mail.
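The pitfall above, data living outside the folders you thought to copy, is easy to demonstrate. Here is a minimal, hypothetical sketch of a naive file-based backup: the folder names stand in for My Documents and an Outlook mail store, and the script reports anything on its list that it never found, which is exactly how mail quietly goes unbacked-up.

```python
import os
import shutil
import tempfile

def backup_folders(folders, dest):
    """Copy each existing source folder into dest, preserving its name.

    Folders that are missing (or simply not on the list) are reported back,
    mirroring how a naive file-based backup silently skips data.
    """
    missed = []
    for src in folders:
        if os.path.isdir(src):
            name = os.path.basename(src.rstrip("\\/"))
            shutil.copytree(src, os.path.join(dest, name))
        else:
            missed.append(src)
    return missed

# Hypothetical user profile: documents exist, but the mail store path on the
# backup list was guessed wrong (or the mail lives somewhere else entirely).
home = tempfile.mkdtemp()
docs = os.path.join(home, "My Documents")
os.makedirs(docs)
open(os.path.join(docs, "report.doc"), "w").close()

mail = os.path.join(home, "Application Data", "Microsoft", "Outlook")
# (mail folder intentionally never created -- it is outside My Documents)

dest = tempfile.mkdtemp()
missed = backup_folders([docs, mail], dest)
print("Backed up to:", dest)
print("Missed:", missed)  # the mail store never made it into the backup
```

A disk image sidesteps the whole problem because it captures the volume as a whole rather than a hand-maintained list of folders.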

Much of the popular backup software today is simply file backup software. Although it is possible to attempt a complete file-based backup (that is, copying every file on your hard disk to a backup device), the strategy is flawed because Windows cannot copy files that are currently in use by Windows itself or by another application. That means a file backup does not save hidden files or some system and configuration files. In the end, the resulting backup is fatally flawed and cannot be used to restore a disk to a usable state. From a materials and time standpoint, this is an expensive and inadequate solution.

What? No Backup Hardware?
A third scenario to consider is backing up a computer that doesn't have built-in backup capabilities. Such a system might include an older laptop or a standalone, unattended system that is not connected to a network. How does one economically back up such a system?

The most cost-efficient way to back up a standalone, unattended system with only one hard disk (such as an embedded system running Windows) is to create an image on the same disk drive. You can create a hidden partition and then schedule a complete image, followed by incremental images, to be written to that hidden partition.

Should the system's software fail for any reason and the system suffer a logical crash, it is possible to restore the latest backup from the hidden partition. Since the system is not connected to a network, this requires hands-on intervention by a technician, but restoring from the hidden partition takes a matter of minutes; there is no need to rebuild the hard disk from scratch.
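The full-image-plus-incrementals scheme described above rests on a simple idea: after the initial image, each incremental records only the blocks that changed. The sketch below models that idea in miniature, a toy "disk" is just a byte string, and the block size and data structures are invented for illustration; a real imaging product works at the device level and far more efficiently.

```python
BLOCK = 4096  # hypothetical block size for the toy disk

def blocks(data):
    """Split the disk contents into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def full_image(disk):
    """The initial complete image: the whole disk, with no deltas yet."""
    return {"base": disk, "deltas": []}

def incremental(image, disk):
    """Append a delta holding only the blocks changed since the last state."""
    last = restore(image)
    delta = {i: new for i, (old, new) in enumerate(zip(blocks(last), blocks(disk)))
             if old != new}
    image["deltas"].append(delta)

def restore(image):
    """Rebuild the latest disk state: base image plus every delta in order."""
    bs = blocks(image["base"])
    for delta in image["deltas"]:
        for i, block in delta.items():
            bs[i] = block
    return b"".join(bs)

disk = bytes(BLOCK * 4)                                  # 4-block disk, all zeros
img = full_image(disk)                                   # nightly full image
disk = disk[:BLOCK] + b"\x01" * BLOCK + disk[2 * BLOCK:] # one block changes
incremental(img, disk)                                   # delta stores 1 block
assert restore(img) == disk                              # state comes back intact
```

The payoff is in the last incremental: one changed block means one stored block, which is why incrementals to a hidden partition stay small while a restore still reproduces the disk exactly.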

Managing Hard Disk Space
Applications that allow IT staff to better manage disk space can reduce costs in multiple ways. For example, disk virtualization is becoming more popular in some market segments. Originally, products such as VMware allowed IT managers in large enterprises to create virtual hard disks, but each of these disks required its own licensed operating system and applications. Applications such as Virtuozzo from companies such as SWsoft Inc. (Herndon, VA) provide mainframe-like resource management with full isolation of each partition, allowing an IT manager not only to virtualize the disks, but also to virtualize the operating system. As a result, only one licensed operating system is required per server, not one per virtual disk. The associated operating system licensing costs can be staggering, depending on how many virtual environments, each otherwise requiring its own licensed copy, are created.

But not every company needs to virtualize its disks. If you need just one, or several, operating systems on a single disk drive, you can also use resource management tools that create multiboot systems.

Rather than buying new hardware to test the latest operating systems, some software allows companies to partition their drives and create multiboot systems. By isolating each operating system, users can test new OSes without putting their production data at risk.

While most of the products in this class of multiboot, disk-partitioning software can only boot from one of the four primary partitions on the disk, others allow the user to boot from virtually any partition on the disk, logical or primary. At least one offers both a manual and an automatic mode, so that disk partitioning can be done either by technical or non-technical staff. While some company policies will require the IT department to repartition a drive, it's comforting to know that even non-technical users can repartition a drive should that become necessary. The underlying savings for this class of application runs the gamut from lower hardware costs to employing lower-level technicians to manage system resources.

Whether you're considering a disaster recovery application, disk virtualization or other disk management tasks, remember that the real savings aren't just the price you pay, but the value you receive over time.

Stephen Lawton is director of marketing at Acronis, Inc. (San Francisco, CA)

© Copyright West World Productions 2004; reprinted with permission
