Disk
Disk performance is something of an art; some would say it is a "black art," much more than a science. This is especially true where storage area networks (SANs) and network-attached storage (NAS) come into play. Disk performance also affects memory performance (how much data is cached in main memory) and processor performance (how many I/O requests must be processed), making it even more confusing. In Exchange Server 2010, the confusion has been reined in a bit, since Microsoft's guidance for Exchange servers is now to utilize SATA direct-attached storage (DAS), that is, disk directly connected to the server.
SATA vs. SCSI
SATA is slow. While higher-quality SATA disk is available, SATA disk is, in general, workstation-class rather than enterprise-class (that is, it fails more often). However, it has the major benefit of being cheap.
SCSI and Fibre Channel (FC) disk are fast. In general, SCSI and FC disks are enterprise-class. However, they have the major drawback of being expensive.
Through Exchange Server 2007, Microsoft recommended that Exchange Server be hosted on high-performing SCSI disk. In many companies, that meant using expensive SAN solutions. With Exchange Server 2010, Microsoft now recommends SATA as an acceptable disk platform. This recommendation is causing lots of conversations throughout the Exchange partner ecosystem.
Disk performance is all about IOPS (input/output operations per second). However, we're not here to tell you how to design your disk subsystem, but rather how to monitor it and determine whether it is performing as well as you desire. Monitoring is complicated enough, but it is not quite the art that high-performing design is. First, let's explore some background information.

Windows breaks disk monitoring into two separate Performance Monitor objects: LogicalDisk and PhysicalDisk. A logical disk is the standard disk drive letter that you are used to, such as C: or D:. Within Windows, a logical disk may consist of one or more physical disk drives (the volume can be spanned across multiple physical disk drives or set up in a software-based RAID array).
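To make IOPS concrete, the following minimal sketch derives per-disk read and write IOPS by sampling raw I/O counts over an interval, much as the PhysicalDisk counters Disk Reads/sec and Disk Writes/sec do in Performance Monitor. It assumes the third-party psutil package is installed; treat it as an illustration, not a replacement for Performance Monitor.

```python
# Minimal IOPS sampling sketch. Assumes the third-party psutil package
# (pip install psutil); on Windows, perdisk=True keys the results by
# physical drive (for example, PhysicalDrive0).
import time
import psutil

INTERVAL = 5  # sampling window, in seconds

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk, stats in after.items():
    reads = (stats.read_count - before[disk].read_count) / INTERVAL
    writes = (stats.write_count - before[disk].write_count) / INTERVAL
    print(f"{disk}: {reads:.1f} read IOPS, {writes:.1f} write IOPS")
```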
From the Windows perspective, PhysicalDisk is a single physical disk. Within Windows (or just about any other operating system), there may be more than one logical disk contained on a physical disk; a technique known as partitioning is used to place multiple logical disks on the physical disk. Note that this can be confusing, because Windows may see a single physical disk when, in fact, the disk is composed of multiple spindles aggregated by a hardware RAID controller or by a host bus adapter for a SAN. In the case of a SAN, the logical unit number (LUN) presented to Windows as a single physical disk may actually be part of an array shared among many systems.
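To see the physical-to-logical mapping on a given server, you can walk the WMI disk associations. This is a minimal sketch, assuming Windows and the third-party wmi package; Win32_DiskDrive and the related classes it follows are the standard WMI storage classes.

```python
# A minimal sketch of the physical-to-logical disk mapping on Windows.
# Assumes the third-party wmi package (pip install wmi). For each
# physical disk, follow the WMI associations to its partitions and
# then to the logical drive letters they carry.
import wmi

c = wmi.WMI()
for physical in c.Win32_DiskDrive():
    print(physical.Caption)
    for partition in physical.associators("Win32_DiskDriveToDiskPartition"):
        for logical in partition.associators("Win32_LogicalDiskToPartition"):
            print(f"  {partition.DeviceID} -> {logical.DeviceID}")
```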
Sounds complicated, doesn't it? Usually, though, it isn't too bad. The takeaway from this section is that the relationship between logical disks and physical disks may be complex in some environments. When you are designing storage arrays, the simpler the design, the easier your long-term support will be.
Exchange Server 2010 Disk Needs
An Exchange Server has a number of disk needs, depending on the roles that are installed on it:
- Operating system
- Log files
- Paging file
- Event log files
- Databases
- Database transaction log files
- Content indexing
- Content conversion
- Online maintenance
- Backup and restore
- Replication (optional)
- Zero Out Deleted Database Pages
Every Exchange server, no matter the role, will have an operating system and probably a paging file. The log files referred to in the second bullet are text-based log files, such as those generated by protocol and activity logging or by IIS on the Client Access server. Databases are mailbox stores and public folder stores on Mailbox servers and queue databases on Hub Transport and Edge servers. Transaction logs are the files used for recovery in case a database crashes.
Content indexing is the generation of a fast, searchable index for the emails contained within an Exchange database. Content conversion occurs when an email message is received by a Hub Transport server and is translated into a format appropriate for storage in an Exchange database; it is also the reverse, the conversion that takes place when a message is leaving the Exchange organization. Online maintenance is a daily activity that assures the health of an Exchange database. Replication is copying the contents of a database, as it changes, to another location as a high-availability option.
Zero Out Deleted Database Pages is a security option. In Exchange 2007 and before, it was off by default; in Exchange 2010, it is enabled by default. Previously, when an Exchange database page was made available (for example, after a message had been deleted and it was time for the message to be purged from the database), the page was simply marked as ''available.'' Now, during normal operations, available pages are gathered together and added to the whitespace in the database that is available for reuse. With page zeroing, a page is not simply marked as available and added to the whitespace tables; instead, the contents of the page are set to zero and the page is rewritten to disk. In Exchange Server 2010 this has a relatively minor I/O cost, unlike in previous versions of Exchange Server.

Databases and operating system files are accessed on a random basis. Log files and transaction log files are accessed sequentially (after all, they are written record by record, and if they ever need to be read, they will be read record by record). This difference in usage patterns makes it best, in an ideal situation, to separate each of these disk requirements onto different physical disks and onto separate controllers.
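The random-versus-sequential distinction is easy to demonstrate. The self-contained sketch below writes a scratch file, then times reading it back in order and in shuffled order; on rotating disks the shuffled pass is typically far slower. Be aware that the operating system's file cache can mask the difference on a file this small, so treat it as an illustration of the two access patterns rather than a benchmark.

```python
# Sequential vs. random read patterns over the same data. The operating
# system's file cache can hide the gap on small files; on a rotating
# disk with a cold cache, the shuffled pass pays a seek per read.
import os
import random
import tempfile
import time

CHUNK = 64 * 1024   # 64 KB per read
CHUNKS = 2048       # 2,048 chunks = a 128 MB scratch file

path = os.path.join(tempfile.gettempdir(), "io_pattern_demo.bin")
block = os.urandom(CHUNK)
with open(path, "wb") as f:
    for _ in range(CHUNKS):
        f.write(block)

def timed_read(offsets):
    start = time.perf_counter()
    with open(path, "rb") as f:
        for offset in offsets:
            f.seek(offset)
            f.read(CHUNK)
    return time.perf_counter() - start

sequential = [i * CHUNK for i in range(CHUNKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

print(f"sequential read: {timed_read(sequential):.2f}s")
print(f"random read:     {timed_read(shuffled):.2f}s")
os.remove(path)
```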
However, in Exchange Server 2010 we have two possible configurations for disk environments: a DAG-based solution and a non-DAG-based (stand-alone) solution.
From a performance perspective, there is no benefit to creating multiple logical volumes on a single physical volume. For example, some administrators mistakenly believe that performance will improve if they take a RAID 5 array, partition it into two logical drives, and place databases on one of those logical drives and transaction log files on the other. That is not true. Each usage type should be on separate physical devices for optimum performance.
A Stand-alone Disk Configuration
In a stand-alone disk solution, you probably have only a couple of servers, and each is very important. Your configuration is designed to enhance the resiliency of each server, to help ensure that the server does not go down.
In this solution, the operating system and the database transaction log files go onto separate RAID 1 (mirrored) drive sets. This strategy allows for doubling the read performance while minimizing the write overhead that is associated with RAID. Depending on the importance of text logs to your organization, they should be placed on either a stand-alone disk or another RAID 1 drive set.

Database files on Hub Transport servers are pretty easy, too. Except when queues grow to very large sizes, the queue databases remain fairly small; another RAID 1 set is just the ticket.

For mailbox databases, it gets a little more complicated. The ideal situation is a striped set of mirrored disks (that is, RAID 1+0, or RAID 10). However, that approach has a very high disk cost (that is, you must have twice the number of disks as you have usable disk space). The alternatives are RAID 5 (which has a one-disk cost) and RAID 6 (which has a two-disk cost). The problem with both RAID 5 and RAID 6 is that the parity mechanism they use to protect the striped data puts a very high overhead on write operations, as the sketch below illustrates.
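The standard back-end write penalties are 2 disk I/Os per host write for RAID 1 and RAID 10, 4 for RAID 5, and 6 for RAID 6. The following back-of-the-envelope sketch estimates how many spindles a workload needs under each level; the 150 IOPS per-spindle figure is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope RAID sizing. The write penalties are the
# standard values; PER_DISK_IOPS is an assumed figure for a fast
# rotating spindle and should be replaced with your own measurements.
import math

PER_DISK_IOPS = 150  # assumption: roughly a 10k/15k RPM spindle

WRITE_PENALTY = {"RAID 1/10": 2, "RAID 5": 4, "RAID 6": 6}

def spindles_needed(total_iops, write_ratio, penalty):
    # Back-end IOPS = reads + (writes x penalty)
    reads = total_iops * (1 - write_ratio)
    writes = total_iops * write_ratio
    backend = reads + writes * penalty
    return math.ceil(backend / PER_DISK_IOPS)

# Example: 1,000 host IOPS with a 50/50 read/write mix.
for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: {spindles_needed(1000, 0.5, penalty)} spindles")
```

With this mix, RAID 1/10 needs about 10 such spindles while RAID 6 needs about 24, which is why write-heavy mailbox databases favor RAID 10 despite its capacity cost.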
A DAG Disk Configuration
If you are setting up your servers with DAGs, you have at least two Mailbox servers, and you likely have multiple Client Access and Hub Transport servers as well. In this case, you are most interested in ensuring that a failing server fails over seamlessly, sending all its users to another server that has enough capacity to take them on. Then you'll repair or replace the failed server and bring it back online.
In this case, you'll probably have the operating system and paging file on a single volume (still with RAID 1) and transaction log files and the mailbox database(s) on another volume. Everything is replicated from one server to another, and all configurations are standard and documented.
Which works better: a DAG or a stand-alone solution? The answer, in two words: it depends.
Stand-alone solutions are typically better suited to smaller environments where multiple servers are not an option due to cost, configuration, or other concerns. It is worthwhile to note that in Exchange Server 2010 the only Microsoft-supported high-availability solution for the Mailbox server is the DAG. All of the ''continuous replication'' solutions and ''single-copy cluster'' solutions that were present in earlier versions of Exchange Server have been removed from Exchange Server 2010.
Although this removal of choice does simplify both configuration and the decision-making process, it comes at a cost. Literally. The DAG uses components of Windows Failover Clustering (WFC). Because WFC components are available only with Windows Server Enterprise Edition, you can use DAGs only on that edition of Windows Server. This makes DAG-based solutions significantly more expensive than stand-alone solutions.
Cost issues aside, the DAG-based solution works very, very well. Many of the issues associated with failover, failback, and user connectivity are gone. Instead of an end user (that is, an Outlook or Outlook Web Access user) connecting directly to a Mailbox server, the end user now connects to a client access array (CAA). The CAA knows immediately when it loses contact with any Mailbox server that is part of a DAG. The database fails over within 30 seconds, and the CAA starts contacting the server hosting the new active copy. If the user is in cached mode, they never notice that anything happened. If the user is on OWA, they get a short period of no response. This is very unlike the failover situation with CCR or SCC, which could take from two to five minutes, after which users might be required to reauthenticate.
However, this is not a single-server kind of solution. A DAG consumes a minimum of two servers (and a maximum of 16). A CAA consumes a minimum of two servers (based on the documentation, there is no set limit). While Exchange Server 2010 allows collocation of the DAG, Client Access, and Hub Transport roles, a highly available solution in that scenario requires that your Client Access array be front-ended by a redundant hardware-based load balancer.
So, if your tolerance for downtime is very low, a DAG-based solution will work well for your company. If your tolerance for added software expense is very low, a stand-alone solution may be a better fit.
As always, in Exchange Server 2010 you should buy the biggest and best Exchange hardware you can afford. Your company will probably grow into it. Refer to the detailed sizing guidelines for Exchange Server 2010 on TechNet.
However, it is also worthwhile to know that Exchange is pretty forgiving. If your disk configuration isn't exactly right, Exchange will continue to run and will (probably) eventually get all the work done (unless it runs out of disk space); it may just be slow for a while. The term for this is ''degrading gracefully.'' This gives you the opportunity to update your disk subsystem to a better-performing solution. In fact, for most companies this is a nonissue. Computers are fast, disks are fairly fast, and memory is cheap; for the small and medium-sized company (500 mailboxes or fewer), the Exchange server hardware is generally more than those companies need, without going into any detailed design specification.