I/O Operations
Not only have the schema and contiguity been improved, but there are also a number of changes to improve the way I/O is done. (As the saying goes, work smarter, not harder.) Rather than pushing the limits of the hardware, intelligence was built into the product to improve how and when I/O operations are done. These improvements include I/O gap coalescing, smarter view updates, and smoother database writes.
Gap Coalescing
With improved contiguity, coalescing (combining adjacent I/O operations) is now more viable. Exchange Server 2007 was able to coalesce adjacent I/O operations to reduce the number of I/O operations needed to write database changes. Exchange Server 2010 introduces gap coalescing, the ability to group nearby read or write operations into a single I/O operation. Consider Exchange reading a message from disk. Rather than initiating a read I/O for the page that contains the message header and then several additional read I/O operations to get each of the pages that contain the message body, Exchange initiates a single read I/O that reads all contiguous pages from the message header through the last page of the message body and discards any unneeded pages in between. In this example, gap coalescing allows Exchange Server 2010 to satisfy the request with fewer read I/O operations and improves overall I/O efficiency.
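To make the idea concrete, the following minimal sketch (written in Python purely for illustration, not taken from Exchange) shows how nearby page reads might be grouped into a single contiguous read when the gap between requested pages is small. The page numbers, the gap threshold, and the function name are assumptions for the example.

```python
# Illustrative sketch of gap coalescing (not Exchange's actual code).
# Nearby page reads are merged into a single contiguous read when the
# gap between requested pages is small; unneeded pages are discarded.

def coalesce_reads(page_numbers, max_gap=4):
    """Group sorted page numbers into contiguous read ranges,
    tolerating gaps of up to max_gap unneeded pages."""
    ranges = []
    for page in sorted(page_numbers):
        if ranges and page - ranges[-1][1] <= max_gap + 1:
            ranges[-1][1] = page          # extend the current range
        else:
            ranges.append([page, page])   # start a new range
    return ranges

# One message: header on page 100, body fragments on pages 102 and 105.
requested = [100, 102, 105]
print(coalesce_reads(requested))  # [[100, 105]] -> a single read I/O
```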
On-Demand View Updates
In Exchange Server 2007 and earlier, any time an e-mail message affected a view, the view was updated immediately. In Exchange Server 2010, a view is updated only when it is accessed.
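The general pattern at work is deferred, or lazy, evaluation: a change only marks the view as stale, and the view is rebuilt the next time it is read. The sketch below is a conceptual analogy in Python, not the store's actual mechanism; the class and field names are invented for the example.

```python
# Conceptual sketch of on-demand (deferred) view updates.
class FolderView:
    def __init__(self, messages):
        self._messages = messages
        self._sorted = None          # cached, sorted view contents
        self._dirty = True           # view needs rebuilding

    def add_message(self, msg):
        self._messages.append(msg)
        self._dirty = True           # just mark the view stale; no work yet

    def rows(self):
        if self._dirty:              # rebuild only when the view is accessed
            self._sorted = sorted(self._messages, key=lambda m: m["received"])
            self._dirty = False
        return self._sorted

inbox = FolderView([{"subject": "Hi", "received": 2}])
inbox.add_message({"subject": "Re: Hi", "received": 1})   # no sorting happens here
print([m["subject"] for m in inbox.rows()])               # sorting happens on access
```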
Database Write Smoothing
In a typical database, any time I/O needs to occur, the database immediately sends all of the I/O requests to the storage. This causes large bursts of work that need to be done. Most applications handle these bursts by adding disks or cache to the disk subsystem. However, these bursts of work also bring disk contention, which means the work takes additional time to complete, adding latency. As mentioned earlier in the tutorial, the faster the disk spins, the faster random I/O operations can occur. This also correlates with the number of concurrent operations that a disk can handle adequately.
Disk contention during these bursts can be likened to pouring a liquid into a funnel. When you pour the liquid into the funnel at roughly the same rate the liquid passes through the bottom of the funnel, the funnel will not overflow. If the liquid is poured too fast, it will overflow and cause a mess. The solution is to get a bigger funnel or change the rate at which the liquid is poured into the funnel. Rather than requiring a bigger funnel (more expensive hardware), Exchange Server 2010 uses database write smoothing.
Database write smoothing throttles disk writes to reduce disk contention while still maintaining the checkpoint target. This also better accommodates slower-spinning disks and multiple workloads on each disk. Database write smoothing cannot be manually configured and is always enabled, following these rules (a sketch of these thresholds follows the list):
- When the checkpoint depth is between 1 and 1.24 times the checkpoint target, database write smoothing limits the maximum outstanding writes for each LUN to one.
- When the checkpoint depth reaches 1.25 times the checkpoint target or more, database write smoothing begins to increase the maximum writes for each LUN. The further the checkpoint depth exceeds the target, the more aggressively it raises the maximum outstanding writes per LUN, up to a maximum of 512.
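The following sketch expresses those two rules as a simple function. Only the 1.25x threshold, the floor of one write per LUN, and the 512-write ceiling come from the description above; the ramp-up curve between those points is an assumption made for illustration.

```python
# Illustrative sketch of the write-smoothing thresholds described above.
# The exact scaling beyond 1.25x the target is an assumption; only the
# thresholds and the 512-write cap come from the text.

MAX_OUTSTANDING_WRITES = 512

def max_writes_per_lun(checkpoint_depth_mb, checkpoint_target_mb):
    ratio = checkpoint_depth_mb / checkpoint_target_mb
    if ratio < 1.25:
        return 1                      # hold writes to a trickle
    # Beyond 1.25x the target, ramp up outstanding writes, capped at 512.
    scaled = int((ratio - 1.25) * 100) + 1
    return min(scaled, MAX_OUTSTANDING_WRITES)

print(max_writes_per_lun(110, 100))   # 1.10x target -> 1 outstanding write
print(max_writes_per_lun(150, 100))   # 1.50x target -> ramped up
```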
Improved Caching
Caching information in memory reduces the number of times the disks need to be accessed. Because memory is far faster than disk, the more accurate and complete the caching is, the less disk I/O is needed and the faster the information can be used. Cache warming is a process that preloads queries that were executed against a database the last time the database was started. After a server restart, failover, or switchover, the larger I/O operations allow ESE to increase the rate at which the cache is warmed. The information store process is now used to replay the logs on the passive copy of the database, which allows the cache to be populated with recently used information. This is unlike Exchange Server 2007, which used a separate process to replay the logs, limiting the cache's effectiveness in a switchover scenario.
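Conceptually, cache warming amounts to persisting a record of recent queries and replaying them after a restart so the pages they touch land back in the cache. The sketch below illustrates that idea in Python under assumed names (the query-log file and the execute callback are hypothetical, not part of ESE).

```python
# Conceptual sketch of cache warming: record recent queries, then replay
# them after a restart or failover to repopulate the database cache.

import json

QUERY_LOG = "recent_queries.json"   # hypothetical persisted query history

def record_queries(queries):
    """Persist the queries executed while the database was running."""
    with open(QUERY_LOG, "w") as f:
        json.dump(queries, f)

def warm_cache(execute):
    """After a restart or failover, replay the recorded queries so the
    pages they touch are pulled back into the database cache."""
    try:
        with open(QUERY_LOG) as f:
            queries = json.load(f)
    except FileNotFoundError:
        return                       # nothing recorded yet
    for query in queries:
        execute(query)               # results discarded; caching is the point
```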
Another way caching was improved was by increasing the checkpoint depth. The checkpoint depth is the amount of data waiting to be committed to the database file. The checkpoint depth has been increased to 100 MB when the database is configured in a Database Availability Group (DAG); the limit remains 20 MB for databases not configured in a DAG. It turns out that when a database page is written to, it is often written to again shortly thereafter. This makes sense: when a user receives an e-mail message, he may read it, set it for follow-up, or move it to another folder. By waiting longer to commit the changes to the database pages, the subsequent changes are made in the database cache, and then a single I/O encompassing all of the changes is performed to the disk.
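A tiny sketch of that effect: while a page sits within the checkpoint depth, repeated modifications accumulate in the cache and are flushed as one write. The class and method names here are illustrative, not ESE's actual interfaces.

```python
# Conceptual sketch: deferring flushes lets multiple changes to the same
# page collapse into a single disk write.

class DatabaseCache:
    def __init__(self):
        self.dirty_pages = {}                 # page_id -> latest contents
        self.writes_to_disk = 0

    def modify_page(self, page_id, contents):
        self.dirty_pages[page_id] = contents  # change stays in cache; no I/O yet

    def flush(self):
        # One write per dirty page, no matter how many times it changed
        # while it sat within the checkpoint depth.
        self.writes_to_disk += len(self.dirty_pages)
        self.dirty_pages.clear()

cache = DatabaseCache()
cache.modify_page(7, "message delivered")
cache.modify_page(7, "message marked read")
cache.modify_page(7, "message flagged for follow-up")
cache.flush()
print(cache.writes_to_disk)  # 1 write instead of 3
```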
In Exchange 2000 Server, it was common and supported to manually reduce the checkpoint depth from the default 20 MB to a lower depth in a clustered environment to improve switchover times. This is because when a switchover occurred from one clustered node to another, all of the outstanding transactions needed to be completed on the active node before the switchover or shutdown could complete. In Exchange Server 2010, this is not necessary because the passive copies of the databases have a checkpoint depth of only 5 MB. With at most 5 MB of outstanding transactions, these can be committed quickly, allowing for faster switchover operations. Also, Exchange Server 2010 allows all databases to fail over in parallel, which improves the shutdown and the speed of committing the data before the switchover.
No matter how large the cache is, if it doesn't hold valid or useful information, it doesn't do any good. To keep the cache fresh with valid data, the Exchange Server 2010 cache allows lower priorities to be assigned to cached data that has limited usefulness. Cache data generated from database maintenance activities such as online defragmentation, database checksumming, and passive copy log replay is given a lower priority so that it can be evicted from the cache more quickly.
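One simple way to picture this is a cache whose eviction order is driven by priority, so pages produced by background maintenance leave first. The sketch below assumes a two-level priority scheme and a heap-based eviction policy purely for illustration.

```python
# Conceptual sketch of priority-aware cache eviction: pages produced by
# background maintenance carry a lower priority and are evicted first.

import heapq
import itertools

LOW, NORMAL = 0, 1           # maintenance-generated vs. user-driven pages
counter = itertools.count()  # tie-breaker: evict older entries first

class PriorityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []       # (priority, age, page_id)

    def add(self, page_id, priority):
        heapq.heappush(self.heap, (priority, next(counter), page_id))
        while len(self.heap) > self.capacity:
            evicted = heapq.heappop(self.heap)   # lowest priority goes first
            print("evicted", evicted[2])

cache = PriorityCache(capacity=2)
cache.add("user-mailbox-page", NORMAL)
cache.add("checksum-scan-page", LOW)
cache.add("another-user-page", NORMAL)   # evicts the checksum-scan page
```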
With page sizes four times larger than in previous versions, only a fourth as many pages can be cached in the same amount of memory. To combat this potential ineffectiveness, database cache compression, also known as cache dehydration, was introduced. This removes the whitespace and stores only the active data from each 32-KB database page in memory. For example, if a database page has only 16 KB of data written to it, only that 16 KB of data is cached, rather than the entire 32-KB page. This allows additional pages to be cached and provides a more effective database cache.
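A minimal sketch of the idea, assuming pages pad their unused space with zero bytes (an assumption for the example, not ESE's actual page format): only the used portion of each page is kept in memory, and the page is padded back out to full size when it is needed.

```python
# Conceptual sketch of cache dehydration: only the used portion of each
# 32-KB page is kept in memory.

PAGE_SIZE = 32 * 1024

def dehydrate(page_bytes):
    """Strip the trailing unused space before caching the page."""
    return page_bytes.rstrip(b"\x00")

def rehydrate(cached_bytes):
    """Pad the cached data back out to a full page when it is needed."""
    return cached_bytes.ljust(PAGE_SIZE, b"\x00")

half_full_page = b"x" * (16 * 1024) + b"\x00" * (16 * 1024)
cached = dehydrate(half_full_page)
print(len(cached), len(rehydrate(cached)))   # 16384 32768
```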