
Increased Database Page Size

To accomplish these performance goals, a number of core changes were made to the database. The information inside the database is stored in B-trees, and these B-trees are segmented into pages where the data is written. The page size therefore sets the minimum size for any I/O operation against the database. The first fundamental design change was increasing the page size from 4 KB to 8 KB in Exchange Server 2007, which contributed to a large improvement in performance. In Exchange Server 2010 the page size has been increased all the way up to 32 KB. Each I/O can now read or write four times as much data as in Exchange Server 2007, which translates into fewer I/O operations and improved performance.
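A quick back-of-envelope sketch shows why the larger page size matters. This is an illustration only, not Exchange code; it simply counts the page-sized operations needed to move a given amount of data:

```python
# Illustration (not Exchange code): the page size is the minimum unit of
# database I/O, so larger pages mean fewer operations for the same data.

def io_count(data_bytes: int, page_size: int) -> int:
    """Number of page-sized I/O operations needed to transfer data_bytes."""
    return -(-data_bytes // page_size)  # ceiling division

payload = 256 * 1024  # a hypothetical 256 KB run of message data

for label, page in [("Exchange 2007 (8 KB pages)", 8 * 1024),
                    ("Exchange 2010 (32 KB pages)", 32 * 1024)]:
    print(f"{label}: {io_count(payload, page)} I/Os")
```

For the same 256 KB of data, the 32 KB page needs a quarter of the I/O operations the 8 KB page does, which is exactly the four-fold gain described above.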

Improved Data Contiguity

To perform fewer random I/O operations, the data needs to be written to the disk in a predictable, non-random fashion. When data is kept contiguous and database pages are written to the database in order, the database retains some extra whitespace. When data is compacted, it is moved within the database to consolidate the whitespace into one area of the database file. As might be apparent, contiguity and compaction work in direct opposition: compaction consolidates whitespace to keep the file small, while contiguity often leaves whitespace in place within the database to ensure that data is written contiguously. In earlier versions of Exchange, the defragmentation process favored compaction over data contiguity to maintain the smallest possible database file size.
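The tension between the two goals can be seen in a toy model. This is not how ESE actually lays out pages; it treats the database file as a list of page slots, with "." marking whitespace, to show that squeezing out the whitespace also breaks up related data:

```python
# Toy model (assumed, not ESE internals): a database file as page slots,
# where "." marks whitespace. Compaction slides data together to shrink
# the file, but in doing so it can separate related pages (A3 ends up
# after B2), sacrificing contiguity.

def compact(pages):
    """Consolidate all whitespace at the end of the file."""
    used = [p for p in pages if p != "."]
    return used + ["."] * (len(pages) - len(used))

layout = ["A1", "A2", ".", "B1", ".", "B2", "A3", "."]

print("original: ", layout)
print("compacted:", compact(layout))
```

In the original layout the whitespace after A2 leaves room to keep mailbox A's pages together; after compaction the file is denser but A's pages are interleaved with B's, which is the trade-off the text describes.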

In Exchange Server 2010, contiguity of data is preferred over compaction of data. In testing, these changes added about 20 percent of whitespace to the database. To combat this bloat, compression of message headers and of plain text and HTML message bodies was added to the database, which in testing shrank the database size by around 20 percent. The result is a much more contiguous file with better read and write performance than in previous versions of Exchange Server.
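The reason body compression recovers so much space is that message bodies, particularly HTML, are highly repetitive. As a rough illustration, using the standard zlib library rather than the store's actual codec, a repetitive HTML body compresses dramatically:

```python
# Illustration only: zlib from the standard library, not the codec the
# store actually uses. The sample body is hypothetical.
import zlib

body = ("<html><body>"
        + "<p>Status update: all mailboxes healthy.</p>" * 50
        + "</body></html>").encode()

compressed = zlib.compress(body)
print(f"{len(body)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(body):.0%} of original)")
```

The exact ratio depends on the content, but the point stands: generic compression of headers and bodies easily offsets the whitespace that the contiguity-first layout leaves behind.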

As data is written into the database, the store looks for free space within the database file where the data can be written contiguously.
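A simplified sketch of that search, under the assumption that the store scans a free-space map for a gap large enough to hold the whole run of pages (the function name and map format here are invented for illustration):

```python
# Sketch (assumed, simplified): search a free-space map for the first run
# of `need` free slots, so a multi-page write lands contiguously instead
# of being scattered into single free slots.

def find_contiguous_run(free_map, need):
    """Return the start index of the first run of `need` free slots, or None."""
    run_start, run_len = None, 0
    for i, free in enumerate(free_map):
        if free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == need:
                return run_start
        else:
            run_len = 0
    return None

# True = free slot, False = occupied page
free_map = [False, True, False, True, True, True, False]
print(find_contiguous_run(free_map, 3))  # first gap of 3 starts at index 3
```

Note that the single free slot at index 1 is skipped: placing one page of a three-page write there would be exactly the kind of fragmentation the store is trying to avoid.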

Not only is contiguity built into the initial creation of the data, but processes are also in place to ensure that the data is maintained in this state, even as it is accessed and modified by the end clients.

The database defragmentation process has been improved to reduce I/O operations. Defragmentation is now performed in place, rather than by creating a new B-tree and then renaming all of the indexes and tables, which reduces the number of I/O operations that need to be completed. Data is read from and written to the hard disk from left to right; right merges require more disk head movement to complete. The defragmentation process in previous versions used right merges, meaning the data is read and then moved to the left, or earlier in the file. With left merges, the data is read and then moved to the right, the same direction as the I/O operation. Because space is also allocated from left to right, and page moves need to allocate a new page, defragmenting the database from left to right is much more efficient because it reduces the need to move the hard disk head.
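The head-movement argument can be made concrete with a toy model. The page positions below are invented, not measured ESE behavior; the model simply sums the distance between successive positions the head visits, with a right merge forcing the head to backtrack behind the scan and a left merge keeping it moving forward:

```python
# Toy model (assumed positions, not measured ESE behavior): approximate
# head movement as the total distance between successive page positions
# touched during a defragmentation pass.

def seek_distance(positions):
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

# Scanning pages 10..13. A right merge writes the merged data behind the
# scan position (backtracking left after each read)...
right_merge = [10, 8, 11, 9, 12, 10, 13, 11]
# ...while a left merge writes it ahead of the scan, in the same
# direction the sequential I/O is already moving.
left_merge = [10, 11, 12, 13, 14]

print("right merges:", seek_distance(right_merge))
print("left merges: ", seek_distance(left_merge))
```

Even in this crude model the backtracking pattern accumulates several times the seek distance of the forward-only pass, which is the efficiency gain the in-place, left-merge defragmentation is after.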

With the improvements to performance and the throttling added to the defragmentation process, it is no longer necessary to restrict the process to off hours. To ensure that contiguity is maintained, defragmentation is set by default to run continuously. The defragmentation and compaction processes will move items to maintain the optimal contiguity of the database, even after a user has disrupted the original contiguous layout.
