
Upper memory

Upper memory refers to the portion of memory that lies between the 640K and 1MB marks. A large part of this area was originally reserved for system devices, such as the video display. A Windows 9x computer can use this area to emulate expanded memory, to load drivers, or both. To use the area for either purpose, you must load the himem.sys and emm386.exe drivers, because emm386.exe uses extended memory, which himem.sys manages, to provide access to the upper memory area. Both of these drivers - and some of their options - are covered in the sections "himem.sys" and "emm386.exe," later in this tutorial.
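
For reference, the relevant CONFIG.SYS entries on a typical Windows 9x machine might look like the following. The exact paths and switches vary by installation, and the options themselves are discussed in the later sections:

  REM Load the extended memory manager first; emm386.exe depends on it.
  DEVICE=C:\WINDOWS\HIMEM.SYS
  REM The RAM switch enables both expanded memory emulation and upper memory blocks.
  DEVICE=C:\WINDOWS\EMM386.EXE RAM
  REM Load part of DOS into high memory and make upper memory blocks available.
  DOS=HIGH,UMB
  REM DEVICEHIGH loads a driver into upper memory instead of conventional memory.
  DEVICEHIGH=C:\WINDOWS\COMMAND\ANSI.SYS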

Virtual memory

As RAM technology improved and computers shipped with more and more memory, software developers wrote applications that took advantage of it. To make memory management easier, Microsoft implemented virtual memory in the Windows operating systems. Virtual memory allows Windows to present applications with a Virtual Machine (VM) that contains 4GB of memory, regardless of how much physical RAM is installed. A Virtual Memory Manager (VMM) controls the mapping between the virtual addresses an application uses and the locations where the data is actually stored in physical memory. The VMM can also move data that is not being actively used out of RAM and into a file on the disk. This swapping of memory pages to and from the disk file led to the file being called the swap file in Windows 9x and the paging file in Windows NT. The drawback to swapping shows up when an application needs data that has been moved to the swap file: the application must wait for that data to be read back into RAM before it can be used.
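
To make the mapping idea concrete, here is a deliberately simplified Python sketch. It is not Windows code: the PageTable class and the "ram"/"swap" markers are invented for illustration. It shows the one job a page table has to do: translate a virtual address into either a physical RAM frame or a slot in the swap file.

  PAGE_SIZE = 4096  # x86 pages are 4KB

  class PageTable:
      """Toy model: maps virtual page numbers to a RAM frame or a swap-file slot."""

      def __init__(self):
          self.entries = {}  # virtual page number -> ("ram", frame) or ("swap", slot)

      def map_to_ram(self, vpn, frame):
          self.entries[vpn] = ("ram", frame)

      def map_to_swap(self, vpn, slot):
          self.entries[vpn] = ("swap", slot)

      def lookup(self, virtual_address):
          # Split the address into a page number and an offset within that page.
          vpn, offset = divmod(virtual_address, PAGE_SIZE)
          location, where = self.entries[vpn]
          return location, where, offset

  table = PageTable()
  table.map_to_ram(0, frame=12)   # virtual page 0 is resident in physical frame 12
  table.map_to_swap(1, slot=3)    # virtual page 1 has been paged out to swap slot 3
  print(table.lookup(5000))       # address 5000 falls in page 1 -> ('swap', 3, 904)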

Hard disk access times are measured in milliseconds (10^-3 seconds), while memory access times are measured in nanoseconds (10^-9 seconds). This means that retrieving data from the swap file on a hard drive is extremely slow compared with retrieving it directly from RAM.
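
To put rough numbers on it: a disk access of about 10 milliseconds is 10,000,000 nanoseconds, so a trip to the swap file can easily be on the order of a million times slower than a RAM access of a few tens of nanoseconds. Exact figures vary with the hardware, but the gap is always several orders of magnitude.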

The Virtual Memory Manager manages virtual memory addresses up to 4GB and maps those addresses to physical locations, either in RAM or on the hard drive. You should rely on the paging file only when applications need a small amount of additional memory. Most operating systems implement virtual memory and allow swap space on a hard drive to be used so that applications with high memory requirements can function, but greater performance is achieved by adding more physical RAM to the system.

When an application needs to store information in memory, it passes the request to the VMM. The VMM stores the information in RAM but may move it to the swap file on the drive at a later time. When the application later asks for that information back, the retrieval process looks like this (a simplified sketch follows the list):

  1. When the application requests information, the VMM checks to see whether the information is in RAM.
  2. If the information is in RAM, the information is simply returned to the application, and the process is complete.
  3. If the information isn't in RAM, the VMM checks whether there is enough free space in RAM to retrieve the information from the swap file.
  4. If there is enough space, the information is retrieved from the drive, stored in RAM, and passed on to the application, and the process is complete.
  5. If there isn't enough space, the VMM looks for memory locations that have not been accessed recently and moves their data from RAM to the swap file.
  6. When enough data has been moved to the swap file to make room, the requested information is read into RAM and returned to the application.
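
The same six steps can be written out as a short Python sketch. This is a teaching model only, not Windows source code: the SimpleVMM class, the dictionaries standing in for RAM and the swap file, and the dirty set are all invented for illustration. The make_room call corresponds to the clean/dirty scan described in the next paragraph and is sketched after it.

  class SimpleVMM:
      """Teaching model of the six retrieval steps above; not actual Windows code."""

      def __init__(self, ram_capacity):
          self.ram_capacity = ram_capacity  # how many pages fit in "RAM"
          self.ram = {}                     # page id -> data currently held in RAM
          self.swap_file = {}               # page id -> data paged out to disk
          self.dirty = set()                # pages accessed since the last scan

      def retrieve(self, page_id):
          # Steps 1-2: if the page is already in RAM, return it immediately.
          if page_id in self.ram:
              self.dirty.add(page_id)
              return self.ram[page_id]
          # Step 3: the page is in the swap file; is there room in RAM for it?
          if len(self.ram) >= self.ram_capacity:
              # Steps 5-6: free space using the clean/dirty scan described in
              # the next paragraph (make_room is sketched there).
              self.make_room()
          # Step 4: read the page from the swap file into RAM and return it.
          data = self.swap_file.pop(page_id)
          self.ram[page_id] = data
          self.dirty.add(page_id)
          return data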

A clean memory location in RAM is one that has not been accessed since the last time the VMM marked it clean. If the memory location has been accessed with a read or write request, it is marked as dirty. When looking for memory data to move to the swap file, the VMM checks each location: if it is clean, its data is moved to the hard drive; if it is dirty, it is marked as clean and left in RAM. If the first scan does not free enough RAM, an immediate second scan for movable data follows, at which point any data that is dirty again is data that was accessed since the first scan, mere milliseconds ago. This algorithm is called the Least Recently Used (LRU) algorithm, and it ensures that data that is actively being used stays in RAM.
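
Continuing the hypothetical SimpleVMM sketch from above, the clean/dirty scan might be modeled like this. Again, this is a simplification: a real VMM works with per-page accessed and dirty bits maintained by the processor, not a Python set.

  # This method belongs inside the SimpleVMM class shown earlier.
  def make_room(self):
      # First pass over every page currently in RAM.
      for page_id in list(self.ram):
          if len(self.ram) < self.ram_capacity:
              break                                    # enough room has been freed
          if page_id in self.dirty:
              self.dirty.discard(page_id)              # accessed recently: mark clean, leave in RAM
          else:
              self.swap_file[page_id] = self.ram.pop(page_id)   # clean: move to the swap file
      # If the first pass freed too little, scan again immediately. Only pages
      # touched in the milliseconds since the first pass are dirty now, so
      # actively used data stays in RAM, which is the LRU behavior described above.
      if len(self.ram) >= self.ram_capacity:
          self.make_room()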
