An acronym for Write-Ahead Log.
This is the mechanism PostgreSQL uses to guarantee durability in ACID-compliant transactions; without it, durability cannot be guaranteed. The WAL is divided into segments, and after a configurable number of filled segments a checkpoint is created. These terms are explained below; first, however, it is important to understand what the WAL actually does.
In order to provide durability, the database must be able to correctly recover all the changes made by committed transactions before a crash. Further, previously committed data must not be corrupted. The significant errors that can occur during a crash are the loss of committed changes and the corruption of previously committed data, for example by a partial page write.
Using fsync(2) on the modified data files does not provide the required durability. A crash or power outage during the fsync(2) may still leave a partial write on disk, corrupting previously committed data; this is a very real possibility under heavy I/O. Furthermore, if a transaction touched many different data files, every one of them must be flushed using fsync(2), resulting in very poor performance.
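The cost difference can be sketched in a few lines of Python. This is purely illustrative (the function names are invented, not PostgreSQL code): the first approach issues one fsync(2) per touched data file, while a log-based approach appends everything to one sequential file and issues a single fsync(2).

```python
import os

# Hypothetical sketch: committing a transaction that touched several
# data files by fsyncing each one -- one fsync(2) per file, each
# potentially a slow, random-I/O flush.
def commit_with_data_file_fsync(paths, writes):
    for path, data in zip(paths, writes):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT)
        os.write(fd, data)
        os.fsync(fd)          # one flush per touched file
        os.close(fd)

# With a WAL, the same transaction appends all its changes to one log
# file and issues a single fsync; the data files themselves can be
# flushed later, lazily.
def commit_with_wal(wal_path, writes):
    fd = os.open(wal_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    for data in writes:
        os.write(fd, data)    # sequential append
    os.fsync(fd)              # one flush, one file
    os.close(fd)
```

The sequential-append pattern is also why the WAL remains safe against partial writes: a torn append damages only the tail of the log, not previously committed records.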
The WAL solves these problems: every change is first recorded sequentially in the log, and a transaction is durable as soon as its log records have reached disk.
It is important to understand that before any data page can be written back to the filesystem, the appropriate WAL records for the changes must be on disk. When evicting a page from the shared buffers, PostgreSQL ensures the WAL record for the last update to that page is actually on disk.
The WAL, conceptually, is an infinite sequence of blocks numbered from zero. In PostgreSQL the default block size is 8 KiB, the same size used for all heap and index data file pages. The first block is created when the database system is initialised at install time. To ease WAL file management, this infinite sequence of blocks is divided into segments of 2048 blocks each, which in the default configuration yields 16 MiB files. New segments can be created quickly, and old segments can easily be unlinked or moved elsewhere for archival, as required.
The log segments are named according to where in the sequence of the WAL they occur.
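The mapping from a WAL byte position to a segment file name can be sketched like this. It follows PostgreSQL's naming scheme of 24 hexadecimal digits (8 for the timeline, then the segment number split across two 8-digit halves); the helper names themselves are illustrative, not server code.

```python
# Hypothetical helpers sketching WAL segment naming with the default
# 16 MiB segment size.
SEG_SIZE = 16 * 1024 * 1024                 # 2048 blocks * 8 KiB
SEGS_PER_XLOGID = 0x100000000 // SEG_SIZE   # 256 segments per 4 GiB "log id"

def wal_file_name(timeline, byte_pos):
    segno = byte_pos // SEG_SIZE            # which segment in the infinite sequence
    return "%08X%08X%08X" % (timeline,
                             segno // SEGS_PER_XLOGID,
                             segno % SEGS_PER_XLOGID)

# wal_file_name(1, 16 * 1024 * 1024) names the second segment on timeline 1.
```

Because the name encodes the position in the sequence, the server can tell exactly which segments a recovery run needs just by listing the directory.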
In principle, if the entire WAL from the database system initialisation is available, then the exact state of the database at any point in time can be recovered. In practice, the storage requirements for this would probably be infeasible, and recovery would take an inordinate amount of time in any case.
Checkpoints are one solution to the storage and recovery time problem. After a configurable quantity of WAL activity, all changes made to the entire database are flushed to disk. This operation, called a checkpoint, can take quite a long time: several hundred MiB of data may need to be written out, depending upon the size of the memory caches.
Once a checkpoint has completed, it is guaranteed that all the changes from log segments before the checkpoint are safely on the physical disk. Thus, when recovering from a crash, only the changes since the last fully completed checkpoint need to be replayed. Furthermore, the old log segment files can simply be renamed for reuse as new log segments, providing new segments with effectively no I/O overhead. Alternatively, the old log segments can be archived for use as an incremental backup from a previous database snapshot.
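Crash recovery can then be sketched as a simple redo loop. The record format here is invented for illustration (PostgreSQL's real records are binary and typed): starting from the disk state guaranteed by the last completed checkpoint, every record at or after the checkpoint is re-applied.

```python
# Hypothetical sketch of redo-based crash recovery: records before the
# last completed checkpoint are already safely on disk and are skipped;
# everything since it is replayed against the pages.
def recover(wal_records, last_checkpoint_lsn, pages):
    for rec in wal_records:
        if rec["lsn"] < last_checkpoint_lsn:
            continue                          # covered by the checkpoint
        page = pages.setdefault(rec["page"], {})
        page[rec["key"]] = rec["value"]       # redo the change
    return pages
```

Because redo is idempotent in a scheme like this, it does not matter if some post-checkpoint changes had already made it to the data files before the crash; replaying them again yields the same state.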
The first time a data page is modified after a checkpoint, the entire original page is written to the WAL. This provides a valid base page for subsequent incremental changes to work against. If the page in the main data file is corrupted by a partial write, it does not matter; the correct page can be reconstructed by replaying the WAL from the checkpoint.
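The decision between logging a full page image and logging only a delta can be sketched as follows. The record structure is hypothetical, not PostgreSQL's actual format: a page whose last-modification LSN predates the checkpoint gets a full image; a page already touched since the checkpoint gets only the incremental change.

```python
# Hypothetical sketch of the full-page-write decision.
def make_wal_record(page, change, last_checkpoint_lsn):
    if page["lsn"] <= last_checkpoint_lsn:
        # First touch since the checkpoint: log the whole page image,
        # giving recovery a valid base even if the data-file copy is
        # later torn by a partial write.
        return {"kind": "full_page",
                "image": dict(page["data"]),
                "change": change}
    # Page already has a base image in the post-checkpoint WAL:
    # an incremental record is enough.
    return {"kind": "delta", "change": change}
```

This is why WAL volume spikes just after each checkpoint: many pages are being touched for the first time and each contributes a full 8 KiB image rather than a small delta.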