I did a simple test:
Starting conditions:
  80 GB partition (FAT32)
  1 fragmented file
  2 fragments (1 excess fragment)
  Free space = 23 GB
  Largest free space = 6.7 GB
Saved 1 file: 700 MB
Result:
  2 fragmented files
  8 fragments (6 excess fragments)
This shows the new file was saved in 6 pieces (the fragment count rose from 2 to 8, and 2 of those belong to the file that was already fragmented).
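For anyone who wants to check a count like this without relying on a defrag report, the same information can be read from the Windows defrag ioctl FSCTL_GET_RETRIEVAL_POINTERS, which returns one entry per contiguous run of clusters. Here is a minimal C sketch; the fixed buffer size is an arbitrary assumption, and a severely fragmented file would need a larger buffer or a retry loop that this sketch does not include.

/* frags.c - print how many extents (pieces) a file occupies on disk.
 * Minimal sketch; build with a Windows C compiler, e.g. cl frags.c
 */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: frags <file>\n");
        return 1;
    }

    /* Opening with only attribute access avoids sharing violations
     * on files that are currently in use. */
    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "cannot open %s (error %lu)\n", argv[1], GetLastError());
        return 1;
    }

    STARTING_VCN_INPUT_BUFFER in;
    in.StartingVcn.QuadPart = 0;      /* start mapping at the first cluster */

    /* Room for a few thousand extents; a badly fragmented file may not fit,
     * in which case DeviceIoControl fails with ERROR_MORE_DATA. */
    static BYTE buf[64 * 1024];
    RETRIEVAL_POINTERS_BUFFER *out = (RETRIEVAL_POINTERS_BUFFER *)buf;
    DWORD bytes = 0;

    if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                        &in, sizeof(in), out, sizeof(buf), &bytes, NULL)) {
        /* Each extent is one contiguous run of clusters, i.e. one "piece". */
        printf("%s: %lu extent(s)\n", argv[1], (unsigned long)out->ExtentCount);
    } else {
        fprintf(stderr, "FSCTL_GET_RETRIEVAL_POINTERS failed (error %lu)\n",
                GetLastError());
    }

    CloseHandle(h);
    return 0;
}

Running it on the newly saved 700 MB file before and after a defrag would show the extent count dropping back to 1 once the file is contiguous.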
Windows does not do a thorough check for a free space large enough to hold the file being saved; as a result, fragmentation will almost always occur immediately after a defrag that leaves multiple free spaces. If the defrag process leaves only one free space, then no newly saved file can be fragmented; fragmentation would only occur after a file is deleted and a larger one saved.
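The "largest free space = 6.7 GB" figure above can be measured directly from the volume's cluster bitmap, which also makes the point concrete: a contiguous home for a 700 MB file existed, it just was not used. Below is a rough C sketch that reads the bitmap with FSCTL_GET_VOLUME_BITMAP and scans for the longest run of free clusters; the drive letter, the fixed buffer size, and the minimal error handling are placeholders, and opening the volume handle normally requires Administrator rights. For a very large volume the bitmap may not fit in one call and would have to be read in chunks, which this sketch does not do.

/* freerun.c - report the largest contiguous run of free clusters on a volume. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* "C:" is an assumption; change to the partition being tested. */
    HANDLE h = CreateFileA("\\\\.\\C:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "cannot open volume (error %lu)\n", GetLastError());
        return 1;
    }

    STARTING_LCN_INPUT_BUFFER in;
    in.StartingLcn.QuadPart = 0;

    /* One bit per cluster; 16 MB of bitmap covers far more clusters than
     * an 80 GB FAT32 partition contains. */
    DWORD bufSize = 16 * 1024 * 1024;
    VOLUME_BITMAP_BUFFER *bm = (VOLUME_BITMAP_BUFFER *)malloc(bufSize);
    DWORD bytes = 0;
    if (!bm) return 1;

    if (!DeviceIoControl(h, FSCTL_GET_VOLUME_BITMAP, &in, sizeof(in),
                         bm, bufSize, &bytes, NULL)) {
        fprintf(stderr, "FSCTL_GET_VOLUME_BITMAP failed (error %lu)\n",
                GetLastError());
        return 1;
    }

    LONGLONG run = 0, best = 0;
    for (LONGLONG i = 0; i < bm->BitmapSize.QuadPart; i++) {
        int used = (bm->Buffer[i / 8] >> (i % 8)) & 1;  /* 1 = cluster in use */
        if (used) {
            if (run > best) best = run;
            run = 0;
        } else {
            run++;
        }
    }
    if (run > best) best = run;

    printf("largest free run: %lld clusters\n", (long long)best);
    free(bm);
    CloseHandle(h);
    return 0;
}

Multiplying the reported run length by the cluster size (32 KB on a partition this size) gives the "largest free space" number the defrag reports show.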
Less than 3% of files on the average system change size; most are static. In addition, most of these files do not affect system performance, since they are only accessed when being edited by the user (with the exception of log files).
If the defrag programs left free space only around files that are frequently modified, I would buy that argument, but the free spaces occur randomly throughout the drive. These spaces are only left to speed up the defrag process. The old DOS defrag moved almost every file twice, which took a great amount of time, but the result saved time in future defrags.
Based on the responses I guess that all the current defragmenters leave multiple free spaces.
My recommendations to M$ and any defragmenter developers:
1. All locked system files should operate like the MFT, with reserved space that is accessible for file storage only if absolutely necessary.
2. Defraggers should organize files so that rarely modified files are packed tightly at the beginning of the drive, occasionally modified files are packed tightly against the first set, and frequently modified files are loosely spaced at the end of the drive (see the sketch after this list). This would leave the bulk of the free space between the occasionally and frequently modified files. The result would be a reduced need to defrag the drive regularly.
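Purely as an illustration of recommendation 2, here is a toy C sketch of how files might be sorted into three zones by how recently they were modified. The file names, the 14-day and 180-day thresholds, and the zone labels are made-up assumptions for the example, not anything Windows or any current defragmenter actually does.

/* zones.c - toy illustration of the proposed layout policy:
 * rarely modified files packed at the front of the drive, occasionally
 * modified files packed behind them, and frequently modified files
 * spread loosely at the end, leaving the bulk of free space in between. */
#include <stdio.h>
#include <time.h>

typedef enum { ZONE_STATIC, ZONE_OCCASIONAL, ZONE_VOLATILE } zone_t;

/* Classify a file by how long ago it was last modified (thresholds are
 * arbitrary assumptions). */
static zone_t classify(time_t last_modified, time_t now)
{
    double days = difftime(now, last_modified) / 86400.0;
    if (days > 180.0) return ZONE_STATIC;      /* untouched for 6+ months  */
    if (days > 14.0)  return ZONE_OCCASIONAL;  /* touched in recent months */
    return ZONE_VOLATILE;                      /* touched in the last weeks */
}

int main(void)
{
    /* Hypothetical files: name and days since last modification. */
    struct { const char *name; double days_old; } files[] = {
        { "C:\\Windows\\System32\\kernel32.dll", 400 },
        { "C:\\Program Files\\app\\readme.txt",   60 },
        { "C:\\Documents\\report.doc",             2 },
        { "C:\\Windows\\system.log",               0 },
    };
    static const char *placement[] = {
        "pack tight at start of drive",
        "pack tight after the static files",
        "place loosely at end of drive",
    };
    time_t now = time(NULL);

    for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
        time_t mtime = now - (time_t)(files[i].days_old * 86400.0);
        printf("%-40s -> %s\n", files[i].name, placement[classify(mtime, now)]);
    }
    return 0;
}

A real defragmenter would of course read modification times from the file system and would also have to decide how much slack to leave around the volatile zone, but the classification step itself is this simple.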
My 2 cents