Node: IO speed, Next: Slow-down, Previous: Pentium, Up: Performance
Q: I measured the time required to read a 2 MByte file in DJGPP and in
Borland C. It took the DJGPP program 2.5 seconds to do it, while Borland
did it in just under 2. This is horribly slow: it's 25% slower
than Borland!
Q: I tried to improve DJGPP I/O throughput by defining a 64 KB buffer
for buffered I/O with a call to setvbuf, but that had no effect. Why
is that?
Q: It is obvious that disk-bound programs compiled with DJGPP will
run awfully slow, since FAT is such a lousy filesystem!
A: First, I would like to point out that waiting another 0.5 sec to read a 2 MByte file isn't that bad: it is indeed about 25% longer than you can do under DOS, but it's only half a second... Besides, most programs read and write files that are only a few hundred kilobytes, and those will suffer only a negligible slow-down.
Doing I/O from protected-mode programs requires that low-level library functions move the data between the extended memory and low memory under the 1 MByte mark, where real-mode DOS can get at it. That area in the low memory is called the transfer buffer. This data shuffling means that some I/O speed degradation is inevitable in any protected-mode program which runs on top of DOS (including, for example, Windows programs when Windows 3.X is set to 386-Enhanced mode).
By default, DJGPP moves data in chunks of 16 KB, so defining a buffer
larger than that won't gain anything. The size of the transfer buffer
is customizable up to a maximum of 64 KB, so if your program really
reads a lot of large files, you might be better off enlarging it
(with the STUBEDIT program).
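If you want to check at run time what the transfer buffer's actual size and location are (for instance, to verify that a STUBEDIT change took effect, or to pick a chunk size for low-level reads), the <go32.h> header exposes them through the _go32_info_block structure and the __tb macro. A minimal sketch:

  #include <stdio.h>
  #include <go32.h>   /* _go32_info_block, __tb */

  int main(void)
  {
    /* Size (in bytes) and linear address of the low-memory transfer
       buffer which the library uses to talk to real-mode DOS.  */
    printf("transfer buffer: %lu bytes at linear address 0x%lx\n",
           _go32_info_block.size_of_transfer_buffer, __tb);
    return 0;
  }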
The DJGPP buffered I/O functions use a special algorithm to optimize
both sequential and random reads. These two goals usually conflict,
since sequential reads favor larger buffers, while random access
favors small buffers. DJGPP resolves this conflict by doubling the
buffer size on each sequential read, up to the size of the transfer
buffer, and resetting the buffer size back to the minimum of 512 bytes
each time the program calls fseek. Experience shows that programs
which use both sequential and random access to files, like ld.exe,
the linker, run significantly faster when linked with these optimized
I/O functions (introduced with version 2.02 of DJGPP).
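The following is only an illustrative sketch of that policy, not the actual library source; the names MIN_BUF, MAX_BUF and next_read_size are invented for the example:

  #include <stddef.h>

  #define MIN_BUF 512             /* buffer size right after an fseek */
  #define MAX_BUF (16 * 1024)     /* default transfer buffer size */

  static size_t bufsize = MIN_BUF;

  /* Return the buffer size to use for the next read request.  */
  static size_t next_read_size(int was_sequential)
  {
    if (was_sequential) {
      /* Consecutive sequential reads: double the buffer, up to the
         transfer buffer size, so long scans need few DOS calls.  */
      if (bufsize < MAX_BUF)
        bufsize *= 2;
    } else {
      /* The program just called fseek: assume random access and drop
         back to a small buffer, so we don't read data that will only
         be thrown away.  */
      bufsize = MIN_BUF;
    }
    return bufsize;
  }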
Some people think that FAT is such a lousy filesystem that programs which do a lot of disk I/O must run terribly slowly when compiled with DJGPP. This is a common misconception. The speed of disk I/O is determined primarily by how efficient the code in the operating system kernel that handles the filesystem is, and by the device drivers for I/O-related devices such as the hard disk, not by the on-disk layout. It is true that DOS and the BIOS don't implement I/O very efficiently (they spend too much time in tight loops waiting for low-level I/O to complete), but a large disk cache can help them tremendously. In addition, Windows 9X bypasses the DOS and BIOS I/O code entirely, and uses much more efficient protected-mode code instead. Experience shows that DJGPP programs on plain DOS systems with a large (8 MB and up) disk cache installed run about 30% slower than the same programs under Linux on the same machine, and that Windows 9X runs them at roughly the same speed as Linux. If you get much slower performance on DOS/Windows, chances are that your system is not configured optimally.
Some programs which only copy data between two files might gain significantly if you write custom low-level I/O functions that avoid moving the data up to extended memory, only to move it back down to the transfer buffer. However, these cases are rare.
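For the curious, here is a rough sketch of what such a custom copy loop could look like under DJGPP. It issues the DOS read and write calls directly on the transfer buffer through __dpmi_int, so the file data never travels up into extended memory; error handling is mostly omitted, and the file names in.dat and out.dat are just placeholders:

  #include <string.h>
  #include <dpmi.h>
  #include <go32.h>           /* __tb, _go32_info_block */
  #include <sys/movedata.h>   /* dosmemput */

  /* Open a file via DOS Int 21h and return its DOS handle, or -1 on
     failure.  A non-zero CREATE means create/truncate for writing.  */
  static int dos_open(const char *name, int create)
  {
    __dpmi_regs r;

    dosmemput(name, strlen(name) + 1, __tb);  /* ASCIZ name for DOS */
    memset(&r, 0, sizeof r);
    r.x.ax = create ? 0x3c00 : 0x3d00;  /* create/truncate vs. open read-only */
    r.x.cx = 0;                         /* normal file attributes */
    r.x.ds = __tb >> 4;
    r.x.dx = __tb & 0x0f;
    __dpmi_int(0x21, &r);
    return (r.x.flags & 1) ? -1 : r.x.ax;  /* carry set means error */
  }

  int main(void)
  {
    int in = dos_open("in.dat", 0), out = dos_open("out.dat", 1);
    unsigned long chunk = _go32_info_block.size_of_transfer_buffer;
    __dpmi_regs r;

    if (in < 0 || out < 0)
      return 1;
    if (chunk > 0xf000)       /* CX is 16-bit: stay well below 64 KB */
      chunk = 0xf000;
    for (;;) {
      memset(&r, 0, sizeof r);
      r.x.ax = 0x3f00;        /* DOS read into the transfer buffer */
      r.x.bx = in;
      r.x.cx = (unsigned short)chunk;
      r.x.ds = __tb >> 4;
      r.x.dx = __tb & 0x0f;
      __dpmi_int(0x21, &r);
      if ((r.x.flags & 1) || r.x.ax == 0)
        break;                /* read error or end of file */

      r.x.cx = r.x.ax;        /* write back exactly what was read */
      r.x.ax = 0x4000;        /* DOS write from the same buffer */
      r.x.bx = out;
      r.x.ds = __tb >> 4;
      r.x.dx = __tb & 0x0f;
      __dpmi_int(0x21, &r);
      if (r.x.flags & 1)
        break;                /* write error (e.g. disk full) */
    }
    memset(&r, 0, sizeof r);
    r.x.ax = 0x3e00; r.x.bx = in;  __dpmi_int(0x21, &r);  /* close input */
    r.x.ax = 0x3e00; r.x.bx = out; __dpmi_int(0x21, &r);  /* close output */
    return 0;
  }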