You can stat() the file to get the file size and the number of disk blocks, seek a relatively large number of disk blocks past the end of the file, write a known number of blocks, then stat() the file again. Compare the initial number of disk blocks with the final number: if the file system supports sparse files, only the few blocks actually written should have been added; if it does not, the hole will be allocated too and the block count will grow by far more.
Given the initial and final number of disk blocks, you can try to determine if the file system supports sparse files. I say "try" because some file systems can make this hard - for example, ZFS with compression enabled.
Something like this:
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int check( const char *filename )
{
    struct stat sb;
    long blocksize;
    off_t filesize;
    blkcnt_t origblocks;
    char *buffer;
    int fd;

    fd = open( filename, O_CREAT | O_RDWR, 0644 );
    fstat( fd, &sb );
    blocksize = sb.st_blksize;
    filesize = sb.st_size;
    origblocks = sb.st_blocks;

    // seek 16 blocks past the current end of the file, leaving a hole
    lseek( fd, 16UL * blocksize, SEEK_END );

    buffer = malloc( blocksize );
    memset( buffer, 0xAA, blocksize );
    write( fd, buffer, blocksize );
    fsync( fd );
    free( buffer );

    // kludge to give ZFS time to update metadata
    for ( ;; )
    {
        stat( filename, &sb );
        if ( sb.st_blocks != origblocks )
        {
            break;
        }
    }

    printf( "file: %s\n filesystem: %s\n blocksize: %ld\n size: %zd\n"
        " blocks: %zd\n orig blocks: %zd\n disk space: %zd\n",
        filename, sb.st_fstype, blocksize, sb.st_size,
        ( size_t ) sb.st_blocks, ( size_t ) origblocks,
        ( size_t ) ( 512UL * sb.st_blocks ) );

    // return file to original size
    ftruncate( fd, filesize );
    return( 0 );
}

int main( int argc, char **argv )
{
    for ( int ii = 1; ii < argc; ii++ )
    {
        check( argv[ ii ] );
    }
    return( 0 );
}
(error checking omitted for clarity)
ZFS with compression enabled does not seem to update the file metadata quickly, hence the loop that spins until the block count changes.
When run on a Solaris 11 box against the files asdf (ZFS file system, compression enabled), /tmp/asdf (tmpfs file system), and /var/tmp/asdf (ZFS, no compression), this code produces the following output:
file: asdf
 filesystem: zfs
 blocksize: 131072
 size: 2228224
 blocks: 10
 orig blocks: 1
 disk space: 5120
file: /tmp/asdf
 filesystem: tmpfs
 blocksize: 4096
 size: 69632
 blocks: 136
 orig blocks: 0
 disk space: 69632
file: /var/tmp/asdf
 filesystem: zfs
 blocksize: 131072
 size: 2228224
 blocks: 257
 orig blocks: 1
 disk space: 131584
From that output, it should be obvious that /tmp/asdf is on a file system that does not support sparse files, and /var/tmp/asdf is on a file system that does.
And the plain asdf is on something else entirely, where writing 128 KB of data added only nine 512-byte blocks. From that you can conclude that the file system is doing some kind of compression. Offhand, I suspect it's pretty safe to assume that any file system that supports that kind of native compression also supports sparse files.
But the fastest way to determine whether a file system supports sparse files, given a file name or an open file descriptor, is to call stat() on the file name or fstat() on the file descriptor, get the st_fstype field from struct stat (a Solaris extension), and compare the file system type against a set of file system type strings known to support sparse files.