Common filesystem I/O pitfalls

Filesystem I/O is one of the key elements of the standard library in many programming languages. Most of them derive it from the interfaces provided by the standard C library, potentially wrapped in some portability and/or OO sugar. Most of them also share an impressive set of pitfalls for careless programmers.

In this article, I would like to briefly go over a few more or less common pitfalls that come to mind.

Overwriting the file in-place

I will remember this one as the ‘setuptools screwup’ for quite some time. Consider the following snippet:

if not self.dry_run:
    f = open(target,"w"+mode)

This is the code that setuptools used to install scripts. At first glance, it looks good — and seems to work well, too. However, think of what happens if the file at target exists already.

The obvious answer would be: it is overwritten. The more commonly noticed pitfall here is that the old contents are discarded before the new ones are written. If a user happens to run the script before it is completely written, they will get unexpected results. If the writes fail for some reason, the user will be left with a partially written script.

While in the case of installations this is not very significant (after all, a failure in the middle of an installation is never a good thing, mid-file or not), it becomes very important when dealing with data. Imagine that a program updated your data this way: a failure to add new data (as well as unexpected program termination, power loss…) would instantly erase all the previous data.

However, there is another problem with this concept. In fact, the code does not strictly overwrite the file: it opens the existing file and truncates it in place. This causes more serious issues in a few cases:

  • if the file has other hard links or is a symbolic link, the contents of all the linked files are overwritten,
  • if the file is a named pipe, the program will hang waiting for the other end of the pipe to be opened for reading,
  • other special files may cause other unexpected behavior.

This is exactly what happened in Gentoo. Package-installed script wrappers were symlinked to python-exec, and setuptools used by pip attempted to install new scripts on top of those wrappers. But instead of overwriting the wrappers, it overwrote python-exec and broke everything relying on it.

The lesson is simple: don’t overwrite files like this. The easy way around it is to unlink the file first, ensuring that any links are broken and special files are removed. The more correct way is to create a temporary file (safely), then use the atomic rename() call to replace the target with it (no unlinking needed then). Note, however, that rename() can fail (e.g. with EXDEV when the two paths are on different filesystems), so fallback code with unlink and an explicit copy is still necessary.
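In Python, the safe pattern could be sketched roughly as follows. atomic_write() is a hypothetical helper name; os.replace() is Python’s portable spelling of the atomic rename():

```python
import os
import tempfile


def atomic_write(target, data):
    """Atomically replace `target` with a file containing `data`.

    A minimal sketch: the temporary file is created safely via
    mkstemp() in the same directory as the target, so the final
    rename never crosses a filesystem boundary.
    """
    dirname = os.path.dirname(target) or "."
    fd, tmppath = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.replace() maps to the atomic rename() on POSIX and also
        # overwrites an existing target on Windows.
        os.replace(tmppath, target)
    except BaseException:
        os.unlink(tmppath)
        raise
```

Readers of the target either see the complete old contents or the complete new contents, never a half-written file.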

Path canonicalization

For some reason, many programmers have taken a fancy to canonicalizing paths. While canonicalization itself is not that bad, it’s easy to do it wrongly, and that can cause a major headache. Let’s take a look at the following path:

//foo/../bar
You could say it’s ugly. It has a double slash and a parent directory reference. It almost itches to be canonicalized to the prettier:

/bar
However, this path is not necessarily the same as the original.

For a start, let’s imagine that foo is actually a symbolic link to baz/ooka. In this case, its parent directory referenced by .. is actually /baz, not /, and the obvious canonicalization fails.

Furthermore, double slashes can be meaningful. For example, on Windows a leading double slash (yes, yes, backslashes are used normally) denotes a network resource. In that case, collapsing the double slash would change the path into a local one.

So, if you are really into canonicalization, first make sure you understand all the rules governing your filesystem. On POSIX systems, you really need to take symbolic links into consideration: usually you start with the left-most path component and expand all symlinks recursively (bearing in mind that a link’s target path may itself contain more symlinks). Once all symbolic links are expanded, you can safely interpret the .. components.
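Python happens to expose both flavors: os.path.normpath() does the naive, purely textual canonicalization, while os.path.realpath() expands symlinks first. The demo tree below is hypothetical and mirrors the foo → baz/ooka example above:

```python
import os
import tempfile

# Hypothetical demo tree: <root>/foo is a symlink to baz/ooka,
# so <root>/foo/.. is really <root>/baz, not <root>.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "baz", "ooka"))
os.symlink(os.path.join("baz", "ooka"), os.path.join(root, "foo"))

path = os.path.join(root, "foo", "..", "bar")

# Purely textual canonicalization -- collapses `foo/..` and is wrong here.
print(os.path.normpath(path))   # <root>/bar

# Symlink-aware canonicalization -- expands foo before handling `..`.
print(os.path.realpath(path))   # <root>/baz/bar
```

The two functions disagree precisely because foo is a symlink; on a tree without symlinks they would give the same answer.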

However, if you are going to do that, think of another path:

/usr/lib/foo
If you expand it on a common Gentoo old-style multilib system (where /usr/lib is a symlink to lib64), you’ll get:

/usr/lib64/foo
However, now imagine that the /usr/lib symlink is replaced with a real directory, and the appropriate files are moved into it. At this point, the path recorded by your program is no longer correct, since it relies on a canonicalization done against a different directory structure.

To summarize: think twice before canonicalizing. While it may seem beneficial to have pretty paths or to use real filesystem paths, you may end up discarding the user’s preferences (if I set up a symlink somewhere, I don’t want the program automagically switching to another path). If you really insist on it, consider all the consequences and make sure you do it correctly.

Relying on xattr as an implementation for ACL/caps

Since common C libraries do not provide proper file copying functions, many people attempted to implement their own with better or worse results. While copying the data is a minor problem, preserving the metadata requires a more complex solution.

The simpler programs focused on copying the properties retrieved via stat(): modes, ownership and times. The more correct ones also added support for copying extended attributes (xattrs).

Now, it is a known fact that Linux filesystems implement many metadata extensions on top of extended attributes: ACLs, capabilities, security contexts. Sadly, this causes many people to assume that copying extended attributes is guaranteed to copy all of that extended metadata as well. This is a bad assumption to make, even though it happens to hold on Linux: it will make your program work fine on Linux but silently fail to copy ACLs on other systems.

Therefore: always use explicit APIs, and never rely on implementation details. If you want to work on ACLs, use the ACL API (provided by libacl on Linux). If you want to use capabilities, use the capability API (libcap or libcap-ng).
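For illustration, here is a minimal, hypothetical Python sketch of plain xattr copying; the docstring spells out exactly the caveat above — carrying ACLs and capabilities along is a Linux implementation detail, not a guarantee:

```python
import os


def copy_xattrs(src, dst):
    """Copy extended attributes from src to dst (hypothetical helper).

    On Linux this happens to carry ACLs, capabilities and security
    contexts along, but only as an implementation detail -- portable
    code must copy ACLs through the ACL API (libacl) and capabilities
    through libcap/libcap-ng instead.
    """
    if not hasattr(os, "listxattr"):
        return  # xattr API not available on this platform
    try:
        names = os.listxattr(src)
    except OSError:
        return  # filesystem does not support xattrs
    for name in names:
        try:
            os.setxattr(dst, name, os.getxattr(src, name))
        except OSError:
            # some namespaces (e.g. security.*) may be restricted
            pass
```

A program relying on this helper alone would silently drop ACLs on systems that keep them outside the xattr namespace.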

Using incompatible APIs interchangeably

Now for something less common. There are at least three different file locking mechanisms on Linux: the somewhat portable but non-standardized flock() function, the POSIX lockf() function, and the (also POSIX) fcntl() locking commands. The Linux man page notes that lockf() is commonly implemented on top of fcntl() locks. However, this is not guaranteed, and mixing the two can yield unpredictable results on different systems.

Dealing with the two standard file APIs is even more curious. On one hand, we have the high-level stdio interfaces, including FILE* and DIR*. On the other, we have the fd-oriented interfaces from unistd. Now, POSIX officially supports converting between the two — using fileno(), dirfd(), fdopen() and fdopendir().

However, it should be noted that the result of such a conversion reuses the same underlying file descriptor (rather than duplicating it). Two major points, however:

  1. There is no well-defined way to destroy a FILE* or DIR* without closing the underlying descriptor, nor any guarantee that fclose() or closedir() will work correctly once the descriptor has been closed. Therefore, you should not create more than one FILE* (or DIR*) for a given fd, and if you have one, always close it rather than the fd itself.
  2. The stdio streams are explicitly stateful, buffered and have some extra magic on top (like ungetc()). Once you start using stdio I/O operations on a file, you should not try to use low-level I/O (e.g. read()) or the other way around since the results are pretty much undefined. Supposedly fflush() + rewind() could help but no guarantees.

So, if you want to do I/O, decide whether you want stdio or fd-based I/O. Convert between the two types only when you need to use additional routines not available for the other one; but if those routines involve some kind of content-related operations, avoid using the other type for I/O. If you need to do separate I/O, use dup() to get a clone of the file descriptor.
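The dup() trick can be sketched in Python, where file objects play the role of stdio streams (the file path and contents below are assumed for the demo):

```python
import os
import tempfile

# Set up a demo file with known contents (assumed for this sketch).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

f = open(path, "rb")
# Clone the descriptor *before* the stream is closed; closing the
# stream (like fclose()) always closes the underlying descriptor.
dup_fd = os.dup(f.fileno())
f.close()

# The duplicate stays valid.  Note that dup() shares the file offset
# with the original open file description, so do not interleave
# buffered reads on the stream with raw reads on the clone.
data = os.read(dup_fd, 5)
os.close(dup_fd)
os.unlink(path)
```

Without the dup(), reading from the old descriptor after f.close() would fail with EBADF (or worse, hit an unrelated file that reused the descriptor number).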

To summarize: avoid combining different APIs. If you really insist on doing that, check if it is supported and what are the requirements for doing so. You have to be especially careful not to run into undefined results. And as usual — remember that different systems may implement things differently.

Atomicity of operations

To end with, something commonly known and even more commonly repeated: race conditions caused by non-atomic operations. Long story short, these are all the unexpected results stemming from the assumption that nothing can happen to a file between successive function calls.

I think the most common mistake is the ‘does the file exist?’ problem. It is awfully common for programs to use some wrapper over stat() (like os.path.exists() in Python) to check whether a file exists, and then immediately proceed to open or create it. For example:

import os.path

def do_foo(path):
    if not os.path.exists(path):
        return False

    f = open(path, 'r')

Usually, this will work. However, if the file is removed between the precondition check and the open() call, the program will raise an exception instead of returning False. In practice this can happen, for example, when the file is part of a large directory tree being removed via rm -r.

This bug can easily be fixed by introducing explicit error handling, which also renders the precondition check unnecessary:

import errno

def do_foo(path):
    try:
        f = open(path, 'r')
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise

The new snippet ensures that the file will be opened if it exists at the time of the open() call. If it does not, errno will indicate ENOENT and we return False; for other errors, we re-raise the exception. If the file is removed after open(), the file descriptor remains valid.

We could extend this to a few generic rules:

  1. Always check for errors, even if you asserted that they should not happen. Proper error checks make many (unsafe) precondition checks unnecessary.
  2. Open file descriptors remain valid even when the underlying files are removed; paths can become invalid (i.e. reference non-existent files or directories) or start pointing to another file (created later using the same path). So, prefer opening the file as early as possible, and prefer fstat(), fchown(), futimes()… over stat(), chown(), utimes()…
  3. Open directory descriptors will continue to reference the same directory even when the underlying path is removed or replaced; paths may start referencing another directory. When performing operations on multiple files in a directory, prefer opening the directory and using openat(), unlinkat()… However, note that the directory can still be removed and therefore further calls may return ENOENT.
  4. If you need to atomically overwrite a file with another one, use rename(). To atomically create a new file, use open() with O_EXCL. Usually, you will want to use the latter to create a temporary file, then the former to replace the actual file with it.
  5. If you need to use temporary files, use mkstemp() or mkdtemp() to create them securely. The former can be used when you only need an open fd (the file is removed immediately), the latter if you need visible files. If you want to use tmpnam(), put it in a loop and try opening with O_EXCL to ensure you do not accidentally overwrite something.
  6. When you can’t guarantee atomicity, use locks to prevent simultaneous operations. For file operations, you can lock the file in question. For directory operations, you can create and lock lock files (however, do not rely on existence of lock files alone). Note though that the POSIX locks are non-mandatory — i.e. only prevent other programs from acquiring the lock explicitly but do not block them from performing I/O ignoring the locks.
  7. Think about the order of operations. If you create a world-readable file, and afterwards chmod() it, it is possible for another program to open it before the chmod() and retain the open handle while secure data is being written. Instead, restrict the access via mode parameter of open() (or umask()).
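As a sketch of rule 6, advisory locking with lockf() looks roughly like this in Python (work.lock is a hypothetical lock file name):

```python
import fcntl
import os

# Open (or create) the lock file with restrictive permissions; per
# rule 7, the mode is set at open() time rather than chmod()ed later.
fd = os.open("work.lock", os.O_RDWR | os.O_CREAT, 0o600)
try:
    fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until the lock is acquired
    # ... perform the non-atomic operations here ...
finally:
    # Closing the descriptor releases the lock as well.  Do not unlink
    # the lock file afterwards, or another process may end up locking
    # a stale copy while a third one recreates and locks a new file.
    os.close(fd)
```

Remember that this lock is advisory: it only coordinates processes that also take the lock, and does nothing to stop a program that ignores it.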