The ability to mount a remote filesystem and read and write to it is something everyone has wanted to do for a long, long time. SMB was an ok solution in the 90s, at least for users with minimal needs. Another dinosaur that we all love to hate is NFS. Despite its synchronization and locking issues, NFS actually works pretty well. But it’s a real headache when it comes to firewalls and certainly is not suitable for use over the open internet. Sysadmins will always tell you “don’t use NFS, it sucks,” but when you ask for a better alternative they just shrug and hastily excuse themselves from the conversation.
More recently, it seemed like WebDAV would be the answer. It uses a standard protocol (HTTP) and ties in nicely with the vision for a read/write web. It works perfectly with firewalls and can be encrypted with HTTPS. All major operating systems have out-of-the-box support for mounting WebDAV shares. Most publicly readable svn repositories for open source projects, such as Rails plugins, use WebDAV.
And yet, this technology has not come into its own. It's still a huge headache to configure. Apache requires several add-on modules, an obscure syntax that tends to break in mysterious ways, and user management that means a combination of editing your Apache config and wrangling the curmudgeonly htpasswd command line tool. Nginx has a DAV module, but it doesn't actually work. (I checked the source: it's missing major portions of the protocol, like PROPFIND and OPTIONS.) Apache is the only real option for serving DAV, and who wants to run Apache anymore?
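For a sense of what that looks like, here's a rough sketch of a minimal mod_dav setup (the module paths, lock database location, and password file location are placeholders, and the details vary by Apache version and distro):

# mod_dav plus its filesystem backend
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so

# mod_dav_fs needs somewhere to keep its lock database
DavLockDB /var/lock/apache2/DavLock

<Location /repo>
  Dav On
  AuthType Basic
  AuthName "repo"
  AuthUserFile /etc/apache2/dav.htpasswd
  Require valid-user
</Location>

...plus a round of htpasswd (with -c to create /etc/apache2/dav.htpasswd the first time) for every user you want to let in.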
And that isn't even counting the security issues of having write access to your source repositories controlled by the webserver. For example, a simple injection vulnerability in some PHP script running on another site hosted on the same server could grant the attacker immediate, full write access to the source repositories hosted there. (Some solutions: run a separate Apache process as a different user to serve DAV, or use virtualization to make a separate server instance for your source repos. Still, none of this is what I would call a cakewalk.)
Hmmm. Maybe the fact that people have been trying to build remote filesystem protocols for the last two decades and have yet to make a good one means something. Perhaps mounting remote filesystems is a fork in the tree of technology that should be abandoned. Great, but then what? We still need to host source control repos, share files, and manage content; and all of these things basically require filesystem (or filesystem-like) operations. The fact that there is such a profusion of protocols supported by FUSE proves the deep user thirst for remote filesystem mounting.
If a remote filesystem protocol is ever going to succeed, it needs to be simple. WebDAV was a good try but it’s not simple enough. Could we make another pass, again using HTTP, but taking some of the lessons learned from REST and further paring down the features and capabilities?
So let’s forget about permissions, symbolic links, and resource forks. We just want to get the list of files in a folder, and to read, write, and delete those files. Sound familiar? It’s CRUD.
How about:
GET http://example.com/repo/
<entries>
  <folder>app</folder>
  <file size="307">Rakefile</file>
  <file size="8819">README</file>
</entries>
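The rest of the operations would follow the same shape. Purely as a sketch, using the paths from the listing above:

GET http://example.com/repo/Rakefile
(returns the raw contents of the file)

PUT http://example.com/repo/Rakefile
(the request body becomes the new contents, creating the file if it doesn't already exist)

DELETE http://example.com/repo/Rakefile
(removes the file)

List, read, write, delete: the CRUD operations from above, mapped straight onto HTTP verbs.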
Many things would go under the axe in this hypothetical protocol. Any kind of linking, for example. Hard and symbolic links, certainly, but also those old standbys, “.” and “..”. Both of these values can and should be determined from the path name.
There are two types of objects we're operating on: folders and files. Filesystem tools often blur the line between these. In the unix shell, for example, rm -rf <dir> will delete a directory and its contents, but rm <dir> will not delete an empty directory. For that you need rmdir. But rmdir won't delete the directory if it's not empty. Consistency is overrated anyway, right?
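The exact error messages vary by platform, but the dance looks roughly like this:

$ rm mydir
(refuses: mydir is a directory)
$ rmdir mydir
(refuses unless mydir is empty)
$ rm -rf mydir
(deletes mydir and everything in it)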
I'd be in favor of further paring down the operations by removing the ability to do any direct operations on folders, instead creating and removing them dynamically based on pathnames. So if an operation tries to write to a file in a folder that doesn't exist, it and all the parent folders will be created: the equivalent of mkdir -p. Once the last file is removed from a folder, it no longer exists. Renaming a folder would become an expensive operation on a large tree, since what you're really doing is renaming the paths of every contained file; but this may be a fair trade-off. Empty folders would not be possible. (I detect a hint of git's philosophy creeping into my reasoning on this part.)
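Concretely, in the hypothetical protocol above, folder handling might reduce to something like this (just a sketch):

PUT http://example.com/repo/app/models/user.rb
(implicitly creates app/ and app/models/ if they don't exist, then writes the file)

DELETE http://example.com/repo/app/models/user.rb
(removes the file; if it was the last one in app/models/, that folder simply stops appearing in listings)

No MKCOL, no rmdir, no special cases for empty directories.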
There are a few other types of operations that filesystems are good at, but stateless web requests aren't. Two that are likely to affect developers are tailing a log and appending to a log. If you have a log which is gigabytes in size, rewriting it every time you want to append a line is impractical, and so is fetching the whole thing continually as you watch for recently appended lines.
In this case I'd argue that this approach to logging should be abandoned, in favor of one that treats each log entry as a discrete record. I've been experimenting with this on Heroku by storing system logs in a database table. There are some challenges here in managing the large datasets produced; but dealing with huge logfiles isn't fun either, it's just a problem with more prior art (e.g. logrotate).
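If you squint, appends and tails turn into plain creates and queries. A hypothetical shape for it, in the spirit of the protocol sketch above (the URL and the tail parameter are made up):

POST http://example.com/logs/production
(the request body is a single log entry, stored as a new record)

GET http://example.com/logs/production?tail=100
(returns the hundred most recent entries)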
One final point: locks. This has been the bane of NFS and I think DAV has some unpleasant complexity in this department as well. I'm going to once again put this item up on the chopping block. Making remote filesystems is a hard problem, and locking is a hard problem: I don't see any reason to combine them and make things worse. Just because we're used to being able to get locking from the filesystem doesn't mean things should stay that way. Separation of concerns.
Maybe what I’m describing here is a document-oriented database. Since there’s progress being made in that department, maybe we should all just sit tight with our crufty Apache WebDAV setups and wait for the day when filesystems are no longer relevant.
Or maybe hierarchical filesystems are soon to be outmoded, and what we really want is a document database with lookup by tag. So when you run your Rails app it doesn't look in app/models/*.rb for your models, but instead just queries the document database for all documents with the tags “ruby”, “model”, and a tag for the project name.
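In that world the model lookup might be nothing more than a tag query, something like this (with “myapp” standing in for the project name):

GET http://example.com/documents?tags=ruby,model,myapp
(returns every document tagged as a Ruby model belonging to the project)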
Enough daydreaming for one day.