Two requirements drive the answer to your question:
- Are multiple application servers going to read binary files from a single database server?
- Do you have a database connection that can stream binary reads and writes?
Multiple application servers retrieving binary files from a single database server will seriously hurt your scalability. Bear in mind that database connections usually, and necessarily, come from a much smaller pool than the application server's request-handling pool, and that a large binary occupies its connection for the whole time it takes to travel from the database server to the application server. The database server will most likely end up queueing requests because its connection pool is tied up delivering binaries.
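As a minimal sketch of that mismatch (the HikariCP pool, the JDBC URL, and the numbers below are assumptions for illustration, not from the question):

```java
import com.zaxxer.hikari.HikariConfig;

public class PoolSizingSketch {
    public static void main(String[] args) {
        // Hypothetical numbers: the app server can service up to 200 concurrent
        // requests (e.g. a typical servlet container thread pool)...
        int requestThreads = 200;

        // ...but the database connection pool is deliberately much smaller.
        HikariConfig dbPool = new HikariConfig();
        dbPool.setJdbcUrl("jdbc:postgresql://db-host/files"); // assumed URL
        dbPool.setMaximumPoolSize(10);

        // If every request streams a large binary through a pooled connection,
        // each of the 10 connections is held for the full transfer, and the
        // remaining ~190 request threads queue up waiting for a free connection.
        System.out.printf("Request threads: %d, DB connections: %d%n",
                requestThreads, dbPool.getMaximumPoolSize());
    }
}
```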
Streaming matters so that a file is never held entirely in the server's memory while it is being read or written (this is where @Andrew's answer about SQL Server 2008 FILESTREAM comes in). Imagine a file several gigabytes in size: read completely into memory, it is enough to crash many application servers that simply do not have the physical RAM to hold it. If you do not have streaming database connections, then storing files in the database is really only safe if you cap the file size so that your application server is allocated at least maximum file size * number of requests it can service concurrently, plus some additional overhead; for example, a 100 MB cap with 50 concurrent requests already implies roughly 5 GB of headroom.
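With plain JDBC, for instance, the point is to move bytes through a small buffer rather than materialize the whole file as a byte array. This is a hedged sketch, assuming a hypothetical `files` table with `id` and `data` (BLOB) columns; whether the driver truly streams end to end still depends on the driver and column type:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobStreaming {

    /** Streams one stored file to the given output (e.g. the HTTP response)
     *  without ever holding the whole blob in memory. */
    public static void readTo(Connection conn, long fileId, OutputStream out)
            throws SQLException, IOException {
        try (PreparedStatement ps =
                     conn.prepareStatement("SELECT data FROM files WHERE id = ?")) {
            ps.setLong(1, fileId);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    try (InputStream in = rs.getBinaryStream("data")) {
                        in.transferTo(out); // copies via a small internal buffer
                    }
                }
            }
        }
    }

    /** Streams an upload into the database; the driver reads from `in`
     *  incrementally instead of requiring a byte[] of the full file. */
    public static void write(Connection conn, long fileId, InputStream in, long length)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO files (id, data) VALUES (?, ?)")) {
            ps.setLong(1, fileId);
            ps.setBinaryStream(2, in, length);
            ps.executeUpdate();
        }
    }
}
```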
Now suppose you do not put the files in the database. Most operating systems are very good at caching frequently accessed files, so you get that benefit from the start. Web servers are also quite good at emitting the right response headers for files, such as MIME type, content length, and ETags, which you would otherwise end up coding yourself. The real problem is replication between servers, but most application servers handle it reasonably well with streaming HTTP reads and writes, and another answer covered keeping the database and file system synchronized for backups.
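To show how little of that you have to write yourself when the files live on disk, here is a rough servlet sketch (the `FileServlet` name and `/var/data/files` directory are assumptions; use the `jakarta.servlet` packages on newer containers). The container and OS page cache do the heavy lifting while the handler only fills in the headers mentioned above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FileServlet extends HttpServlet {

    private static final Path ROOT = Paths.get("/var/data/files"); // assumed location

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // e.g. GET /files/report.pdf -> pathInfo = "/report.pdf"
        Path file = ROOT.resolve(req.getPathInfo().substring(1)).normalize();
        if (!file.startsWith(ROOT) || !Files.isRegularFile(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }

        // Headers the web/app server stack largely derives for you:
        // MIME type, content length, and an ETag for conditional requests.
        String mime = getServletContext().getMimeType(file.getFileName().toString());
        resp.setContentType(mime != null ? mime : "application/octet-stream");
        resp.setContentLengthLong(Files.size(file));
        String etag = "\"" + Files.size(file) + "-"
                + Files.getLastModifiedTime(file).toMillis() + "\"";
        resp.setHeader("ETag", etag);
        if (etag.equals(req.getHeader("If-None-Match"))) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }

        // Stream straight from disk; the OS caches frequently requested files.
        Files.copy(file, resp.getOutputStream());
    }
}
```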