Storing images in the database itself is often not recommended
See the discussions Is it better to store images in a BLOB or just a URL? and Files - to databases or not?. Keep in mind that these questions and their answers are not specific to PostgreSQL.
There are some PostgreSQL-specific wrinkles, though. PostgreSQL has no tools for incremental dumps*, so if you use pg_dump you must re-dump all of the image data with every backup. The storage space and transfer time can become a real burden, especially since you should retain backups for several weeks, not just the single most recent one.
If the images are big, or numerous, consider storing them in the file system instead, unless you need ACID-compliant transactional access to them. Store the file names in the database, or simply derive each file name by convention from the row's primary key. That way you can easily take incremental backups of the image directory, managing it separately from the database itself.
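As an illustration of such a key-based convention (a sketch only; the helper name and directory layout are my own assumptions, not part of any library), a path can be derived deterministically from the primary key, with a hash prefix to shard the directory tree so no single directory holds millions of files:

```ruby
require 'digest'

# Hypothetical helper: derive a sharded filesystem path from a row's
# primary key. The two hash-prefix directories spread files evenly.
def image_path(id, root: 'images', ext: 'jpg')
  digest = Digest::SHA1.hexdigest(id.to_s)
  # e.g. id 42 -> images/<xx>/<yy>/42.jpg, where xx and yy are the
  # first two hex-digit pairs of SHA1("42")
  File.join(root, digest[0, 2], digest[2, 2], "#{id}.#{ext}")
end
```

Because the path is a pure function of the key, nothing but the key needs to be stored in the database.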
If you keep the images in the file system you cannot easily† access them over a PostgreSQL database connection. OTOH, you can serve them over HTTP straight from the file system far more efficiently than if you first had to fetch them from the database. In particular, you can use sendfile() from Rails when your images are on the FS, but not when they are in the database.
If you really have to store images in the database
... then it is conceptually the same as in .NET, but the exact details depend on the PostgreSQL driver you are using, which you did not specify.
There are two ways to do this:
- Store and load bytea, as you asked about; or
- Use the built-in large object support, which is often preferable to bytea.
For small images, where bytea is fine:
- Read the image data from the client into a local variable.
- Insert it into the database, passing the variable as a bytea parameter. Assuming you are using the ruby-pg driver, its test_binary_values example will help.
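If you ever need to build the bytea value yourself, PostgreSQL's hex input format is the least error-prone. A minimal sketch (the helper name is mine, and the commented exec_params call assumes a PG::Connection named conn and a table named images; with ruby-pg you would normally prefer binding the raw bytes as a binary-format parameter instead):

```ruby
# Encode raw bytes as PostgreSQL's hex-format bytea input syntax:
# a literal backslash-x followed by two hex digits per byte.
def to_bytea_hex(data)
  "\\x" + data.unpack1('H*')
end

# The resulting text can be bound as an ordinary parameter, e.g.:
#   conn.exec_params('INSERT INTO images(data) VALUES ($1::bytea)',
#                    [to_bytea_hex(File.binread('photo.jpg'))])
```

The hex format avoids all the quoting pitfalls of the older escape format, at the cost of doubling the data size on the wire.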
For larger images (more than a few megabytes) use lo instead:
Do not use bytea for large images. The theoretical limit per value is 1 GB, but in practice handling a value takes roughly three times its size in RAM (or more), so avoid bytea for large images and other large binary data.
PostgreSQL has a dedicated lo (large object) type for this. On 9.1 and above:
CREATE EXTENSION lo;
CREATE TABLE some_images(id serial primary key, image_data lo not null);
... then use lo_import to stream the data in from a temporary file on disk, so you never need to hold the whole image in RAM at once.
The ruby-pg driver provides wrappers for lo_create, lo_open, etc., and also exposes lo_import for local file access. See this useful example.
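For example, from psql (a sketch with illustrative table and file names; \lo_import runs on the client, and psql records the new large object's OID in its :LASTOID variable):

```sql
-- Client-side import; the file only needs to be readable by the client
\lo_import '/tmp/photo.jpg'
-- psql now has the new object's OID in :LASTOID
INSERT INTO some_images(image_data) VALUES (:LASTOID);
```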
Use large objects, not bytea .
* Incremental backups are possible with streaming replication, PITR / WAL archiving, etc., but again, a growing database size can complicate things like WAL management. In any case, unless you are a specialist (or feeling brave) you should still take pg_dump backups rather than relying on replication and PITR alone. Putting images in your database will also significantly slow down pg_basebackup by inflating the database size, which can matter in failover scenarios.
† adminpack offers local file access over a Pg connection for superusers, but your webapp user should never have superuser rights, or even ownership of the tables it works with. Read and write the files over a separate secure channel instead, for example WebDAV.