Protecting Images in a Web Application

We maintain an image database in a large-scale web application. The high-resolution JPEGs are large (> 15 MB) and must not be downloadable in any way. We now need to give clients access to image details (crops), for example through a zoom function: the client should see a downscaled version of the image and be able to select an area to view at full scale (100%).

How can this be implemented most efficiently (in traffic and CPU)? We are open to any solution as long as the high-resolution image file remains protected. The application is developed in C# on .NET Framework 3.5.

Any ideas? Thanks in advance!

+6
design web-applications image protection media
7 answers

The first thing to do is compress and watermark the images before uploading them to the server, then serve those to users. This needs the least CPU, since the files are static.

I personally would then pre-cut crops of the full-size versions and store them alongside the compressed ones. That way the client can get an idea of the full image (albeit compressed and watermarked) alongside a small sample of the full hi-res version.

You will probably only get away with manipulating images on the fly if you have few clients and a very beefy server.

+1

I would use a low-resolution version of the image in the browser, and have a client-side component send a request to the server, which crops out the selection and sends it back at high resolution.
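The only arithmetic the server needs for this is mapping the rectangle the user drew on the preview back into full-resolution pixel coordinates. A minimal sketch in Python (the function name and parameters are made up for illustration; the same few lines translate directly to C#):

```python
def map_selection_to_source(sel_x, sel_y, sel_w, sel_h,
                            preview_w, preview_h,
                            source_w, source_h):
    """Map a rectangle selected on the downscaled preview to
    pixel coordinates in the full-resolution source image."""
    scale_x = source_w / preview_w
    scale_y = source_h / preview_h
    return (int(sel_x * scale_x), int(sel_y * scale_y),
            int(sel_w * scale_x), int(sel_h * scale_y))

# A 100x80 selection at (50, 40) on an 800x600 preview of a
# 4000x3000 original maps to the region (250, 200, 500, 400).
print(map_selection_to_source(50, 40, 100, 80, 800, 600, 4000, 3000))
```

The resulting rectangle is what you hand to your cropping code on the server; the full-size file itself never has to leave the server.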

0

As I tell my father (who does not understand how the Internet works): if you can see it on a web page, you can save it; it's just a question of how.

0

There is an Ajax version of Deep Zoom that you may like; see the Seadragon gallery:

http://livelabs.com/seadragon-ajax/gallery/

The user is presented with a low-resolution image and can then zoom in on any part they like.
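Deep Zoom style viewers work from a pyramid of pre-rendered levels: the top level is the full image, and each level below it halves the previous one until a 1x1 image is reached, so the client only ever downloads tiles at the zoom level it is showing. A rough sketch of the level computation (illustrative Python, not Seadragon's actual code):

```python
import math

def deep_zoom_levels(width, height):
    """Return (level, w, h) tuples for a Deep Zoom style pyramid:
    the highest level is the full image, and each lower level
    halves the previous one (rounding up) down to 1x1."""
    max_level = math.ceil(math.log2(max(width, height)))
    levels = []
    for level in range(max_level, -1, -1):
        scale = 2 ** (max_level - level)
        levels.append((level,
                       max(1, math.ceil(width / scale)),
                       max(1, math.ceil(height / scale))))
    return levels

# A 4000x3000 original produces 13 levels, from (12, 4000, 3000)
# at the top down to (0, 1, 1).
print(deep_zoom_levels(4000, 3000)[0])
```

Note that the full-resolution tiles still have to exist on the server, so this improves bandwidth and user experience rather than protection.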

0

First, I would pre-render watermarked versions of all the full-size images, saving them in a compressed format, along with pre-rendered low-resolution versions.

I would use the low-resolution images for browsing, then serve the watermarked high-resolution image so the user can fine-tune their crop selection.

On confirmation, I would have a second, dedicated image-processing server crop the un-watermarked image and hand the crop to the web server, which sends it to the client.

That said, it would be possible to write a client-side script that collects the cropped pieces and stitches them together into a full-size, un-watermarked copy.
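To see why that caveat matters, here is a minimal illustration (Python, with tiles as nested lists standing in for pixel data) of how individually served crops reassemble into the full image:

```python
def stitch_tiles(tiles):
    """Reassemble a 2D grid of tiles (each tile a list of pixel
    rows) into one full image, illustrating why serving every
    crop effectively serves the whole image."""
    full = []
    for tile_row in tiles:
        # All tiles in a grid row share the same height.
        for y in range(len(tile_row[0])):
            row = []
            for tile in tile_row:
                row.extend(tile[y])
            full.append(row)
    return full

# Four 1x1 tiles reassemble into the 2x2 original.
print(stitch_tiles([[[[1]], [[2]]],
                    [[[3]], [[4]]]]))
```

Rate-limiting crop requests per user makes this harvesting inconvenient, but nothing server-side can make it impossible.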

0

"should not be available for download in any way"

does not match:

"The client should see a downscaled version of the image and be able to select the area to view at full scale (100%)."

The moment you allow every area of the image to be viewed at full size, the entire image can be stitched back together, so you are effectively (if inconveniently) making the full-size image available.

None of this helps you achieve your goal.

What I would do is serve a 72dpi watermarked copy to use when selecting the image area to download. You could scale this to a percentage of the original if screen size is a problem. Have the user select the top-left and bottom-right coordinates, then use something like ImageMagick to copy that area out of the original and serve it to the user.

If you need to save resources, you could constrain users to a predefined grid: the first time grid coordinate 14:11 is requested, image_1411_crop.jpg is written to the file system, and the next time that cell is selected the file already exists.
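That render-on-first-request grid cache might look like the sketch below (Python for illustration; the filename scheme and the render callback are placeholders, and in the actual setup the callback would shell out to ImageMagick):

```python
import os

def cached_crop_path(cache_dir, image_id, grid_x, grid_y):
    """Build the cache filename for a grid cell, e.g. cell
    (14, 11) of 'image' -> image_1411_crop.jpg."""
    return os.path.join(cache_dir,
                        "%s_%02d%02d_crop.jpg" % (image_id, grid_x, grid_y))

def get_crop(cache_dir, image_id, grid_x, grid_y, render):
    """Return the path to the cached crop, invoking the render
    callback only the first time a given cell is requested."""
    path = cached_crop_path(cache_dir, image_id, grid_x, grid_y)
    if not os.path.exists(path):
        render(path)  # e.g. ImageMagick cropping the original to `path`
    return path
```

After the first user hits a cell, every later request for it is a plain static-file send.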

Edit: having read some of your comments on other answers...

No matter how you generate and cache the crops, you will use the same bandwidth and traffic: a 300dpi JPEG is a 300dpi JPEG whether it was just generated or was sitting in the file system.

You need to work out whether it is CPU or disk space you need to save. If you have a million images and only forty users, you can afford to hit the CPU. If you have forty gigs of images and a million users, spend the disk space.

-1

I would use S3 for storage. Create two buckets, and hand out URLs into the protected-images bucket only after you have authorized the user to download. S3 URLs can be signed so that they expire.
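For illustration, a sketch of the expiring-URL signing S3 offered in this era (query-string authentication, signature version 2; current SDKs use version 4 presigned URLs instead). The credentials and bucket names are dummies:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def signed_s3_url(access_key, secret_key, bucket, key, expires):
    """Build a time-limited S3 GET URL using the classic
    query-string authentication scheme (signature version 2)."""
    # StringToSign = verb, content-md5, content-type, expiry, resource.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")
```

Your C# application would generate such a URL only after authorizing the user; once the Expires timestamp passes, S3 rejects the request.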

With 15 MB images, you will probably find you need to pre-generate the scaled/cropped versions ahead of time.

I would put some kind of watermark on everything besides the source file (as, for example, Google Maps does).

[Edit: added Deep Zoom for zooming]

Check out Silverlight Deep Zoom to handle panning and zooming (demo). They even have a utility for generating all the cropped tile images.

-2
