Serve git-lfs files from an Express public folder

I am using node.js (express) on Heroku, where the slug size is limited to 300 MB.

To keep the slug small, I would like to use git-lfs to track my Express public folder.

This way all my assets (images, videos, ...) are uploaded to the LFS store (e.g. AWS S3) and git-lfs leaves a pointer file in their place (maybe with an S3 URL in it?).

When serving files from the public folder, I want Express to redirect to the remote S3 file instead.

My problem is that I do not know how to extract the URL from the contents of a pointer file...

    app.use('/public/:pointerfile', function (req, res, next) {
      var file = req.params.pointerfile;
      // Read as utf8 so we get a string rather than a Buffer
      fs.readFile('public/' + file, 'utf8', function (er, data) {
        if (er) return next(er);
        var url = retrieveUrl(data); // <-- HELP ME HERE with the retrieveUrl function
        res.redirect(url);
      });
    });

Also, don't you think it would be too expensive for Express to read and parse potentially every public/* file? Maybe I can cache the URL after parsing?
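Something like this in-memory cache is what I have in mind (just a sketch; retrieveUrl is still the missing piece, and if the URLs are presigned S3 links a real cache would also need to expire them):

    // Hypothetical cache keyed by pointer-file name, so each pointer
    // file is only read and parsed once.
    var urlCache = {};

    app.use('/public/:pointerfile', function (req, res, next) {
      var file = req.params.pointerfile;
      if (urlCache[file]) return res.redirect(urlCache[file]);
      fs.readFile('public/' + file, 'utf8', function (er, data) {
        if (er) return next(er);
        var url = retrieveUrl(data); // still the missing piece
        urlCache[file] = url;
        res.redirect(url);
      });
    });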

+8
amazon-s3 heroku express git-lfs
2 answers

I finally wrote some middleware for this: express-lfs, with a demo here: https://expresslfs.herokuapp.com

There you can download a 400 MB file as proof.

See here: https://github.com/goodenough/express-lfs#usage

PS: Thanks @fundeldman for the good advice in his answer ;)

0

Actually, the pointer file does not contain any URL information (as you can see in the git-lfs pointer file spec) - it just stores an oid (object ID) for the blob, which is simply its sha256.
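For reference, a pointer file is just a few lines of plain text (the oid and size values here are made up):

    version https://git-lfs.github.com/spec/v1
    oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
    size 12345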

However, you can achieve what you are looking for using the oid and the LFS API, which lets you download a specific oid with a batch request.
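A download batch request is a POST to <endpoint>/objects/batch with the git-lfs JSON content type. Here is a minimal Node sketch (batchDownload is my own hypothetical helper, and any authentication your server needs is left out):

    var https = require('https');

    function batchDownload(endpoint, oid, size, cb) {
      var body = JSON.stringify({
        operation: 'download',
        objects: [{ oid: oid, size: size }]
      });
      var req = https.request(endpoint + '/objects/batch', {
        method: 'POST',
        headers: {
          'Accept': 'application/vnd.git-lfs+json',
          'Content-Type': 'application/vnd.git-lfs+json',
          'Content-Length': Buffer.byteLength(body)
        }
      }, function (res) {
        var raw = '';
        res.setEncoding('utf8');
        res.on('data', function (chunk) { raw += chunk; });
        res.on('end', function () {
          try { cb(null, JSON.parse(raw)); } catch (e) { cb(e); }
        });
      });
      req.on('error', cb);
      req.end(body);
    }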

You can see which endpoint is used to store your blobs in .git/config, which can contain a non-default lfsurl entry, for example:

 [remote "origin"] url = https://... fetch = +refs/heads/*:refs/remotes/origin/* lfsurl = "https://..." 

or a separate section:

    [lfs]
        url = "https://..."

If there is no lfsurl entry, you are using GitHub's default endpoint (which, in turn, may redirect to S3):

    Git remote: https://git-server.com/user/repo.git
    Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs

    Git remote: git@git-server.com:user/repo.git
    Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs

But you should work against that endpoint and not against S3 directly, as the redirect response from GitHub probably also contains some authentication information.

Check the batch response docs to see the structure of the response - you basically need to parse out the relevant parts and make your own call to fetch the blobs (which is what git-lfs would otherwise do for you on checkout).

A typical response (taken from the docs I referred to) looks something like this:

 { "_links": { "download": { "href": "https://storage-server.com/OID", "header": { "Authorization": "Basic ...", } } } } 

So you would GET https://storage-server.com/OID with the header values returned in the batch response - the last step is to rename the returned blob (its name will usually just be the oid, since git-lfs uses checksum-based storage) - the pointer file contains the original resource name, so just rename the blob to that.
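Putting it all together, a rough sketch of the middleware (reusing the hypothetical batchDownload helper from above; LFS_ENDPOINT is a placeholder, and the response parsing assumes either the legacy _links shape or the newer objects[].actions one):

    var fs = require('fs');

    var LFS_ENDPOINT = 'https://git-server.com/user/repo.git/info/lfs'; // placeholder

    app.use('/public/:pointerfile', function (req, res, next) {
      fs.readFile('public/' + req.params.pointerfile, 'utf8', function (er, data) {
        if (er) return next(er);
        // Parse the oid and size out of the pointer file
        var oid = /oid sha256:([0-9a-f]{64})/.exec(data)[1];
        var size = parseInt(/size (\d+)/.exec(data)[1], 10);
        batchDownload(LFS_ENDPOINT, oid, size, function (er, body) {
          if (er) return next(er);
          var download = body._links
            ? body._links.download
            : body.objects[0].actions.download;
          // If the download URL requires the returned auth headers, a plain
          // redirect may not be enough - you might have to proxy instead.
          res.redirect(download.href);
        });
      });
    });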

+10
