Actually, the pointer file does not contain any information about the URL - it just stores the oid (Object ID) of the blob, which is simply its sha256 hash.
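To see what that means in practice, here is a minimal sketch that parses a pointer file into its fields; the sample pointer content (the oid and size values) is made up for illustration:

```python
# Sample Git LFS pointer file content (values are hypothetical).
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
"""

def parse_pointer(text):
    """Return the pointer's key/value pairs as a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_pointer(POINTER)
oid = fields["oid"].split(":", 1)[1]  # strip the "sha256:" prefix
print(oid, fields["size"])
```

Note that there is no URL anywhere in the pointer - only the checksum and size.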
However, you can achieve what you are looking for using the oid and the LFS API, which lets you download a specific oid via a batch request.
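A batch request is a POST of a small JSON document to the server's `objects/batch` endpoint. A sketch of building that payload, assuming a hypothetical oid and size (sending it, of course, requires a real LFS server):

```python
import json

def batch_request_body(oid, size):
    """Build the JSON body POSTed to <lfs-endpoint>/objects/batch."""
    return {
        "operation": "download",
        "transfers": ["basic"],
        "objects": [{"oid": oid, "size": size}],
    }

# Hypothetical oid/size, e.g. taken from a pointer file.
body = json.dumps(batch_request_body("4d7a2146...", 12345))
print(body)
# POST this with Accept/Content-Type set to application/vnd.git-lfs+json
```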
You can specify which endpoint is used to store your blobs in .git/config, which accepts a non-default lfsurl setting, for example:
```
[remote "origin"]
    url = https://...
    fetch = +refs/heads/*:refs/remotes/origin/*
    lfsurl = "https://..."
```
or, as a separate section:
```
[lfs]
    url = "https://..."
```
If there is no lfsurl tag, the default GitHub endpoint derived from the remote URL is used (which, in turn, can redirect to S3):
```
Git remote: https://git-server.com/user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs

Git remote: git@git-server.com:user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
```
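The mapping above can be sketched as a small helper that derives the default endpoint from a remote URL; it only handles the two common remote forms shown, and the host/path values are placeholders:

```python
def lfs_endpoint(remote):
    """Map a git remote URL to the default LFS endpoint."""
    if remote.startswith("git@"):
        # git@host:user/repo.git -> https://host/user/repo.git
        host, _, path = remote[len("git@"):].partition(":")
        remote = f"https://{host}/{path}"
    return remote + "/info/lfs"

print(lfs_endpoint("https://git-server.com/user/repo.git"))
print(lfs_endpoint("git@git-server.com:user/repo.git"))
# both print https://git-server.com/user/repo.git/info/lfs
```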
But you should work against that endpoint rather than S3 directly, as the GitHub redirect response probably also contains some authentication information.
Check the batch response documentation to see the structure of the response - you basically need to parse the relevant parts and make your own call to download the blobs (which is what git lfs would do in your place during checkout).
A typical response (taken from the document I referred to) would look something like this:
```
{
  "_links": {
    "download": {
      "href": "https://storage-server.com/OID",
      "header": {
        "Authorization": "Basic ..."
      }
    }
  }
}
```
So, you would GET https://storage-server.com/OID with the headers returned in the batch response - the last step is to rename the downloaded blob (its name will usually just be the oid, as git lfs uses checksum-based storage). The pointer file is tracked under the original resource name, so just rename the blob to that.
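The final step can be sketched as follows, assuming the sample `_links` response above; the actual fetch and rename are left commented out since they need a reachable storage server and a real target filename:

```python
import urllib.request

# Mirrors the sample batch response above (values are placeholders).
RESPONSE = {
    "_links": {
        "download": {
            "href": "https://storage-server.com/OID",
            "header": {"Authorization": "Basic ..."},
        }
    }
}

download = RESPONSE["_links"]["download"]
req = urllib.request.Request(download["href"], headers=download["header"])
# blob = urllib.request.urlopen(req).read()   # fetch the blob from storage
# with open("original-name.ext", "wb") as f:  # name taken from the pointer file's path
#     f.write(blob)
print(req.full_url, req.get_header("Authorization"))
```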