Why do large files in my Docker image get pushed every time, even when no changes have been made?

I have a Docker image that I build using a Dockerfile.

The Dockerfile contains several COPY statements. One of them copies a large file, about 120 MB in size.

It is written in the form COPY myfile /data/

When I do a docker push to a remote registry, it takes a very long time every time, despite the fact that this file has not changed. It appears to be uploading a little over 120 MB each time.
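One way to check whether the layer itself is actually changing between builds is to build twice and compare the layer digests (a diagnostic sketch; the image name myimage is a placeholder):

    # Build the same context twice without changing anything.
    docker build -t myimage:one .
    docker build -t myimage:two .

    # If the build cache worked, both tags list identical layer digests.
    docker inspect --format '{{json .RootFS.Layers}}' myimage:one
    docker inspect --format '{{json .RootFS.Layers}}' myimage:two

    # docker history shows the size of each layer, including the 120 MB COPY layer.
    docker history myimage:two

If the digests differ between the two builds, the registry rightly sees a new layer and re-uploads it; if they are identical, the repeated upload points at the push side rather than the build.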

Is there something I don't understand about how the algorithm determines whether files have changed?

And how does docker build handle wildcards? i.e.

COPY localdir/* /remotedir/
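For caching purposes, the wildcard form still produces a single layer whose checksum covers every matched file, so changing any one of them re-creates (and re-pushes) the whole layer. A small sketch of what a wildcard COPY actually puts in the image (all file and image names here are made up for illustration):

    # Set up a test context with a file and a subdirectory.
    mkdir -p localdir/subdir
    echo kernel > localdir/vmlinuz
    echo initrd > localdir/subdir/initrd.img

    printf 'FROM busybox\nCOPY localdir/* /remotedir/\n' > Dockerfile.test
    docker build -f Dockerfile.test -t copytest .

    # Matched directories are copied by content, so subdir's files land
    # directly under /remotedir/ (the directory tree gets flattened).
    docker run --rm copytest find /remotedir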

This is effectively a data volume. But I'm not sure this is the best way to handle it. Data volumes are generally what's recommended for this kind of data, and I'm starting to think that creating a data volume and then uploading the files over sftp afterwards may be the better approach. This is a boot server, and these are initrd and Linux kernel files. I don't have many of them, but I expect to keep adding new ones and deleting the old ones.
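A rough sketch of that volume approach (the names bootfiles, bootserver, and the file names are placeholders):

    # Keep the large files in a named volume instead of an image layer.
    docker volume create bootfiles
    docker run -d --name bootserver -v bootfiles:/data myimage

    # Add or replace kernel/initrd files without rebuilding or re-pushing.
    docker cp vmlinuz-5.15 bootserver:/data/
    docker cp initrd-5.15.img bootserver:/data/

Uploading over sftp into the same mounted directory works the same way; either way the large files live in a volume, so docker push never has to move them.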


Update: I think I may have found a bug in how docker build calculates file changes. See my github issue here.

1 answer

The docker documentation says:

For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and the metadata, then the cache is invalidated.

So the checksum is based on the file contents and metadata, not on timestamps, and an unchanged file should hit the cache. If the layer is still rebuilt on every build, something else must be invalidating it, or it may be a bug in docker.
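If that documentation is accurate, updating only the file's timestamp should not invalidate the cache. A quick sketch to test it:

    # First build populates the cache.
    docker build -t myimage .

    # Change only the modification time; the contents stay identical.
    touch myfile

    # The second build should reuse the cached COPY layer
    # (the classic builder prints "Using cache" for that step).
    docker build -t myimage .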

