If there is no way to do this with built-in Java APIs (for your sake, I hope that is not the case and someone answers with one), then you will need to implement the algorithm yourself, as others here have suggested. So:
You do not have to write the whole algorithm from scratch, though. If you start from a pre-existing implementation, you can modify it to open the file as a stream of bytes, read the file into a byte buffer one fragment at a time, and have the algorithm consume the data piece by piece.
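As a minimal sketch of that fragment-at-a-time reading (the temp file, the 8 KB buffer size, and the `demo` method name are all arbitrary demo choices, not anything from a real library):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class ChunkedRead {
    // Create a sample file, then read it back through a fixed-size byte
    // buffer, handing the algorithm one fragment at a time.
    static long demo() {
        try {
            File f = File.createTempFile("big", ".bin");
            f.deleteOnExit();
            try (FileOutputStream out = new FileOutputStream(f)) {
                out.write(new byte[100_000]);        // stand-in for a huge image file
            }
            byte[] buffer = new byte[8192];          // reusable fragment buffer
            long total = 0;
            try (FileInputStream in = new FileInputStream(f)) {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    // feed buffer[0..read) to the modified algorithm here
                    total += read;
                }
            }
            return total;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());                  // prints 100000
    }
}
```

The point is that `buffer` is the only pixel data in memory at any moment, no matter how large the file is.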
Some formats, such as JPEG, may not be workable as a linear stream of file fragments this way. As @warren suggested, BMP is probably the easiest format to handle like this, since the format is just a fixed-size header followed by the RGB(A) data dumped directly in binary form (plus some padding). So if you have sub-images that need to be combined, you would load them one at a time (though you could multi-thread this and pre-load the next chunk of data to speed things up, since this process will take a long time), read the next line of pixel data, write it to a binary output stream, and so on.
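A sketch of that scanline-by-scanline combining, using headerless raw RGB files as a simplified stand-in for BMP pixel data (real BMP adds a header and row padding); the file layout and the 2x2-pixel demo tiles are assumptions for the example:

```java
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class RowStitcher {
    // Stitch two raw images (width*height*3 bytes, row-major RGB, no header)
    // side by side, one scanline at a time, so neither image is ever fully
    // in memory.
    static void stitch(File left, File right, File out, int width, int height)
            throws IOException {
        byte[] row = new byte[width * 3];
        try (DataInputStream a = new DataInputStream(new FileInputStream(left));
             DataInputStream b = new DataInputStream(new FileInputStream(right));
             OutputStream o = new BufferedOutputStream(new FileOutputStream(out))) {
            for (int y = 0; y < height; y++) {
                a.readFully(row); o.write(row);   // scanline y of the left image
                b.readFully(row); o.write(row);   // scanline y of the right image
            }
        }
    }

    // Build two tiny 2x2 tiles, stitch them, and return the combined bytes.
    static byte[] demo() {
        try {
            File l = File.createTempFile("left", ".raw");
            File r = File.createTempFile("right", ".raw");
            File out = File.createTempFile("combined", ".raw");
            for (File f : new File[]{l, r, out}) f.deleteOnExit();
            byte[] a = new byte[2 * 2 * 3]; Arrays.fill(a, (byte) 1);
            byte[] b = new byte[2 * 2 * 3]; Arrays.fill(b, (byte) 2);
            try (FileOutputStream fo = new FileOutputStream(l)) { fo.write(a); }
            try (FileOutputStream fo = new FileOutputStream(r)) { fo.write(b); }
            stitch(l, r, out, 2, 2);
            byte[] combined = new byte[24];
            try (DataInputStream in = new DataInputStream(new FileInputStream(out))) {
                in.readFully(combined);
            }
            return combined;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Each output row alternates: one scanline from the left file, then one from the right.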
You may even have to load each sub-image several times. For example, imagine a combined image made of 4 sub-images in a 2x2 grid. You might need to load image 1, read its first scanline, write it to the new file, release image 1, load image 2, read its first scanline, write it, release image 2, load image 1 again to read its second scanline, and so on. You will most likely have to do this if you are using a compressed image format for the output.
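The load/read-one-line/release loop above can be sketched like this. To keep it runnable I again use headerless raw RGB files and tiny 2x2 tiles (both assumptions); "releasing" an image here is simply closing and re-opening its file:

```java
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class GridCombiner {
    static final int TILE_W = 2, TILE_H = 2, BPP = 3;  // tiny raw-RGB demo tiles

    // Re-open the tile file for every scanline: wasteful, but only one
    // scanline is ever in memory, which is the point of the exercise.
    static byte[] readScanline(File tile, int y) throws IOException {
        byte[] line = new byte[TILE_W * BPP];
        try (DataInputStream in = new DataInputStream(new FileInputStream(tile))) {
            in.skipBytes(y * line.length);
            in.readFully(line);
        }
        return line;
    }

    // Combine a 2x2 grid of tiles row by row: the tiles in the top band are
    // loaded and released repeatedly before the bottom band is touched.
    static void combine(File[][] tiles, File out) throws IOException {
        try (OutputStream o = new BufferedOutputStream(new FileOutputStream(out))) {
            for (int y = 0; y < TILE_H * 2; y++) {
                int band = y / TILE_H, localY = y % TILE_H;
                for (int col = 0; col < 2; col++) {
                    o.write(readScanline(tiles[band][col], localY));
                }
            }
        }
    }

    // Build four tiles filled with their index (1..4), combine, return bytes.
    static byte[] demo() {
        try {
            File[][] tiles = new File[2][2];
            int n = 1;
            for (int r = 0; r < 2; r++)
                for (int c = 0; c < 2; c++) {
                    tiles[r][c] = File.createTempFile("tile", ".raw");
                    tiles[r][c].deleteOnExit();
                    byte[] px = new byte[TILE_W * TILE_H * BPP];
                    Arrays.fill(px, (byte) n++);
                    try (FileOutputStream fo = new FileOutputStream(tiles[r][c])) {
                        fo.write(px);
                    }
                }
            File out = File.createTempFile("combined", ".raw");
            out.deleteOnExit();
            combine(tiles, out);
            byte[] result = new byte[TILE_W * TILE_H * BPP * 4];
            try (DataInputStream in = new DataInputStream(new FileInputStream(out))) {
                in.readFully(result);
            }
            return result;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

With a compressed format you could not `skipBytes` to a scanline like this; you would have to decode from the start each time, which is why the repeated loads get expensive.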
To push BMP again: since BMP is not compressed, you know exactly where every byte belongs, so you can write the data in any order you want (provided the output file is opened for random access). That means you can seek around in the file, reading one sub-image completely and writing all of its data before moving on to the next. This can save runtime, but it can also produce horribly large saved files.
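A sketch of the random-access approach with `RandomAccessFile`, again using headerless raw RGB as a stand-in for BMP and made-up demo dimensions (2x2-pixel tiles in a 2x2 grid):

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class SeekWriter {
    // Write one whole tile into an uncompressed output file by seeking to
    // the byte offset of each of its scanlines. gridX/gridY is the tile's
    // position in the grid; outW is the combined image width in pixels.
    static void writeTile(RandomAccessFile out, byte[] tilePixels,
                          int gridX, int gridY,
                          int tileW, int tileH, int outW, int bpp) throws IOException {
        int tileRow = tileW * bpp, outRow = outW * bpp;
        for (int y = 0; y < tileH; y++) {
            long offset = (long) (gridY * tileH + y) * outRow + (long) gridX * tileRow;
            out.seek(offset);
            out.write(tilePixels, y * tileRow, tileRow);
        }
    }

    // Write only tile (1,0) -- the top-right tile -- and return the file bytes.
    static byte[] demo() {
        try {
            File f = File.createTempFile("out", ".raw");
            f.deleteOnExit();
            byte[] px = new byte[2 * 2 * 3];
            Arrays.fill(px, (byte) 9);
            try (RandomAccessFile out = new RandomAccessFile(f, "rw")) {
                out.setLength(48);                    // 4x4 pixels * 3 bytes
                writeTile(out, px, 1, 0, 2, 2, 4, 3);
            }
            byte[] result = new byte[48];
            try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
                in.readFully(result);
            }
            return result;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Each sub-image is opened once, written completely to its scattered offsets, and released, instead of being re-opened for every output row.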
And I could go on. There are probably many pitfalls, optimizations, and so on.
Instead of saving 1 huge file that is the result of combining the others, what if you created a new image file format that consists only of metadata referencing the other files, so that they can be combined logically, effectively creating 1 massive file? Whether creating a new image file format makes sense depends on your software: if you expect people to take these images and use them in other software, it will not work - at least not unless you can get the new format to catch on and become a standard.
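To make the metadata idea concrete, here is a toy "virtual image" manifest I made up for illustration: the first line gives the tile dimensions, and each following line maps a grid position to a tile file. The combined image never exists on disk; a reader just resolves each pixel to the right tile:

```java
import java.util.HashMap;
import java.util.Map;

public class VirtualImage {
    final int tileW, tileH;
    final Map<String, String> tiles = new HashMap<>();   // "col,row" -> filename

    // Parse a manifest: first line "tileW tileH", then "col row filename" lines.
    // This format is invented for the example, not any existing standard.
    VirtualImage(String manifest) {
        String[] lines = manifest.trim().split("\n");
        String[] head = lines[0].split(" ");
        tileW = Integer.parseInt(head[0]);
        tileH = Integer.parseInt(head[1]);
        for (int i = 1; i < lines.length; i++) {
            String[] p = lines[i].split(" ");
            tiles.put(p[0] + "," + p[1], p[2]);
        }
    }

    // Map a global pixel coordinate to the file holding that tile.
    String tileFor(int x, int y) {
        return tiles.get((x / tileW) + "," + (y / tileH));
    }
}
```

Example: with 100x100 tiles, pixel (150, 20) falls in grid cell (1, 0), so the reader opens whichever file the manifest lists for that cell.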