I'd like to clarify a few things:
I understand that HTTP/2 makes optimization techniques such as file concatenation obsolete, since a server using HTTP/2 can serve many files over a single connection.
HTTP/2 makes optimization techniques such as file concatenation somewhat outdated, because HTTP/2 allows files to be downloaded in parallel over the same connection. Previously, under HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next one. This led to workarounds such as file concatenation (to reduce the number of requests needed) and opening multiple connections (a hack to get parallel downloads).
However, there is a counter-argument that there is still overhead with multiple files, including requesting them, caching them, reading them from the cache, etc. This has been greatly reduced in HTTP/2, but it has not gone away completely. In addition, gzipping text works better on one large file than on many small files gzipped individually. Personally, though, I think the downsides of concatenation outweigh these remaining benefits, and I expect concatenation to die out once HTTP/2 is ubiquitous.
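The gzip point above can be demonstrated with a small sketch. This is a hypothetical example (the file contents are made up to mimic JavaScript modules that share boilerplate): compressing ten small files individually pays the per-stream gzip overhead ten times and cannot exploit redundancy across files, while compressing the concatenated bundle can.

```python
import gzip

# Hypothetical contents: ten small "files" with shared, repetitive structure,
# roughly mimicking JS modules built from similar boilerplate.
files = [
    f"function module{i}() {{ return fetch('/api/endpoint/{i}'); }}\n" * 20
    for i in range(10)
]

# Sum of sizes when each file is gzipped on its own
individual = sum(len(gzip.compress(f.encode())) for f in files)

# Size when the concatenated bundle is gzipped as one stream
concatenated = len(gzip.compress("".join(files).encode()))

# Shared patterns across files compress better in one stream,
# and the per-file gzip header/trailer overhead is paid only once.
assert concatenated < individual
```

The gap depends heavily on how much the files have in common; for genuinely unrelated content the difference shrinks toward just the per-stream overhead.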
Instead, the advice I see is that it's better to keep file sizes smaller so that they are more likely to be cached by the browser.
This probably depends on the size of the website, but how small should a site's files be if it uses HTTP/2 and wants to focus on caching?
File size does not determine whether a file gets cached or not (unless we are talking about truly massive files larger than the cache itself). The reason that splitting files into smaller pieces is better for caching is that, when you make a change, any files that were not affected can still be served from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, the whole file has to be downloaded again, even though it was already in the cache.
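This is why split files are typically deployed with a content hash in the filename: only the files whose bytes actually changed get a new URL and are re-downloaded, while everything else keeps serving from cache. A minimal sketch of the idea (the function name and hash length are illustrative, not any particular build tool's API):

```python
import hashlib

def hashed_name(name: str, content: bytes) -> str:
    # Put a short content hash in the filename: the URL changes only
    # when the bytes change, so unchanged files stay cached.
    digest = hashlib.sha256(content).hexdigest()[:8]
    base, _, ext = name.rpartition(".")
    return f"{base}.{digest}.{ext}"

v1 = hashed_name("app.js", b"console.log('v1');")
v2 = hashed_name("app.js", b"console.log('v2');")

assert v1 != v2  # the edited file gets a new URL and is re-fetched
# An untouched file keeps the same URL across deploys, so it stays cached:
assert hashed_name("vendor.js", b"/* lib */") == hashed_name("vendor.js", b"/* lib */")
```

With one concatenated bundle, any one-line change invalidates the single hashed URL for everything; with split files, only the changed pieces are invalidated.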
Similarly, if you use an image sprite map, that is great for reducing individual image downloads under HTTP/1.1, but it requires the entire sprite file to be downloaded again if you ever need to edit it, for example to add one extra image. Not to mention that the whole thing is downloaded even by pages that use only one of the images in the sprite.
However, having said all that, there is a school of thought that the benefits of long-term caching are overstated. See this article, and in particular the section on HTTP caching, which shows that most users' browser caches are smaller than you think, so it is unlikely your resources will stay cached for very long. That is not to say caching does not matter, but rather that it is mostly useful for browsing within a session, not over the long term. So each visit to your site will likely download all of your files again anyway, unless the visitor comes very frequently, has a very large cache, or does not surf much of the web.
Is file concatenation still worthwhile for users on browsers that do not support HTTP/2?
Maybe. However, Android aside, HTTP/2 browser support is actually very good, so most of your visitors are probably already HTTP/2-enabled.
That said, there are no additional drawbacks to file concatenation under HTTP/2 that were not already there under HTTP/1.1. Well, it could be argued that several small files can be downloaded in parallel over HTTP/2, while a larger file must be downloaded as a single request, but I don't buy that this slows things down much. I have no proof of this, but gut feeling says the data still needs to be sent either way, so you have a bandwidth problem or you don't. In addition, the overhead of requesting many resources, although greatly reduced in HTTP/2, still exists. Latency is still the biggest problem for most users and sites, not bandwidth. Unless your resources are truly huge, I doubt you would notice a difference between downloading one large resource and downloading the same data split into 10 small files in parallel over HTTP/2 (though you would under HTTP/1.1) - not to mention the gzipping issues discussed above.
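The latency-versus-bandwidth argument above can be made concrete with a toy back-of-envelope model. The numbers (100 ms round trip, 5 Mb/s effective bandwidth, a 500 KB payload) are assumptions for illustration, not measurements, and the model deliberately ignores TCP slow start, header overhead, and prioritization:

```python
def load_time_ms(round_trips: int, total_kb: float,
                 rtt_ms: float = 100, bandwidth_kbps: float = 5000) -> float:
    # Toy model: total time = (round trips * latency) + transfer time.
    # Assumed numbers, not a real network simulation.
    return round_trips * rtt_ms + total_kb / bandwidth_kbps * 1000

# 500 KB as one bundle vs ten 50 KB files:
one_big      = load_time_ms(1, 500)    # single request
ten_h2       = load_time_ms(1, 500)    # HTTP/2: multiplexed, ~1 round trip
ten_h1_serial = load_time_ms(10, 500)  # HTTP/1.1, one connection: 10 round trips

assert one_big == ten_h2        # same bytes, same latency cost over HTTP/2
assert ten_h1_serial > one_big  # serial requests pay latency per file
```

The same total bytes move in every case; what changes is how many times you pay the round-trip latency, which is why splitting files hurt under HTTP/1.1 but is roughly a wash under HTTP/2.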
So, in my opinion, there is little harm in continuing to concatenate for now. At some point you will need to decide whether the downsides outweigh the benefits for your user profile.
Would it hurt to keep the large concatenated files in this case while using HTTP/2? That way the site would benefit users on either protocol, because it would be optimized for both HTTP/1.1 and HTTP/2.
It would not hurt at all. As mentioned above, there are (basically) no additional drawbacks to concatenating files under HTTP/2 that were not already present under HTTP/1.1. It is simply unnecessary under HTTP/2, and it has drawbacks of its own (it potentially reduces the effectiveness of caching, it requires a build step, and it makes debugging harder, since the deployed code is not the same as the source code, etc.).
Use HTTP/2 and you will still see great gains for any site - except the very simplest sites, which will likely see no improvement but also no negatives. And since older browsers can stick with HTTP/1.1, there are no drawbacks for them either. When, or if, you decide to stop implementing HTTP/1.1 performance tweaks such as concatenation is a separate decision.
In fact, the only reason not to use HTTP/2 is that implementations are still fairly bleeding-edge, so you may not be comfortable running your production site on it yet.
**** Edit August 2016 ****
This post from an image-heavy, bandwidth-bound site has recently sparked some interest in the HTTP/2 community as one of the first documented examples of HTTP/2 actually being slower than HTTP/1.1. It highlights the fact that HTTP/2 technology and the understanding of it are still new, and that some sites will need tuning. There is no such thing as a free lunch! It is well worth a read, though it should be borne in mind that this is an extreme example: most sites are affected far more, performance-wise, by latency and by HTTP/1.1's connection limitations than by bandwidth problems.