Here is a suggestion (I haven't seen it mentioned in your question):
Create a Blob URL for each file object in the FileList; the browser stores these in its URL Store, and you keep their URL strings.
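A minimal sketch of that step — the element id, the worker file name, and the `rows` matrix are my own placeholders, not something from your question:

```javascript
const rows = [];                                  // will become [fname, fsize, blobUrl, fhash] rows
const worker = new Worker('hash-worker.js');      // hypothetical worker script
const input = document.getElementById('files');   // hypothetical <input type="file" multiple>

input.addEventListener('change', () => {
  for (const file of input.files) {
    const blobUrl = URL.createObjectURL(file);          // registers the Blob in the URL Store
    rows.push([file.name, file.size, blobUrl, null]);   // fhash gets filled in later
    worker.postMessage({ blobUrl });                    // only the URL string crosses to the worker
  }
});
```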
Then you pass these URL strings to a web worker (a separate thread), which uses FileReader to read each file (accessed via its Blob URL string) in chunks, reusing a single fixed-size buffer (much like a circular buffer) to compute the file's hash. There are simple/fast incremental hashes such as CRC32, which can often be combined with vertical and horizontal checksums in the same loop and carried over chunk by chunk.
You can speed up the process by reading 32-bit (unsigned) values instead of 8-bit values, using the corresponding buffer view (e.g. Uint32Array), which is roughly 4 times faster. Endianness doesn't matter here, so don't waste resources on it!
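Roughly, the worker side could look like this. It is a sketch, not a real CRC32: it reads each Blob URL in fixed-size chunks with FileReaderSync (available inside workers) and folds the data 32 bits at a time into a running checksum; `CHUNK` and the fold itself are illustrative choices.

```javascript
// hash-worker.js
const CHUNK = 1 << 20;                 // 1 MiB per read keeps memory usage flat
const reader = new FileReaderSync();

self.onmessage = async (e) => {
  const { blobUrl } = e.data;
  const blob = await (await fetch(blobUrl)).blob();     // resolve the URL Store entry
  let hash = 0;
  for (let pos = 0; pos < blob.size; pos += CHUNK) {
    const buf = reader.readAsArrayBuffer(blob.slice(pos, pos + CHUNK));
    const words = new Uint32Array(buf, 0, buf.byteLength >>> 2);  // 32-bit view: ~4x fewer iterations
    for (let i = 0; i < words.length; i++) {
      hash = ((hash << 5) - hash + words[i]) >>> 0;     // cheap unsigned 32-bit fold
    }
    const tail = new Uint8Array(buf, words.length * 4); // bytes that didn't fill a whole word
    for (let i = 0; i < tail.length; i++) {
      hash = ((hash << 5) - hash + tail[i]) >>> 0;
    }
  }
  self.postMessage({ blobUrl, fhash: hash });           // hand the result back to the main thread
};
```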
Upon completion, the web worker transfers the file's hash to the main thread/application, which then simply performs your comparison against the matrix [[fname, fsize, blobUrl, fhash] /* , etc. */].
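Continuing the sketch above on the main thread, the comparison could be as simple as:

```javascript
worker.onmessage = (e) => {
  const { blobUrl, fhash } = e.data;
  const row = rows.find(r => r[2] === blobUrl);   // the [fname, fsize, blobUrl, fhash] row
  row[3] = fhash;
  // same hash AND same size makes an accidental collision far less likely
  const twin = rows.find(r => r !== row && r[3] === fhash && r[1] === row[1]);
  if (twin) {
    console.log(`"${row[0]}" looks like a duplicate of "${twin[0]}"`);
  }
};
```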
Pro
A reusable fixed-size buffer keeps memory consumption low (at whatever level you set it), and the web worker improves performance by using an additional thread (which doesn't block the browser's main thread).
Con
You'd still need a server-side fallback for browsers with JavaScript disabled (you could add a hidden field to the form and set its value using JavaScript, as a way to detect JavaScript support and reduce server load). However.. even then.. you'd still need the server-side check to protect against malicious input.
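One way to wire up that hidden field — the field name is just an example; the server would treat a missing/empty value as "no JavaScript, nothing was checked client-side":

```javascript
const form = document.querySelector('form');   // the upload form
const flag = document.createElement('input');
flag.type = 'hidden';
flag.name = 'clientHashed';                    // hypothetical field name
flag.value = '1';                              // only ever set when JavaScript actually ran
form.appendChild(flag);
```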
Utility
So... no net gain? Well.. if there is a reasonable chance that the user will upload duplicate files (or simply reuse them in a web application), then you've saved on bandwidth just by performing this check. That's a pretty (ecological/financial) win in my book.
Extra
Hashes are prone to collisions, period. To reduce the (realistic) chance of a collision you'd pick a more advanced hash algorithm (most of them can still be computed in chunked/incremental mode). The obvious trade-off of a more advanced hash is larger code size and lower speed (higher CPU usage).