Is there a way to avoid duplicate files in MongoDB GridFS, or should I handle this in application code? (I use pymongo.)
The MD5 sum is already part of the GridFS file metadata (the md5 field in the fs.files collection), so you can simply create a unique index on that field and the server will refuse to store a duplicate. There is no need to compare anything on the client side.
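A minimal pymongo sketch of the index approach; the connection string and database name are placeholders. This assumes your driver still populates the md5 field (older GridFS drivers computed it automatically, newer pymongo releases may not, so verify your setup). Also note that GridFS writes chunks before the files document, so a rejected upload may leave orphaned chunks in fs.chunks that you need to clean up.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
db = client["mydb"]                                # hypothetical database name

# Enforce uniqueness of the md5 value in the GridFS files collection; any
# attempt to store a file whose md5 already exists will make the insert of
# the fs.files document fail with DuplicateKeyError.
db.fs.files.create_index([("md5", ASCENDING)], unique=True)
```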
Alternatively, you can compute the MD5 hash yourself and compare it with the hashes of already stored files before saving a new one.
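A sketch of that client-side check with pymongo and hashlib; the connection string, database name, and the put_if_new helper are illustrative, not part of any library API. Because newer drivers may no longer fill in the top-level md5 field, this version also stores the digest under metadata and matches on either location.

```python
import hashlib

import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
db = client["mydb"]                                # hypothetical database name
fs = gridfs.GridFS(db)


def put_if_new(data: bytes, filename: str):
    """Store data in GridFS only if no file with the same MD5 digest exists."""
    digest = hashlib.md5(data).hexdigest()
    # Match either the driver-computed md5 field (older drivers) or the
    # digest we store ourselves under metadata (works on any version).
    existing = db.fs.files.find_one(
        {"$or": [{"md5": digest}, {"metadata.md5": digest}]}
    )
    if existing is not None:
        return existing["_id"]  # duplicate found: reuse the stored file's _id
    return fs.put(data, filename=filename, metadata={"md5": digest})
```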