imgdupes | deleting near-duplicate images | Hashing library
kandi X-RAY | imgdupes Summary
imgdupes is a command-line tool for finding and deleting near-duplicate images in a target directory based on perceptual hashing. The demonstration images come from the Caltech 101 dataset, partially deduplicated for demonstration purposes. It is best to remove byte-identical images with fdupes or jdupes first; you can then check and delete the remaining near-duplicates with imgdupes, which operates much like the fdupes command.
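A minimal workflow sketch follows, assuming the flag and positional-argument conventions (target directory, hash method, Hamming-distance threshold) from the project README; verify against `imgdupes --help` and `jdupes --help` for your installed versions.

```sh
# first remove byte-identical duplicates (jdupes: -r recurse, -d delete with prompt)
jdupes -rd ./images

# then group near-duplicates by perceptual hash (phash) within Hamming
# distance 4, recursing into subdirectories and prompting for deletion,
# fdupes-style
imgdupes -rd ./images phash 4
```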
Top functions reviewed by kandi - BETA
- Generate a list of image hashes
- Stop the background thread
- Start the thread
imgdupes Key Features
imgdupes Examples and Code Snippets
Community Discussions
Trending Discussions on imgdupes
QUESTION
I have a bunch of poor-quality photos that I extracted from a PDF. Somebody I know has the good-quality photos somewhere on her computer (a Mac), but my understanding is that they will be difficult to find.
I would like to
- loop through each poor quality photo
- perform a reverse image search using each poor-quality photo as the query image and this person's computer as the database to search for the higher-quality images
- and create a copy of each high quality image in one destination folder.
Example pseudocode
...

ANSWER
Answered 2020-May-17 at 07:17

Premise
I'll focus my answer on the image-processing part, as I believe implementation details, e.g. traversing a file system, are not the core of your problem. Also, all that follows is just my humble opinion; I am sure there are better ways to retrieve your images that I am not aware of. Anyway, I agree with what your prof said and will follow the same line of thought, so I'll share some ideas on possible similarity indexes you might use.
Answer
- MSE and SSIM - This is a possible solution, as suggested by your prof. Since I assume the low-quality images also have a different resolution than the good ones, remember to downsample the good ones (and not upsample the bad ones). See the first sketch after this list.
- Image subtraction (1-norm distance) - Subtract the two images: if they are identical, you get a black image. If they differ slightly, the non-black pixels (or the sum of the absolute pixel differences) can be used as a similarity index. This is in fact the 1-norm distance; it is included in the first sketch below.
- Histogram distance - You can refer to this paper: https://www.cse.huji.ac.il/~werman/Papers/ECCV2010.pdf. Comparing the two images' histograms could be reasonably robust for your task; see the second sketch below. Check out this question too: Comparing two histograms
- Embedding learning - Since you included the tensorflow, keras and pytorch tags, let's consider deep learning. This paper came to mind: https://arxiv.org/pdf/1503.03832.pdf. The idea is to learn a mapping from image space to a Euclidean space, i.e. to compute an embedding of each image; in the embedding hyperspace, images are points. That paper learns the embedding function by minimizing the triplet loss, which maximizes the distance between images of different classes and minimizes the distance between images of the same class. You could train such a model on a dataset like ImageNet, augmenting it by lowering the quality of the images (e.g. down-sampling followed by up-sampling, image compression, added noise) so that the model becomes "invariant" to differences in image quality. Once you can compute embeddings, the Euclidean distance between them substitutes for the MSE and might work better than MSE/SSIM as a similarity index; the third sketch below shows the retrieval step with a generic pretrained network. Repo of FaceNet: https://github.com/timesler/facenet-pytorch. Another general-purpose approach (not related to faces) which might help: https://github.com/zegami/image-similarity-clustering.
- Siamese networks for predicting similarity score - I am referring to this paper on face verification: http://bmvc2018.org/contents/papers/0410.pdf. A siamese network takes two images as input and outputs a value in [0, 1], which we can interpret as the probability that the two images belong to the same class. You can train such a model to predict 1 for pairs of the kind (good-quality image, artificially degraded image), degrading images by, again, combining down-sampling followed by up-sampling, image compression, added noise, etc., and to predict 0 for pairs of different images. The output of the network can be used as a similarity index; the fourth sketch below gives a skeleton.
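A minimal sketch of the first two indexes (MSE/SSIM and the 1-norm) using scikit-image and NumPy; the file paths are placeholders, and grayscale comparison is an assumption made for simplicity.

```python
import numpy as np
from skimage import io, transform
from skimage.metrics import mean_squared_error, structural_similarity

# load both images as grayscale floats in [0, 1]; paths are placeholders
good = io.imread("good.jpg", as_gray=True)
bad = io.imread("bad.jpg", as_gray=True)

# downsample the good image to the bad image's resolution
good = transform.resize(good, bad.shape, anti_aliasing=True)

mse = mean_squared_error(good, bad)                       # lower = more similar
ssim = structural_similarity(good, bad, data_range=1.0)   # higher = more similar
l1 = np.abs(good - bad).sum()                             # 1-norm, lower = more similar

print(f"MSE={mse:.4f}  SSIM={ssim:.4f}  L1={l1:.1f}")
```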
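For the histogram index, a simple sketch using NumPy histograms and SciPy's 1-D Wasserstein distance (the earth mover's distance discussed in the linked paper); the bin count and intensity range are arbitrary choices of mine.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hist_distance(im1, im2, bins=64):
    """Earth mover's distance between normalized grayscale histograms."""
    h1, edges = np.histogram(im1.ravel(), bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(im2.ravel(), bins=bins, range=(0.0, 1.0), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    # treat the bin centers as support points weighted by the two histograms
    return wasserstein_distance(centers, centers, h1, h2)  # lower = more similar
```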
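For the embedding approach, one possible retrieval sketch using a pretrained torchvision ResNet as a generic feature extractor (not the FaceNet model from the paper); all paths are placeholders, and the point is simply ranking candidates by Euclidean distance in feature space.

```python
import torch
import torchvision.models as models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.fc = torch.nn.Identity()     # drop the classifier, keep the 512-d features
model.eval()

preprocess = weights.transforms()  # the resize/normalize pipeline these weights expect

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return model(x).squeeze(0)

# rank candidate images by Euclidean distance to the query embedding
query = embed("bad.jpg")            # placeholder path
candidates = ["a.jpg", "b.jpg"]     # placeholder paths
ranked = sorted(candidates, key=lambda p: torch.dist(query, embed(p)).item())
```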
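Finally, a skeleton of the siamese idea: a shared encoder applied to both inputs and a head that outputs a similarity score in [0, 1]. The training loop on (good, degraded) pairs described above is left out, and the architecture is an illustrative assumption, not the one from the cited paper.

```python
import torch
import torch.nn as nn

class SiameseScorer(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # shared convolutional encoder applied to both inputs
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # head maps the absolute feature difference to a [0, 1] score
        self.head = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

    def forward(self, a, b):
        fa, fb = self.encoder(a), self.encoder(b)
        return self.head(torch.abs(fa - fb)).squeeze(-1)

# score = SiameseScorer()(batch_a, batch_b)  # both of shape (N, 3, H, W)
```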
Remark 1
These different approaches can also be combined. They all provide similarity indexes, so you can very easily average the outcomes; one trivial scheme is sketched below.
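A minimal combination sketch (an assumption on my part; any monotone normalization would do): min-max normalize each index over the candidate set, after negating distance-like indexes so that higher always means more similar, then average.

```python
import numpy as np

def combine(scores):
    """scores: dict of index name -> array of values over the candidates.
    Distances should be negated beforehand so higher = more similar."""
    normed = []
    for s in scores.values():
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        normed.append((s - s.min()) / rng if rng > 0 else np.zeros_like(s))
    return np.mean(normed, axis=0)  # combined similarity per candidate
```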
Remark 2
If you only need to do this once, the effort of implementing and training deep models might not be justified, so I would not suggest it. Still, consider it if you can't find any other solution and that Mac is REALLY FULL of images, making a manual search impossible.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install imgdupes
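Assuming the package is published on PyPI under the same name, installation is typically:

```sh
pip install imgdupes
```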