In our last episode about online photo analysis tools, we took a look at a tool that broke down all kinds of information, and another that broke out metadata in an easy-to-review format. In this article we’ll look at another serious online photo analysis tool, two more that provide interesting data sets, and a service that helps you keep an eye on the photos and videos out there. But let’s start with that analysis tool. It’s called Forensically.
Please note: I am not a digital forensics specialist, so if I missed something or got something wrong in my explanations, please educate me.
Forensically – https://29a.ch/photo-forensics/
Forensically notes that it’s in beta, but don’t let that fool you; there are plenty of tools here for getting more information about your photos. Use the Open File link to upload your photo, or just mess around with the default photo that’s in place. I uploaded my photo.
The magnifier lets you specify a zoom level (from 2x to 8x) and offers three kinds of enhancement: Histogram Equalization, Auto Contrast, or Auto Contrast by Channel. (You can also choose none.) The site’s help file says Histogram Equalization is the most “robust” option, but when I tried it I found it hard to tell what part of the picture I was magnifying. Auto Contrast, on the other hand, really brought out the edges and pixels of whatever I was magnifying.
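If you’re curious what histogram equalization actually does under the hood, here’s a minimal grayscale sketch (my own illustration, not Forensically’s code; it assumes an 8-bit image that uses at least two gray levels):

```python
import numpy as np

def equalize_histogram(gray):
    """Stretch gray levels so each intensity gets a roughly equal
    share of pixels, which boosts contrast in flat-looking regions.
    Assumes an 8-bit image using at least two distinct gray levels."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                 # first non-empty bin
    lut = (cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)[gray]

# Two gray levels close together get pushed to the extremes
gray = np.array([[100, 100], [101, 101]], dtype=np.uint8)
print(equalize_histogram(gray))  # [[  0   0] [255 255]]
```

That stretching is why an equalized magnifier view can look so alien: subtle differences become extreme ones.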
When the site mentions clones, it’s not referring to Parts: The Clonus Horror or any other sci-fi movie. Instead, the tool looks for parts of the image that look similar to each other. When working with an image you might “clone” part of it and paste it into another part. I used to do that when I was designing something and needed the graphic to be wider than it was; I would clone part of the background to extend the image.
This tool has several options, including minimum settings for similarity, detail, and cluster size, but I found it didn’t work that well. (To be fair, the site’s help file says of this tool: “Note that this tool is a first attempt and not yet very refined.”) Here’s what the tool found in my photo, which is absolutely unretouched:
Those are all false positives. Meanwhile, I uploaded an image from an article on using clone tools properly (I can’t reproduce it here because I don’t have intellectual property rights, but it’s the “Rocket kid with leaves” picture from https://www.smashingmagazine.com/2010/03/the-ultimate-guide-to-cloning-in-photoshop/ ) and it found some false positives but did not flag the cloned leaves. I’d use this tool with some caution.
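To get a feel for why clone detection is hard, here’s a deliberately crude sketch of the core idea (mine, not Forensically’s algorithm): hash blocks of the image and flag identical ones. Real detectors compare overlapping blocks with similarity thresholds, which is where the false positives and misses come from.

```python
import numpy as np

def find_duplicate_blocks(gray, block=8):
    """Flag non-overlapping blocks whose pixels are byte-identical.
    Crude on purpose: real clone detectors use overlapping blocks
    and fuzzy similarity, not exact matches."""
    h, w = gray.shape
    seen, matches = {}, []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = gray[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Clone the top-left block into the bottom-right corner
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, (16, 16), dtype=np.uint8)
gray[8:16, 8:16] = gray[0:8, 0:8]
matches = find_duplicate_blocks(gray)  # contains ((0, 0), (8, 8))
```

Loosen the match from “byte-identical” to “similar enough” and innocent areas of sky or water start matching each other, which is exactly the false-positive behavior I saw.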
Error Level Analysis
I mentioned this last week when looking at FotoForensics, which also has a quick ELA tutorial. Basically you’re looking at the image in a way that exposes differences in compression levels across parts of the picture, which can indicate manipulation.
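The basic ELA recipe is simple enough to sketch: re-save the image as a JPEG at a known quality and amplify the per-pixel difference. This is my own minimal version using Pillow, not the exact code either site runs:

```python
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(img, quality=90, scale=15):
    """Re-save the image as JPEG and amplify the difference.
    Areas edited after the last save often recompress differently,
    so they stand out against the rest of the picture."""
    buf = BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    return diff.point(lambda px: min(255, px * scale))

# Demo on a synthetic image (no real photo needed)
original = Image.new("RGB", (64, 64), (120, 60, 200))
ela = error_level_analysis(original)
```

The `quality` and `scale` values here are arbitrary demo choices; the online tools expose similar knobs as sliders.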
I find the Forensically ELA tool a little easier to use because it has an opacity setting, which lets you control how much of the original image shows through while you do an ELA. Between that and the magnifier tool, Forensically lets you really zoom in while still being able to tell what you’re looking at. Here’s an example with the ELA opacity set at 0.49, using the magnifier.
The Forensically help page explains noise analysis like this: “This tool is basically a reverse denoising algorithm. Rather than removing the noise it removes the rest of the image. It is using a super simple separable median filter to isolate the noise.” This ends up looking really weird.
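That quoted description translates almost directly into code. Here’s a minimal sketch of the idea (my implementation, assuming a grayscale image as a numpy array): median-filter along rows, then columns, and subtract the result.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter_1d(img, k, axis):
    """Median over a k-wide window along one axis, edges padded."""
    pad = [(0, 0), (0, 0)]
    pad[axis] = (k // 2, k // 2)
    windows = sliding_window_view(np.pad(img, pad, mode="edge"), k, axis=axis)
    return np.median(windows, axis=-1)

def noise_map(gray, k=3):
    """Separable median filter (rows, then columns) removes the
    underlying image; subtracting it leaves mostly the noise."""
    smooth = median_filter_1d(median_filter_1d(gray.astype(float), k, 0), k, 1)
    return gray - smooth

# A lone bright pixel survives; the flat background is removed
g = np.zeros((8, 8))
g[4, 4] = 10
print(noise_map(g)[4, 4])  # 10.0
```

Because camera sensors leave fairly uniform noise, a patch that was pasted in from elsewhere (or smoothed by an editor) can show a visibly different noise texture in this map.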
Like the ELA tool, the Noise Analysis tool also offers an opacity setting if you’re having trouble using the magnifier to view small parts of the image.
Level Sweep increases the contrast of various light levels in the image. The idea is that if something has been copied and pasted into the image, it’ll show up more. Doing this with an unaltered image gave me a pretty creepy picture, but it clearly shows where the brightest areas are.
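A rough way to picture what a level sweep does (this is my simplified take, not Forensically’s exact mapping): pick a narrow band of brightness values, stretch that band to the full range, and clip everything else to black or white.

```python
import numpy as np

def level_sweep(gray, center, width=16):
    """Stretch a narrow band of gray levels around `center` to the
    full 0-255 range; everything outside the band clips to black
    or white, so pixels at that light level pop out."""
    out = (gray.astype(float) - (center - width / 2)) * (255.0 / width)
    return np.clip(out, 0, 255).astype(np.uint8)

strip = np.array([[0, 128, 255]], dtype=np.uint8)
print(level_sweep(strip, 128))  # [[  0 127 255]]
```

Sweeping `center` from 0 to 255 is what produces that creepy slideshow effect: each step spotlights a different slice of the image’s brightness.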
From the help file: “The luminance gradient tool analyses the changes in brightness along the x and y axis of the image. Its obvious use is to look at how different parts of the image are illuminated in order to find anomalies.” So you’re looking for odd lighting. For example, two items in the same part of the image, where one looks like it’s being lit from a different angle.
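Computing that brightness change is a one-liner with numpy’s gradient function. This is a minimal sketch of the concept (my own, assuming a grayscale array), returning both the strength and the direction of the brightness change at each pixel:

```python
import numpy as np

def luminance_gradient(gray):
    """Brightness change along y and x; the angle hints at the light
    direction, so mismatched angles can flag pasted-in elements."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# Light falling off left to right: constant magnitude, angle 0
ramp = np.tile(np.arange(8.0), (8, 1))
magnitude, angle = luminance_gradient(ramp)
```

In a genuine photo, neighboring objects lit by the same source produce broadly consistent gradient directions; a composited object tends to break that consistency.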
Principal Component Analysis
Principal Component Analysis (PCA) is a statistical method for finding anomalies in an image. I went to its Wikipedia page to learn more about it and was almost mathed to death. Then I did a Google search for Principal Component Analysis images and got more of the same. Fortunately the brains behind Forensically published a short article on using PCA.
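If the math articles scared you off too, here’s the gist in code rather than equations. This is my own bare-bones sketch of PCA applied to an image’s colors, not the site’s implementation: treat every pixel as a point in RGB space and re-express it along the axes where the colors vary most.

```python
import numpy as np

def pca_projections(rgb):
    """Project every pixel's RGB value onto the principal components
    of the image's color cloud. Edited regions often sit away from
    the main cloud, so they can stand out in the minor components."""
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    components = eigvecs[:, np.argsort(eigvals)[::-1]]  # big axes first
    return (centered @ components).reshape(rgb.shape)

rng = np.random.default_rng(1)
proj = pca_projections(rng.integers(0, 256, (16, 16, 3)))
```

The first component carries most of the image; the second and third are where subtle color oddities, and sometimes edits, become visible.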
One thing to be aware of with this tool: there’s a slight delay between choosing the options and seeing the change reflected in the image. That’s true for many of the tools here, but it’s most obvious with this one.
All these tools give metadata for an image, don’t they? Forensically is no exception.
And like other tools, Forensically offers GeoTag information. What makes Forensically different is that it also offers a link to a Flickr map with other photos near the one you took. An interesting way to take a quick look around.
Thumbnail Analysis looks at the “thumbnail” — the preview image — contained in the image. Apparently not all images have one. I went poking around for more information on this but I don’t think I got the keywords right.
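From what I can tell, these embedded previews are usually complete little JPEG files tucked inside the main one, so one crude way to spot one (my own stdlib-only sketch, not how Forensically does it) is to scan for a second JPEG start-of-image marker:

```python
def find_embedded_jpegs(data):
    """Return byte offsets of JPEG start-of-image markers (FF D8 FF)
    found after the file's own header. A hit often marks an embedded
    preview thumbnail; no hit suggests the image doesn't carry one."""
    offsets = []
    pos = data.find(b"\xff\xd8\xff", 1)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(b"\xff\xd8\xff", pos + 1)
    return offsets

# Fake file: a JPEG header, some EXIF-ish bytes, then a second JPEG
fake = b"\xff\xd8\xff\xe1" + b"<exif header>" + b"\xff\xd8\xff\xdb" + b"<thumbnail bytes>"
print(find_embedded_jpegs(fake))  # [17]
```

The interesting forensic angle: an embedded thumbnail can lag behind edits to the main image, so comparing the two sometimes reveals what the picture looked like before retouching.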
This tool is for finding more metadata, but not regular image metadata: apparently some JPEG files carry their own. It also displays Quantization Tables. Thankfully, Forensically has an article explaining both.
String Extraction could also be called “And the rest,” or “Metadata we haven’t defined yet.” The tool looks for strings of ASCII characters in the image. The idea is that there might be metadata or other information that Forensically doesn’t recognize. In the case of the picture I uploaded, there are recognizable strings that I know, thanks to other tools, to be image metadata. The rest of it as far as I can tell is just letters.
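This works the same way as the classic Unix `strings` utility, and it’s short enough to sketch in a few lines (my own version; the sample bytes are made up for the demo):

```python
import re

def extract_strings(data, min_len=4):
    """Pull runs of printable ASCII out of raw bytes, like the
    Unix `strings` tool. Shorter runs are usually just noise."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

sample = b"\x00\x01Exif\x00\x00Canon EOS\xff\xd8junk"
print(extract_strings(sample))  # ['Exif', 'Canon EOS', 'junk']
```

That’s also why so much of the output is gibberish: any four compressed-image bytes that happen to fall in the printable range get reported as a “string.”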
Wow! As you can see, there’s a lot more to an image than meets the eye, and there are many ways to find it. I learned a lot just writing this article; I hope you learned something reading this far.
But let’s drop down out of the aether a bit and look at a few more tools, ones that don’t slice and dice a photo to bits but rather do one particular thing.
Image Color Summarizer – http://mkweb.bcgsc.ca/color-summarizer/?analyze
Hues you can use! Sorry. Image Color Summarizer breaks down the colors inside an image you upload. The analysis screen looks a bit scary, but a summary explains the options. What we want is HTML output. I also chose “vhigh” precision to get the most accurate color breakdown. (Note that a higher precision level also means the analysis can take several seconds.)
My result included the original image, but also the image described in words:
aluminium athens atom azure baltic big black blackjack blue blue/grey charade charcoal chicago cod dark double foundry french gauntlet gravel grayish green greenish grey half ironside jungle lattitude liquid masala metal mine mirage montana montoya nero quarter riverstone sand sea shadow shaft silver stone traffic triple yellowish
… as well as a list of colors by pixel frequency (including hex and RGB values) and image cluster partitions, breaking down where the colors are.
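The “colors by pixel frequency” part of that report is the easiest to reproduce yourself. Here’s a tiny sketch (mine, on a made-up pixel list) that counts exact colors and converts them to hex codes:

```python
from collections import Counter

def top_colors(pixels, n=3):
    """Count exact (R, G, B) tuples and report the most common
    ones as hex codes with their pixel counts."""
    return [("#%02x%02x%02x" % rgb, count)
            for rgb, count in Counter(pixels).most_common(n)]

demo = [(255, 0, 0)] * 5 + [(0, 0, 255)] * 3 + [(16, 32, 48)]
print(top_colors(demo))  # [('#ff0000', 5), ('#0000ff', 3), ('#102030', 1)]
```

The summarizer goes further by clustering nearby shades together, which is why its list stays readable instead of reporting thousands of one-pixel colors.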
If you scroll down further you’ll get histograms and other technical information about the colors in the image.
This is overkill if you just want information about a color or two, but if you want to get an idea about the entire picture, along with things like hex values that you can use other places, this site is a wow.
(And what can you do with hex numbers for colors? You can get shades and tints, you can make color palettes, you can convert to Pantone colors, and more. Hex codes for colors are fun!)
Using this tool you can learn what colors are in your picture. And just by looking at it you can see what’s in the picture. But what does a computer think is in the picture? Microsoft Azure can tell you.
Microsoft Azure Computer Vision – https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
When you land on the page you’ll see a big link saying “Try the Computer Vision API,” and you might be tempted to click it; it offers a free trial with no credit card required. You don’t need it for a quick test, though. Scroll down a little and look for the “See It In Action” section, where you can upload an image or provide the URL for one. I was concerned about the picture I was using because I thought the glare from the light on the water might distort things, but Microsoft Azure did pretty well!
The basic description is correct (“a bridge over a body of water”) though some of the words in the description don’t make as much sense (clock, man, body). Overall I was pretty impressed at how accurate it was. If you scroll down the description page a bit you’ll also get calculations for whether or not the picture is considered adult content, and some guesses about its categories.
The last site I’m looking at today isn’t about getting information on an image per se; it’s about finding out where a photo (or video!) might have come from.
Berify – https://berify.com
Berify is a reverse-image search, but more importantly, it’s a service that monitors for misuse of photos and videos.
The reverse image search works like TinEye or Google Images or other reverse image searching tools you might have seen: upload a photo and Berify looks for it on the Web.
The only place I’ve used the photo for this article is here so I won’t try that. Instead I’ll use a photo that’s been circulating around the Web a lot – it’s a picture that people have shared saying it’s of clouds over the California wildfires, but it’s not.
(Original photographer, if you’re out there and you want me to take this picture down, let me know.)
Before you can see the results, you’ll have to sign up for a free account (the free account is the last option on the signup page). Once you’ve gone through all that (Berify just wants your name, email, and a password), you’ll be taken to a page where you can upload photos, connect your social media accounts, and so on. (Don’t go too wild if you have a free account; the site has pretty strict limits for freebies.) Your uploaded photos appear under the My Photos tab.
Here’s where I had a little glitch. When I uploaded the photo I got one of those spinning “hold on we’re doing stuff” wheels. And it spun. And spun. And spun. It updated my counts of photos it had found, but didn’t show any. I finally closed the browser tab and opened it in a new tab, and the photos turned up immediately. That might be me or my browser or my Internet connection.
Anyway, once I got the results I had a list of places where the identical image had appeared and a list of similar images: 26 exact matches and 114 similars (which were mostly sunsets).
How does this compare with something like TinEye? Pretty favorably; TinEye found 22 results. (I tried searching this image on Google Image search and it labeled it “cloud” and found an estimated 25 billion results; not helpful.) Berify will also track if the picture is being used on other sites. With a free account you have the option to check biannually or annually.
When I got the idea to do a roundup of photo analysis tools, I thought it would be one short article, not two really long ones. Even if it’s only to upload one of your own pictures, as I did, and see all the data hidden behind the image, these tools are worth exploring.