Spot Faked Photos Using Digital Forensic Techniques

If only all Photoshop jobs were this obvious, recognizing faked photos would be a lot easier. Stan Horaczek

We see hundreds or even thousands of images a day, and almost all of them have been digitally manipulated in some way. Some have gotten basic color corrections or simple Instagram filter effects, while others have received full-on Photoshop jobs to completely transform the subject. It turns out humans aren’t very good at recognizing when an image has been manipulated, even if the change is fairly substantial. Hany Farid is a professor of computer science at Dartmouth College who specializes in photo forensics, and while he can’t share all of his fancy software tools for detecting editing trickery, he has shared a few tips for authenticating images on your own.

Try reverse image searching

A reverse image search in Google looks for images that are exact matches, as well as those that are thematically similar. Stan Horaczek

Before you start trying to CSI an image too hard, you can often debunk a faked photo by finding its source using a reverse image search. Google includes this function as part of its Images suite and looks for the exact image, as well as images that are similar in both subject matter and color aesthetics.

Another powerful tool is Tineye, which performs a similar function, but often returns fewer results that are closer to exact matches, which can make them easier to sort through.

“Often if you just do a reverse image search, you’ll find it right away,” says Farid. “You’ll see the original image that someone took from Getty Images and then added a UFO to the sky or something like that.”

Reverse image search can also be a useful tool if you suspect someone is stealing your social media photos and impersonating you. Upload your own photos to the tools and you can see where they appear on the web.
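If you need to check many images, the first step can even be scripted. The sketch below builds a reverse-image-search link for an image that is already hosted somewhere; note that `searchbyimage` is a long-standing but undocumented Google URL pattern, so treat the endpoint as an assumption rather than a stable API:

```python
from urllib.parse import urlencode

def reverse_search_url(image_url):
    # Assumed endpoint: Google has historically accepted this URL form,
    # but it is undocumented and may change without notice.
    return "https://www.google.com/searchbyimage?" + urlencode(
        {"image_url": image_url})

# Open the resulting link in a browser to see exact and near matches.
link = reverse_search_url("https://example.com/suspect-ufo.jpg")
```

For local files, the upload forms on Google Images or Tineye remain the practical route.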

Look for weirdness

Fight the urge to zoom in too far to examine an image. This unedited image shows weirdness and artifacts when you’re up this close. You don’t have the CSI “enhance” tool. Stan Horaczek

The first step in analyzing an image involves a logical analysis, an area in which humans typically perform much better than computers, at least for now. “Computers are very good at measuring fine-grained details like compression artifacts and inconsistencies in geometry,” says Farid. “But if someone created a picture of a boat sailing down the middle of the road, a computer might not see anything wrong with that.”

Look at an image closely and examine objects that may have been inserted, or look for evidence that other objects may have been removed. Farid warns against zooming in too far, however, because that can introduce its own obstacles. “Sometimes if you zoom into an image up to 500%, it’s very easy to look at something that’s perfectly valid, like artifacts from lens distortions or noise, and start attributing that to manipulation,” says Farid. He recommends zooming to 200% or 300% maximum to avoid false positives.

This is also the time to look for errors in scale and perspective, which are some of the trickiest things to fix in a fake. Does one person in a group photo have an abnormally large head? Does an object look like it’s sitting at an odd angle? These are warning signs that warrant an even closer look.

Check the EXIF data

You can learn a lot about a photo by checking out the metadata associated with it. Stan Horaczek

When a digital camera captures an image, it appends a whole array of information called EXIF data to the image file. This data includes all the critical camera settings, as well as other info like GPS data if it’s available (which is typically the case with smartphone photos, unless the person has intentionally turned location settings off).

If you have the location of the photo, you can plug it into Google Maps and use Street View to get a general idea of what the location might actually look like. The Street View scene won’t necessarily be 100 percent accurate and up-to-date, but it can be a good starting point.

This analysis shows the metadata attached to the JPEG file. Stan Horaczek

You can also sometimes find the original pixel dimensions of the image. This may not sound very useful, but you can easily look up the typical image dimensions of a photo from a particular camera and then compare them to the file you’re currently viewing. If the final version is smaller, that indicates that the photo may have been cropped to exclude information.

Also in the EXIF data is a software tag. “If an image is opened up in Photoshop and then saved, the metadata will then say ‘Photoshop’ and then whatever version they used,” says Farid. He warns, however, that this tag doesn’t necessarily indicate that a photo is trying to trick you. Many photographs go through Photoshop or some other editing program for simple adjustments like color correction, or even just resizing.
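You don’t need a viewer application to do a first-pass check for that software tag. The stdlib-only sketch below walks a JPEG’s marker segments and looks for well-known editor names in the raw APP1 (EXIF) bytes; it’s a crude triage helper, not a real TIFF/IFD parser, and the editor list is my own guess at useful strings:

```python
import struct

# Editor names worth flagging; extend as needed. Finding one only proves
# the file passed through an editor, not that it was doctored.
EDITOR_STRINGS = [b"Adobe Photoshop", b"Lightroom", b"GIMP", b"Picasa"]

def find_software_tag(jpeg_bytes):
    i = 2  # skip the SOI marker (0xFF 0xD8)
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xE1:  # APP1 segment, where EXIF metadata lives
            segment = jpeg_bytes[i + 4:i + 2 + length]
            for name in EDITOR_STRINGS:
                if name in segment:
                    return name.decode()
        i += 2 + length
    return None
```

For serious work, a real EXIF reader (exiftool, or an image library) gives you the full tag set rather than a string match.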

Examine the shadows

The image has been edited to flip the man’s face, which creates a clear contradiction in the direction of the shadows. It was part of a study to determine how well people can recognize faked photos. Cognitive Research: Principles and Implications

We know that the shadow cast by an object will appear opposite the light that caused it. Using that information, investigators can actually map lines between shadows, objects, and the corresponding light sources to see if the image is physically possible.

“Out in the physical 3D world, I have a linear constraint on a shadow, an object, and a light source,” says Farid. “That means I can find all the objects that are casting shadows—as long as I can very clearly attribute a point on a shadow to a point on an object in the image.”
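In two dimensions that constraint is easy to state: every line from a shadow point through its object point must pass through a single light position. The toy helper below illustrates the idea; the function names and tolerance are illustrative, not taken from Farid’s actual software:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def intersect(p1, d1, p2, d2):
    # Intersection of two lines given as point + direction; None if parallel.
    denom = cross(d1, d2)
    if abs(denom) < 1e-9:
        return None
    t = cross(sub(p2, p1), d2) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def shadows_consistent(constraints, tol=1e-6):
    # constraints: (shadow_point, object_point) pairs in image coordinates.
    # Estimate the light position from the first two shadow->object lines,
    # then require every remaining line to pass through it.
    (s1, o1), (s2, o2) = constraints[:2]
    light = intersect(s1, sub(o1, s1), s2, sub(o2, s2))
    if light is None:
        return False
    return all(abs(cross(sub(o, s), sub(light, s))) <= tol
               for s, o in constraints[2:])
```

A composited object whose shadow implies a different light position fails the collinearity test even when the fake looks fine to the eye.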

One of the original examples in the study about people’s ability to recognize edited photos showed a man whose face had been flipped so the light source was landing on the same side as the shadow. It can be easy to identify once you’re looking for it.

Mess with it in Photoshop

The comparison above shows two versions of the same image. The one on the right has been subjected to a levels adjustment that clearly reveals brush strokes over the front license plate.

If you have access to Photoshop yourself, there are a few adjustments you can make to try and draw out artifacts that you might miss with your naked eye.

One tool Farid suggests using is Levels. You can access this by pressing Command + L (Mac) or Control + L (PC). “If you bring the white point all the way down really close to the black point, what’s going to happen is that the narrow range of black will expand out quite a bit,” says Farid. “If somebody has taken the eraser tool and erased something in a dark area, you can see the traces of the tool.” The same effect happens if you drag the black point all the way up to draw more detail out of the image highlights.
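Numerically, a levels adjustment just remaps pixel values so that a narrow input range fills the full 0–255 output range, which is why shadow detail (and any eraser strokes hiding in it) balloons into view. A minimal sketch over a flat list of grayscale values:

```python
def apply_levels(pixels, black_point, white_point):
    # Stretch [black_point, white_point] to [0, 255], clamping the rest.
    # Dragging white_point down near black_point expands shadow detail,
    # which is the trick Farid describes for exposing eraser strokes.
    span = max(white_point - black_point, 1)
    out = []
    for p in pixels:
        v = (p - black_point) * 255 // span
        out.append(min(255, max(0, v)))
    return out
```

Dragging the white point slider in Photoshop corresponds to lowering `white_point` toward `black_point` here; the inverse drag raises `black_point` to probe the highlights.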

You can try a few other Photoshop tricks to shed some light on alterations. Cranking up the contrast or the sharpness will help emphasize hard edges in the photo, which can sometimes occur when an object is pasted in. Farid also suggests inverting the colors on an image (control + I or command + I) to get a new perspective on the photo, which could jolt your brain into drawing out some irregularities.

Look for patterns

There are some patterns you can recognize with your eyeballs. A novice Photoshop user may well leave repeating patterns behind when trying to clone out an object. Zoom out and look at the image from afar to see if your eye can pick up on any patterns, then zoom in closer to see if there might be some repeating objects in the scene.
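A sloppy clone job can even be caught programmatically: hash fixed-size tiles and flag any tile content that occurs at more than one position. The sketch below uses exact matching, so it only catches the most careless cloning; real copy-move detectors compare noise-tolerant features such as DCT coefficients:

```python
from collections import defaultdict

def find_repeated_blocks(image, block=4):
    # Naive copy-move check on a 2-D list of grayscale values: group
    # block x block tiles by content and report any content that appears
    # at more than one position.
    seen = defaultdict(list)
    h, w = len(image), len(image[0])
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                         for dy in range(block))
            seen[tile].append((x, y))
    return [locs for locs in seen.values() if len(locs) > 1]
```

Large flat regions (sky, walls) will trigger false positives with exact matching, so treat any hit as a pointer for closer inspection, not proof.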

Researchers also often look for patterns in artifacts left from JPEG image compression. JPEG is a “lossy” format, which means it jettisons some information from the original file to save space and make it readable by a wider array of machines. This causes artifacts, or changes in the data introduced over time—especially when you save it more than once. “Imagine you go out and you buy your brand new iPhone and even the packaging is beautiful. Everything fits just right down to the tape,” says Farid. “Try putting that back together and see what happens. It never works. The same thing is true of a digital file. When you unpack it in Photoshop, and then recompress it, you can’t get it perfectly right. It leaves artifacts that we can recognize.”
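The core of that recompression argument is easy to demonstrate with numbers. JPEG quantizes DCT coefficients to multiples of a step size, and quantizing already-quantized values lands somewhere different from a single compression, which is exactly the kind of statistical trace detectors look for. A toy sketch (real detectors work on 8x8 DCT blocks, not flat lists):

```python
def quantize(coeffs, step):
    # JPEG-style lossy step: snap each coefficient to a multiple of `step`.
    return [step * round(c / step) for c in coeffs]

coeffs = list(range(20))
single = quantize(coeffs, 5)                 # saved once
double = quantize(quantize(coeffs, 3), 5)    # saved at q=3, then re-saved at q=5
```

`single` and `double` disagree for some inputs, and double quantization leaves periodic gaps in the coefficient histogram, which is the "unpacked and recompressed" fingerprint Farid describes.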

Be wary of online tools

A popular image validation tool says my photo of corn has been edited because the EXIF has been stripped and it was exported from Adobe Lightroom. This is, in fact, what the corn looked like. Stan Horaczek

There are places online where you can upload an image to check for warning signs of editing, but results can be very tricky to interpret. For instance, I uploaded this picture of corn to a popular site and it was flagged because it was “not an original camera image.” I exported a JPEG from a DSLR raw file with some color corrections myself, so I know it wasn’t faked, but it’s still flagged. It didn’t claim the photo was doctored, but it also casts doubt where there shouldn’t be any.

There are some websites that can read the software tags, like this one that can tell you exactly what actions were taken in Lightroom when editing a photo. That’s more useful, but you still need an understanding of the software itself to make an accurate interpretation.

There is software out there that can identify these more complex manipulations, but it’s typically only available commercially, for security and law enforcement operations.

“Making that stuff public is tricky because the more I make the information public, the easier it is to circumvent,” says Farid. “We release the details in scientific publications, but to really go back and implement all that technique would be really hard for somebody. That’s the compromise we have right now.”

Don’t fall for false positives

The final step is realizing that sometimes things look altered when they aren’t. “Photographs just look weird sometimes,” says Farid.


You’re Probably Terrible At Spotting Faked Photos

A picture is worth a thousand words. You have to see it to believe it. Pics or it didn’t happen. The trust we put into visual cues is all but encoded into our language. But what happens when the visual information itself is a lie? How effective are we at teasing out fact from optical fiction? Not very, according to a recent study in the journal Cognitive Research: Principles and Implications.

The study found that people could detect a false image only 60 percent of the time. And even when they knew an image was false, study subjects could only identify what was wrong with it 45 percent of the time.

The results matter because, as the study’s authors note, we live in a world where images are routinely altered. A generation ago, convincingly manipulating photos was difficult and labor intensive—the domain of experts. These days, with the rise of digital photography and cheap editing software, anyone with a bit of time and access to a computer can probably make something reasonably convincing. This means way more fun visual gags on Twitter, yes, but it can also shift our perception of reality—leading us to believe things that aren’t true.

Copious air brushing in the fashion industry, for example, has been well documented in its ability to alter our perception of what the typical human body looks like—sweat, stretch marks, pores and all. More recently, false images have become political memes, spreading disinformation (malevolently or otherwise).

A widely-shared, altered image of President Trump’s sons, Donald Jr. and Eric. The photo on the left is the real photograph. The one on the right has been altered to be less flattering. unknown/C. Allergri/Getty Images

This digitally altered image purports to be President Obama’s Columbia student ID, used to give credence to the unsubstantiated belief that the former president was not born in the United States. The card fails the sniff test: Obama attended Columbia in 1981, and IDs with barcodes weren’t issued at the university until over a decade later. Unknown

The study, led by University of Warwick psychology researcher Sophie Nightingale, relied on an online test. The researchers started with 10 real (in other words, not manipulated) pictures from Google Images. Six of them were subjected to five different forms of digital manipulation each (some physically plausible, some not) to create an additional 30. These manipulations included air brushing, the introduction or subtraction of people or objects, changes in lighting, and changes in landscape geography. Participants, 707 in all, were each shown 10 random images (always including all five manipulation types and five original images, but never a repeat of the same type of manipulation or base image) and asked to determine their authenticity. If you’re curious, you can take the test yourself here.

Below is one of the altered images. Can you spot the difference from the original? We’ll explain at the end of the post.

Participants were best at figuring out that a photo had been manipulated if something about the resulting image was physically implausible (geometric inconsistencies, shadow inconsistencies, or something implausible added to the picture, for example). But even then, subjects weren’t necessarily great at specifying what was wrong. It was as if those sorts of pictures triggered some spidey sense, but viewers still had a hard time figuring out what was making it tingle.

The study authors aren’t entirely sure why humans seem to be so shoddy at sussing out fact from fiction. In the paper, they speculate that perhaps we have the visual shortcuts that make our brains so speedy to blame: most of us understand how a shadow should fall, for example, but our brains aren’t designed to latch onto the position of a shadow when we look at an image. We gloss over a lot of what we see so that our brains can more quickly process the information that seems most important. In the conclusion of the paper, the study authors don’t sound entirely optimistic about the prospect of training individuals to be more discerning, but they point out that making a more manual effort of taking the image in might help.

“Future research might also investigate potential ways to improve people’s ability to spot manipulated photos. However, our findings suggest that this is not going to be a straightforward task,” they write. “We did not find any strong evidence to suggest there are individual factors that improve people’s ability to detect or locate manipulations. That said, our findings do highlight various possibilities that warrant further consideration, such as training people to make better use of the physical laws of the world, varying how long people have to judge the veracity of a photo, and encouraging a more careful and considered approach to detecting manipulations. What our findings have shown is that a more careful search of a scene, at the very least, may encourage people to be skeptical about the veracity of photos.”

In other words, don’t just look at a picture expecting it to be real. Look for things that might suggest it’s not. Of course, that has its potential downsides. Going into every interaction with a digital photo under the presumption that it’s fake-until-proven-real makes it easier to discount evidence that doesn’t support your personal beliefs.

“Increased skepticism is not perfect,” the study adds, “because it comes with an associated cost: a loss of faith in authentic photos.”

The whole prospect becomes even more chilling when you realize that the same digital manipulations can tweak video, too. Many of these alterations reveal themselves with a little digging, since the data contained in a digital photo usually leaves clues as to whether or not the file has been modified. But most of us don’t have hours to spend poring over viral images to figure out if they’re real.

“Images have a powerful influence on our memories,” study co-author Derrick Watson said in a statement. “If people can’t differentiate between real and fake details in photos, manipulations could frequently alter what we believe and remember.”

So take heed: Increasingly, the fact that you see it doesn’t mean you should believe it.

To see the difference, look at the tree line. This photo was provided courtesy of Sophie Nightingale, Cognitive Research, 2023

Can Digital Photos Be Trusted?

Around the same time, another image popped up on the forums of a conservative Web site. Now the sign read “Lcpl Boudreaux saved my dad, then he rescued my sister,” and a debate raged. Other versions of the sign appeared; one was completely blank, apparently to show how easily a photo can be doctored, and another said “My dad blew himself up in a suicide bombing and all I got was this lousy sign.” By this point, Boudreaux, 25, was back in his hometown of Houma, Louisiana, after his Iraq tour, and he found out about the tempest only when a fledgling Marine brought a printout of the “killed my dad” picture to the local recruiters’ office where Boudreaux was serving. Soon after, he learned he was being investigated by the Pentagon. He feared court-martial. It would be months before he would learn his fate.

Falling victim to a digital prank and having it propagate over the Internet may seem about as likely as getting struck by lightning, but in the digital age, anyone can use inexpensive software to touch up photos, and their handiwork is becoming increasingly difficult to detect. Most of these fakes tend to be harmless: 90-pound housecats, sharks attacking helicopters, that sort of thing. But hoaxes, when convincing, can do harm. During the 2004 presidential election campaign, a potentially damning image proliferated on the Internet of a young John Kerry sharing a speaker’s platform with Jane Fonda during her “Hanoi Jane” period. The photo was eventually revealed to be a deft composite of two images, but who knows how many minds had turned against Kerry by then. Meanwhile, politicians have begun to engage in photo tampering for their own ends: This July it emerged that a New York City mayoral candidate, C. Virginia Fields, had added two Asian faces to a promotional photograph to make a group of her supporters seem more diverse.

“Everyone is buying low-cost, high-quality digital cameras, everyone has a Web site, everyone has e-mail, Photoshop is easier to use; 2004 was the first year sales of digital cameras outpaced traditional film cameras,” says Hany Farid, a Dartmouth College computer scientist and a leading researcher in the nascent realm of digital forensics. “Consequently, there are more and more cases of high-profile digital tampering. Seeing is no longer believing. Actually, what you see is largely irrelevant.”

That’s a problem when you consider that driver’s licenses, security cameras, employee IDs and other digital images are a linchpin of communication and a foundation of proof. The fact that they can be easily altered is a big deal, but even more troubling, perhaps, is the fact that few people are aware of the problem and fewer still are addressing it.

It won’t be long, if it hasn’t happened already, before every image becomes potentially suspect. False images have the potential to linger in the public’s consciousness, even if they are ultimately discredited. And just as disturbingly, as fakes proliferate, real evidence, such as the photos of abuse at Abu Ghraib prison in Iraq, could be discounted as unreliable.

And then there’s the judicial system, in which altered photos could harm the innocent, free the guilty, or simply cause havoc. People arrested for possession of child pornography now sometimes claim that the images are not of real children but of computer-generated ones, and thus that no kids were harmed in the making of the pornography (reality check: authorities say CG child porn does not exist). In a recent civil case in Pennsylvania, plaintiff Mike Soncini tussled with his insurance company over a wrecked vehicle, claiming that the company had altered digital photos to imply that the car was damaged before the accident so as to avoid paying the full amount due. In Connecticut, a convicted murderer appealed to the state supreme court that computer-enhanced images of bite marks on the victim that were used to match his teeth were inadmissible (his appeal was rejected). And in a Massachusetts case, a police officer has been accused of stealing drugs and money from his department’s evidence room and stashing them at home. His wife, who has accused him of spousal abuse, photographed the evidence and then confronted the cop, who allegedly destroyed the stolen goods. Now the only evidence that exists is a set of digital pictures shot by someone who might have a motive for revenge. “This is an issue that’s waiting to explode,” says Richard Sherwin, a professor at New York Law School, “and it hasn’t gotten the visibility in the legal community that it deserves.”

But Farid and other experts are concerned that they’ll never win. The technologies that enable photo manipulation will grow as fast as the attempts to foil them, as will forgers’ skills. The only realistic goal, Farid believes, is to keep prevention and detection techniques sophisticated enough to stop all but the most determined and skillful. “We’re going to make it so the average schmo can’t do it,” he says.

Such programs abound. Five million copies of Adobe Photoshop have been licensed, iPhoto is bundled with all new Apple computers, and Picasa 2 is available free from Google. This software not only interprets the original data; it’s capable of altering it: removing unwanted background elements, zooming in on the desired part of an image, adjusting color, and more. And the capabilities are increasing. The latest version of Photoshop, CS2, includes a “vanishing point” tool, for example, that drastically simplifies the specialized art of correcting perspective when combining images, to make composites look more realistic. Nor are these programs difficult to master. Just as word-processing programs like Microsoft Word have made the production of professional-looking documents a cakewalk, photo-editing tools make us all accomplished photo manipulators fairly quickly. Who hasn’t removed red-eye from family pictures?

Before the digital age, photo-verification experts sought to examine the negative, the single source of all existing prints. Today’s equivalent of a negative is the RAW file. RAWs are output from a camera before any automatic adjustments have corrected hue and tone. They fix the image in its purest, unaltered state. But RAW files are unwieldy (they don’t look very good and are memory hogs), so only professional photographers tend to use them. Nor are they utterly trustworthy: Hackers have shown themselves capable of making a fake RAW file based on an existing photo, creating an apparent original.

But digital technology does provide clues that experts can exploit to identify the fakery. In most cameras, each cell registers just one color (red, green or blue), so the camera’s microprocessor has to estimate the proper color based on the colors of neighboring cells, filling in the blanks through a process called interpolation. Interpolation creates a predictable pattern, a correlation among data points that is potentially recognizable, not by the naked eye but by pattern-recognition software programs.

Farid has developed algorithms that are remarkably adept at recognizing the telltale signs of forgeries. His software scans patterns in a data file’s binary code, looking for the disruptions that indicate that an image has been altered. Farid, who has become the go-to guy in digital forensics, spends a great deal of time using Photoshop to create forgeries and composites and then studying their underlying data. What he’s found is that most manipulations leave a statistical trail.

Consider what happens when you double the size of an image in Photoshop. You start with a 100-by-100-pixel image and enlarge it to 200 by 200. Photoshop must create new pixels to make the image bigger; it does this through interpolation (this is the second interpolation, after the one done by the camera’s processor when the photo was originally shot). Photoshop will “look” at a white pixel and an adjoining black pixel and decide that the best option for the new pixel that’s being inserted between them is gray.
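That interpolation is exactly what leaves a measurable trail. In the 1-D sketch below, every inserted sample is the average of its neighbors, so a detector can test for that periodic correlation. Farid’s algorithms do a statistical version of this across a whole image; the toy check only shows the principle:

```python
def upscale_1d(row):
    # Double a 1-D signal by inserting the average of each neighbor pair,
    # loosely mimicking how an editor interpolates new pixels.
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) // 2])
    out.append(row[-1])
    return out

def looks_interpolated(row):
    # Flag signals where every odd-indexed sample is exactly the average
    # of its neighbors: the periodic correlation that resampling leaves.
    return all(row[i] == (row[i - 1] + row[i + 1]) // 2
               for i in range(1, len(row) - 1, 2))
```

Natural sensor data almost never satisfies that relation at a fixed period, so a strong periodic fit is evidence the region was resized after capture.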

Each type of alteration done in Photoshop or iPhoto creates a specific statistical relic in the file that will show up again and again. Resizing an image, as described above, creates one kind of data pattern. Cutting parts of one picture and placing them into another picture creates another. Rotating a photo leaves a unique footprint, as does “cloning” one part of a picture and reproducing it elsewhere in the image. And computer-generated images, which can look strikingly realistic, have their own statistical patterns that are entirely different from those of images created by a camera. None of these patterns is visible to the naked eye or even easily described, but after studying thousands of manipulated images, Farid and his students have made a Rosetta stone for their recognition, a single software package consisting of algorithms that search for seven types of photo alteration, each with its own data pattern.

If you employed just one of these algorithms, a fake would be relatively easy to miss, says digital-forensic scientist Jessica Fridrich of the State University of New York at Binghamton. But the combination is powerful. “It would be very difficult to have a forgery that gets through all those tests,” she says.

Farid’s algorithms have limits, though: they can’t provide reliable information about the compressed and lower-quality photos typically found on the Internet.

Given those rather large blind spots, some scientists are taking a completely different tack. Rather than try to discern after the fact whether a picture has been altered, they want to invisibly mark photos in the moment of their creation so that any subsequent tampering will be obvious.

Jessica Fridrich of SUNY Binghamton works on making digital watermarks. Watermarked data are patterns of zeros and ones that are created when an image is shot and embedded in its pixels, invisible unless you look for them with special software. Watermarks are the modern equivalent of dripping sealing wax on a letter-if an image is altered, the watermark will be “broken” digitally, and your software will tell you.
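The sealing-wax idea can be illustrated with least-significant bits. The toy functions below stamp a known bit pattern into each pixel’s LSB; any edit that touches a pixel flips its LSB away from the pattern and “breaks” the mark. Real fragile watermarks are far more sophisticated; this is only the concept in miniature:

```python
def embed_watermark(pixels, pattern):
    # Overwrite each pixel's least-significant bit with the pattern bit.
    return [(p & ~1) | bit for p, bit in zip(pixels, pattern)]

def watermark_intact(pixels, pattern):
    # Any edit that changed a pixel's LSB breaks the mark.
    return all((p & 1) == bit for p, bit in zip(pixels, pattern))
```

Because the LSB change is at most one gray level per pixel, the mark is invisible to the eye, yet even a one-pixel retouch in the marked region is detectable.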

Meanwhile, work is progressing at Fridrich’s lab to endow photos with an additional level of security. Fridrich, whose accomplishments include winning the 1982 Czechoslovakian Rubik’s Cube speed-solving championship, is developing a camera that not only watermarks a photograph but adds key identifying information about the photographer as well. Her team has modified a commercially available Canon camera, converting the infrared focusing sensor built into its viewfinder to a biometric sensor that captures an image of the photographer’s iris at the instant a photo is shot. This image is converted to digital data that is stored invisibly in the image file, along with the time and date and other watermark data.

Such a camera won’t prevent self-made controversies, such as National Geographic’s digitally relocating an Egyptian pyramid to fit better on its February 1982 cover, or Newsweek’s grafting Martha Stewart’s head onto a model’s body on its March 7, 2005, cover, but it would have caught, and thus averted, another journalism scandal: In 2003 photographer Brian Walski was fired from the Los Angeles Times for melding two photographs to create what he felt was a more powerful composition of a British soldier directing Iraqis to take cover. Still, many media outlets remain dismissive of verification technology, putting their faith in the integrity of trusted contributors and their own ability to sniff out fraud. “If we tried to verify every picture, we’d never get anything done,” says Stokes Young, managing editor at Corbis, which licenses stock photos. As damaging mistakes pile up, though, wire services and newspapers may change their attitude.

Lawyers are just beginning to grasp the technology and its ramifications, but the bench is especially ignorant. “Trial judges have not been adequately apprised of the risks and technology,” says New York Law School’s Sherwin. “I can recount one example where, in order to test an animation that was being offered in evidence, the judge asked the attorney to print it out. What we really have is a generation gap in the knowledge base. Courts are going to have to learn about these risks themselves and find ways to address them.”

One bright spot is that for now, at least, we only have to worry about still images. Fredericks says that to modify video convincingly remains an incredibly painstaking business. “When you’re dealing with videotape, you’re dealing with 30 frames per second, and a frame is two individual pictures. The forger would have to make 60 image corrections for each second. It’s an almost impossible task.” There’s no Photoshop for movies, and even video altered with high-end equipment, such as commercials employing reanimated dead actors, isn’t especially believable.

Digital-forensics experts say they’re in an evolutionary race not unlike the battle between spammers and anti-spammers: you can create all the filters you want, but determined spammers will figure out how to get through. Then it’s time to create new filters. Farid expects the same of forgers. With enough resources and determination, a forger will break a watermark, reverse-engineer a RAW file, and create a seamless fake that eludes the software. The trick, Farid says, is continuing to raise the bar high enough that most forgers are daunted.

The near future of detection technology is more of the same, only (knock wood) better: more-secure photographer-verification systems, more tightly calibrated algorithms, more-robust watermarks. The future, though, promises something more innovative: digital ballistics. Just as bullets can be traced to the gun that fired them, digital photos might reveal the camera that made them. No light sensor is flawless; all have tiny imperfections that can be read in the image data. Study those glitches enough, and you recognize patterns, patterns that can be detected with software.

Still, no matter what technologies are in place, it’s likely that top-quality fakes will always elude the system. Poor-quality ones, too. The big fish learn how to avoid the net; the smallest ones slip through it. Low-resolution fakes are more detectable by Farid’s latest algorithm, which analyzes the direction of light falling on the scene, but if a photo is compressed enough, forget about it. It becomes a mighty small fish.

Which brings us back to Joey Boudreaux, the Marine who found himself denounced by his local paper, the New Orleans Times-Picayune, as having embarrassed “himself, the Marine Corps and, unfortunately, his home state.” The Marines conducted two investigations last year, both of which were inconclusive. Even experts with the Naval Criminal Investigative Services couldn’t find evidence to support or refute claims of manipulation.

Boudreaux has taken the incident in stride. “My first reaction, I thought it was funny,” he said in a telephone interview. “I didn’t have a second reaction until they called and said, ‘You’re getting investigated.’” He insists that he never gave the Iraqi boy a sign with any words but “Welcome Marines,” but he has no way to prove it. Neither he nor anyone he knows still possesses a version of the image the way he says he created it, and no amount of Internet searching has turned it up. All that exists are the low-quality clones on the Web. Farid’s software can’t assess Boudreaux’s claim because the existing images are too compressed for his algorithms. And even Farid’s trained eye can’t tell if either of the two existing images (the “good” sign or the “bad” one) is real or if, as Boudreaux claims, both are fakes.

An unsatisfactory conclusion, but a fitting one. Today's authentication technology is such that even after scrutiny by software and expert eyes, all you may have on your side is your word. You'd better hope it's good enough.

Steve Casimiro is a writer and photographer in Monarch Beach, California.

How well did you spot the phonies?

- REAL: Plane landing at the St. Maarten airport, located about 40 feet from the beach.
- FAKE: China lands on the moon! Not really. But making it look authentic is easy for a forger.
- FAKE: From chúng tôi
- FAKE: A skyscraper-Jenga game merger, from chúng tôi
- FAKE: From chúng tôi
- FAKE: From chúng tôi which hosts Photoshop contests.
- REAL: An F/A-18's sonic boom. Experts are unsure what creates the cloud; it may be caused by water-droplet condensation.
- FAKE: A composite that hit the Internet as a purported National Geographic photo.
- REAL: Nine-foot, 646-pound catfish recently caught in Thailand.

IDPhotoStudio: Create Passport-Sized Photos From Your Digital Photos

Passport-sized pictures are usually scary and ugly. I am not sure about everyone else, but I look rather awful in my passport-sized photos. Thankfully, there is a way to combat the ugly passport shot. I recently came across software that converts crystal-clear digital pictures into valid passport-sized photos: IDPhotoStudio, a simple tool that can turn any of your digital photographs into a valid passport-sized photo. Even if you don't have a printer at home, you can create a passport-sized sheet from any good, attractive digital photo and take it to a studio to get the prints done.

Create Passport Sized Photos From Digital Photos

IDPhotoStudio is freeware that offers a simple solution for getting a good passport-sized photo at home. This lightweight application installs on your computer in just a few minutes and is simple enough for anyone to handle.

The tool has an absolutely simple and clean interface, with all options clearly visible in its main overview. With no specific steps or guidelines required, it is very easy to use. You just load any image stored on your computer, and you can have passport-sized photos within minutes.

You can also rotate your photo by 90 degrees to bring it into the right position. The tool resizes the photo to the required proportions automatically. However, it lacks options for cropping and editing the image, but it still works well.

The best part about IDPhotoStudio is that it provides the standard dimensions for many countries, so you can generate passport-sized photos compliant with each country's requirements. The countries are listed in alphabetical order, and you can easily select the one you want your photo to comply with.

Once you load your image and select the dimensions, you can set the number of copies you want. The program prints to an A4 sheet, and you can choose the number of copies accordingly.
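As a back-of-the-envelope illustration of what such a tool has to compute, here is a sketch of the sheet-layout arithmetic: given a photo size in millimetres, a print resolution, and an A4 sheet, how many copies fit and how large each print must be in pixels. The photo dimensions, margin, and DPI below are common defaults chosen for illustration, not IDPhotoStudio's actual presets.

```python
# Hypothetical passport-sheet layout math (not IDPhotoStudio's real presets).
A4_MM = (210, 297)
PHOTO_MM = (35, 45)   # a passport size common in many countries
DPI = 300
MARGIN_MM = 5

def mm_to_px(mm, dpi=DPI):
    # Convert millimetres to pixels at the given print resolution.
    return round(mm / 25.4 * dpi)

def sheet_layout(photo_mm=PHOTO_MM, sheet_mm=A4_MM, margin_mm=MARGIN_MM):
    # Photos per row/column on the sheet (with a margin between prints),
    # plus the pixel size each print should be resized to.
    cols = (sheet_mm[0] - margin_mm) // (photo_mm[0] + margin_mm)
    rows = (sheet_mm[1] - margin_mm) // (photo_mm[1] + margin_mm)
    px = (mm_to_px(photo_mm[0]), mm_to_px(photo_mm[1]))
    return int(cols), int(rows), px
```

With these assumed numbers, a 5 × 5 grid of 25 prints fits on one A4 sheet, each resized to 413 × 531 pixels.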

Pros of IDPhotostudio

You can preview the entire sheet of images before you actually print it.

You can save the entire sheet to your computer or transfer it to a pen drive.

You can choose to apply greyscale or sepia effect to the photo.

You can select the number of copies you want to get printed.

Features 27 different languages and various countries.

Can rotate the image by 90 degrees.

Cons of IDPhotoStudio

Overall the program does a great job, but the developer could add value with a few basic photo-editing options: a red-eye remover, cropping, effects, and brightness/contrast adjustment. It also lacks an Undo option; if you accidentally apply the sepia effect, you have to start all over again.

IDPhotoStudio free download

IDPhotoStudio is a nice and useful freeware, but its installer might land some unwanted software on your computer. To avoid unwanted adware, install the Lite version of the software.

In a nutshell, if you have a good-quality digital picture of yourself and want to print a set of passport-sized photos quickly, IDPhotoStudio can help you out. It is freeware that installs and uninstalls without issues. Go get it here.

Music Genres Classification Using Deep Learning Techniques

This article was published as a part of the Data Science Blogathon


In this blog, we will discuss the classification of music files based on genre. People generally carry their favorite songs on their smartphones, and those songs span various genres. With the help of deep learning techniques, we can provide a classified list of songs to the smartphone user. We will apply deep learning algorithms to create models that can classify audio files into various genres. After training the models, we will also analyze their performance.


We will use the GTZAN dataset, which contains 1000 music files. The dataset covers ten genres with a uniform distribution: blues, classical, country, disco, hiphop, jazz, reggae, rock, metal, and pop. Each music file is 30 seconds long.

Process Flow:

Figure 01 gives an overview of our methodology for the genre-classification task. We will discuss each phase in detail. We train three types of deep learning models to explore and gain insights from the data.

Fig. 01

First, we need to convert the audio signals into a format compatible with deep learning models. We use two representations:

1. Spectrogram generation:

A spectrogram is a visual representation of a signal's frequency spectrum as it varies with time. We use the librosa library to transform each audio file into a spectrogram. Figure 02 shows spectrogram images for each music genre.
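Under the hood, a spectrogram is just the magnitude of a short-time Fourier transform. The numpy sketch below shows the core computation; librosa's own `stft` adds centered padding and is usually followed by mel scaling and dB conversion, and the FFT and hop sizes here are arbitrary illustrative choices.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    # Magnitude of the short-time Fourier transform: slide a Hann window
    # along the signal, FFT each frame, and stack the magnitudes.
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(np.abs(np.fft.rfft(signal[start:start + n_fft] * window)))
    return np.array(frames).T  # shape: (frequency bins, time frames)
```

A pure 440 Hz tone sampled at 22,050 Hz lights up the frequency bin nearest 440 * n_fft / sr, which is how the genre-specific frequency content becomes "image" texture.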

Fig. 02

2. Wavelet generation:

The wavelet transform can be used to analyze the spectral and temporal properties of non-stationary signals like audio. We use the librosa library to generate wavelets for each audio file. Figure 03 shows wavelets for each music genre.

Fig. 03

3 & 4. Spectrogram and wavelet preprocessing

From Figures 02 and 03, it is clear that we can treat our data as image data. After generating the spectrograms and wavelets, we apply standard image preprocessing steps to generate training and testing data. Each image has size (256, 256, 3).

5. Basic CNN model training:

After preprocessing the data, we create our first deep learning model: a convolutional neural network with the required input and output units. The final architecture of our CNN model is shown in Figure 04. We use only spectrogram data for training and testing.

Fig. 04

We train our CNN model for 500 epochs with the Adam optimizer at a learning rate of 0.0001. We use categorical cross-entropy as the loss function. Figure 05 shows the training and validation losses and model performance in terms of accuracy.
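For reference, the categorical cross-entropy loss we minimize is just the negative log-likelihood of the one-hot genre labels; a minimal numpy version (Keras computes the same quantity internally):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean over samples of -sum_c y_true[c] * log(y_pred[c]); y_true holds
    # one-hot genre labels, y_pred the model's softmax probabilities.
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))
```

If the model assigns probability 0.8 to the correct genre, the per-sample loss is -log(0.8) ≈ 0.223; a perfect prediction would drive it to zero.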

Fig. 05

6. Transfer learning-based model training

We have only 60 samples of each genre for training, so transfer learning could be a useful way to improve the CNN's performance. We now use a pre-trained MobileNet model as the base of the CNN. A schematic architecture is shown in Figure 06.

Fig. 06

The transfer learning-based model is trained with the same settings as the previous model. Figure 07 shows the training and validation loss and model performance in terms of accuracy. Here, too, we use only spectrogram data for training and testing.

Fig. 07

7. Multimodal training

In this experiment, we pass both spectrogram and wavelet data into the CNN model for training, using the late-fusion technique for this multi-modal setup. Figure 08 represents the architecture of our multi-modal CNN model, and Figure 09 shows its loss and performance scores with respect to epochs.
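The late-fusion idea itself is simple: each modality gets its own convolutional branch, and only the resulting embeddings are combined before the final classifier. A numpy sketch of the fusion step, where the branch outputs and head weights are random stand-ins for the trained layers:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion_predict(spec_emb, wav_emb, W, b):
    # Late fusion: each modality's branch produces its own embedding;
    # the embeddings are concatenated and a single classifier head
    # (here, one linear layer + softmax) yields the genre probabilities.
    fused = np.concatenate([spec_emb, wav_emb], axis=-1)
    return softmax(fused @ W + b)
```

The alternative, early fusion, would merge the raw spectrogram and wavelet images before any convolution; late fusion lets each branch specialize first.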

Fig. 08
Fig. 09


Figure 10 shows a comparative analysis of the loss and performance of all three models. Analyzing the training behavior, we find that the basic CNN model has large fluctuations in its loss values and performance scores on both training and testing data. The multimodal model shows the least variance in performance. The transfer learning model's performance increases gradually compared to the multimodal and basic CNN models, but its validation loss shoots up suddenly after 30 epochs. In contrast, validation loss decreases continuously for the other two models.

Fig. 10

Testing the models

After training our models, we test each one on the 40% test split. We calculate precision, recall, and F-score for each music genre (class). Because our dataset is balanced, the macro average and the weighted average of precision, recall, and F-score coincide.
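These per-class scores come straight from the confusion matrix; a numpy sketch, with a small balanced two-class example showing why the macro and weighted averages coincide when every class has the same support:

```python
import numpy as np

def per_class_prf(cm):
    # cm[i, j] counts samples of true class i predicted as class j.
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                  # true positives
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1
```

The weighted average weights each class's F1 by its support (row sum of the confusion matrix), so with equal supports it reduces to the plain macro mean.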

1. Basic CNN model

Figure 11 presents the results of our CNN model on the test data. The CNN model classified the "classical" genre with the highest F1-score and performed worst on the "rock" and "reggae" genres. Figure 12 shows the confusion matrix of the CNN model on the test data.

Fig. 11

Fig. 12

2. Transfer learning based model

We used transfer learning to improve genre-classification performance. Figure 13 presents the results of the transfer learning-based model on the test data. The F1-scores for the "hiphop", "jazz", and "pop" genres increased due to transfer learning. Looking at the overall results, though, we achieved only a minor improvement. Figure 14 shows the confusion matrix for the transfer learning model on the test data.

Fig. 13

Fig. 14

3. Multimodal-based model: We used both spectrogram and wavelet data to train and test the multimodal model, and found a surprising result: instead of improving, performance dropped drastically, to an F1-score of only 38%. Figure 16 shows the confusion matrix of the multimodal-based model on the test data.

Fig. 15
Fig. 16


In this post, we performed music genre classification using deep learning techniques. The transfer learning-based model performed best among the three. We used the Keras framework, implemented on the Google Colaboratory platform. The source code is available at the GitHub link below, along with the spectrogram and wavelet data on Google Drive, so you don't need to generate spectrograms and wavelets from the audio files yourself.

GitHub link. Spectrogram and wavelet data link.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


How To Spot Misleading Statistics In The News

“Handy bit of research finds sexuality can be determined by the lengths of people’s fingers” was one recent headline based on a peer-reviewed study by well-respected researchers at the University of Essex published in the Archives of Sexual Behavior, the leading scholarly publication in the area of human sexuality.

And, to my stats-savvy eye, it is a bunch of hogwash.

Just when it seems that news consumers may be wising up—remembering to ask whether the science is peer-reviewed, whether the sample size is big enough, or who funded the work—along comes a sucker punch of a story. In this instance, the fast one comes in the form of confidence intervals, a statistical topic that no layperson should ever have to wade through to understand a news article.

But, unfortunately for any number-haters out there, if you don’t want to be fooled by breathless, overhyped, or otherwise worthless research, we have to talk about a few statistical principles that could still trip you up, even when all the “legitimate research” boxes are ticked.

What’s my real risk?

One of the most depressing headlines I ever read was “Eight-year study finds heavy French fry eaters have ‘double’ the chance of death.” “Ugh,” I said out loud, sipping my glass of red wine with a big ole basket of perfectly golden fries in front of me. Really?

Well, yes, it’s true according to a peer-reviewed study published in the American Journal of Clinical Nutrition. Eating french fries does double your risk of death. But, how many french fries, and moreover, what was my original risk of death?

The study says that if you eat fried potatoes three times per week or more, you double your risk of death. So let's take an average person in this study: a 60-year-old man. What is his risk of death, regardless of how many french fries he eats? One percent. That means that if you line up 100 60-year-old men, on average one of them will die in the next year simply because he is a 60-year-old man.

Now, if all 100 of those men eat fried potatoes at least three times per week for their whole lives, yes, their risk of death doubles. But what is 1 percent doubled? Two percent. So instead of one of those 100 men dying over the course of the year, two of them will. And they get to eat fried potatoes three times a week or more for their entire lives—sounds like a risk I’m willing to take.

This is a statistical concept called relative risk. If the chance of getting some disease is 1 in a billion, even if you quadruple your risk of coming down with it, your risk is still only 4 in a billion. It ain’t gonna happen.
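The arithmetic above is worth making explicit; a tiny sketch of why a relative risk is meaningless without the baseline it multiplies:

```python
def extra_cases(baseline_risk, relative_risk, population):
    # Additional cases implied by an elevated relative risk: the headline
    # "doubles your risk" only matters in proportion to the baseline.
    return (relative_risk - 1) * baseline_risk * population
```

Doubling a 1% baseline among 100 men means one extra death; quadrupling a 1-in-a-billion risk across a billion people means three extra cases.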

So next time you see an increase or decrease in risk, the first question you should ask is “an increase or decrease in risk from what original risk.”

Plus, like me, could those men have been enjoying a glass of wine or pint of beer with their fried potatoes? Could something else have actually been the culprit?

Does eating cheese before bed equal dying by tangled bedsheets?

Finland’s infant mortality rate decreased at a rapid rate with the introduction of these baby boxes, and the country now has one of the lowest infant mortality rates in the world. So it makes sense to suppose that these baby boxes caused the infant mortality rate to go down.

But guess what also changed? Prenatal care. In order to qualify for the baby box, a woman was required to visit health clinics starting during the first four months of her pregnancy.

In 1944, 31 percent of Finnish mothers received prenatal education. In 1945, it had jumped to 86 percent. The baby box was not responsible for the change in infant mortality rates; rather, it was education and early health checks.

This is a classic case of correlation not being the same as causation. The introduction of baby boxes and the decrease in infant mortality rates are related, but one didn’t cause the other.

However, that little fact hasn’t stopped baby box companies from popping up left, right, and center, selling things like the “Baby Box Bundle: Finland Original” for a mere $449.99. And U.S. states use tax dollars to hand a version out to new mothers.

So the next time you see a link or association—like how eating cheese is linked to dying by becoming entangled in your bedsheets—you should ask “What else could be causing that to happen?”

When the margin of error is bigger than the effect

Recent numbers from the Bureau of Labor Statistics show national unemployment dropping from 3.9 percent in August to 3.7 percent in September. When compiling these figures, the bureau obviously doesn’t go around asking every person whether they have a job or not. It asks a small sample of the population and then generalizes the unemployment rate in that group to the entire United States.

This means the official level of unemployment at any given time is an estimate—a good guess, but still a guess. This “plus or minus error” is defined by something statisticians call a confidence interval.

What the data actually says is that it appears the number of unemployed people nationwide decreased by 270,000—but with a margin of error, as defined by the confidence interval, of plus or minus 263,000. It’s easier to announce a single number like 270,000. But sampling always comes with a margin of error and it’s more accurate to think of that single estimate as a range. In this case, statisticians believe the real number of unemployed people went down by somewhere between just 7,000 on the low end and 533,000 on the high end.

This is the same issue that happened with the finger length defining sexuality study—the plus or minus error associated with these estimates can simply negate any certainty in the results.

The most obvious example of confidence intervals making our lives confusing is in polling. Pollsters take a sample of the population, ask who that sample is going to vote for, and then infer from that what the entire population is going to do on Election Day. When the races are close, the plus or minus error associated with their polls of the sample negate any real knowledge of who is going to win, making the races “too close to call.”
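For polls, the textbook 95% margin of error for a proportion makes the "too close to call" math concrete. This is the simplified normal approximation; real pollsters adjust further for weighting and design effects.

```python
import math

def poll_margin_of_error(p, n, z=1.96):
    # 95% margin of error for a sample proportion p from n respondents,
    # via the normal approximation: z * sqrt(p * (1 - p) / n).
    return z * math.sqrt(p * (1 - p) / n)
```

A 1,000-person poll of an evenly split race is uncertain by about plus or minus 3.1 percentage points, so a 51-49 lead is well within the noise.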

So the next time you see a number being stated about an entire population where it would have been impossible to ask every single person or test every single subject, you should ask about the plus or minus error.

Liberty Vittert is a Visiting Assistant Professor in Statistics at Washington University. This article was originally featured on The Conversation.
