Why We Get Scammed and What to Do About It

This article was originally published in The Wall Street Journal.

Would you invest with someone who guarantees a 50% annual return with no risk of loss? Would you reply to an email offering you a share of a lost treasure in a faraway country, in exchange for sending just a little bit of money to kickstart the recovery effort? Would you buy a Picasso or a Dali from a late-night infomercial? We didn’t think so. But many people do fall for scams like these. Why? Are the victims uneducated, unintelligent or constitutionally naive?

Unfortunately for all of us, the answer is no. Even people at the top of their professions can be taken in. Several former cabinet secretaries were convinced to join the board of Theranos, whose founder, Elizabeth Holmes, was later convicted of criminal fraud against investors. Experienced scientific journal editors had to retract scores of fraudulent papers by the Dutch psychologist Diederik Stapel. Wealthy art collectors bought phony Rothkos and Pollocks from Manhattan’s tony Knoedler Gallery.

Frauds are ever evolving and can be complex and sophisticated, but even simple ploys can take us in. Everyone knows not to click on dubious links in emails, but when we’re busy and the fake messages resemble the legitimate ones we get every day, it’s easy to be fooled. A test by the Dutch Ministry of Economic Affairs found that 22% of people who received a suspicious work email about password recovery clicked through and typed in their password. Phishing for passwords is the first step in “business email compromise,” a fast-growing fraud in which scammers use their access to corporate accounts to steal money and valuable information. According to the FBI, such scams led to losses of more than $43 billion between 2016 and 2021.

Con artists succeed by hijacking what are usually effective and efficient mental habits, turning our shortcuts into wrong turns. Some psychologists and philosophers argue that our brains have a “truth bias”: We automatically tag incoming information as true and must exert extra effort to remain uncertain or to relabel it as false. This bias is not a bug, but a feature. If we were unremitting skeptics, questioning everything we heard or saw, major decisions would be impossible, and everyday life would be a nightmare. We couldn’t get out of the supermarket without checking every price on the receipt and verifying the accuracy of the ingredients listed on every package.

Truth bias turns seeing into believing, so it is a prerequisite for any act of deception. In 2018, the startup Nikola released a video called “Nikola One in Motion” that appeared to show its prototype electric semi truck tooling down a highway under its own power. The company went public in 2020 via a reverse merger, and it was briefly worth more than Ford Motor. But later that year Nikola admitted that the truck lacked a fuel cell and motors: The video was created by rolling it down a shallow grade and tilting the camera to make the terrain seem flat. In 2022, Nikola founder Trevor Milton was convicted on federal fraud charges.

As psychologist Daniel Kahneman has written, people tend to assume that “what you see is all there is.” Just as a magician might draw your attention to their right hand while they pocket a coin with their left, Milton used misdirection to mislead investors. Similarly, when Enron’s direct-energy sales operation wasn’t up and running in time for a visit from Wall Street analysts in 1998, the firm created a Potemkin sales room and borrowed employees from other divisions to play the necessary roles.

The fact that we can fail to see what we don’t focus on has been demonstrated by hundreds of scientific studies. In one experiment of our own, participants were asked to watch a video of a group of people passing basketballs and to count the number of passes by players in white shirts. When a person in a gorilla suit sauntered through the scene, many viewers completely failed to notice. This tendency to focus narrowly usually benefits us by constraining the amount of information we need to consider before making a decision. But it also means that people who want to deceive us can withhold or distract us from the most critical information and count on us not to notice.

To overcome this tendency, we need to curb our enthusiasm and ask ourselves “what’s missing?” When a company shows just one demo of its technology doing wonders, we should wonder whether it works every time and under a variety of conditions. If a consulting firm is seeking your business, interrupt their litany of success stories to ask about their unsuccessful engagements, and dig deeper into whether they are giving you the full story about their successes.

And if you’re scrolling through your social media feeds, keep in mind that masters of disinformation don’t need to spread actual falsehoods. They can create a misleading narrative by sending out a stream of true but unrepresentative stories, as long as they can count on their audience not bothering to look for counterexamples or think about the broader context. As a recent study in the journal Judgment and Decision Making showed, if someone tells you college is irrelevant to business success and mentions the noted dropouts Bill Gates, Mark Zuckerberg and Steve Jobs, you might believe them—if you don’t think about the unheralded fact that the vast majority of CEOs and billionaires have college or even graduate degrees.

Our tendency to focus on the information we already have can be amplified further by our preference for consistency. In almost every field, from investing to medicine to science, we don’t intuitively appreciate how much variability should exist in numbers that describe human experiences, decisions and actions. We’re seduced by the simplicity of smoothness. It’s easier to interpret and remember lines that go straight up than complicated, jagged, up-and-down swings. But “noisy” patterns are more realistic, which means we should expect and prefer them.

Bernard Madoff brilliantly exploited investors’ taste for consistency by devising a new type of Ponzi scheme, one that didn’t guarantee outlandish short-term returns but instead provided its investors smooth growth year after year, with nary a down month. In a study conducted by Harvard Business School researchers years after Madoff’s fraud was revealed, participants were asked which of a set of hypothetical mutual funds they would invest in; they preferred a fund that reported Madoff’s own impossibly consistent performance over funds with similar cumulative returns but much greater month-to-month volatility.

Similarly, in the late 1990s and early 2000s, one of the most prolific perpetrators of scientific fraud, the superconductivity researcher Jan Hendrik Schön of Bell Labs, managed to publish study after study in the very best journals with graphs that showed the exact same results. For a while at least, expert scientists mistakenly saw the repetition as a sign of a robust scientific discovery rather than of fakery.

Smoothness and simplicity appeal to us because perfect patterns sometimes do reflect insight. Someone who possessed a complete and accurate model of the world would be able to make consistently accurate predictions. But we should stop to ask which is more likely—that the consistent pattern before us came from mastery or from manipulation?

When Madoff explained his consistent annual returns of 8% to 12% by claiming he could read the mood of the market and time his trades accordingly, investors (and SEC investigators) should have wondered whether such god-like insight was possible rather than taking his word for it. If you’re playing online chess and your opponent always makes the best possible move or takes the same amount of time for every decision, you should consider the possibility that they’re using a computer to cheat. When a scientist produces perfect results time after time, we should be a bit skeptical. Our habit should be to treat the absence of noise as a warning to dig deeper.
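To make that statistical intuition concrete, here is a minimal sketch in Python; the monthly return figures are invented for illustration (not Madoff’s actual numbers), and the point is simply how a too-smooth track record stands out once you compute its volatility:

```python
import statistics

# Hypothetical monthly returns (%). The "smooth" series mimics the kind of
# impossibly steady performance a Madoff-style fund might report; the "noisy"
# series is what a genuine strategy with a similar average gain could look like.
smooth = [0.8, 0.9, 0.8, 1.0, 0.9, 0.8, 0.9, 1.0, 0.8, 0.9, 0.9, 0.8]
noisy = [4.1, -2.3, 0.7, 6.0, -3.8, 1.9, -0.5, 5.2, -1.6, 2.4, -2.9, 1.8]

for name, returns in [("smooth", smooth), ("noisy", noisy)]:
    mean = statistics.mean(returns)   # average monthly return
    vol = statistics.stdev(returns)   # month-to-month volatility
    print(f"{name}: mean {mean:+.2f}% per month, volatility {vol:.2f}%")
```

The smooth series never has a down month, and its volatility is a small fraction of its average return; for a real strategy, the month-to-month swings typically dwarf the average monthly gain. An average that exceeds its own volatility month after month is exactly the absence of noise that should prompt a closer look.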

Just as we are overly enticed by consistency, we tend to treat precise claims as more believable than vague ones. All else being equal, precision and concreteness are indeed superior to vagueness and abstraction; knowing there will be a thunderstorm at 2 p.m. is more useful than knowing it will rain sometime in the afternoon. Genuinely precise measurements, like the ones scientists make of fundamental physical constants, often require deep understanding and years of incremental improvements and technological advances.

But like our taste for consistency, our preference for precision can be exploited. An analysis of over 16,000 home sales in South Florida and Long Island found that houses listed with a price specified to the hundreds of dollars, like $367,500, wound up selling for more than houses listed with a rounder price, like $370,000—even when the rounder starting price was slightly higher. The more specific numbers carried no more meaning than the rounded ones, but buyers might have assumed that those prices were based on more objective information. The precise prices might have created stickier “anchors,” giving buyers the impression that there was less room for negotiation.

Some claims are too precise to be possible, yet we still don’t think to question them. In 2005, the psychologists Barbara Fredrickson and Marcial Losada announced their discovery of a “critical positivity ratio” for success in life. A model developed by Losada predicted that people who have fewer than 2.9013 positive experiences for each negative experience would flounder, but people exceeding that threshold would flourish. Verifying such a precise ratio would require perfectly classifying hundreds of thousands of emotional experiences, something currently beyond the capabilities of psychological science. Yet the claim survived expert peer review, and the paper was cited more than 1,000 times in the scientific literature before a close inspection in 2013 showed that the mathematics behind the positivity ratio was nonsensical.

We can escape the lure of precision by asking how such precise numbers could have been calculated and by imagining how attractive a claim, prediction or proposal would be if it were expressed as a more approximate value. If a business guru tells you that 13.5% of your customers are “early adopters” of new technologies, you should wonder how they determined that it wasn’t 12.2%, 14.9% or “about one in seven.” Had Fredrickson and Losada dropped their overly precise ratio and simply claimed instead that people who have more positive experiences are more likely to be happy, their paper likely would not have been as influential—but it would have been correct.

Most of the scams people fall for today are not really new; they are remixes and mashups of tricks that have worked for generations. As technology advances, we will likely see increasingly sophisticated versions that target the same habits of thought in more potent ways. One current scam involves a caller who tells the mark that their child has been hurt or is in trouble and asks urgently for money to help. The time pressure and fear rev up the victim’s truth bias and undermine their usual suspicion about requests for money. Imagine how much more effective this tactic will be once AI models can synthesize a child’s voice to make it seem like they are making the call.

Anticipating these dangers can help us take precautions before we’re deceived—for instance, by establishing a family passphrase to ensure you’re talking to the person you think you are. Advance planning minimizes your risk in the moment and can help you avoid the need for constant skepticism. But taking steps to avoid deception means we have to abandon the myth that only the gullible can be taken in. There are scams out there waiting for each of us, no matter how sophisticated we think we are. Rather than “it can’t happen to me,” your mantra should be “accept less, check more.”

Daniel Simons is a professor of psychology at the University of Illinois, and Christopher Chabris is a cognitive scientist who has taught at Union College and Harvard University. This essay is adapted from their new book, “Nobody’s Fool: Why We Get Taken In and What We Can Do About It,” which will be published on July 11 by Basic Books.
