If credit cards were a person, they’d be retired already.
You can generally trace the birth of the industry to the Fresno drop of 1958. Credit cards have greatly increased in importance to the global economy since, and become a pervasive bit of economic infrastructure, but they still owe a lot of how they function to decisions made in the 1960s and 1970s. Some of those decisions are less than optimal given how the world has changed in the interim.
In engineering, we’d describe this as a “legacy system.” Important? Certainly. Difficult to change? Very. Not quite what we would build today? Inescapably.
Many of the original architects have retired, few people truly understand how the system works, responsibility for it is distributed across many places with few unitary actors capable of committing to change, and too many systems at other organizations depend on the bugs staying consistent for anyone to rapidly iterate on it.
Let’s take a look at some concrete examples.
My usual disclaimer: I work at Stripe, which is of course heavily involved in modernizing the experience of taking credit cards, and do not necessarily speak for them.
Card testing: an attack done at scale
Every system begins life with some sort of planning about what its core motivating use cases, and supported variants on those, will be. The core motivating use case for credit cards, more than 50 years ago, rhymed with: “A business traveler, in a city far from home, wants to credibly promise a restaurant that he will pay for dinner, even if he has never been there before and will never be again, even if he has no banking locally, even if he and the server are very different people and may not otherwise feel an abundance of mutual trust automatically.”
The ecosystem poured billions of dollars of effort, both engineering and finance but also no small amount of marketing, into saying that this very human problem was solvable by small plastic rectangles. Ignore the human in front of you and focus on the plastic rectangle. If he has the right rectangle, you will almost certainly get paid for dinner, and so he should be able to eat. I have long thought that pulling this off was one of the great miracles of commerce. My appreciation for it has not dimmed since coming to work in the industry.
But this is not the only use case for credit cards today. Far from it. Most transactions, and most interesting transactions, are so-called “card not present”, which was once an interesting sideline in transactions happening over the phone or (postal!) mail. It is now dominated by the Internet. Credit cards did not successfully anticipate the Internet. And so their infrastructure needs adaptation to the opportunities and problems caused by the existence of the Internet, but suffers from the general difficulties of changing legacy systems.
Consider credit card fraud.
A necessary thing that fraudsters need to do, prior to using a card they have stolen (or purchased; there is an ecosystem of evil that I’ve covered before, and so will be glib here), is to check whether it is still valid. Stolen cards are turned off by banks relatively quickly as cardholders complain or as banks become aware of the fraud. Every time a casher (someone in a position to put their hands directly on the value extracted from the fraud) runs a card they take incremental risk; they don’t want to spend that risk on cards which will never be successfully exploitable.
So, prior to attempting to extract value, cards are tested. This usually means finding an online organization with relatively weak fraud controls and running a small transaction, designed to look routine and be unlikely to be noticed. Fraudsters conduct card testing by the thousands, tens of thousands, or hundreds of thousands of cards, trying to sift through a list (or even random numbers!) and find the cards which are still active.
The organizations most frequently targeted by card testing are charities. This is partially because most charities do not have strong anti-fraud protections; who would try to defraud a charity by giving it a donation?
It is also because charitable donations look plausible to banks, even if they come out of the clear blue sky. A cardholder was reading books about whales with their daughter and then decided to make their first donation to an environmentally-focused charity? Extremely plausible! It happens far, far more frequently than a long-established cardholder deciding to make their first purchase of e.g. software.
Software companies also get card tested. In the days before my current employer existed, I once spent weeks playing whackamole with a gang of criminals who were running $29.95 purchases of my software as a stepping stone to draining the bank accounts of people who didn’t know I existed. It was immensely frustrating, since there was little I could do as an entrepreneur. I eventually spent dozens of hours writing code to outsmart the code Evil, Inc. had written to abuse me. (Most businesses on the Internet cannot rely on having an underemployed Japanese salaryman with a computer security hobby also attached to their customer support inbox and accounting records, ready to spring into action and regex the heck out of predictably patterned hotmail addresses.)
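(For the curious, the sketch below gives the flavor of that work. The pattern here is invented for illustration; the real ones are lost to history and were, of course, specific to the addresses that particular gang generated.)

```python
import re

# Invented pattern for illustration only: suppose the gang minted throwaway
# addresses mechanically, four to six letters followed by four to six digits
# at a free mail provider. Derive your real pattern from the addresses you
# actually observe in fraudulent orders.
SUSPICIOUS_EMAIL = re.compile(r"^[a-z]{4,6}\d{4,6}@hotmail\.com$")

def looks_machine_generated(email: str) -> bool:
    """Flag, rather than hard-block, orders matching the observed pattern."""
    return SUSPICIOUS_EMAIL.match(email.strip().lower()) is not None
```

Flagging for manual review beats outright blocking, because some entirely legitimate customers have email addresses that merely look machine-generated.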
Anyhow, back to charities.
The donation is entirely incidental to the fraudster; they just want to see the Thanks for Donating screen which will confirm it went through successfully. In a few days or weeks, after the fraud is noticed, the donation will be clawed back from the charity, and the charity will likely be made to pay a penalty fee. The fraudster intends to be far away and already counting their profits by that point.
So let’s return to how our industry once conceived of the problem of trust: theft is not a novel feature of the human experience! If you gave people important plastic rectangles, thieves might e.g. mug them to take those plastic rectangles, or perhaps they would counterfeit those plastic rectangles, and either of those would be a dire threat to the new plastic rectangle-based economic system. And so the architects of that system designed countermeasures.
For example, credit card terminals for a long time would give back opaque error codes to cashiers like “04: Pick Up Card” and “07: Pick Up Card, Special Condition.” You as a business were supposed to train your cashiers on these: both of them meant that the credit card system wanted you to physically seize the card and return it to the bank, which had printed its address on the reverse side for this purpose. The first was the benign case, where the card had simply expired. The second was worded obliquely so that the terminal would not inform a watching fraudster that the bank was aware of the fraud in progress. This might give the cashier e.g. time to summon the authorities.
Notice how this design puts the onus on the bank for detecting fraud. This made sense in the original threat model: the bank has the customer relationship, the bank has the data, and the bank has the expensive team of professionals. The diner in a city far from home has none of these things. Clearly, the bank never needs to hear from the diner.
So is there a way for a charity to report “I think I am being victimized by card testing. Ten thousand people, none of whom I have previously had contact with, have run their cards on my website today. I have not recently done a marketing campaign to cause this. I have not sent out a solicitation to everyone who has ever donated to me. My site is not viral on Twitter due to our ongoing good work. This situation is extremely odd and I hope someone will look into it for me”? Reader, there is not. Why would our industry have built that? Can you even imagine running ten thousand cards in an hour? Your arm would fall off from all the swiping. And so this (sensible and entirely innocent!) failure of the imagination from decades ago echoes down to the present day.
Which doesn’t mean the problem is hopeless. Stripe has spent substantial effort recently working on card testing, because the pandemic saw an explosion of it. (Why? Probably partially due to a substitution effect of professionalized fraud operations no longer being able to easily do frauds with IRL components, and partially due to a general explosion of online commerce. More commerce both increases the count of fraud events if the base rate stays flat and also provides camouflage for bold and scaled opportunistic attacks on existing infrastructure.)
It is very important to be able to detect that a particular organization is under attack. Because these attacks are scripted, they often target a particular organization or set of organizations using a predictable software topology. Engineers working for Evil, Inc. are like engineers anywhere; they include a mix of people inclined to write quick hardcoded scripts and people inclined to write Fully General Frameworks For Exploitation Of The Global Economy. The first group ends up doing far more damage because the second group, predictably, doesn’t ship much usable software.
In principle one could wait for the charity to notice it was under attack and call... someone, who would hopefully eventually call you. In practice, it’s much more effective for experts (not experts working for the charity, but experts who see a charity fighting a fraud operation and know instantly where their sympathies lie) to do this work for them.
Stripe uses a combination of machine learning models, heuristics, and human fraud analysts to identify organizations that are being used for card testing. Then, we can manually or automatically deploy countermeasures in close to real-time.
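To make “heuristics” slightly more concrete: one obvious signal is a sudden spike in distinct card numbers and declines at a single organization, relative to its own history. Here is a minimal sketch with invented names and thresholds (this is emphatically not Stripe’s production logic):

```python
from dataclasses import dataclass, field

@dataclass
class MerchantHour:
    """One hour of payment activity at a single organization (names invented)."""
    attempts: int = 0
    declines: int = 0
    distinct_cards: set = field(default_factory=set)

def looks_like_card_testing(hour: MerchantHour,
                            baseline_hourly_attempts: float) -> bool:
    if hour.attempts < 100:       # too little data to judge either way
        return False
    decline_rate = hour.declines / hour.attempts
    card_diversity = len(hour.distinct_cards) / hour.attempts
    volume_spike = hour.attempts > 10 * max(baseline_hourly_attempts, 1.0)
    # Card testers burn through many distinct cards, most of which decline,
    # at volumes far above the organization's own history.
    return volume_spike and decline_rate > 0.5 and card_diversity > 0.9
```

Production systems layer many such signals, plus models trained across the network, precisely because any single threshold is easy for Evil, Inc. to learn and route around.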
One countermeasure is introducing tiny amounts of friction. The attack has an economic model: it is worthwhile because the costs (machine time on compromised machines that they rent from other entrepreneurs of evil, operator attention, etc.) of trying a card at the margin round to zero. Making that cost very slightly nonzero breaks the economic logic of the attack.
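The arithmetic is worth making explicit. Every number below is invented for illustration:

```python
# Toy economics of a card-testing run; all numbers are invented.
cards_to_test  = 100_000
live_rate      = 0.01    # fraction of the stolen list still active
value_per_live = 50.0    # expected dollars eventually extracted per live card

expected_revenue = cards_to_test * live_rate * value_per_live   # $50,000
breakeven_cost_per_attempt = live_rate * value_per_live         # $0.50

for cost_per_attempt in (0.00, 0.10, 1.00):
    profit = expected_revenue - cards_to_test * cost_per_attempt
    print(f"cost per attempt ${cost_per_attempt:.2f} -> profit ${profit:,.0f}")
# $0.00 -> $50,000; $0.10 -> $40,000; $1.00 -> $-50,000
```

The attack only has to be pushed above its (small) expected value per attempt to flip from profitable to ruinous, and friction measured in seconds of human attention per card does exactly that.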
In the usual case, it is better to remove tiny amounts of friction from purchasing pathways! There is an entire industry of consultants built around this. (I used to charge B2B SaaS companies $30,000 a week to, among other things, A/B test their checkout flows to wring out a percentage point of uplift by e.g. removing form fields that were there for no particular reason. A 1% increase to the topline revenue of a company justifies a $30,000-a-week consultant very quickly.)
But contingent on knowing that one is at this very moment being attacked by card testers, introducing a tiny amount of friction is helpful. In addition to being pro-social it will quickly stop the attack.
This means, directly, that the targeted organization will not receive phantom revenue which will be swiftly clawed back prior to the assessment of large monetary penalties. (“Why does the credit card industry assess penalties in this case?” Another legacy decision. Clearly, being defrauded once or twice could happen to anyone, and in that case the penalty was just a minor cost of doing business, but if you were defrauded 10,000 times you had to be in on it. You would have needed to intentionally bring in extra staff to run the stolen credit cards. This logic made a lot of sense… in 1970. And so the penalty was designed to discourage businesses of negotiable moral character from renting out their credit card terminals to criminals. Credit card networks had to anticipate businesses which would abuse the public trust because they operate at the scale of the entire economy and the entire economy includes at least some businesses which will abuse the public trust.)
The second-order effects of stopping testing are larger, and not confined to the originally targeted business (or even to customers of Stripe). The card testing attack is an instrumentality of larger abuse of the financial system. By blocking the card testing, the later fraud, the one that actually extracts value for the fraudster, becomes less likely to work. This changes the economic calculus which brings into being the entire fraud supply chain. Hackers won’t hack, skimmers won’t skim, and carders won’t card if there is not ultimately a way for cashers to turn those intermediate outputs into money.
How does it become less likely to work? A good intuition is that running bad cards is “loud” and likely to draw notice; this is why you want to do that at a charity, which is less likely to notice and far less likely to be able to do anything about it, than at e.g. a large retailer which sells Playstation 5s by the pallet and also has a highly professionalized anti-fraud operation that keeps the FBI on speed dial. Denying the fraudsters the ability to conduct low-cost recon prior to taking a run on getting free-to-them easy-to-resell hardware makes extracting that hardware far more risky.
Someone, somewhere, has to touch that Playstation 5 in the actual physical world, and any hand that can move a Playstation 5 can have handcuffs placed around it just as easily. Running twelve cards at a retailer, attached to a shipping address in some proximity to a member of one’s organization, flags to the experts that this is probably not a benign-intent user who wants to surprise their children with Spiderman this Christmas.
And so actions taken at Stripe to directly protect charities (and businesses) who process cards with us also protect unaffiliated businesses who might use someone entirely different to process their credit cards. That is the way society works; we're all in this together.
Card-accepting businesses have ultimate liability for fraud losses on credit cards, as a matter of regulation, commercial agreements, and longstanding practice. But it is worth understanding that the customer whose card was defrauded, even if they are eventually made whole by a complex chain of offsetting transactions, will feel like they have been victimized. They will wonder whether they did anything wrong (frequently, no). They will spend a few hours with their bank and (potentially) authorities prior to resolving all of the downstream consequences. And so pro-social action helps consumers, too, by preventing downstream victimization, even if a purely financial calculation of that victimization would read $0 lost.
Moving beyond plastic rectangles for auth/auth
Credit cards have long conflated possession, authentication, and authorization. Possession you probably understand. Authentication is demonstrating that someone is who they say they are. Authorization is demonstrating that that person has the legal/moral right to commit to a transaction and in fact intends to commit to this transaction. (We commonly say auth/auth, because people who work in credit cards have so few opportunities to sound hip.)
These are extremely hard problems! And so for decades, the industry had one solution which it knew to be other-than-optimal but which worked well enough to achieve almost global ubiquity of credit cards while delighting billions of people.
That solution: see this plastic rectangle? To a first approximation, you are who the rectangle says you are, and you are allowed to spend all the money that the bank named on the rectangle thinks it can spend. Possession equals authentication and authorization. Easy peasy.
This is wrong, obviously. All models are wrong; some models are useful. This wrong model built one of the most important financial infrastructure networks in the world. It was very useful.
And then the Internet came.
Prior to good cell phone cameras being ubiquitous, it was extremely difficult to demonstrate possession of plastic rectangles over the Internet, but we still wanted to make it possible for people to buy things. So instead of relying on possession, we relied on knowledge. You needed to know a very small bit of information about the cardholder and bank relationship to demonstrate presumptive authentication and authorization. Most of that was physically printed on the plastic rectangle.
Knowledge is easy to copy, far easier than plastic rectangles (which are also, unfortunately, within the capabilities of well-resourced criminals to copy!). Computers are built around persisting knowledge forever and copying it very efficiently.
So the industry came up with a rule: you could copy certain bits of the knowledge, sure, everyone knew you would need to do that. But one special bit of the knowledge, the security code on the back of the card: please, please don’t remember that. If you don’t remember it, no one can copy it from you.
Asking every programmer of every computer system worldwide to please, please not remember a three or four digit number was obviously a losing proposition. And, in fact, the industry frequently lost on it, despite decades of enforcement of e.g. PCI-DSS requirements. But the optimal amount of fraud is not zero.
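In code, “please don’t remember that” reduces to a discipline like the following toy sketch. (PCI-DSS calls the security code “sensitive authentication data” and forbids storing it after authorization; the names here are mine, not the standard’s.)

```python
def safe_to_persist(card: dict) -> dict:
    """Toy illustration: the only card fields to write down after authorization.

    The security code may be sent to the bank to authorize a payment, but it
    must never land on disk afterward; the full number should be replaced by
    a processor-issued token rather than stored directly.
    """
    return {
        "last4": card["number"][-4:],   # display-only
        "expiry": card["expiry"],
        # deliberately absent: the full number and, above all, the cvc
    }

record = safe_to_persist(
    {"number": "4242424242424242", "expiry": "12/30", "cvc": "123"}
)
assert "cvc" not in record and "number" not in record
```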
Many people interject at this point: for goodness sake, you folks are smart people. Could you please ask for information that is not printed on the freaking plastic rectangle?!
And lo, the industry did in fact think of that obvious solution. And it turns out that, for legacy reasons, this is extremely difficult to do at scale, because those plastic rectangles were issued by tens of thousands of banks to hundreds of millions of customers for decades. It would be very bad for them to suddenly stop working, and the banks knew a huge number of interesting things about many customers but very little in a consistent fashion about every customer.
And so there were security measures like the Address Verification System (AVS). AVS bootstraps off regulatory requirements that banks must “know your customer” (KYC), and one thing KYC programs nearly always require is a physical address. The customer’s address was not on the plastic rectangle and was a secret known to the bank and the customer but not to a fraudster.
Why didn’t AVS solve credit card fraud? So many reasons. One is that addresses make poor secrets because the purpose of an address is to tell it to a lot of people. We knew that and we used them anyway! In a world where finding someone’s address required dedicated sleuthing or the assistance of law enforcement, forcing fraudsters to look up addresses one at a time was a good rate-limiter for fraud.
In an increasingly networked world, many people’s addresses are one totally-legitimate-publicly-available database lookup away. (Many privacy advocates harrumph loudly at this and blame the tech industry. Fairly few privacy advocates remember that we used to print out fairly comprehensive lists of addresses and phone numbers in a really large book, perhaps on yellow paper, and then distribute it for free to everyone in a city. Those books predate computers, to say nothing of the Internet.)
In fact, sometimes databases are better than the people they hold data about at remembering addresses. Any engineering team which has ever attempted to compare human-entered addresses will happily tell you about this at substantial length. Be prepared to lubricate the conversation with large quantities of alcohol because this is cursed knowledge you will not want to remember in the morning.
And so AVS, as it is actually employed, is generally only used to reject transactions if there is a failure to match the zip code. (This is one reason so many sites you use dispense with everything but the zip code, since it is the only part that pulls its own weight in the fraud versus legitimate commerce tradeoff.)
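A toy version of the zip-only check, with invented names (in reality the issuing bank performs the comparison and returns a match code the business then acts on):

```python
import re

def zip_core(raw: str) -> str:
    """Reduce a human-entered US zip to its five-digit core: '94103-1234' -> '94103'."""
    return re.sub(r"\D", "", raw or "")[:5]

def avs_zip_matches(entered: str, on_file_at_bank: str) -> bool:
    # Comparing full street addresses is the cursed problem described above
    # ('123 Main St.' vs '123 MAIN STREET APT 4'); the zip code is the rare
    # piece of an address that machines reliably agree on.
    entered, on_file = zip_core(entered), zip_core(on_file_at_bank)
    return bool(entered) and entered == on_file

assert avs_zip_matches("94103-1234", "94103")
assert not avs_zip_matches("94103", "60614")
```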
And so the industry is, in fits and starts, moving away from demonstrating authorization and authentication with something you know and moving to something you have. This time, it is something harder to duplicate or steal than plastic rectangles.
So-called two-factor authentication, where the first factor is the thing you know (your credit card number and similar bits of information) and the second factor might be something you have (say, a cell phone known to your financial institution), has been a hot topic for almost two decades now. It is effectively mandatory by regulation in Europe, where it is called Strong Customer Authentication (SCA).
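In card payments, SCA usually arrives via 3-D Secure challenges rather than raw one-time codes, but the “something you have” mechanic is easy to demonstrate. Here is a minimal time-based one-time password check (TOTP, RFC 6238), using only the Python standard library:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238: HMAC-SHA1 over the current 30-second counter, truncated to digits."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# The phone and the bank share the secret exactly once, at enrollment.
# Afterward, producing the matching six digits demonstrates possession of
# the phone, not mere knowledge of numbers printed on a rectangle.
demo_secret = "JBSWY3DPEHPK3PXP"   # demo value, not a real credential
print(totp(demo_secret))
```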
The rest of the world also has cell phones and banking apps. Why doesn’t every business in the world require something like this to transact with them? Well, that is a choice. It would create winners and losers, and the composition of those groups is somewhat hard to predict.
In particular, the original pitch for credit cards to businesses is as good as it ever was: credit cards exist to get you more customers, bring them back more frequently, and sell them more things per visit. They are not solely a payment instrument; credit card acceptance is partially a marketing decision.
The right amount of fraud to tolerate, or the right amount to inconvenience all legitimate customers to prevent fraud, is also partially a marketing decision. Some polities may decide that this should not be a privately made decision, and use their political process to mandate certain actions. Others may not make that same call.
And so businesses in much of the world face a reality like this: many good-intent customers will be unable to make meaningful use of 2FA, because their financial institution doesn’t support it, or because they’ve lost their 2FA token, or because their phone number changed recently, or because their financial institution doesn’t have an accurate phone number on file. (Alex Stamos, who once had to do this for Facebook, has accurately remarked that, at the scale of the entire world, 2FA lifecycle management is an extremely hard problem. One unexpected consequence: many of your frustrations with the DMV are downstream of your government’s decision to put the DMV in charge of issuing presumptively valid 2FA tokens that demonstrate identity to all functions of the government. They also put their tokens on plastic rectangles.)
2FA problems are particularly acute for customers who are near socioeconomic margins! I sometimes wish that regulators, in their benevolent wisdom, would talk to some immigrants about challenges encountered with the banking system prior to e.g. requiring banks to only work with customers who have a cell phone number in e.g. a nation we do not presently live in.
So there are heady crosscutting considerations which constrain the abilities of individual actors to innovate here. Which doesn’t mean innovation is impossible!
One thing you can offer businesses is a carrot which combines stronger guarantees about auth/auth with a built-in conversion boost. There are a variety of very interesting projects which do this. Apple Pay and Google Pay, for example, use a lot of very difficult engineering to make transactions much easier than reentering numbers for the five hundredth time. This makes businesses quite a bit of money, for which Apple and Google take a cut. As a very-planned side effect, this binds secrets within a secure (for Apple) or at least secure-ish hardware device which is much less likely to be lost or forgotten than the old 2FA “dongles.” It's still a problem at the scale of the economy; "new phone, who dis" is a meme for a reason. But it is less of a problem. Innovating at the margins creates a huge amount of value because so much of life is lived near margins.
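The core idea, stripped of every hard part (this toy uses an off-the-shelf signature scheme via the third-party cryptography package, not the actual EMV tokenization protocols these wallets implement): the secret never leaves the device, and each payment carries a fresh proof the network can verify.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key pair is generated inside the device's secure hardware;
# only the public half ever leaves the phone.
device_key = Ed25519PrivateKey.generate()
network_knows = device_key.public_key()

# Payment: the device signs this specific transaction, so a copied card
# number alone can no longer produce a valid payment message.
message = b"merchant=example-charity&amount_cents=2500&nonce=8f3a91"
cryptogram = device_key.sign(message)

try:
    network_knows.verify(cryptogram, message)   # raises if forged or altered
    print("transaction authentic")
except InvalidSignature:
    print("transaction rejected")
```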
Stripe has a fun entry here, too.
Acting at the right level
Society has many different models for attempting to change the behavior of complicated multi-layered networks with minimal centralized points of control. Regulatory oversight and/or legislation are obvious approaches, but it turns out they’re not the only ways to make things work.
Various well-distributed actors, be they OSS projects or hardware manufacturers or credit card processors, also are in a position to upgrade the functionality of a network without needing disruptive changes in how it operates. Sometimes, they can even make those upgrades so incentive-compatible for various nodes in the network that it is less a case of “a spoonful of sugar makes the medicine go down” and more “it turns out that sugar was, in fact, the medicine.”
This is a fun and endlessly interesting topic, which I can promise we will return to again.