Hmmm. Some comments inline:
A malicious actor attempts to phish a user’s Monzo account
I think we are in agreement that by eliminating credentials that can be phished we have mitigated the risk of this.
I’m not sure you’ve eliminated everything that can be phished. When a user gets a new phone, how do they move their Monzo account to it? How do you distinguish between “my old phone died and I’m setting up a new phone” and “I’ve just phished the user and I’m setting up my phone to access their account”? What will the legit user bring to the table that a phisher can’t get? Assuming I can get whatever email they use for Monzo, what is the thing I need to phish from them to set up my phone to access their account?
A malicious actor attempts to phish another account belonging to a Monzo user
I think it is important to make the distinction here that we only send these emails when the user has explicitly requested it, just like password verification/reset links that users will be accustomed to from the vast majority of websites.
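For context, the magic-link emails under discussion typically carry nothing more than a signed, short-lived token. A minimal sketch (illustrative names and URL only; this is not Monzo’s or Auth0’s actual implementation) might look like:

```python
import hashlib
import hmac
import secrets
import time

# Server-side signing key (illustrative; a real deployment would persist
# and rotate this).
SECRET = secrets.token_bytes(32)

def issue_magic_link(email: str, ttl: int = 900) -> str:
    """Build a login link carrying a signed token that expires after ttl seconds."""
    expires = str(int(time.time()) + ttl)
    payload = f"{email}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://example.test/login?token={payload}|{sig}"

def verify_token(token: str) -> bool:
    """Accept only unexpired tokens whose signature checks out."""
    try:
        email, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{email}|{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature byte by byte.
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()
```

Note that the signing and expiry can be done entirely correctly; the objections below are not about the token’s cryptography but about the click-on-emailed-links habit it trains.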
Are you saying that users will think to themselves “I didn’t request that link. I won’t click it”? That seems unrealistically optimistic.
Banks change their mind all the time. My retirement funds were invested at a bank whose login web page used to show me a special image that I had selected. For years they said “Never type your password unless you see your special image.” Then one day I went to log in and their official web site said “We’ve done away with those images, go ahead and type your password to login.” It was legit. The bank had genuinely changed its mind about the utility of those images. (Rightly so: those images add little to the security of logins.)
My point is that we can’t build a principle like “We’ll never mail you a link to click on unless you requested it” into the security regime. That’s wishful thinking. Marketing will want to send people emails that tell them to click on things. So the “I didn’t request it” principle isn’t useful.
If you go back to Saltzer and Schroeder’s 1975 paper on security design principles, an important one is “psychological acceptability”. I would argue that we all know that more secure things are harder to use. We WANT our bank to be more secure than a garden variety web site. We know that greater security comes with a usability tax. Not only do we accept that tax when we’re talking about the bank that holds all our money, the difficulty gives us some assurance psychologically that “it’s not easy to get my money”. It isn’t a comforting thought to me to think that my money is as easy to get as my Dropbox documents.
Let me give a final example. Years ago I was working with a big banking network. They were asked to accredit some WiFi-based point of sale terminals that accepted credit cards. These PoS boxes did really strong end-to-end crypto. The devices, however, lacked any ability to do WPA or WEP or join any password-protected WiFi network. For a merchant to use these terminals, they had to put them on an open WiFi network. The bank was reluctant to accredit the PoS terminals because running open WiFi networks is unsafe. The PoS vendor was saying “but we do everything right!” The PoS vendor was correct: sniff their traffic all you like, you’d never do anything to the transactions. But the bank was worried about the merchants’ big picture: not just the PoS terminals but the fact that merchants will have open WiFi networks. Merchants will probably put other things on that network (they’re not gonna run two or three WiFi networks at the local newsagent) and those other things are not as secure as these uber-crypto PoS terminals.
Monzo is the PoS vendor from my story: it’s not enough that you do everything right. Telling customers to do dangerous things because their bank is secure is a bad idea. Telling people to turn on remote image loading in their email client because Monzo does things right is dangerous because their email is stuffed full of bad emails trying to abuse remote image loading. Telling people to tap on links to login is a bad idea because they’ll get comfortable doing that and will tap on other, insecure links.
It’s not that I doubt the security of Auth0 links (thanks for that, I learned a lot). It’s that using them causes users to get used to tapping on links in emails for very security-sensitive things (like banking). And all the flipping from Mail to Safari to the App Store will seem normal to them. But that’s what malware does, too. And they’re not gonna notice when they’re being attacked, because the attacks will look just like ordinary stuff their bank does. And the bank says it’s secure.