
Match.com's Fake Problem

Match.com has a fake problem. That is, they have a problem with fake accounts and there is a clear reason why they have, for years, refused to do a single thing about it.

A Little About Me

Before I dive in, let me note that I'm a software architect. I've been designing systems and writing code for almost 30 years (no, I'm not that old, I started in junior high school - if you want to get serious, I've been a professional developer since about 1990). The design of secure systems is something I know more than a little about. I'm not above admitting that in my youth I was what you'd call a hacker. Seen War Games or Hackers? It's not like that, but you get the idea. So when I see systems that have flaws, I tend to geek out on them. When those flaws affect me directly, I geek out even more. This issue has become the fingernails-on-the-chalkboard of my geek cred. I'll own that. Let me also start by saying that I met my girlfriend on Match.com, so I have no gripe with the idea of online dating and Match's business in general. Indeed, I'm a shareholder. But getting that out of the way, I need to blog about a problem the site had when I was active, and appears to still have. And I have to comment on the absolute lack of concern the site's administration seems to have regarding the problem, to the point of appearing to actively ignore it. So... what's got me all frothy?
 

The Problem

The problem I have is with the vast number of fake accounts and fake activity, and how Match profits from this and thus has no incentive to remedy the situation, even in the face of obvious steps that could be taken. Let's dive in. When one first signs up at Match, the activity and interactions begin. Presuming you've actually gone to the trouble to create a reasonable profile and filled out the demographics, you will begin to show up in the searches that others do. Once you start looking at other profiles and liking their photos, or stating that you're interested by clicking on the checkmark of your "daily matches," you will start to interact with others. Unfortunately, many of these interactions come from fake accounts.

Why fake accounts? Simple - those looking for love are vulnerable. Strike up a conversation with someone and you have a motivated target who is much more liable to fall for whatever pitch you're throwing. This avenue has a much higher success rate for the scammer than simple spam. So if you're looking to profess love and then ask for money ("I need $500 for a plane ticket to come see you!"), plead hardship ("I'd love to come to the United States but I need $750 for a visa"), or even make a few bucks peddling porn ("I have sexy pictures, but they're on a site that requires you pay $20 to prove you're an adult."), you've got a much higher chance of success on a dating site. Scammers know this, so they make tons of fake accounts and lure people in. It's a thing.
 

The Analysis and Solution

The source of my angst is that it's dead simple to spot these accounts, both through their content and their activity, and Match seems to make no effort to remove them short of customer complaints. After this analysis, I'll show why this policy is actually a money-maker for them and also allows them to state that they do their best based on complaints, a position that is somewhat disingenuous. So how easy is it to spot these fake accounts? Blindingly so. First, let's take the easy attributes. Given a decent match on these, one could filter out fake accounts based on this alone (note that I consider fake female accounts, since that's what I see):
  • The age being picked lately is 29. While fake accounts use many ages, this one is picked most often.
  • The profile has one paragraph. It is comprised of a few sentences, typically picked randomly from a list of about 30 as far as I can tell.
  • The profile has one picture.
  • The age range of the men the profile is looking for is typically in the early 30s to 50s. This clearly gets it in the right searches for its purpose.
  • The requirements for the profile's match are never filled in except the height, which is set at the maximum range. I suspect this is because the bots only fill in the first field.
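To make the point concrete, here's a minimal sketch of how those red flags could be scored against a profile. The field names and weights are my own invention for illustration, not Match's actual schema:

```python
SUSPECT_AGE = 29

def suspicion_score(profile):
    """Score a profile against the red flags listed above.
    Higher scores mean more bot-like; the weights are illustrative."""
    score = 0
    if profile.get("age") == SUSPECT_AGE:
        score += 1
    if len(profile.get("essay_paragraphs", [])) == 1:
        score += 1
    if len(profile.get("photos", [])) == 1:
        score += 1
    lo, hi = profile.get("seeking_age_range", (0, 0))
    if 30 <= lo <= 35 and 45 <= hi <= 55:  # early 30s to 50s
        score += 1
    # Only the first "requirements" field (height) filled in, at max range.
    reqs = profile.get("requirements", {})
    if list(reqs.keys()) == ["height"]:
        score += 1
    return score

bot_like = {
    "age": 29,
    "essay_paragraphs": ["Hi, I am fun and caring..."],
    "photos": ["photo1.jpg"],
    "seeking_age_range": (32, 50),
    "requirements": {"height": (48, 96)},
}
print(suspicion_score(bot_like))  # scores 5 of 5 red flags
```

A profile tripping four or five of these could be queued for review automatically; any single flag alone proves nothing, which is why it's a score and not a rule.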
But it gets even easier when you actually pattern match on the written profile. As I pointed out, they're typically just one paragraph. Given that, one could find duplicate sentences and create candidate filters based on that alone. But the real kicker comes in that all of these fake accounts have the same sentence embedded, which is a call to email. The email is split up to apparently avoid a pattern match that doesn't exist (if it did, Match would be using it on the known patterns). In all cases, the emails look like "username g mail com" or some broken variant thereof. A simple regular expression match of the known patterns would have 100% of the fake accounts identified as they are created. Here's an actual example:
Unfortunately I am unable to read messages on this site so you can emal me at nnak06 a gmal and send me a wink so I know who I'm taking to.
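The obfuscation in that example is exactly the kind of thing a short regular expression catches. Here's a sketch of one that matches "username g mail com"-style contact dodges, including the deliberate misspellings; the exact pattern would need tuning against real samples, and the function name is mine:

```python
import re

# Matches obfuscated gmail addresses like "nnak06 a gmal" or
# "username g mail com", tolerating misspellings and inserted spaces.
OBFUSCATED_EMAIL = re.compile(
    r"\b\w{3,}\s+(?:a|at|g)\s+g?ma?i?l(?:\s+(?:dot\s+)?com)?\b",
    re.IGNORECASE,
)

def looks_like_contact_dodge(text):
    return bool(OBFUSCATED_EMAIL.search(text))

sample = ("Unfortunately I am unable to read messages on this site so you "
          "can emal me at nnak06 a gmal and send me a wink")
print(looks_like_contact_dodge(sample))  # True
```

Run at account creation time against the profile essay, a filter like this flags the account before it ever appears in a search result.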
So let's presume for sake of argument that Match decided to get serious and implemented a solution based on my above observations. As a developer, I can tell you that I could code this up in a weekend. That's not hyperbole. And that's not an idle note - Match? I'll come into your San Francisco offices any weekend you like and do it. Free.

So let's imagine that Match did this and the fake account folk got wise. That means they'd have to have humans mixing it up, which is more work than they want to do. But let's further presume that they did. What then? Simple - any account that doesn't fill out all the fields, or at least go through the clicks to choose a "decline to answer" with appropriate human-necessary interaction (use the ReCaptcha x/y algorithm, guys), can't send winks or likes until it does. It can do everything else. It can even receive interactions, so in the rare case that it belongs to a real person, that creates more incentive to finish the profile or even pay for a subscription.

One other clear solution would be to throttle notifications. Many times a member will receive an email telling them that they got a wink or a like, only to find, when clicking through, that the profile no longer exists. Match did, indeed, remove it after the abuse happened. But why wait until after? (I answer this below.) When the account sends a lot of winks and likes (and thus gets reported in a spike of activity), it is removed. So why not just throttle those notifications for a small period of time and trigger a warning when an account goes over a threshold? Watching the activity would clearly identify an automated system as opposed to a human looking at profiles and liking lots of them. If this pattern is seen, the account is suspended and flagged for further scrutiny.
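The throttling idea is a textbook sliding-window rate limiter. A minimal sketch, with a threshold and window I've picked purely for illustration (Match's real numbers would come from their own usage data):

```python
import time
from collections import deque

class WinkThrottle:
    """Sliding-window rate limiter: flag an account that sends more
    winks/likes in `window` seconds than any plausible human could.
    The threshold and window values here are illustrative."""

    def __init__(self, max_actions=20, window=60.0):
        self.max_actions = max_actions
        self.window = window
        self.actions = {}  # account_id -> deque of timestamps

    def record(self, account_id, now=None):
        """Record one wink/like; return False if the account should
        be suspended and flagged for further scrutiny."""
        now = time.monotonic() if now is None else now
        q = self.actions.setdefault(account_id, deque())
        q.append(now)
        while q and now - q[0] > self.window:  # drop stale timestamps
            q.popleft()
        return len(q) <= self.max_actions

throttle = WinkThrottle(max_actions=20, window=60.0)
# A bot firing a wink every half-second trips the limit within seconds.
verdicts = [throttle.record("bot123", now=i * 0.5) for i in range(30)]
print(verdicts[-1])  # False: 30 winks in 15 seconds
```

Notifications to recipients would simply be held until the sender clears the window, so a bot burst never generates the "She is interested!" emails in the first place.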
 

The Smoking Gun: Top Spot

Another metric is sheer site activity. Match has a feature called "top spot" that artificially places a profile in the top search results. You pay for this, of course. I was curious when I was using Match last year, so I paid for a couple of tries at it to see how it worked. Sure enough, the views on my profile went way up and, with that, so did the activity from fake accounts. One benefit of "top spot" is that it shows you who has viewed your profile in an interesting real-time timeline. The difference here is that whereas you usually see who has viewed you in a grid of accounts, in the case of "top spot," you see the timeline, which includes duplicate views. So if someone clicks to view you and then does it again 30 seconds later, you see them twice. Sure enough, fake accounts come up ten, twenty or even thirty times in a few-minute span. Clearly it's automated, scraping the search results multiple times per second. When you pay for the top spot, you artificially show up at the top and these automated scripts pick you up each time. If I, as a customer, can see this, Match's code could see it even better. There is simply no way that Match cannot see, based on usage metrics, when automated scripts are being used. It's just not possible that they don't know that this goes on and could prevent it if they chose.
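Detecting this from the server side is trivial: count repeated views of the same profile by the same viewer within a short span. A sketch, assuming a simple (timestamp, viewer_id) view log; the thresholds are illustrative:

```python
def automated_viewers(view_log, max_repeats=5, span=300):
    """Given (timestamp_seconds, viewer_id) pairs for one profile, return
    viewer ids that hit it more than `max_repeats` times within `span`
    seconds. A human re-checking a profile might view it two or three
    times; a scraper re-running the search shows up dozens of times."""
    flagged = set()
    recent_by_viewer = {}
    for ts, viewer in sorted(view_log):
        times = recent_by_viewer.setdefault(viewer, [])
        times.append(ts)
        # keep only views inside the sliding window
        recent = [t for t in times if ts - t <= span]
        recent_by_viewer[viewer] = recent
        if len(recent) > max_repeats:
            flagged.add(viewer)
    return flagged

log = [(i * 10, "scraper_a") for i in range(30)]   # a view every 10 seconds
log += [(0, "human_b"), (120, "human_b")]          # twice in two minutes
print(automated_viewers(log))  # {'scraper_a'}
```

The same timeline data Match sells to "top spot" customers is all this needs; accounts flagged this way could be suspended pending review.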
 

My Plea

Yes, I wrote Match about this. I even went as far as to state, specifically, that I would like my mail sent to senior management and not handled by a customer service representative. Of course that was ignored and I got a canned response, including (apparently to pacify me?) an offer of free subscription time. As you can see, my concern was not addressed at all, but the hand-waving is pretty good:
I appreciate the time you've taken to contact Match about your general concerns with the site. Please be assured, Match.com does not send members misleading notifications, e-mails or winks professing romantic interest. We have too much respect for our members to ever compromise their trust. I can assure you that we are absolutely interested in pursuing any situation involving those who attempt to use our site in dishonest ways. We have a dedicated team that works diligently to identify and remove these kinds of members. Unfortunately, though, some of them still manage to get a few emails out, which is why we appreciate it so much when you take the time to let us know about the situations you see that we may not have caught. In the future, you're welcome to streamline your reports by using the "Report a Concern" link on the member's profile. This will send your report directly to our security team that can open a case immediately and take the right action. Unfortunately, privacy policies stop us from being able to share with you what actions we take, but this really is the fastest way to ensure that the situation is addressed appropriately. Thank you so much for what you are doing to help us in this area. For more information, feel free to review our Online Dating Safety Tips.
I didn't expect otherwise, frankly. For all the protestations to the contrary, Match doesn't really seem to care or listen to their paying customers.
 

The Reasons

So why, if this problem is so easy to solve, does it persist? The reason is likely clear - metrics and activity and, ultimately, paying subscribers. These fake accounts still increase the number of members. From a sheer numbers game, Match can say, "Hey, we remove them when we can, so don't worry about it." Indeed, I've gotten this response from them when I've brought it up. The point remains that these fake accounts artificially increase the membership numbers. But the real heft comes when you realize that these fake accounts are sending winks and likes and even emails. Why is this important if they're clearly fake? Because if you don't pay for Match, the notification you get tells you that "She is interested!" and asks you to subscribe (read: pay) to see who she is. You plunk down your $60 for three months of subscription and find that the love of your life is a fake. You complain. Match sends a canned response saying that they're removing fake accounts as they find them, and hey, check out these other profiles. But the bottom line is that you paid. They have your money and you're now a customer.
 
The fake accounts generate revenue for Match. It's that simple. They have no incentive to remove them, and thus, they never will.

It's been 25 years, I guess I can come clean

By now you're aware that there's yet another security bug, this time in "bash," a "shell" used on many servers. For the non-geeks, the gist of the issue is that a very common and absolutely necessary part of the operating system could, in some reasonable circumstances, allow a malicious user to run any code they want on a server to which they should not have access. This is, of course, a bad thing. The bug, now identified, has been fixed and system operators are rushing to patch their systems with newer versions that don't exhibit the flaw.

It's been over 25 years, so I think I can come clean. I knew of such a bug when I was in college that gave me 100% read access to any file on any system. I couldn't modify them, and this bug didn't let me execute arbitrary code, but if I noticed that you had a file in your home directory called "ChrisIsADoodyHead.txt," I could read it. Even if it was in a closed-off directory and locked down itself. While I never had a need to, I could have looked at all of your code for the computer science class we shared and cheated on my homework. And I mean every file on the file system.

I could read all of your email.

After about a year, the bug was discovered - I was beta testing a version of UNIX (SCO - remember SCO?) that had it, and I reported it. It took about another year for the fix to move through production and be deployed. Remember, these were the days before automatic patching. Most installs were done from a stack of floppy disks and new versions came out yearly. Maybe quarterly, at best.

The point I'm making is twofold. First, these bugs are everywhere and will always be around. Don't be shocked when they're reported. They happen, they get fixed, and the next one comes along. You're going to get burned by them. And yes, evil douchebags are going to exploit them to, say, illegally download nude pictures of celebrities. There's no victim-blaming when I say that you should acknowledge this reality and do what you can to protect yourself.

And my second point, which is the takeaway here, and the reason I've "come clean" after 25 years to make the point: These bugs are in the wild and known right now. Please stop and think about that. Someone, somewhere, is almost surely reading or copying your stuff if it's online. These bugs don't live in obscurity until someone discovers them and immediately fixes them. Someone finds them and uses them for years until someone else discovers them in a more public way. Remember the speculation and then confirmation that the NSA was exploiting a bug for years before it was ever discovered in public? You don't need to take my word for this.

And please don't shoot the messenger.

Full disclosure: I never shared this bug with anyone else in college, as far as I remember. I never found anything illegal, and only once found something that, if disclosed, could have caused problems (someone was cheating in a serious way in a number of classes). I never said anything. I honestly can't remember ever seeing anything on anyone that was even remotely bad. Email, back then, was also only something shared among geeks, for the most part. There was pretty much no private social online usage. I mostly poked around administrative stuff. This being a time before digital photography, I never even saw any nude selfies :-) Some people may not believe this disclosure, and I'm okay with that.


It's time to nuke password security questions

I'll come right out and say it - password security questions are not only insecure, they're a blatant security hole. They're worse than not being there at all, for any of a number of reasons.

First, they're all the same. How many times have you been asked your mother's maiden name, the make or model of your first car, what city you were born in, or the name of your first pet? These answers, if given truthfully, are easy to find out. You've likely blogged the answer at some time in the past.

If I know your uncle's last name, odds are I also know your mother's maiden name (a 50/50 shot there, and if I know he's your maternal uncle, I've got it).

At this point, these security questions are no better than a second, easy-to-guess password. And in cases where they're used to recover a password, they become more of a risk than anything else.

The only thing to do here if these questions are mandated is to make up a unique and incorrect answer. Yet another password. Yet another password to remember, and many password managers don't realize that these question fields are password fields to store and protect.
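Making up those "unique and incorrect" answers shouldn't be left to imagination, which tends to produce guessable answers. A sketch of the approach, treating each answer as just another generated password (the vault dict here is a stand-in for your password manager's secure notes):

```python
import secrets

def fake_answer(n_bytes=24):
    """Generate a random, wrong 'answer' for a security question.
    Treat it exactly like a password: store it in your manager."""
    return secrets.token_urlsafe(n_bytes)

vault = {}  # stand-in for a password manager's secure-note storage
for question in ("Mother's maiden name?", "First car?", "First pet?"):
    vault[question] = fake_answer()

print(len(set(vault.values())))  # 3 distinct random answers
```

Nobody can mine your blog for "Xk3_vR..." the way they can for your mother's maiden name; the cost is that you must store the answers somewhere safe.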

The immediate solution is two-factor authentication. When you log in to a site, the site sends you a one-time code to your phone and you must enter that number. The password is simply to keep people from causing the code to be spammed to your phone and interrupting you while you're in the bathroom. Since everyone has a smart phone these days (a generalization I'm prepared to make), this requires someone who wishes to hack you to have access to your phone. Sure, if they get your phone they get everything, but they still need to know your password to cause the two-factor to fire. It's not perfect, but it's close.
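The mechanics of that one-time code are simple enough to sketch. This is a toy version, not any vendor's actual implementation; the 60-second lifetime is a typical choice for SMS codes, and a real system would additionally mark each code used so it can never be replayed:

```python
import secrets
import time

CODE_TTL = 60.0  # seconds; a typical SMS-code lifetime, value illustrative

def issue_code(now=None):
    """Generate a 6-digit one-time code plus its expiry timestamp."""
    now = time.time() if now is None else now
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, now + CODE_TTL

def verify(code, entered, expires_at, now=None):
    """Accept only the exact code, and only before it expires.
    compare_digest avoids leaking information via timing."""
    now = time.time() if now is None else now
    return now <= expires_at and secrets.compare_digest(code, entered)

code, expires = issue_code(now=0.0)
print(verify(code, code, expires, now=30.0))  # True: within the 60s window
print(verify(code, code, expires, now=90.0))  # False: expired
```

The expiry is what makes interception mostly useless: a stolen code is worthless a minute later, which is the same property the biometric scheme below leans on.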

The real solution is an un-replayable biometric solution. A fingerprint reader on every keyboard, implemented in such a way as to make storing and replaying of biometric data impossible. That's a tough nut and might also have to include physical two-party, but I suspect it would work.

If you want into a site, you don't need to give it a name or password. You simply place your finger on the scanner and then wait for your phone to give you the access code which you then type in. The code expires the moment it's used (or in 60 seconds if it is unused). Thus, storing the biometric data isn't really all that useful. And if the biometric data is somehow hashed with an expiring timestamp, storing it won't do much good after a few minutes anyway.

Either way, passwords are dead and password security questions are worse than dead.

(Image: my first pet, "Nonyabizness" - not his real name)
