Facial Recognition Plan from IRS Raises Big Concerns


Government agencies are tapping a facial recognition company to prove you’re you.


The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS in particular has reported a number of fraudulent tax filings from people claiming to be others, and fraud in many of the programs administered as part of the American Rescue Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, both its applications and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well.

ID DOT WHO?

ID.me is a private company that formed as TroopSwap, a site that offered retail discounts to members of the armed forces. As part of that effort, the company created an ID service so that military staff who qualified for discounts at various companies could prove they were, indeed, service members. In 2013, the company renamed itself ID.me and started to market its ID service more broadly. The U.S. Department of Veterans Affairs began using the technology in 2016, the company’s first government use.

To use ID.me, a user loads a mobile phone app and takes a selfie, a photo of their own face. ID.me then compares that image to various IDs that it obtains either through open records or through information that applicants provide through the app. If it finds a match, it creates an account and uses image recognition to verify the user’s identity from then on. If it cannot perform a match, users can contact a “trusted referee” and have a video call to fix the problem.
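A rough sketch of that verification flow might look like the following Python. The embedding model, similarity measure, threshold and function names are all illustrative assumptions, not ID.me’s actual system:

```python
# Illustrative sketch of a selfie-to-ID verification flow like the one
# described above. Everything here (model, threshold, names) is an
# assumption for illustration, not ID.me's implementation.
import numpy as np

MATCH_THRESHOLD = 0.80  # assumed similarity cutoff; real systems tune this


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_identity(selfie_embedding: np.ndarray,
                    id_photo_embeddings: list[np.ndarray]) -> str:
    """1:1 verification: does the selfie match any ID photo on record?"""
    for id_photo in id_photo_embeddings:
        if cosine_similarity(selfie_embedding, id_photo) >= MATCH_THRESHOLD:
            return "verified"  # create the account, reuse the match thereafter
    # No automated match: escalate to a human "trusted referee" video call
    # rather than rejecting the applicant outright.
    return "needs_referee"
```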

A number of companies and states have been using ID.me for several years. News reports have documented problems people have had with ID.me failing to authenticate them, and with the company’s customer support in resolving those problems. Also, the system’s technology requirements could widen the digital divide, making it harder for many of the people who need government services the most to access them.

But much of the concern about the IRS and other federal agencies using ID.me revolves around its use of facial recognition technology and collection of biometric data.

ACCURACY AND BIAS

To start with, there are a number of general concerns about the accuracy of facial recognition technologies and whether there are discriminatory biases in their accuracy. These have led the Association for Computing Machinery, among other organizations, to call for a moratorium on government use of facial recognition technology.

A study of commercial and academic facial recognition algorithms by the National Institute of Standards and Technology found that U.S. facial-matching algorithms generally have higher false positive rates for Asian and Black faces than for white faces, although recent results have improved. ID.me claims that there is no racial bias in its face-matching verification process.
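To make the metric concrete: a false positive rate can be computed separately for each demographic group over a set of “impostor” comparisons, pairs of images of different people. A minimal sketch follows, with invented data and group labels that are not NIST’s methodology:

```python
# Per-group false positive rate: the share of impostor comparisons
# (images of different people) that the matcher wrongly accepted.
# The trial data and group labels below are invented for illustration.
from collections import defaultdict


def false_positive_rates(trials):
    """trials: iterable of (group, same_person: bool, matched: bool)."""
    impostors = defaultdict(int)  # impostor comparisons seen per group
    false_pos = defaultdict(int)  # impostor comparisons wrongly matched
    for group, same_person, matched in trials:
        if not same_person:  # only impostor pairs can yield false positives
            impostors[group] += 1
            if matched:
                false_pos[group] += 1
    return {g: false_pos[g] / impostors[g] for g in impostors}


trials = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(trials))  # {'group_a': 0.5, 'group_b': 0.0}
```

A consistently higher rate for one group than another is exactly the kind of differential the NIST study measured.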

There are many other conditions that can also cause inaccuracy: physical changes caused by illness or an accident, hair loss due to chemotherapy, color change due to aging, gender transitions and others. How any company, including ID.me, handles such situations is unclear, and this is one issue that has raised concerns. Imagine having a disfiguring accident and not being able to log into your medical insurance company’s website because of damage to your face.

DATA PRIVACY

There are other issues that go beyond the question of just how well the algorithm works. As part of its process, ID.me collects a very large amount of personal information. It has a very long and difficult-to-read privacy policy, but essentially, while ID.me does not share most personal information, it does share various information about internet use and website visits with other partners. The nature of these exchanges is not immediately apparent.

So one question that arises is what level of information the company shares with the government, and whether the information can be used to track U.S. citizens in ways that sidestep the legal boundaries that apply to government agencies. Privacy advocates on both the left and right have long opposed any form of a mandatory uniform government identification card. Does handing off identification to a private company allow the government to essentially achieve this through subterfuge? It’s not difficult to imagine that some states, and maybe eventually the federal government, could insist on an identification from ID.me or one of its competitors to access government services, get medical coverage and even to vote.

As Joy Buolamwini, an MIT AI researcher and founder of the Algorithmic Justice League, argued, beyond accuracy and bias issues is the question of the right not to use biometric technology. “Government pressure on citizens to share their biometric data with the government affects all of us—no matter your race, gender, or political affiliations,” she wrote.

TOO MANY UNKNOWNS FOR COMFORT

Another issue is security: who audits ID.me’s applications? While no one is accusing ID.me of bad practices, security researchers are worried about how the company may protect the incredible amount of personal information it will end up with. Imagine a security breach that released the IRS information for millions of taxpayers. In the fast-changing world of cybersecurity, with threats ranging from individual hacking to international criminal activities, experts would like assurance that a company entrusted with so much personal information is using state-of-the-art security and keeping it up to date.

Much of the questioning of the IRS decision comes because these are early days for government use of private companies to provide biometric security, and some of the details are still not fully explained. Even if you grant that the IRS use of the technology is appropriately limited, this is potentially the start of what could quickly snowball into many government agencies using commercial facial recognition companies to get around regulations that were put in place specifically to rein in government powers.

The U.S. stands at the edge of a slippery slope, and while that doesn’t mean facial recognition technology shouldn’t be used at all, I believe it does mean that the government should put a lot more care and due diligence into exploring the terrain ahead before taking those critical first steps.

Facebook Just Pushed Its Facial Recognition Into a Creepy New Future


You can’t hide.

It happens all the time. A little red blip in Facebook directs you to a notification that a friend tagged you in a photograph somebody uploaded.

The company rolled out the tagging feature in 2010, and all the photos since – the thousands, millions, even billions of them – were slowly building something. What that was, Facebook turned on this week. Now, when somebody uploads a photo of you, Facebook will identify you all by itself, with no human input.

“Now, if you’re in a photo and are part of the audience for that post, we’ll notify you, even if you haven’t been tagged,” the company explained in a blog post this week announcing its new facial recognition features.

“You’re in control of your image on Facebook and can make choices such as whether to tag yourself, leave yourself untagged, or reach out to the person who posted the photo if you have concerns about it.”


From one perspective, you could argue this hasn’t entirely come out of left field.

After all, Facebook has already used facial recognition technology for years for features like suggested tagging and photo organisation, and other tech giants like Google and Apple offer their own versions in their photo apps.

But from another vantage point, this has an altogether different significance, because Facebook doesn’t just deal in photo apps – it’s the biggest social network in the world, with over 2 billion monthly active users as of this year.

So cutting the human element out of something as personal as identifying people’s faces is guaranteed to have a significant impact on how people use the site – and it also tells us how powerful Facebook’s facial recognition now is, fuelled by the billions of images of ourselves we’ve voluntarily submitted to the company (and dutifully tagged, up until now that is).

“Facebook’s face-recognition technology is now so powerful that it can recognise you in any photo, anywhere, even if it has no other reason to expect to find your face in that photo,” Kashmir Hill explains at Gizmodo.

“It’s easy to identify your face if Facebook is only looking for you among the photos your friends have uploaded. It’s harder if the possible pool is more than a billion people, a.k.a. Facebook’s entire user base.”

Not that the new system is perfect.

According to project manager Nipun Mathur from the company’s applied-machine-learning group, the facial recognition won’t identify you if your face appears in a full 90-degree profile, although it can recognise you without a full glimpse of your face if it has enough data to go on.

The way the new feature works is this: if you’re automatically identified in a photo that you’re able to see (depending on the uploader’s photo settings), Facebook will send you a notification that there’s a new photo that “might include you”.

From there, you can choose whether to tag yourself, or ignore the photo.

If you want to opt out of the facial recognition entirely, the company says it will be easy to do: a simple on/off switch in your account settings lets you deactivate the tool, which is otherwise on by default – unless you’ve previously opted out of existing facial recognition features.
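In code terms, the rule the article describes boils down to three checks: the person was recognised, they can see the post, and they haven’t flipped the switch off. A toy sketch, with class and field names that are assumptions rather than Facebook’s code:

```python
# Toy model of the notification rule described above. All names and
# structures are illustrative assumptions, not Facebook's implementation.
from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    face_recognition_enabled: bool = True  # on by default, per the article


@dataclass
class Photo:
    uploader: str
    audience: set = field(default_factory=set)    # users who can see the post
    recognised: set = field(default_factory=set)  # faces the model identified
    tagged: set = field(default_factory=set)      # users tagged by a human


def users_to_notify(photo: Photo, users: dict[str, User]) -> list[str]:
    """Notify recognised users who can see the post and haven't opted out."""
    return [
        name for name in photo.recognised
        if name in photo.audience                  # "part of the audience"
        and name not in photo.tagged               # tagged users already know
        and users[name].face_recognition_enabled   # honour the opt-out switch
    ]
```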

But facial recognition won’t be available everywhere. It’s currently not offered in Canada or the European Union, presumably due to privacy regulations. In most other places, your face is fair game.

For most people, the new feature will probably turn out to be a convenient way of finding photos of themselves that they might otherwise have missed, simply because nobody tagged them in the shot. Now, the algorithm has your back, and those shots won’t go unnoticed.

Still, it’s worth bearing in mind that if you decide to opt out and reclaim your face from Facebook, you’ll lose access to other associated identity management options the company also announced this week.

Those include being notified when somebody uploads a photo featuring you as their profile picture – a move designed to prevent people from impersonating others, which is actually a pretty great idea.

That could be very useful for security purposes, but if you opt out of facial recognition, you won’t be afforded the protection – which some criticise as a catch-22.

“It’s a dilemma that’s baked into the Tuesday announcement itself,” Jacob Brogan explains over at Slate.

“[I]f you want to protect yourself on the platform – whatever that means to you – you have to open yourself up to its own surveillance tools.”

If you don’t like the sound of those terms, friends, you know what to do.

Can Microsoft’s facial-recognition tool guess how old you are?


Microsoft thinks its facial-recognition technology can guess your age and gender, and to prove it, the company set up How-Old.net to let anyone take its algorithm for a spin.

The results vary, but after testing a handful of photos taken on the same day, I can safely say that Microsoft has no idea how old I actually am.

After you type in your name, How-Old.net tries to find your picture online and offers you a handful of results to choose from, but it also lets you upload your own if you prefer. The site then uses facial-recognition application programming interfaces (APIs) to predict your age and whether you’re male or female.

The site is part of an effort to promote the Azure Machine Learning Gallery, a collection of open datasets and projects that can help people experiment with machine learning in speech, facial recognition, vision, and text analytics. Engineers built the site on one component of the gallery, the Face API, to estimate how old people in a photo are.
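For the curious, a call to a face-detection endpoint of this kind might look roughly like the sketch below. The URL, key header and response fields follow the Azure Face API as publicly documented, but treat them as assumptions and check the current documentation rather than relying on this:

```python
# Sketch of querying a face-detection API for estimated age and gender.
# Endpoint, header and response shape are assumptions based on the Azure
# Face API's public documentation; verify against current docs.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
KEY = "your-subscription-key"  # placeholder


def guess_age_and_gender(image_path: str) -> list[tuple[float, str]]:
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        ENDPOINT,
        params={"returnFaceAttributes": "age,gender"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    # The API returns one JSON entry per detected face.
    return [(face["faceAttributes"]["age"], face["faceAttributes"]["gender"])
            for face in response.json()]
```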

Unfortunately, it’s not entirely accurate. I tried three times with three different photos and each one produced a different age suggestion. In fairness, it did get my gender right each time.

Facial recognition: End of anonymity for all of us?


From 2008 to 2010, as Edward Snowden has revealed, the National Security Agency (NSA) collaborated with the British Government Communications Headquarters (GCHQ) to intercept the webcam footage of 1.8 million Yahoo users.

 


The agencies were analysing images that they downloaded from webcams and scanning them for known terrorists who might be using the service to communicate, matching faces from the footage to suspects with the help of a new technology called face recognition.

The outcome was pure Kafka, with innocent people being caught in the surveillance dragnet. In fact, in attempting to find faces, GCHQ’s Optic Nerve programme recorded webcam sex by its unknowing targets – up to 11 per cent of the material the programme collected was “undesirable nudity” that employees were warned not to access, according to documents. And that’s just the beginning of what face-recognition technology might mean for us in the digital era.

Over the past decade, face recognition has become a fast-growing commercial industry, moving from its governmental origins into everyday life. The technology is being pitched as an effective tool for securely confirming identities. To some, face recognition sounds benign, even convenient. Walk up to the international check-in at a German airport, gaze up at a camera and walk into the country without ever needing to pull out a passport – your image is on file, the camera knows who you are. Wander into a retail store and be greeted with personalised product suggestions – the store’s network has a record of what you bought last time. Facebook already uses face recognition to recommend which friends to tag in your photos.

But the technology has a dark side. The US government is in the process of building the world’s largest cache of face-recognition data, with the goal of identifying every person in the country. The creation of such a database would mean that anyone could be tracked wherever his or her face appears, whether it’s on a city street or in a mall.

Face-recognition systems have two components: an algorithm and a database. The algorithm is a computer program that takes an image of a face and deconstructs it into a series of landmarks and proportional patterns – the distance between eye centres, for example. This process of turning unique biological characteristics into quantifiable data is known as biometrics.

Together, the facial data points create a “face-print” that, like a fingerprint, is unique to each individual. Some faces are described as open books; at a glance, a person can be “read”. Face‑recognition technology makes that metaphor literal. “We can extrapolate enough data from the eye and nose region, from ear to ear, to build a demographic profile” including an individual’s age range, gender and ethnicity, says Kevin Haskins, a business-development manager at the face-recognition company Cognitec.
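As a toy illustration of those landmarks and proportional patterns, the sketch below turns a handful of assumed landmark coordinates into a scale-invariant vector. The landmark set is invented; real face-print algorithms use many more measurements:

```python
# Toy "face-print": ratios of landmark distances, normalised by the
# inter-eye distance so the print describes proportions, not pixel sizes.
# The landmark names here are assumptions for illustration.
import math


def distance(p: tuple[float, float], q: tuple[float, float]) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])


def face_print(landmarks: dict[str, tuple[float, float]]) -> list[float]:
    """landmarks maps a name, e.g. 'left_eye', to (x, y) coordinates."""
    eye_dist = distance(landmarks["left_eye"], landmarks["right_eye"])
    return [
        distance(landmarks["left_eye"], landmarks["nose_tip"]) / eye_dist,
        distance(landmarks["right_eye"], landmarks["nose_tip"]) / eye_dist,
        distance(landmarks["nose_tip"], landmarks["mouth_center"]) / eye_dist,
        distance(landmarks["left_ear"], landmarks["right_ear"]) / eye_dist,
    ]
```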

Face-prints are collected into databases, and a computer program compares a new image or piece of footage with the database for matches. Cognitec boasts a match accuracy rate of 98.75 per cent, an increase of more than 20 per cent in the past decade. Facebook recently achieved 97.25 per cent accuracy after acquiring biometrics company Face.com in 2012.
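The database half can then be sketched as a naive one-to-many search: compare a new print against every enrolled print and accept the closest one only if it falls within a threshold. The distance metric and threshold below are arbitrary assumptions (the face_print vector from the previous sketch would be one possible input), and production systems use indexing to avoid the linear scan:

```python
# Naive 1:N identification against an enrolled database of face-prints.
# The threshold and distance metric are illustrative assumptions.
def identify(probe: list[float],
             database: dict[str, list[float]],
             threshold: float = 0.05) -> str | None:
    """Return the closest enrolled identity, or None if nothing is near."""
    def l2(a: list[float], b: list[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_name, best_dist = None, float("inf")
    for name, enrolled_print in database.items():
        d = l2(probe, enrolled_print)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```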

So far, the technology has its limits. “The layman thinks that face recognition is out there and can catch you anytime, anywhere, and your identity is not anonymous any more,” says Paul Schuepp, the co-founder of Animetrics, a decade-old face-recognition company based in New Hampshire. “We’re not that perfect yet.”

The lighting and angle of face images must be strictly controlled to create a usable face-print. “Enrolment” is the slightly Orwellian industry term for making a print and entering an individual into a face-recognition database. “Good enrolment means getting a really good photograph of the frontal face, looking straight on, seeing both eyes and both ears,” Schuepp says.

How face recognition is already being used hints at just how pervasive it could become. It’s being used on military bases to control who has access to restricted areas. In Iraq and Afghanistan it was used to check images of detainees against al-Qa’ida wanted lists. The police department in Seattle is already applying the technology to identify suspects on video footage.

The technology’s presence is subtle and as it gets integrated into devices that we already use, it will be easy to overlook. The most dystopian example might be NameTag, a start-up that launched in February promising to embed face recognition in wearable computers such as Google Glass. The software would allow its users to look across a crowded bar and identify the anonymous cutie they are scoping out. The controversial company also brags that its product can identify sex offenders on sight.

As the scale of face recognition grows, there’s a chance that it could take its place in the technological landscape as seamlessly as the iPhone. But to allow that to happen would mean ignoring the increasing danger that it will be misused.

By licensing their technology to everyone from military contractors to internet start-ups, companies such as Cognitec and Animetrics are fuelling a global biometrics industry that will grow to $20bn (£12bn) by 2020, according to Janice Kephart, the founder of Siba (Secure Identity and Biometrics Association). With funding from a coalition of face-recognition businesses, Siba launched in February 2014 to “educate about the reality of biometrics, bridging the gap between Washington and the industry”, says Kephart, who previously worked as a legal counsel to the 9/11 Commission. Kephart believes biometric technology could have prevented the 9/11 attacks (which, she says, “caused a surge” in the biometrics industry) and Snowden’s NSA leaks. She emphasises the technology’s protective capabilities rather than its potential for surveillance. “Consumers will begin to see that biometrics delivers privacy and security at the same time,” she says.

It’s this pairing of seeming opposites that makes face recognition so difficult to grapple with. By identifying individuals, it can prevent people from being where they shouldn’t be. Yet the profusion of biometrics creates an inescapable security net, with little privacy and the potential for serious mistakes with dire consequences. An error in the face-recognition system could cause the ultimate in identity theft, with a Miley Cyrus lookalike dining on Miley’s dime or a hacker giving your digital passport (and citizenship) to a stranger.

This summer, the FBI is focusing on face recognition with the fourth step of its Next Generation Identification (NGI) programme, a $1.2bn initiative launched in 2008 to build the world’s largest biometric database. By 2013, the database held 73 million fingerprints, 5.7 million palm prints, 8.1 million mug shots and 8,500 iris scans. Interfaces to access the system are being provided free of charge to local law enforcement authorities.

Jennifer Lynch, staff attorney for the privacy-focused Electronic Frontier Foundation (EFF), notes that there were at least 14 million photographs in the NGI face-recognition database as of 2012. What’s more, the database makes no distinction between criminal biometrics and those collected for civil-service jobs. “All of a sudden, your image that you uploaded for a civil purpose to get a job is searched every time there’s a criminal query,” Lynch says. “You could find yourself having to defend your innocence.”

In the private sector, efforts are being made to ensure that face recognition isn’t abused, but standards are vague. A 2012 Federal Trade Commission report recommends that companies should obtain “affirmative express consent before collecting or using biometric data from facial images”. Facebook collects face-prints by default, but users can opt out of having their face-prints collected.

Technology entrepreneurs argue that passing strict laws before face-recognition technology matures will hamper its growth. “I don’t think it’s face recognition we want to pick on,” Animetrics’s Schuepp says. He suggests that the technology itself is not the problem; rather, it’s how the biometric data is controlled.

Yet precedents for biometric surveillance must be set early in order to control its application. “I would like to see regulation of this before it goes too far,” Lynch says. “There should be laws to prevent misuse of biometric data by the government and by private companies. We should decide whether we want to be able to track people through society or not.”

What would a world look like with comprehensive biometric surveillance? “If cameras connected to databases can do face recognition, it will become impossible to be anonymous in society,” Lynch says. In the future, the government could know when you use your computer, which buildings you enter on a daily basis, where you shop and where you drive. It’s the ultimate fulfilment of Big Brother paranoia.

But anonymity isn’t going quietly. Over the past several years, mass protests have disrupted governments in countries across the globe, including Egypt, Syria and Ukraine. “It’s important to go out in society and be anonymous,” Lynch says. But face recognition could make that impossible. A protester in a crowd could be identified and fired from a job the next day, never knowing why. A mistaken face-print algorithm could mark the wrong people as criminals and force them to escape the spectre of their own image.

If biometric surveillance is allowed to proliferate unchecked, the only option left is to protect yourself from it. Artist Zach Blas has made a series of bulbous masks, aptly named the “Facial Weaponisation Suite”, that prepare us for just such a world. The neon-coloured masks disguise the wearer and make the rest of us more aware of how our faces are being politicised.

“These technologies are being developed by police and the military to criminalise large chunks of the population,” Blas says of biometrics. If cameras can tell a person’s identity, background and whereabouts, what’s to stop the algorithms from making the same mistakes as governmental authorities, giving racist or sexist biases a machine-driven excuse? “Visibility,” he says, “is a kind of trap.”