The End of Anonymity? Ensuring Facial Recognition Is Used for Good

October 15, 2020 | Andrew Schrumm



Our faces are travelling everywhere, even when we're not.

Issue

Facial recognition technology is now a part of our daily lives, personalizing our services and making identity verification easier. Yet a lack of clear restrictions on its usage creates ambiguity for Canadian businesses that are constantly seeking data insights to drive growth and maintain relevance with consumers in a platform-based world.

POV

The pandemic has fuelled an explosion in the use of video, as we try to stay connected during the global lockdown. Workers are using Zoom and Webex daily, families are catching up over FaceTime or Skype, and we’re turning ever more frequently to social media like Instagram and TikTok for entertainment. Our faces are travelling everywhere, even when we’re not.

"Our faces are travelling everywhere, even when we're not."

At the same time, artificial intelligence is becoming ever more present in our lives as we spend more time online: sending us shopping and podcast recommendations, predicting our upcoming bills, and learning which shows we like to watch. Our faces have become a central part of this data wave, as we teach our phones to recognize us, sort our photos and even interpret our emotions. But facial recognition creates a different sort of data tool from web traffic and credit card histories – one that can assess identities, behaviours and social interactions. We’re no longer anonymous, whether sitting at our computers or taking a walk downtown.

When paired with AI, facial recognition offers incredible commercial applications that could increase the personalization of services and reduce friction in the verification of payments, health records or even voting. How will Canadian firms – big and small – choose to employ the potential of facial recognition, as we all strive to leverage technology and consumer insights? Are there clear regulations on the use of this data? On these questions, we don’t operate in a vacuum; the technology is being developed and the data put to use in various ways around the world. The drive for innovation in this space will test our resolve to ensure AI is used for good.

To do this right, Canadian businesses should avoid working in isolation. Canada is home to the world’s leaders in developing ethical AI. It’s here that the Montreal Declaration for the Responsible Development of AI was signed, the Privacy by Design certification was developed, and CIFAR’s AI & Society program was born. In this spirit, RBC and Borealis AI have launched RESPECT AI, a hub for firms to gain practical solutions for the responsible adoption of AI.

 

Key Numbers


The rate of patent filings is accelerating, with a record 244 in 2019

The number of global patents referencing “facial recognition” stands at 1,617, with over 100 new patents so far in 2020. Tech giants Google, Samsung and IBM dominate the filings (Apple ranks 7th). Only 27 of these patents are held by Canadian applicants.

Canadian firms are divided on the use of AI and data insights

Despite the data revolution, 53% of Canadian companies aren’t using AI to inform their business decisions, and among them, six out of ten have no plans to do so soon. The rest feel that AI is central to their business growth, and most plan to expand their usage over the next two years.

Social media is generating oceans of data for facial recognition

Each day, over 100 million hours of video are streamed on Facebook, while more than 95 million uploads are made to Instagram. These images are tagged with names and locations, providing better training data for algorithms. Google is building software that can crawl all social media sites to identify a person’s face (and their associated activity) across all platforms.

Accuracy of facial recognition is near a human level

Apple FaceID claims a 1 in 1,000,000 chance someone else could unlock your device with their face. Google’s FaceNet achieved 99.63% accuracy against a benchmark image data set, surpassing Facebook’s DeepFace at 97.35%. By comparison, the human eye is accurate 97.53% of the time.
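To make these accuracy figures concrete, here is a minimal sketch of how embedding-based face verification is typically scored: each image is mapped to a vector, two vectors are compared by distance against a tuned threshold, and accuracy is the share of correct verdicts on a labelled benchmark of image pairs. This is illustrative only; the embed_face stub and the threshold value below are placeholders, not any vendor’s actual model.

    import numpy as np

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Stand-in for a deep embedding model (e.g. a FaceNet-style network).
        Here we simply flatten and L2-normalize the pixels so the sketch runs;
        in practice a trained neural network produces the embedding."""
        v = image.astype(float).ravel()
        return v / (np.linalg.norm(v) + 1e-9)

    def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 1.1) -> bool:
        """Declare a match if the two embeddings are closer than the threshold.
        The threshold trades off false accepts (a stranger unlocking your phone)
        against false rejects (you being locked out); 1.1 is illustrative only."""
        return float(np.linalg.norm(embed_face(img_a) - embed_face(img_b))) < threshold

    def benchmark_accuracy(pairs, labels) -> float:
        """Accuracy on a labelled benchmark: the share of image pairs where the
        verdict matches ground truth; this is how figures like 99.63% arise."""
        verdicts = [same_person(a, b) for a, b in pairs]
        return float(np.mean([v == y for v, y in zip(verdicts, labels)]))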

Market for this technology could double within five years

The facial recognition market had revenue of about US$3.2 billion in 2019, with some forecasts calling for it to reach US$7 billion by 2024. Government and security will remain the key growth sectors, with rising usage in retail and e-commerce.

Key Questions

What is facial recognition good for?

As a tool, facial recognition provides rapid identification of an individual. This can help companies provide a personalized experience to consumers, reduce friction points on verification to access secure materials, or assist law enforcement by rapidly identifying possible suspects from video.

Similar image recognition technology is already used to reduce pedestrian accidents in cities by monitoring traffic patterns. It’s also being applied in agriculture to distinguish weeds from crops for precision pesticide use. Yet, we’re likely most accustomed to using facial recognition on our phones to verify banking or email passwords, or to automatically sort our photos.

Increasing confidence in the accuracy of this technology has expanded its public uses. The US National Institute of Standards and Technology (NIST) estimates a twentyfold improvement in accuracy between 2014 and 2018, with failure rates falling from 4.0% to 0.2%.

We can expect the list of potential applications to grow:

  • Attendance tracking at school and work could be automated by face scans, and could provide added verification in exam halls and polling stations.
  • Loyalty members of a retailer could receive virtual coupons and recommendations by text message upon entering a store.
  • Security access to buildings or bank machines using facial recognition could reduce barriers for persons with disabilities in the workplace.

A sub-set of research is developing measures to counter the misuse of people’s images, such as deep fakes or identity fraud. Liveness detection software, for example, aims to determine whether an image or video is true to the subject involved; in essence, a good AI that can identify bad AI.

Does facial recognition equate to surveillance?

Many countries have no explicit legal or regulatory requirements related to facial recognition within their privacy regimes. In some places, this leaves interpretation open to discretion or abuse by government and business; as such, surveillance has become synonymous with facial recognition.

Here in Canada, we’ve seen pushback when its use has exceeded public comfort. When the RCMP’s association with Clearview AI – a firm with a database of 3 billion photos from Facebook and Instagram – became public, the Mounties had to set limits on its use. When Vancouver police attempted to use driver’s license photos to identify suspects in the 2011 Stanley Cup riot, the privacy commissioner required a court order for any future use of such technology.

As this technology develops and pushes the limits of privacy, countries are navigating these challenges in real time. The adoption of Canada’s Digital Charter in 2019 – the federal government’s statement of intentions on digital security – suggests individuals can anticipate increased control over personal data and images under its “control and consent” principles. However, the roadmap remains unclear. In July 2020, 77 Canadian civil society groups called on the Trudeau government to: (i) ban use of the technology by federal law enforcement for surveillance and intelligence, (ii) launch public consultations on the use of facial recognition, and (iii) update PIPEDA protections to specifically cover biometric data.

China, with its estimated 626 million surveillance cameras, has perhaps the strictest restrictions on private business use of biometric data. The central government, however, is exempt. Facial recognition is a key tool in its emerging national “social credit” system, which scores personal public behaviour and penalizes “bad” practices (e.g. jaywalking or smoking in the wrong spot). These major investments have made China the world’s capital of facial recognition; since 2015, the majority of patents related to facial recognition and surveillance have come from Chinese applicants.

The European Union’s GDPR is the most advanced data privacy regime. It classifies the data harvested from facial recognition technology as biometric, a category that requires explicit consent from the subject prior to its collection.

In the US, four states – Washington, Illinois, Texas and California – have adopted laws on the protection of biometric data, including explicit opt-in clauses, while numerous cities have banned the use of facial recognition in public services, including policing. The most recent to do so was Portland, amid civil unrest. However, the Trump administration has sought rulings in federal court to proceed with facial scans at airport entry for all passengers, including non-US citizens. Currently being piloted at Los Angeles International Airport and Dallas-Fort Worth International Airport, facial recognition is used to identify potential criminals, detect passport fraud and flag people on no-fly lists.

How is this different than other personal data, like a fingerprint?

"Think about it: how many times have you provided your fingerprint? Now how many images of your face are online?"

Facial recognition differs from other biometrics – DNA, fingerprints – for two big reasons. First, it’s easy to collect; your image can be captured on video by anyone, anywhere. Second, it’s increasingly easy to verify against online images and with deep learning tools.

Think about it: how many times have you provided your fingerprint? Now how many images of your face are online? Governments alone have an enormous trove of reliable, labelled images, from your health card to your passport photos. Meanwhile, our penchant for posting images to social media – with our names, friends and locations – has created massive, inadvertent datasets for facial recognition.

A couple of years ago, Google stunned many observers by announcing development of an algorithm to track people’s social media activity across all platforms, simply by following their face. Google can do this with its proprietary “reverse image search” combined with its massive scale in crawling millions of websites at once. The ease of access to people’s faces makes all of this possible.

Despite advances in the technology, facial recognition wears a mask of mistrust, particularly along racial lines. Some of the original facial recognition systems and algorithms were shown to contain ethnic bias, with high rates of inaccuracy for non-white faces, due largely to training data that skewed toward white men.

Any application of this technology must appreciate the potential for bias in the underlying data, particularly given its potential to negatively impact people. Trust in these tools remains divided; when asked about personal verification methods to access health records, 58% of White respondents in the US were comfortable using facial recognition. But this figure fell to 50% and 41% among Hispanic and Black respondents, respectively.
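For organizations evaluating these tools, the bias question can be made measurable: run the same labelled test set through the system and compare error rates across demographic groups, which is broadly how NIST reports its demographic findings. Below is a minimal sketch of such an audit; the field names are hypothetical placeholders rather than any standard API.

    from collections import defaultdict

    def false_match_rate_by_group(results):
        """Compute the false match rate per demographic group.
        results is a list of dicts with hypothetical fields:
          'group'       - demographic label attached to the evaluation pair
          'same_person' - ground truth (True if both images show one individual)
          'matched'     - the system's verdict for that pair
        A false match means the system wrongly declared two different people
        to be the same person."""
        impostor_trials = defaultdict(int)
        false_matches = defaultdict(int)
        for r in results:
            if not r["same_person"]:               # impostor pair: should not match
                impostor_trials[r["group"]] += 1
                if r["matched"]:                   # ...but the system said it did
                    false_matches[r["group"]] += 1
        return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

Large gaps in this metric between groups are the practical signature of the bias described above, and a reasonable disclosure to ask of any vendor.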

What does this mean for Canadian businesses?

Put simply, tread lightly. Misuse of personal information can carry massive reputational and legal risks.

An RBC Borealis AI survey of Canadian businesses revealed their top motivations to invest in AI programs were (i) to reduce costs, (ii) to increase productivity and (iii) to increase sales. This is increasingly relevant amid the current economic recovery, as firms look to leverage any data advantages to create consumer relevance and new revenue.

Tech giants have inherent advantages in developing this technology, and have staked their claims in hundreds of global patents. In turn, they are seeking consumers of these data tools: for example, a retailer looking for information on shoppers who browse but don’t buy; a restaurateur aiming to track frequency of visits to grant loyalty points; or a construction firm interested in gathering insights on worker activity at job sites.


Comfort with all these uses, however, is not yet widespread. A 2019 Pew Research Center study found that only 36% of US consumers trust tech companies to use facial recognition software responsibly, and just 17% trust advertisers. When individuals are unsure of how their data is being used, firms risk running afoul of privacy and ethical practices.

Any organization or entrepreneur should consider:

  • Educating themselves and their business leaders on the risks, drawing on resources like RESPECT AI.
  • Understanding and creating an awareness of the biases that may be present in their practices.
  • Conducting due diligence on vendors that supply personal data to their business.
  • Supporting business councils, and other advocacy groups, that call for clear, federal regulatory guidance on the use of data from facial recognition in business.
  • Making a public commitment to the ethical use of AI and respect for people’s autonomy, by following the principles of Privacy by Design or signing onto the Montreal Declaration.

Uncertainty about how to use AI responsibly could account for the stark divide among Canadian firms adopting it. Six in ten Canadian businesses feel that AI is mostly for larger organizations.

Key Stakes

Consumers should have the right to know why and how firms use their likeness, and governments are responsible for ensuring it is done legally. Businesses that engage in facial recognition applications without appreciating the associated ethical questions risk strong blowback from consumers.

Canada has been a leader in supporting AI for good. How facial recognition technology is deployed will be an important test of adherence to such ideals. The pandemic has accelerated Canada’s move from conversation to action on digital ethics.


Andrew Schrumm is a Senior Manager, Research in RBC’s Thought Leadership group.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.

Categories

Technology