By Niels Wouters & Jeannie Paterson
TikTok is hugely popular. But its latest decision to capture unique digital copies of your face and voice is a cybersecurity threat to your identity and privacy.
With more than one billion users since its launch in 2017, TikTok is one of the fastest-growing social networks. Its audience is rapidly expanding beyond its Gen Z early adopters to include boomers and retirees.
Despite its growth, TikTok has been unable to stop a stream of rumours and leaks.
Logging our lives and identities
TikTok has attracted further controversy by collecting considerable data about its users. Not too dissimilar from other social networks, powerful algorithms scan videos uploaded to TikTok to detect objects and scenery.
And personal identifiers entered when joining TikTok, often linked to our other social media accounts, are connected with information inferred about our behaviour to create comprehensive digital profiles of users.
These are the basis for serving users with micro-targeted advertising.
Other practices stretch far beyond those of other data-hungry social media providers. Speech, for instance, is automatically transcribed to generate video captions. And under its updated privacy policy, TikTok now collects “faceprints and voiceprints” from the videos users upload.
These biometrics are unique and personal digital replicas of appearance, behaviour and expression. Like fingerprints, they can be used to identify, surveil and profile people of interest.
Biometrics have become a new battleground between social media platforms and privacy regulators. Facebook, for example, was caught capturing facial recognition data without user consent, and only settled the resulting lawsuit a few months ago.
TikTok recently agreed to pay $US92 million to settle a lawsuit accusing it of wrongfully using facial recognition to infer age, gender and ethnicity from user videos for ad targeting. The case never went to court, and TikTok denied the allegations.
However, the new policy change suggests TikTok is making these practices explicit to avoid future legal action.
Cause for alarm
With terabytes of video data uploaded to TikTok daily, its global user base should be alarmed by this policy update, for several reasons.
Access control and security are significant concerns. While some may consent to give TikTok first-party access to their biometrics, users also grant TikTok the right to share data with third-party service providers and business partners.
What if that data is breached? Your identity and biometrics could suddenly be harnessed by criminals to create deep fake videos used to blackmail, extort and cyberbully you.
It is also reasonable to expect TikTok, its parent company ByteDance, its in-house research group AI Lab and its AI reseller BytePlus to harness this “fire hose” of biometric data to drive developments in deep fake technology.
Soon, you may be able to create deep fake videos of yourself – living the life of celebrities, or acting out heroic scenes in blockbuster movies. New features may also enable automatic tagging of others in uploaded videos – even people who haven’t consented, or who aren’t on TikTok at all.
But as deep fake technology becomes mainstream, so does its potential weaponisation and harm. We’ve recently seen public outcry over the undisclosed use of Anthony Bourdain’s deep fake voice in a documentary about his life.
This lifts the lid on security risks and threats to democracy: think disinformation campaigns, political influence operations, geopolitical damage and personal harm, reaching well beyond those who live in the spotlight.
Lastly, we should be alarmed by the precedent this policy sets. It normalises the capture of our biometrics through entertainment apps.
For now, the policy change creates clarity for TikTok’s US-based users.
However, TikTok’s privacy policies for other jurisdictions are equally concerning: they offer no detail about how TikTok processes user content.
“Processing” and “user content” can mean many things, fuelling suspicion that faceprints and voiceprints are already being captured from Australian users.
Don’t sacrifice your face
TikTok is not the only social network operating in a largely unregulated environment where personal data drives corporate profits through advertising. It’s also not the only social network that leverages user data to enable technology developments by cloud machine learning platforms.
But we should be vigilant about the growing number of apps that take our faces and voices as input to create entertaining content.
Sure, they may make us look like cartoon characters or enhance our perceived attractiveness, but these apps also accustom the public to surveillance technologies. Ongoing exposure to surveillance reduces public scrutiny and awareness of its undesired consequences, while increasing willingness to participate in murky data collection and exchange practices.
This is concerning for adults, but the risks are particularly acute for young people.
We already know that real-time monitoring of online and physical spaces that children inhabit impacts how they behave and think of themselves. This affects how they form identities and shape personalities.
When social and entertainment features dress up privacy-invasive technologies, we risk normalising the unpleasant consequences of facial recognition and automated decision-making, and their inherent privacy risks, for generations of young people.
Inaccurate evaluations of young people’s faces and voices could have a real impact on their lives, as these techniques come to be used to monitor school performance and suitability for employment.
The chilling effects of these technologies on free expression could also constrain their emotional and intellectual development.
Young people should be encouraged to challenge today’s concerning technology developments. They should also be provided with tools, techniques and thinking to respond to the risks to privacy, autonomy and identity that biometric surveillance represents.
And we should consider the need for laws that prevent corporate entities with little interest in individual well-being or human thriving from capturing our private selves under the guise of entertainment. Purporting to find consent to such practices in the fine print of privacy policies just doesn’t cut it.
In Australia, as recently recommended by the Australian Human Rights Commission, we need legal protection that places limits on ubiquitous technological surveillance, so as to preserve our rights to be our authentic selves today and into the future.
Niels Wouters is a Research Fellow in the School of Engineering at the University of Melbourne.
Jeannie Marie Paterson is a Professor in the Centre for Artificial Intelligence and Digital Ethics at the University of Melbourne.
Disclaimer: The ideas expressed in this article reflect the author(s) views and not necessarily the views of The Big Q.