Is FaceApp Safe to Use?


Over the past couple of weeks, FaceApp, the AI-driven photo augmentation tool for smartphones, became the source of a major data privacy controversy that appears to have been greatly overstated. Nevertheless, the episode points to a common issue: the rights we may give up with potentially any app we allow on our devices.

What Happened With FaceApp?

On July 14th, developer Joshua Nozzi tweeted an accusation (since removed) claiming that FaceApp seemed to be uploading all photos in a user’s library, not just the photos a user selects for use with the app’s services. He also pointed to Russian involvement with the company, playing into common concerns over illicit Russian involvement in US data-related matters. Within a couple of days, pseudonymous security researcher Elliot Alderson responded to 9to5Mac’s coverage of Nozzi’s accusation with evidence to the contrary. FaceApp also issued a statement to 9to5Mac to the same effect. Here is the abridged version:

We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.

FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.

Even though the core R&D team is located in Russia, the user data is not transferred to Russia.

Although 9to5Mac jumped the gun by publishing Nozzi’s accusation, which was later proven false, Chance Miller, the article’s author, raises an important point:

It’s always wise to take a step back when apps like FaceApp go viral. While they are often popular and can provide humorous content, there can often be unintended consequences and privacy concerns.

Nozzi’s false accusation seems more like an honest mistake than a malicious act, and Miller’s point illustrates why we’re likelier to panic when unrelated circumstances paint a picture of danger. While we should always take a moment to find evidence for our claims before publishing, if only to avoid inciting unnecessary widespread panic, it’s not hard to see how someone could make this mistake when people are on high alert for this type of activity.

Is Any App Truly Safe to Use?

Although FaceApp hasn’t tricked anyone into providing ownership of their photo library in order to build a massive database of US citizens for the Russian government—or whatever conspiracy theory you prefer—this incident highlights how easily we provide broad permissions without considering the consequences each time we download an app.

When an app requests access to data on your smartphone, it casts a wide net out of necessity. Photo apps don’t ask for the right to save photos or to access only the photos you explicitly choose; they ask for your photo library as a whole. You can’t grant microphone or camera access, or much of anything else, with granular permissions that give you control over exactly what the app can do. Furthermore, smartphones don’t provide a simple way for people to see what apps actually do: logs of any kind, or a means of monitoring network activity, are not available to the average user.
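To make that concrete, here’s a minimal sketch of how an iOS app of that era requests photo access (the helper name requestPhotoAccess is hypothetical, but the PHPhotoLibrary prompt is the standard system one). The permission is all or nothing: once the user agrees, the app can read every photo in the library, not just the ones it was opened to edit.

    import Photos

    // Hypothetical helper: ask the user for photo library access.
    // The prompt is all-or-nothing; if the user agrees, the app can
    // enumerate and read every asset in the library.
    func requestPhotoAccess(completion: @escaping (Bool) -> Void) {
        PHPhotoLibrary.requestAuthorization { status in
            completion(status == .authorized)
        }
    }

    // Once authorized, nothing stops the app from fetching everything.
    requestPhotoAccess { granted in
        guard granted else { return }
        let allPhotos = PHAsset.fetchAssets(with: .image, options: nil)
        print("The app can now read \(allPhotos.count) photos.")
    }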

For this reason, most users don’t have the ability to discover whether an app breaks their trust. Until we have better control over what apps can and cannot access on our devices, we have to consider the worst-case scenario with every download. Unless a person has the knowledge and willingness to regularly monitor app activity, as well as read (and understand) each app’s terms of service in its entirety, that person cannot rule out the possibility of malicious use of their data. After all, Facebook was just fined $5 billion for allowing the non-consensual leak of user data (not that the fine mattered much), and much of that leak occurred through a person’s association with a user who had downloaded the problematic app.

While most commonly used apps don’t find themselves in controversial situations like this, data leaks occur with enough frequency that we need to remember what we risk with every contribution of our personal information. Every permission granted, every photo uploaded, and every bit of information we provide an app, whether it identifies us directly or indirectly, gives a company new data about us that it often claims ownership of through its terms of service. The company may or may not use the collected data for disagreeable purposes, but it affords itself the right through a process it knows almost everyone will ignore. Companies need broad language in their legal agreements to protect themselves. Unfortunately, this legal necessity also cultivates a framework for taking advantage of users when a company publishes an app for the purpose of data collection.

Granular permissions on smartphones are a step toward solving this problem, but they won’t prevent apps from continuing to request broad permissions and requiring access as the price of admission. At this point, most of us know that we’re paying with our data when we’re not paying with our dollars; the problem is that we rarely know the exact cost. Most people probably wouldn’t mind if FaceApp used their selfies to improve the quality of the service, but they might feel differently if that data were used for another purpose. Even without supplying our entire photo libraries, and even if FaceApp deletes the images 48 hours later, the company has still given itself more than enough time to extract value from the data users willingly provide. While it appears to have no malicious intent, it’s unclear what the data we hand over costs us, because we don’t know how it’s used.
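For what it’s worth, one more granular path has existed on iOS for a while: the system image picker runs outside the app’s own process and hands back only the photo the user selects, with no library-wide permission prompt (at least since iOS 11). Here’s a rough sketch; the view controller name is hypothetical.

    import UIKit

    // Hypothetical view controller that lets the user pick a single photo.
    // UIImagePickerController runs out of process, so the app never needs
    // blanket photo-library access; it receives only the selected image.
    class EditorViewController: UIViewController,
                                UIImagePickerControllerDelegate,
                                UINavigationControllerDelegate {

        func pickPhoto() {
            let picker = UIImagePickerController()
            picker.sourceType = .photoLibrary
            picker.delegate = self
            present(picker, animated: true)
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            // Only the one photo the user chose is handed to the app.
            let selectedImage = info[.originalImage] as? UIImage
            picker.dismiss(animated: true)
            // ...edit or upload selectedImage here...
            _ = selectedImage
        }
    }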

The same applies to nearly every app we download. Without transparency, we’re paying a cost determined in secret, and because this repeats across many, many apps, it becomes very difficult to pinpoint the source of any problem that results. FaceApp appears to operate like every other app: requesting broad data permissions out of necessity and reducing liability through a terms-of-service agreement. With every app, we need to ask ourselves whether the service it provides is worth the gamble of an unknown cost.
