
How to sell privacy and make change – Smashing Magazine

About the author

Joe Toscano is an award-winning designer and former Google consultant who left in 2017 due to ethical concerns. Since then, he has started a non-profit called …

Privacy is a fundamental human right that has become one of the most elusive and least understood topics on the Internet. However, the time for change is coming, and it is up to us to decide whether that change will happen voluntarily or through regulation. This article will explain exactly why making these changes is so critical to the success of your business, and how you can make the changes that need to be made in a way that also has a positive impact on profits.

Privacy is a fundamental human right that allows us to be our true selves. It is what allows us to be weird without shame. It allows us to have dissenting opinions without consequence. And, ultimately, it is what allows us to be free. This is why many nations have strict laws concerning privacy. However, in spite of this common understanding, Internet privacy is one of the least understood and most poorly defined topics to date, because it spans a vast array of issues and takes shape in many different forms, which makes it incredibly difficult to identify and discuss. However, I would like to try to resolve this ambiguity.

In the United States, it is a federal crime to open someone else's mail. This is considered a criminal breach of privacy that could land someone in prison for up to five years. Metaphorically speaking, every piece of data we create on the Internet, whether photo, video, text, or something else, can be thought of as a piece of mail. However, unlike our mail in real life, Internet companies can legally open every piece of mail that gets delivered through their system without legal consequence. Moreover, they can make copies of it as well. What these companies are doing would be comparable to someone opening our mail, copying it at Kinkos, then storing it in a file cabinet with our name on it and sharing it with anyone willing to pay for it. Want that file removed, or some of the copies deleted? Too bad. Our mail is currently considered their property, and we have almost no control over how it gets used.

Could you imagine the outrage the public would experience if they found out that the postal service was holding their mail hostage and selling it to anyone willing to pay? What is happening with data on the Internet is no different, and it is time for this to change.

It is more than a matter of ethics that this happens; it is a matter of fundamental human rights.

The problem with making the changes that need to be made (without the changes being forced on us by regulation) is figuring out how to put dollar signs on the problems. What is the return on a 20,000-hour engineering investment in improving consumer privacy standards? Are consumers even asking for these changes? Because if it will not produce a fiscal return and consumers are not asking for it, then why should the changes be made? And even if they are asking for it and there is a return, how does a 20,000-hour investment get prioritized? How will it fit into the product roadmap, and when? These are all valid concerns that need to be addressed in order to help us move forward effectively. So, let's discuss.

Suggested reading: Using Ethics In Web Design

Do consumers want it?

The answer to this question is an emphatic yes. Findings by Pew Research Center show that 90 percent of adults in the United States believe it is important to have control over what information is collected about them, 93 percent believe it is important to control who has access to this information, and 86 percent have taken steps to remove or mask their digital footprints. Similar numbers were found among Europeans in doteveryone's 2018 Digital Attitudes report. Despite these numbers, 59 percent still feel it is impossible to remain anonymous online, 68 percent believe current laws do not do enough to protect their privacy, and only 6 percent are "very confident" that government agencies can keep them secure.

Now, I know what you're thinking. This is consumer demand, and until consumers start leaving old products behind, there is no fiscal reason to make any changes. And (while I don't agree with the logic) you're right. Right now, there is little fiscal reason to make any changes. However, when consumer demand reaches a critical mass, things always change. And the companies that pave the way before change is required always win in the long run. Those who refuse to change until they are forced to always feel the most pain. History shows this to be the truth. But what legislation could possibly change business that much? Great question.

What is going to happen to data protection and privacy standards around the world, through regulation, will not be so different from what happened less than a decade ago, when consumers demanded protection from spam emails, which resulted in the CAN-SPAM Act in the United States, but on a much larger scale and with exponentially greater impact. This legislation, created because consumers were sick of getting spam emails, set the rules for commercial email, established requirements for commercial messages, gave recipients the right to make people and businesses stop emailing them, and spelled out tough penalties for violations. As we enter a period in which consumers are beginning to understand just how badly they have been deceived (for years), giving people personal control over their data will undoubtedly be the future of data collection, whether through free will or legislation. And those who choose to move first will win. Don't believe me?

Consider the fact that engineers can now get into legal trouble for the code they write. Data from Apple Watch, Alexa, and FitBit, among others, has been used as evidence in court, changing consumers' perception of their data. Microsoft went before the US Supreme Court at the beginning of this year to define where physical borders extend in cloud-based criminal activity, the beginning of what will be a long battle. These examples are just a glimpse of what is coming. People are demanding more, and we are reaching the tipping point.

The first to take action on this has been the EU, which established the GDPR, and now policymakers in other countries are beginning to follow suit, working on laws in their own countries to define our cyber future. For example, the vice chair of the US Senate Intelligence Committee, Mark Warner, laid out some ideas in a summary report a couple of months ago, demonstrating where legislation may soon be headed in the United States. But it's not just progressives who believe this is the future; even right-wing influencers like Steve Bannon think we need regulation.

What we are seeing is a human reaction to incredible manipulation. No matter how domesticated we may seem compared to previous generations, people will always fight back when they feel threatened. It is a natural reaction that has allowed us to survive for millennia. Today, technology has become much more than a consumer-facing industry. It is now becoming a matter of national security as well. And for this reason, there will be a reaction, whether we like it or not. It will be better for us to come out with a strategy to prepare, rather than getting swept under the rug. So, what is the fiscal return, you ask? Well, how much is your business worth? That's how much.

Suggested reading: How GDPR will change the way you develop

As a simple framework for what exactly needs to be addressed and why, we can hold a few fundamental truths about creating digital systems:

  1. Privacy must be proactive, not reactive, and must anticipate privacy issues before they reach the user.
    These are not problems we want to deal with after they become a reality but issues we want to prevent entirely, if possible.
  2. Privacy must be the default setting.
    There is no "best for business" option when it comes to privacy; this is an issue of what is best for the consumer, which, in the long run, will be better for the business. We can see what happens when coercive defaults are exposed to the public in what happened to PayPal's Venmo in August 2018, when "Public By Default" was released, bringing a flood of bad press to the brand. More than ever, it is not safe for companies to wait for something bad to happen before making a change.
  3. Privacy must be positive-sum and should avoid dichotomies.
    There is no binary relationship with privacy; it is a forever malleable issue that requires constant oversight and perpetual iteration. Our work does not end with the terms and services agreement; it lasts forever and should be considered a fundamental element of the product, one that evolves with the product and allows consumers to protect themselves, not one that takes advantage of their lack of understanding.
  4. Privacy standards must be visible, transparent, open, documented, and independently verifiable.
    There is no great way to define a litmus test for your privacy standards, but a couple of questions we should all ask ourselves as entrepreneurs are: First, if the press published your privacy agreement, would it be understandable? Second, if it were understandable, would consumers appreciate what they read? And last but not least, if not, what do you need to change?

These principles will be a valuable foundation to keep in mind as products are built and evolved. They are quick and easy questions to ask of yourself and your team that will give you a good ethical baseline, but for a longer piece on the legal side, you can read more from Heather Burns, who outlined several additional principles last year on Smashing. And for a full list of items to inspect during a privacy impact assessment (PIA), you can also review how assessments are carried out under established frameworks.

But before you rush off to make changes in your product, let's point out some of the flaws currently out in the wild and talk about what the changes might look like when implemented correctly.

How to make the change

One of the biggest problems with privacy protection practices in the United States is how hard terms and services agreements (T&S) are to understand. They play a large role in defining privacy, but they tend to do it very poorly. Currently, users are forced to read long documents full of legal language and technical jargon if they hope to understand what they are agreeing to. One study actually showed that it would take approximately 201 hours (nearly ten days) per year for the average person to read every privacy policy they encounter. The researchers estimated the value of this lost time at approximately $781 billion per year, which is unacceptable considering that these are the rules that are supposed to protect consumers, rules that are touted as easy and digestible. This puts consumers in a position in which they are forced to opt in without truly understanding what they are getting into. And in many cases it is not even the legalese that is coercive; it is the way the options are presented, as is clearly demonstrated in various experiences:

This image shows what many sites currently do to gather consent in a way that presupposes agreement and nudges the consumer to accept in a dangerous way.

When consent is collected in this way, it is assumed. (Large preview)

The example above is a generic wireframe, but I chose to use it because we have all seen patterns like this and others like it that relate to the collection of more specific types of data. I could list specific examples, but the list would go on and on, and there is no reason to call out specific companies for demonstrating manipulative patterns, because these patterns (and others very similar to them) can be found on almost every single website or app on the Internet. There is one big problem with asking for consent this way: consumers are not allowed to decline the terms and services without many additional steps, lots of reading, and much more. This is a fundamental flaw that must be addressed, because asking for consent implies there must be an option to say no, and in order to know whether "no" is the best option, consumers need to understand what they are consenting to. However, products are not built that way. Why? Well, it's better for business.

If we sit down and really think about this, what is easy to see, yet often goes unrecognized, is that companies spend more time creating splash pages explaining how to use an app than they do explaining what data is collected and why. Why? Simple changes to the way T&S agreements are signed would not only make consumers more aware of what they are signing up for but would also allow them to be more responsible consumers. We can already see some of these changes being made thanks to the impact the GDPR has had around the world. In many European countries, it is no longer uncommon for consent to be requested through modals like these:

In this image, consent has become more of a priority, but there is still room for improvement.

In this image, consent has become more of a priority. (Large preview)

This first example is a good step forward. It tells consumers what their data will be used for, but it is still not transparent about where the data will go, and it prioritizes agreement while offering no option to decline. It also crams everything into a single body of text, which makes the information much less digestible.

A better example of how this might be designed is something like the modal below, which is now common on many European sites:

This image shows how consent is requested on many sites in Europe in order to comply with the General Data Protection Regulation (GDPR) and to offer consumers better options. It is still not quite what they need, though.

Since GDPR compliance became a requirement, many more options have been made available, but improvements can still be made. (Large preview)

This provides consumers with a full understanding of what their data will be used for, and it does so in a digestible manner. However, it still lacks any meaningful information about where the data will go after they consent. There is not a single clue as to where their data will be shared, with whom it will be shared, and what limitations exist within those agreements. While this is much better than most options on the web, there are still improvements to be made.
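To make that last gap concrete, here is a minimal sketch, in TypeScript, of the data model such a modal could be driven by. All of the names here are illustrative assumptions, not any real framework's API; the point is that each purpose carries its own recipients and limitations, and starts out denied:

```typescript
// Illustrative data model for a transparent consent modal.
// These names are assumptions for this sketch, not a real API.
interface ConsentPurpose {
  id: string;
  description: string;   // plain language: what the data is used for
  sharedWith: string[];  // who else receives the data
  limitations: string;   // what those recipients are allowed to do with it
  granted: boolean;      // must start false: opt-out by default
}

function createPurpose(
  id: string,
  description: string,
  sharedWith: string[],
  limitations: string
): ConsentPurpose {
  return { id, description, sharedWith, limitations, granted: false };
}

// Consent only counts when it was an explicit opt-in.
function grantedPurposes(purposes: ConsentPurpose[]): string[] {
  return purposes.filter((p) => p.granted).map((p) => p.id);
}
```

Because `granted` starts as `false` for every purpose, nothing is shared unless the consumer explicitly flips it on, and the modal has everything it needs to answer "what, with whom, and under what limits" per purpose.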

Requesting third-party access

For example, when a third-party service is used to gain access to a platform, consumers should be made clearly aware of the following:

  1. What data the third party will take;
  2. What it is being used for, and how it might affect their experience if access is not granted;
  3. Who else has, or could have, access to it.

To implement this in a way that gives the consumer control, the experience should also allow consumers to opt in to each individual part of the collection rather than being forced to accept everything or nothing.

This mock-up shows a generic version of how a site might request consent when third-party access is in use. It shows why listing a group of variables in bulk is not good, and why consent should be requested for each point individually.

Forcing consumers to opt in at each point adds friction to the process, yes, but it also ensures that the content is digestible. (Large preview)

This would make the T&S digestible and would allow consumers to opt in to what they are actually agreeing to, not to whatever the company wants them to agree to. And to make sure it is truly opt-in, the default setting should be set to opt-out. This one small change would make a dramatic difference in how consent is requested. Today, most companies bury this content in legalese to hide what they are really interested in, but the days of requesting consent this way are rapidly coming to an end.
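As a sketch of that "opt-out by default" rule, the consent state for a third-party access request could be modeled like this (the scope names and this small API are hypothetical, chosen only to illustrate the behavior):

```typescript
// Sketch: per-scope consent for a third-party access request.
// Scope names and this API are hypothetical.
type ConsentState = Map<string, boolean>;

function defaultConsentState(requestedScopes: string[]): ConsentState {
  const state: ConsentState = new Map();
  // Opt-out by default: every requested scope starts denied.
  for (const scope of requestedScopes) state.set(scope, false);
  return state;
}

function grant(state: ConsentState, scope: string): void {
  if (!state.has(scope)) throw new Error(`Scope "${scope}" was never requested`);
  state.set(scope, true);
}

function canAccess(state: ConsentState, scope: string): boolean {
  // Absent or false both mean "no": access requires an explicit yes.
  return state.get(scope) === true;
}
```

Note that granting one scope says nothing about the others; there is no all-or-nothing bundle anywhere in the model.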

If you are offering consumers a meaningful service and doing so ethically, these changes should not be a problem. If there is real value in the service, consumers will not resist your request. They just want to know whom they can and cannot trust, and this is one simple step that can help your company prove its trustworthiness.

Single-point and multi-point data collection requests

Next, when it comes to creating understandable T&S agreements for your platform, we need to consider how this could be done more contextually, within the experience of using the application. Keep in mind that if it is all dumped on users at once, it is not digestible. For this reason, requests for data collection should occur contextually, at the moment the consumer is about to use the part of your service that requires an additional layer of data to be collected.

To demonstrate how this might occur, here are a couple of examples of what single-point and multi-point data collection requests could look like:

These prototypes show what it might look like when permission is requested for specific data points. This includes showing people what their data is being used for and why it is important for that process, and allowing them to opt in or out of every single point without having to read a long list of legalese in the terms and services agreement.

Single-point and multi-point data requests can be designed to reduce the complexity of current terms of service agreements. (Large preview)

Breaking the T&S down into digestible interaction points within the experience, instead of asking the user for everything up front, allows people to better understand what is happening and why. If the data is not needed to improve the experience, why is it being collected? And if it is being collected for frivolous reasons that only benefit the company, then be honest about it. This is just basic honesty, which, unfortunately, is considered revolutionary, progressive customer service in the modern world.

The biggest key to these initial requests is that none of them should be on by default. All initial triggers should allow the people using the tool to opt in if they choose, and to use it without opting in if they prefer. The days of forced opt-ins (or, worse yet, coerced opt-ins) are coming to an abrupt halt, and those who lead the way will stay ahead of the pack for a long time to come.
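That "nothing on by default" behavior can be sketched as a small state machine. The `ContextualConsent` helper below is hypothetical: every data point starts as unasked, a decline is remembered, and the feature must still work in a reduced form when the answer is no:

```typescript
// Sketch of just-in-time consent: a hypothetical helper that asks for a data
// point only when a feature needs it, and degrades gracefully on "no".
type Decision = "granted" | "declined" | "unasked";

class ContextualConsent {
  private decisions = new Map<string, Decision>();

  // Nothing is on by default; every data point starts out unasked.
  status(dataPoint: string): Decision {
    return this.decisions.get(dataPoint) ?? "unasked";
  }

  record(dataPoint: string, granted: boolean): void {
    this.decisions.set(dataPoint, granted ? "granted" : "declined");
  }

  // Run the feature with the data only on an explicit yes; a decline must
  // still leave the feature usable in a reduced form.
  use<T>(dataPoint: string, withData: () => T, withoutData: () => T): T {
    return this.status(dataPoint) === "granted" ? withData() : withoutData();
  }
}
```

The design choice that matters is the `withoutData` branch: a product that simply breaks when the user declines is a coerced opt-in by another name.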

Data control center

Beyond asking for consent in a meaningful way, it will also be important to give consumers the ability to control their data post hoc. Consumers' ability to control their data should not end with the terms and services agreement. Somewhere in the account settings, there should also be a place (or places) where consumers can control their data on the platform after they have spent time with the service. This area should show them what data is being collected, whom it is being shared with, how they can remove it, and much more.
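As a rough sketch of what such a control center could expose programmatically (all names here are invented for illustration), each record would pair a data category with its downstream recipients, and removal would actually delete the record rather than hide it:

```typescript
// Sketch of a "data control center" API. All names are invented for
// illustration; the point is transparency (report) plus control (remove).
interface DataRecord {
  category: string;      // e.g. "search history"
  sharedWith: string[];  // downstream recipients of this data
}

class DataControlCenter {
  private records: DataRecord[] = [];

  add(record: DataRecord): void {
    this.records.push(record);
  }

  // Transparency: everything collected, and where each piece went.
  report(): DataRecord[] {
    return this.records.map((r) => ({ ...r }));
  }

  // Control: removal must actually delete the records, not just hide them.
  remove(category: string): number {
    const before = this.records.length;
    this.records = this.records.filter((r) => r.category !== category);
    return before - this.records.length;
  }
}
```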

This mockup shows how we might create data controls that inform the consumer not only of what data is being used but where it is going, and that allow consumers to control those data flows intimately, in a way that makes them feel safe.

While we can often download our data these days, we generally have little to no control over it. This must change. (Large preview)

The idea of total data control may seem incredibly radical, but it is undoubtedly the future. And since it is consumers who create the data, ownership of it should be considered a fundamental human right. There is no reason why this should be a debate at this point in history. Data represents the story of our lives, collectively, and combined it creates a great deal of power that can be used against those who created it, especially if we allow the systems to remain black boxes. So, beyond giving consumers access to their data, as discussed in the previous sections, we will also need to make the experience more understandable so that consumers can defend themselves.

Creating explainable AI

While it is incredible to receive a suggested result that shows us the things we want before we even knew we wanted them, this also puts machines in a powerful position they are not yet ready to uphold on their own. When machines are positioned as experts and perform at a level intelligent enough to pass as such, the public will generally trust them until they fail. However, if machines fail in ways the public is incapable of understanding, they will remain experts despite their failure, and that is one of the greatest threats to humanity.

For example, if someone were to use a visual search tool to tell the difference between an edible mushroom and a poisonous one, and they had no way of knowing better when the machine told them a poisonous mushroom was safe, that person could die. Or what happens when a machine determines the outcome of a court case and is not required to provide an explanation for its decision? Or, worse yet, what about when these technologies are used for military purposes and are given the right to use lethal force? That last situation might sound extreme, but it is an issue currently being debated at the United Nations.

To ensure the public is able to understand what is happening behind the scenes, we need to create what DARPA calls explainable artificial intelligence (XAI): tools that explain how machines make their decisions and the accuracy with which these tasks are achieved. This is not about giving away trade secrets but about allowing consumers to feel they can trust these machines and to defend themselves in the case of an error.
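A minimal sketch of what an "explainable" result could look like as a data structure, with invented names and no claim to match DARPA's actual XAI programs: the decision ships together with the measured accuracy of the task and the inputs that weighed most heavily on the outcome, so a consumer (or a court) has something concrete to dispute:

```typescript
// Sketch of an "explainable" prediction result. The shape is an assumption
// for illustration, not DARPA's XAI specification: the decision travels with
// the task's measured accuracy and the inputs that drove the outcome.
interface ExplainedPrediction {
  label: string;
  taskAccuracy: number;               // e.g. 0.92 on a held-out test set
  contributions: [string, number][];  // [feature name, weight toward label]
}

// Surface the strongest reasons first so a layperson can sanity-check them.
function topReasons(prediction: ExplainedPrediction, n: number): string[] {
  return [...prediction.contributions]
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]))
    .slice(0, n)
    .map(([feature]) => feature);
}
```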

Although not based on artificial intelligence, a good example of what this could look like is Credit Karma, which allows people to better understand their credit score, a system that used to be just as hidden as algorithms are today. This tool allows consumers to understand what is going on behind the scenes and to dispute the legitimacy of their results if they believe the system has failed. Similar tools are being created with systems like Google's Match score on Maps and Netflix's Percent Match on shows, but these systems are just beginning to scratch the surface of explainable AI.

The picture includes screenshots of Google Maps and Netflix to demonstrate what a system that explains its decisions might look like. Both are good first steps, but there are plenty of improvements to be made.

Here we see systems that attempt to explain the machine's decision at a very surface level. This is a good start, but we need something better. (Large preview)

Despite these efforts, most algorithms today dictate our experience based on what a company thinks we want. But consumers should no longer be invisibly controlled by large, publicly traded corporations. Consumers should have the right to control their own algorithm. This could be something as simple as letting them know which variables are used for which parts of the experience and how changing the weight of each variable would affect their experience, then giving them the ability to tweak those weights until the algorithm fits their needs, including turning it off completely if that is what they prefer. Whether this would be a paid feature or a free one is still up for debate, but what is not debatable is whether this freedom should be offered.
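As a sketch of this idea, a user-controllable ranking function might take its weights from the consumer rather than the company, with a switch to disable personalization entirely. The variable names here are made-up examples:

```typescript
// Sketch of a consumer-controlled ranking algorithm: the weights come from
// the user, not the company, and personalization can be disabled outright.
// Variable names are made-up examples.
type Weights = Record<string, number>;

function score(
  item: Record<string, number>,
  userWeights: Weights,
  personalizationEnabled: boolean
): number {
  // Algorithm off: every item scores the same, i.e. no invisible steering.
  if (!personalizationEnabled) return 0;
  return Object.entries(userWeights).reduce(
    (sum, [variable, weight]) => sum + weight * (item[variable] ?? 0),
    0
  );
}
```

For instance, a user who zeroes out a "recency" weight stops the feed from favoring new items, and one who disables personalization gets an unranked view.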

The mockup shows how we might create explainable AI in a way that consumers are able to use and control themselves, rather than control being left in the hands of private or publicly traded companies.

Algorithm controls will be the future of business. Could this be a way to generate revenue from the service rather than relying solely on ads? Should it be free? (Large preview)

While the example above is a generic proposal, begin to imagine how this might work in more specific situations. By giving consumers the ability to understand their data, the way it is being used, and how that affects their lives, we will have designed a system that puts consumers in control of their own freedom.

However, no matter how well these changes are made, we must also realize that giving people better control of their privacy does not automatically imply a safer environment for consumers. In fact, it may make things worse. Studies have shown that giving people better control of their data actually makes them more likely to share more sensitive information. And if consumers are unaware of how that data can be used (even if they know where it is being shared), this puts them in harm's way. In this sense, giving consumers better control of their data and expecting it to make the Internet safer is like putting a nutrition label on a Snickers and expecting it to make the candy bar less unhealthy. It won't, and people will still eat it.

While I believe consumers have a fundamental right to better privacy controls and greater transparency, I also believe it is our job, as technologists, not only to build better systems but also to help the public understand Internet safety. So, the last step in bringing this all together is creating awareness of the fact that control is not all consumers need. They also need to understand what is happening on the back end, and why. This does not necessarily mean providing them with source code or giving away IP, but at least providing them with enough information to understand what is going on at a base level, as a matter of safety. And in order to achieve this, we will need to push beyond our screens. We will need to extend our work into our communities and help create that future.

Suggested reading: Designing Ethics: Shifting Ethical Understanding In Design

Incentivizing change

Giving up privacy is something the population has been lulled into because of the monopolies that exist in tech, consumers' misunderstanding of why this is so dangerous to begin with, and the lack of tactical solutions tied to fiscal returns. However, this is a problem that needs to be solved. As Barack Obama noted in his administration's summary of concerns about Internet privacy:

"One thing should be clear: even though we live in a world in which we share personal information more freely than in the past, we must reject the conclusion that privacy is an outmoded value. It has been at the heart of our democracy from its inception, and we need it now more than ever."

Creating trustworthy and secure data-sharing experiences will be one of the biggest challenges our world faces in the coming decades.

As evidence of how difficult these changes are, we can look at how Facebook's stock dropped 19 percent in one day after it announced it would refocus on privacy efforts. This is because investors who have lately been focused on short-term revenue growth know that companies need to implement better strategies, but they also realize the costs involved if the public starts to distrust a company, and Facebook's public statement admitting this made them jump ship.

Although the process will not be easy (and at times may be painful), we all know that privacy is the soft underbelly of technology, and it is time to change that. The decisions made today will pay off big in the long run; a stark difference from the short-term, quarter-to-quarter mentality that has come to dominate business in the last decade or so of growth. Thus, discovering creative ways to make these issues a priority for all stakeholders should be considered essential for businesses and policymakers alike, which means our work as technologists needs to extend beyond the boardroom.

For example, a great way to incentivize these changes beyond discussing the numbers and issues raised in this article would be through tax breaks for companies that allocate large portions of their budget to improving their systems. Breaks could be given to companies that decide to provide regular training or workshops for their staff to help make privacy and security a priority in the company culture. They could be given to companies that hire professional hackers to find loopholes in their systems before attacks occur. They could be given to those that dedicate large amounts of hours to restructuring their business practices in ways that benefit consumers. In this sense, such incentives would not be much different from the tax breaks given to businesses that implement environmentally friendly practices.

The idea of tax breaks may seem outrageous to some, but incentives like these would represent a more proactive solution than the way things are handled now. While it may feel good to read a headline stating "Google fined a record $5 billion by the EU for Android antitrust violations", we have to keep in mind that fines like this represent only a small fraction of such companies' revenue. Combine this with the fact that most cases take several years or decades to conclude, and that percentage only gets smaller. With this as a consideration, the idea of tax breaks can be approached from a different perspective: they are not about rewarding previously negligent behavior but about increasing public safety in a way that is in the best interest of everyone involved. Maintaining our current system, which allows companies to string out court cases while they continue their malpractice, is just as, if not more, dangerous than having no laws at all.

If you enjoyed reading this article and think others should read it as well, please help spread the word.

This article is the beginning of a series of articles I will be writing about Internet security, in which I will work to put fiscal numbers to ethical design patterns so that we, as technologists, can change the businesses we are building and create a better culture around the development of Internet-connected experiences.

Smashing Editorial
(il, ra, yk)

