Tech Companies: Neglect Privacy Training And Go Broke
Today's newsletter is sponsored by Mine PrivacyOps, the #1 Highest-rated Data Privacy Management Platform for Request Handling, Data Mapping, ROPA and Risk Assessment on G2. Go live in 30 minutes.
Read time: 5 minutes.
_
Non-legal teams, as a rule, are not expected to have any privacy knowledge. In today's newsletter, I explain why the lack of specialized privacy training can put a company's privacy compliance at risk and lead to privacy infringements.
In tech companies, privacy is usually the domain of the privacy and data protection lawyers in charge. With the exception of security engineers, who will probably have some technical privacy knowledge, other teams - product, design, marketing, human resources, sales, customer service, and so on - are usually not expected to understand the privacy implications of their work.
Tech companies still treat privacy in an "incident prevention" mode. By that, I mean that they plan their products and services around whatever metrics and standards they consider important. Then they might invite privacy lawyers into the room, especially when a new product is being launched. Sometimes these lawyers will point out the need to conduct a thorough Data Protection Impact Assessment or a similar evaluation. On other occasions, they might propose specific recommendations and changes to the product. In the company's day-to-day activities, there will be times when lawyers are not called at all, as the leaders in charge assume that not every product update or adjustment needs special legal oversight.
The issue is that privacy risk, today, is everywhere. Companies collect and process data from multiple sources, for multiple purposes, on an ongoing basis. Every tech company is a data company. As a consequence, with the rise of privacy oversight and regulation, every one of these teams - design, marketing, sales, human resources, engineering, operations, and so on - can potentially commit a privacy infringement that leads to penalties and fines.
The old way of dealing with privacy in an "incident prevention" mode might have worked in recent years and helped companies avoid public backlash, regulatory penalties, and fines. However, privacy is undergoing massive changes on three main fronts:
1- Regulatory changes: the GDPR set a new pace for privacy and data protection legislation and regulatory oversight around the globe. Privacy lawmaking and regulatory efforts have accelerated markedly in the last few months. Regulators are partnering with academics, advocates, and experts to propose more precise and effective measures, starting to break with the old dogma that the law always lags far behind technology.
2- Changes in people's privacy expectations: studies have shown that people's privacy expectations are rising. People want transparency. People will prefer a company that they trust, and privacy practices play a central role in building that trust.
3- Competition: a privacy market has formed in the last few years. You can now find a privacy-oriented version of almost every product or service. Companies have realized that privacy sells and that people place more trust in companies with better privacy practices. Neglecting privacy is a risk.
As a result of these three main waves of changes, it is becoming clear that if a company processes DATA, it must also be PRIVACY-focused. To be privacy-focused, its teams must be privacy-aware and understand how to implement privacy within their domain. If they want to succeed in terms of competition, public trust, and regulatory compliance, they must see privacy as an essential element of their business model and embed privacy in the company's culture.
Do you think I am exaggerating? I will give you two examples:
1- Dark Patterns in Privacy
I have spoken multiple times in this newsletter about dark patterns that affect privacy and wrote an academic article about it (which was cited in the EU report about the topic).
Dark patterns in privacy are design tricks used to collect more data, or more sensitive data. Product, marketing, and design teams are the ones usually in charge of a website or app's design. It is not the usual practice to call a privacy lawyer to review every piece of design before it goes live. Even if it were, not every privacy lawyer understands privacy dark patterns well, as it is a relatively new discussion. Despite all that, regulation is starting to tackle dark patterns, and there have already been fines connected to them. In my view, the fines for privacy dark patterns will get even higher in the coming years, as these practices come to be seen by people and regulators as completely unacceptable.
It is clear to me that it is impossible to tackle dark patterns internally without involving designers and product teams, helping them understand how their work can harm privacy, what privacy means, and what privacy best practices are.
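To make this concrete, here is a minimal, hypothetical sketch (in Python, with made-up names and data) of the kind of automated check a product or design team could run on a consent form before it goes live. It only flags two well-known dark patterns, pre-ticked boxes and bundled purposes, and it illustrates the mindset rather than offering a compliance tool.

```python
# Hypothetical sketch: represent a consent form as data and flag two common
# privacy dark patterns (pre-ticked boxes and bundled purposes).
# All names and the example form below are illustrative, not from any real product.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentOption:
    purpose: str                        # e.g. "marketing emails", "analytics"
    pre_checked: bool                   # True = user must opt OUT instead of IN
    bundled_with: Optional[str] = None  # purpose this one cannot be refused separately from

def review_consent_form(options: list[ConsentOption]) -> list[str]:
    """Return warnings for options that look like privacy dark patterns."""
    warnings = []
    for opt in options:
        if opt.pre_checked:
            warnings.append(f"'{opt.purpose}' is pre-checked: consent should be opt-in.")
        if opt.bundled_with:
            warnings.append(f"'{opt.purpose}' is bundled with '{opt.bundled_with}': "
                            "purposes should be refusable separately.")
    return warnings

form = [
    ConsentOption("marketing emails", pre_checked=True),                             # dark pattern
    ConsentOption("analytics", pre_checked=False, bundled_with="marketing emails"),  # dark pattern
    ConsentOption("order updates", pre_checked=False),                               # fine
]

for warning in review_consent_form(form):
    print(warning)
```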
And I am not talking about privacy-by-design: despite the name, that framework does not contain design guidelines, heuristics, or any content specifically aimed at designers. This is a new “science” that aims to bridge privacy law and design. I proposed Privacy-Enhancing Design (PED) in this newsletter, and you can get in touch to learn more about PED training.
2- Algorithmic Bias
One of the topics of my Ph.D. is algorithmic bias, which I contextualize as a type of unfair data practice that happens in the processing phase of the data cycle. We will have a deep dive into the topic in another newsletter article. What I want to highlight here is that regulation on the topic is on the rise, as this article summarized:
“Predictions for 2022 often discuss a more widespread adoption of explainable AI, stronger AI-centered risk management strategies and increased monitoring of AI systems. Discussions about AI ethics will be central to the development of more regulations as more calls for ethical hiring practices, bans on facial recognition and much more continue to grow across the globe.”
Are engineers being trained on the negative privacy impact of the machine learning systems they build? Do engineers receive broad privacy education to understand the social implications of different types of data processing and algorithmic decision-making?
Clearly, lawmakers and regulators are determined to tackle Artificial Intelligence (AI) bias. Companies should train their engineering teams on how to build algorithmic systems that adopt an accountable, transparent, preemptive, and responsible approach to fairness and privacy.
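To illustrate what that training can translate into in practice, here is a minimal sketch of a demographic parity check, one of the simplest fairness metrics engineers can compute on a model's decisions. The data, group labels, and the 0.8 threshold (inspired by the "four-fifths rule") are assumptions for illustration only; a real audit would be far more comprehensive.

```python
# Illustrative sketch of a demographic parity check: compare the rate of
# positive outcomes (e.g. "hired", "approved") across groups.
# The toy data and the 0.8 threshold are assumptions, not a full fairness audit.

from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, positive outcome) pairs -> positive rate per group."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate across groups (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy example: a model's approval decisions labeled by group.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

rates = selection_rates(decisions)
ratio = demographic_parity_ratio(rates)
print(rates)            # {'group_a': 0.6, 'group_b': 0.3}
print(round(ratio, 2))  # 0.5 -- well below a 0.8 threshold, worth investigating
```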
I strongly believe in the power of education and awareness (that is why this newsletter exists). My final take is: do you want to improve privacy compliance in your organization? ALL TEAMS must have privacy awareness and understand how their work can impact privacy. Tech companies must see themselves as privacy-focused companies.
Education, awareness, and specialized training are the future of privacy.
_
What are your thoughts on that? What other steps can tech companies take to increase the level of their privacy compliance? I would love to read your opinion in the comments below.
Privacy needs critical thinkers like you. Share this article and start a conversation about the topic.
See you next week. All the best, Luiza Jarovsky