Responsible AI: Why Privacy Is an Essential Element


Today, people often talk about “responsible” AI use, but what do they really mean?

Generally speaking, being responsible means being aware of the consequences of our actions and making sure they don’t cause harm or put anyone in danger.

But there’s a lot we don’t know about AI. It’s very hard to say, for example, what the long-term consequences will be of developing machines that can think, create and make decisions on our behalf. It will impact human jobs and lives in ways that no one can be certain about yet.

One of the potential dangers is infringement on privacy, which is generally accepted as a fundamental human right. AI systems can now recognize us by our faces when we’re in public and are routinely used to process highly sensitive information such as health and financial data.

So what does responsible AI look like when it comes to privacy, and what are the challenges businesses and governments face? Let’s take a look.

Consent and Privacy

AI often involves using data that many of us consider private – such as our location, financial affairs, or shopping habits – to offer us services that make life simpler. This might be route planning, product recommendations, or protection from financial fraud. In theory, this is all possible because of consent – we consent for our information to be used; therefore, using it doesn’t constitute a breach of our privacy.

Respecting consent is one way that businesses can ensure they are using AI responsibly. Unfortunately, this doesn’t always happen!

For example, the Cambridge Analytica scandal saw personal data collected from millions of Facebook users without their consent and used for political modeling.

Businesses and even law enforcement agencies have faced backlash from the public for using facial recognition technologies without taking appropriate steps to gain consent.

An important question is when consent becomes invalid. Does that happen when the scope is so broad that it could be interpreted in ways the consenter might never have imagined? Or when the terms and conditions presented when soliciting consent are so complex that they will frequently be misunderstood?

Systems and processes for obtaining clear, informed consent must be baked into the core of any AI system – not simply bolted on as an afterthought – if privacy is to be handled responsibly.
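To make “baked into the core” concrete, here is a minimal Python sketch of purpose-scoped consent checking at the heart of a data pipeline. Everything in it – `ConsentRegistry`, the purpose strings, `process_user_data` – is hypothetical and illustrative, not a reference to any real system:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    granted_purposes: set[str] = field(default_factory=set)  # e.g. {"route_planning"}

class ConsentRegistry:
    """Hypothetical registry tracking which purposes each user has consented to."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        record = self._records.setdefault(user_id, ConsentRecord(user_id))
        record.granted_purposes.add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        if user_id in self._records:
            self._records[user_id].granted_purposes.discard(purpose)

    def is_granted(self, user_id: str, purpose: str) -> bool:
        record = self._records.get(user_id)
        return record is not None and purpose in record.granted_purposes

def process_user_data(registry: ConsentRegistry, user_id: str,
                      purpose: str, data: dict) -> dict:
    """Refuse to touch the data unless consent covers this specific purpose."""
    if not registry.is_granted(user_id, purpose):
        raise PermissionError(f"No consent from {user_id} for purpose '{purpose}'")
    # ... actual processing would happen here ...
    return {"user_id": user_id, "purpose": purpose, "status": "processed"}

# Usage
registry = ConsentRegistry()
registry.grant("alice", "route_planning")
process_user_data(registry, "alice", "route_planning", {"location": "..."})  # allowed
# process_user_data(registry, "alice", "ad_targeting", {})  # raises PermissionError
```

The design point is that the processing function refuses to run at all without a consent check for the specific purpose at hand, rather than consent being verified (or not) somewhere upstream.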

One example is the generative AI tools provided by software company Adobe, which differentiate themselves from competitors’ offerings (such as OpenAI’s ChatGPT) in that they are trained only on data for which the creators have explicitly given their consent.

Data Security

Wherever there is a responsibility to uphold privacy, data also has to be kept safe and secure. We can collect all the consent in the world when we gather data, but if we then fail to protect it, we’ve let our customers down on privacy – which is pretty irresponsible!

Data thefts and breaches are getting bigger and more damaging all the time. At the end of 2023, the sensitive healthcare records of nearly 14 million people were compromised by an attack on transcription service provider PJ&A. And nearly nine million were affected by a ransomware attack targeting MCNA Dental.

In another incident, hackers gained access to feeds from over 150,000 security cameras gathered by software company Verkada, which was involved in training facial recognition technology. The footage showed activity in jails, hospitals, clinics and private premises.

Taking responsibility here means ensuring security measures are up to the task of defending against today’s most sophisticated attacks, as well as predicting and preventing the threats and attack vectors likely to emerge tomorrow.

Personalization Versus Privacy

One of the big promises of AI is more personalized products and services. Rather than buying insurance tailored to broad groups of people similar to me, I’ll buy insurance that specifically covers my own needs and risks, or a car that understands my driving habits and my likes and dislikes when it comes to in-car entertainment and climate control.

This sounds great, but customized experiences obviously come at a cost to our privacy, which means companies collecting data for this purpose must develop a clear understanding of where to draw the line.

One way of tackling this is through on-device (edge computing) systems that process data without it ever leaving the owner’s possession. These systems can be tricky to design and build because they have to run within the comparatively low-power environment of a user’s smartphone or device rather than in a high-performance cloud data center. But it’s one way of responsibly handling privacy while delivering personalized services.
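As a rough illustration of the pattern, here is a Python sketch in which raw interaction data stays in local storage and the ranking logic runs on the device itself. The class name, categories, and catalog are all hypothetical, not part of any real edge framework:

```python
from collections import Counter

class OnDevicePersonalizer:
    """Illustrative on-device personalization: raw events never leave the device."""

    def __init__(self) -> None:
        self._events: list[str] = []  # raw interaction log, kept locally only

    def record_event(self, item_category: str) -> None:
        """Store the raw event on the device; it is never transmitted anywhere."""
        self._events.append(item_category)

    def local_recommendations(self, catalog: dict[str, str], top_n: int = 3) -> list[str]:
        """Rank catalog items on-device against locally computed preferences."""
        prefs = Counter(self._events)
        return sorted(catalog, key=lambda item: prefs[catalog[item]], reverse=True)[:top_n]

# Usage: everything below runs on the user's device.
p = OnDevicePersonalizer()
for category in ["jazz", "jazz", "podcasts"]:
    p.record_event(category)

catalog = {"Evening Jazz Mix": "jazz", "News Briefing": "podcasts", "Top 40": "pop"}
print(p.local_recommendations(catalog))  # ['Evening Jazz Mix', 'News Briefing', 'Top 40']
```

The privacy property comes from the architecture rather than any clever algorithm: because both the event log and the ranking live on the device, there is simply no raw behavioral data on a server to breach or misuse.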

We also have to be careful not to be too personal – customers can easily get “creeped out” if they get a sense that AI knows too much about them! Understanding what level of personalization is genuinely helpful to the user, and what crosses the line into intrusiveness, is key here.

Privacy By Design

Consent, security, and the balance between personalization and invasion of privacy are the cornerstones of building responsible AI that respects privacy. Getting them right requires a nuanced understanding of our own processes and systems, as well as of the rights, feelings and opinions of customers.

Getting it wrong means eroding the trust users place in our AI-enabled products and services, ultimately undermining the likelihood of them achieving their potential.

I have no doubt that we will see plenty of both the good and the bad as companies anticipate and adapt to society’s changing standards and expectations. Legislation will have a role to play, and we’ve seen steps towards this in measures such as the EU AI Act. But at the end of the day, it will be down to those who develop and sell these tools – as well as us as users – to define what it means to be responsible in the fast-paced world of AI.




