Connected Magazine


Move fast and fix things: Ethics in Silicon Valley

By Jacob Harris
21/05/2019

In his novel 1984, George Orwell imagined a world where ‘Big Brother’ was not only always watching the population but controlling what it saw and even thought. In recent times we’ve discovered he wasn’t too far off. It’s just that there’s more than one ‘brother’, and they’re selling that control to the highest bidder. Jacob Harris explains.

To many, the ability of tech juggernauts like Facebook and Google to influence what we think and how we behave is becoming increasingly clear. It was demonstrated by the Russian bot scandal that preceded the Trump presidency and, more recently, by the UN highlighting Facebook’s role in the amplification of hate speech and associated violence in Myanmar.

This power is a direct result of the extremely effective business model Facebook, Google and many other tech companies are built on. It’s what many refer to as ‘the attention economy’ or ‘surveillance capitalism’ and it would seem it has not only redefined how democracies function but has led to an unprecedented level of influence and control being sold to whoever’s willing to pay.

“The economic basis of the internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions of the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likeable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills,” says web designer and speaker Maciej Ceglowski in his talk, Build a Better Monster: Morality, Machine Learning and Mass Surveillance.

Maciej highlights that anyone who owns a smartphone carries a tracking device that knows, with great accuracy, where they’ve been and who they last spoke to and when, and that contains potentially decades-long archives of their private communications, a list of their closest contacts, their personal photos and other deeply intimate information.

“Internet providers collect (and can sell) your aggregated browsing data to anyone they want. A wave of connected devices for the home is competing to bring internet surveillance into the most private spaces. Enormous ingenuity goes into tracking people across multiple devices, and circumventing any attempts to hide from the tracking,” says Maciej.

“With the exception of China (which has its own ecology), the information these sites collect on users is stored permanently and with almost no legal controls by a small set of companies headquartered in the United States.”

What makes matters worse is that users have minimal (if any) control over what personal data is stored, who it’s shared with and what it’s used for. Examples of this power imbalance are regularly reported on: from the misappropriation of 87 million Facebook users’ information that came to light in the Cambridge Analytica scandal (which in turn led to further revelations about the personal information Facebook gathers and sells without our knowledge or consent), to the Associated Press investigation in August 2018 which revealed that Google tracks and stores its users’ location information even when the privacy setting that says otherwise has been selected.

Governments are responding to these breaches of trust with measures like the General Data Protection Regulation (GDPR) in the EU. But in many ways we’re playing a constant game of catch-up, leading some commentators (such as TechCrunch in their article A flaw by flaw guide to Facebook’s new GDPR privacy settings) to question the efficacy of these measures when users can easily be persuaded to waive them. Indeed, it would seem that the situation calls for a re-examination of the fundamental business model that has led us here, and of its ethical implications when applied at scale.

According to Maciej, the twin pillars of the online economy are an apparatus for harvesting tremendous quantities of data from people, and a set of effective but opaque learning algorithms trained on this data. The algorithms learn to show people the things they are most likely to ‘engage’ with – click, share, view and react to – and they have become very good at provoking these reactions.
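
As a rough illustration of the mechanism Maciej describes, an engagement-driven feed reduces to a scoring loop: predict the probability that a user will click, share or react to each candidate item, then surface the highest scorers. The Python sketch below is a minimal, hypothetical rendering of that loop; the field names, weights and scoring formula are invented for illustration and are not any platform’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One post the feed could show, with the model's engagement predictions."""
    item_id: str
    p_click: float   # predicted probability of a click
    p_share: float   # predicted probability of a share
    p_react: float   # predicted probability of a reaction

def engagement_score(c: Candidate) -> float:
    # Hypothetical weights; real systems tune these constantly.
    return 1.0 * c.p_click + 2.0 * c.p_share + 0.5 * c.p_react

def rank_feed(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    # Show the k items the model expects the user to engage with most.
    # Nothing here asks whether an item is true, healthy or civil;
    # engagement is the only objective.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

feed = rank_feed([
    Candidate("calm-news", p_click=0.10, p_share=0.01, p_react=0.05),
    Candidate("outrage-bait", p_click=0.35, p_share=0.20, p_react=0.40),
])
print([c.item_id for c in feed])  # the provocative item ranks first
```

Note what the objective function omits: there is no term for accuracy, wellbeing or civility – engagement is the only signal, which is precisely the concern raised above.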

The use of these tools for commerce is no doubt unsettling to many, but when the same machinery for maximising clicks and engagement creeps into the political sphere there’s even greater cause for concern.

“The idea that these mechanisms of persuasion could be politically useful, and especially that they might be more useful to one side than the other, violates cherished beliefs about the ‘apolitical’ tech industry. Whatever bright line we imagine separating commerce from politics is not something the software that runs these sites can see,” says Maciej.

“All the algorithms know is what they measure, which is the same for advertising as it is in politics: engagement, time on site, who shared what, who clicked what, and who is likely to come back for more. The persuasion works, and it works the same way in politics as it does in commerce – by getting a rise out of people.”

One solution many have argued for is creating a fiduciary relationship (similar to that of a doctor, lawyer or priest) between tech companies and their users.

“Arguments in favour of creating a fiduciary responsibility have been made for several years now,” says Irina Raicu, director of the Internet Ethics program at the Markkula Centre for Applied Ethics at Santa Clara University.

“A law that would create such a fiduciary relationship would be a step in the right direction, as long as it doesn’t leave too much to the tech companies’ own interpretations of what the duty means in practice (or to the courts who might be called to decide whether a particular action by a tech company was, indeed, in the best interest of the users of that company’s products),” says Irina.

Former Google design ethicist Tristan Harris agrees a fiduciary relationship would be more in line with desired ethical outcomes.

“Consider the asymmetric power an attorney has over their client: the attorney knows way more about the law and has lots of privileged information about their client. So if they wanted to, they could easily screw over their client because the level of asymmetric power is enormous,” says Tristan at the Milken Institute’s 2018 Global Conference.

He encourages us to compare the relationships we have with our lawyers, doctors, psychotherapists and the like, and the information they have on us, with our relationship with Facebook and the information it has. While a doctor or lawyer is legally obliged to use our information in our best interests, this is not the case with Facebook and its ilk. This is particularly concerning when we consider the number of people that are ‘plugged in’ to a single environment.

“Technology is increasingly becoming the number one political, social, electoral, cultural actor. There are two billion people who use Facebook, about the number of people who follow Christianity; 1.5 billion people who use YouTube, about the number of followers of Islam; and millennials check their phones 150 times a day on average.”

Compounding the issue is the use of AI technologies such as FBLearner Flow, which enables Facebook to predict what a user is going to be loyal (and not loyal) to in the future, when they have low self-esteem, and when they’re about to change their opinion on certain topics.
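
Facebook has not published FBLearner Flow’s internals, but the kind of prediction described here is, in outline, ordinary supervised learning: fit a model to past behavioural signals, then score current users. The following scikit-learn sketch is purely illustrative; the features, labels and data are invented and bear no relation to Facebook’s actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented behavioural features per user:
# [sessions per day, share of late-night use, negative-sentiment post ratio,
#  profile edits this week]. Purely hypothetical.
X_train = np.array([
    [3.0, 0.1, 0.2, 0.0],
    [9.0, 0.8, 0.7, 4.0],
    [5.0, 0.4, 0.5, 2.0],
    [2.0, 0.0, 0.1, 0.0],
    [7.0, 0.7, 0.6, 3.0],
    [4.0, 0.2, 0.3, 1.0],
])
# Invented labels: 1 = user later showed signs of low self-esteem.
y_train = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Scoring a current user yields a probability that could then be used to
# time content or advertising to a predicted vulnerable moment.
current_user = np.array([[8.0, 0.9, 0.6, 3.0]])
print(model.predict_proba(current_user)[0, 1])
```

The technique itself is unremarkable; what Tristan objects to is the scale of the training data and the absence of any accountability for how the predictions are used.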

“We’ve never had an AI that can learn from two billion people’s minds. And when you think about it that way, it’s an incredibly dangerous situation to have that much power be completely unaccountable to the public interest, to democracy, and to truth. They can claim they care about users but they only care about them in so far as they’re the sheep that need to be jacked into the matrix they’ve created,” says Tristan.

Indeed, if technology is changing social, political and societal landscapes at such a rapid pace, it’s arguable we need to ensure widely valued ethical principles (such as liberty, moral agency and freedom of expression) are upheld before it’s too late.

“The creators and builders of technology need to accept the fact that their products help shape society; while many of them now do (that realisation has become widespread over the last few years) there are still many who reject the responsibilities that come with the power of their role,” says Irina.

“There is also the reality that some entrenched business models and accepted business practices have pushed in the opposite direction – negatively impacting autonomy, human dignity, and security. Those will need to change, too.”

These negative impacts can be seen clearly when you consider the way persuasion works in online settings. Maciej argues that this model of persuasion (that employs techniques to maximise ‘engagement’) has troubling implications in a democracy.

“One problem is that any system trying to maximise engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content. When I tried this experiment last night, within five clicks I went from a news item about demonstrators clashing in Berkeley to a conspiracy site claiming Trump was planning WWIII with North Korea, and another exposing FEMA’s plans for genocide,” says Maciej.

“This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behaviour into the algorithm; it made a correct observation about human nature and acted on it.”
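
Maciej’s observation – that engagement maximisation drifts towards provocation without anyone programming it to – can be reproduced in a toy simulation. Assume only that slightly more provocative items get slightly more engagement; a recommender that greedily follows measured engagement then ratchets provocation upwards on its own. Everything below is hypothetical:

```python
import random

random.seed(42)

def user_engages(provocation: float) -> bool:
    # Toy user model: engagement probability rises with provocation.
    # This single assumption is all the drift below requires.
    return random.random() < 0.2 + 0.6 * provocation

def recommend(engaged: list[float]) -> float:
    # Greedy recommender: try something near the most provocative item
    # that recently worked, nudged up or down at random.
    base = max(engaged[-5:])
    return min(1.0, max(0.0, base + random.uniform(-0.05, 0.1)))

engaged = [0.1]  # start from mild, mainstream content
for _ in range(50):
    level = recommend(engaged)
    if user_engages(level):
        engaged.append(level)  # the algorithm 'learns' what worked

print(f"started at 0.10, drifted to {engaged[-1]:.2f}")
# A typical run ends near 1.0: nobody programmed the escalation in.
```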

It can further be argued that this effect has played a part in shaping the current global political landscape and has contributed to an abatement of centrist politics, which has led some in the industry to assert that the role ethics plays in the evolution of technology needs to change.

“Technology has always had consequences; it can help, or hinder, human flourishing (or do both at the same time in different aspects of life); it has implications both for individuals and for the common good. So ethical considerations have always been part of the evolution of technology – whether or not the term ‘ethics’ was explicitly used in analyses of tech. However, the ethical analysis of tech development needs to be deeper, to include more diverse voices (especially those of the people who will be impacted by the technology), and to be established as an integral element of the tech development process,” says Irina.

Tristan maintains that we need to go further, and ask ourselves some deeper questions around what it means to have something as powerful as these tech giants in a free-market world and how we can make them accountable to something other than their own profits.

“From the moment you wake up and check your phone, thoughts start streaming in that you’re not controlling. It’s the designers at the tech companies who really do control what people are thinking. So the question becomes: How do you wake people up to that?” says Tristan.
