AI Ethics
Principles

Values

Autonomy

  • Giving rules to oneself. (Between the slavery of living by others’ rules and the chaos of life without rules.)
  • People can escape their own past decisions. (Dopamine rushes from Instagram Likes don’t trap users in trivial pursuits.)
  • People can make decisions. (Amazon displays products to provide choice, not to nudge choosing.)
  • People can experiment with new decisions: users have access to opportunities and possibilities (professional, romantic, cultural, intellectual) outside those already established by their personal data.

Dignity

  • Users hold intrinsic value: they are subjects/ends, not objects/means.
  • People are treated as individually responsible for – and held accountable for – their choices. (Freedom from patronization.)
  • The AI/human distinction remains clear. (Is the voice human, or a chatbot?)

Privacy

Human questions

How much intimate information about myself will I expose for the right job offer, or an accurate romantic match?

Originally, health insurance enabled adventurous activities (like skiing the double black diamond run) by promising to pay the emergency room bill if things went wrong. Today, dynamic AI insurance converts personal information into consumer rewards by lowering premiums in real time for those who avoid risks like the double black diamond. What changed?

An AI chatbot mitigates depression when patients believe they are talking with a human. Should the design – natural voice, and human conversational indicators like the occasional cough – encourage that misperception?

If my tastes, fears and urges are perfectly satisfied by predictive analytics, I become a contented prisoner inside my own data set: I always get what I want, even before I realize that I want it. Should AI platforms be perverted to create opportunities and destinies outside those accurately modelled for who my data says I am – and if so, how?

Critical ethics debate

What’s worth more: freedom and dignity, or contentment and health?

Values

Fairness

  • Equals treated equally and unequals treated unequally. (Classical approach, Aristotle.)
  • Equality measured as opportunities or outcomes?
  • Bias/discrimination suppressed in information gathering and application: recognition of individual, cultural, and historical biases inhabiting apparently neutral data.
  • Data bias amplification (unbalanced outcomes feeding back into processes) is mitigated. Example: an AI resume filter that privileges a specific trait produces hiring outcomes that in turn reinforce the privileging, as in the sketch below.
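
The feedback loop in the last bullet can be made concrete with a small simulation. This is a minimal sketch under invented assumptions (a random applicant pool, a made-up scoring bonus), not a model of any real hiring system:

```python
import random

random.seed(0)

def simulate(rounds=5, pool=1000, hires=100, trait_share=0.6):
    """Return the privileged trait's share of hires, round by round."""
    share = trait_share                      # trait's share among past hires
    history = [share]
    for _ in range(rounds):
        # Applicant pool: half hold the trait, half do not.
        applicants = [i < pool // 2 for i in range(pool)]

        def score(has_trait):
            # The filter mixes genuine merit (random here) with a bonus
            # learned from how often past hires held the trait.
            bonus = share if has_trait else 1.0 - share
            return bonus + random.random() * 2.0

        hired = sorted(applicants, key=score, reverse=True)[:hires]
        share = sum(hired) / hires           # new hires feed back into the "history"
        history.append(round(share, 2))
    return history

print(simulate())  # the trait's share of hires drifts toward 1.0: the bias feeds itself
```

Mitigation means breaking the loop – for example, auditing the trait’s share of hires each round rather than letting selections silently re-enter the training data.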

Solidarity

  • No one left behind: training data inclusive so that AI functions for all.
  • Max/Min (maximin) distribution of AI benefits: the most to those who have least (Rawls). (Sketched after this list.)
  • Stakeholder participation in AI design/implementation. (What do the other drivers think of the driverless car passing on the left?)
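
The Max/Min bullet above can be read as a decision rule. A minimal sketch, with hypothetical rollout plans and invented benefit scores:

```python
# Rawls's maximin rule: choose the plan whose worst-off group receives the
# most benefit, regardless of the total. All numbers below are invented.
rollout_plans = {
    "optimize_total": {"urban": 9, "suburban": 7, "rural": 1},   # biggest sum
    "uniform_access": {"urban": 5, "suburban": 5, "rural": 3},
    "rural_first":    {"urban": 4, "suburban": 4, "rural": 6},   # best minimum
}

def maximin_choice(plans):
    """Return the plan that maximizes the benefit of its worst-off group."""
    return max(plans, key=lambda name: min(plans[name].values()))

print(maximin_choice(rollout_plans))  # "rural_first": its minimum (4) beats 1 and 3
```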

Sustainability

  • Established metrics for socio-economic flourishing at the business/community intersection include infant mortality and healthcare, hunger and water, sewage and infrastructure, poverty and wealth, education and technology access. (17 UN Sustainable Development Goals.)

Human questions

Which is primary: equal opportunity for individuals, or equal outcomes for race, gender and similar identity groups?

AI catering to individualized tastes, vulnerabilities, and urges effectively diminishes awareness of the others’ tastes, vulnerabilities and urges – users are decreasingly exposed to their music, their literature, their values and beliefs. On the social level, is it better for people to be content, or to be together?

An AI detects breast cancer from scans earlier than human doctors, but it trained on data from white women. Should its use pause until data can be accumulated – and efficacy proven – for all races?

Those positioned to exploit AI technology will exchange mundane activities for creative, enriching pursuits, while others inherit joblessness and tedium. Or so it is said. Who decides what counts as creative, interesting and worthwhile versus mundane, depressing and valueless – and do they have a responsibility to uplift their counterparts?

Critical ethics debates

What counts as fair? Aristotle versus Rawls.

Is equality about verbs (what you can do), or nouns (who you are, what you have)?

In the name of solidarity, how much do individuals sacrifice for the community?

Values

Performance

  • Accuracy & Efficiency
  • Personalized quality and convenience.

Safety

  • Secure, resilient, and equipped with fallbacks to mitigate failures.
  • Human oversight: a designer or deployment supervisor holds the power to control AI actions.

Accountability

  • Explainability: ensure AI decisions can be understood and traced by human intelligence, which may require degrading AI accuracy/efficiency. (Inexplicability may be the price of producing knowledge exclusively through correlation.) Interpretability – calculating the weights assigned to datapoints in AI processing – may substitute for unattainable explainability: auditors learn what data most influences outputs, even if they cannot perceive why. (See the sketch after this list.)
  • Redress
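
The interpretability point above can be illustrated with a toy permutation test. Everything here is hypothetical – the stand-in model, the feature names, the audit data – and the aim is only to show how auditors can learn which inputs most influence outputs without ever learning why:

```python
import random

random.seed(0)

def black_box(row):
    """Stand-in for an opaque model: auditors see outputs, not reasons."""
    income, zip_risk, shoe_size = row
    return 1 if 0.7 * income - 0.3 * zip_risk > 0.2 else 0

# Hypothetical audit set: three features scaled to 0..1, labels from the model itself.
features = ["income", "zip_risk", "shoe_size"]
data = [[random.random() for _ in features] for _ in range(500)]
labels = [black_box(row) for row in data]

def agreement(rows):
    """How often the model's outputs on `rows` match the recorded labels."""
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = agreement(data)
for i, name in enumerate(features):
    column = [row[i] for row in data]
    random.shuffle(column)                      # break this feature's information
    perturbed = [row[:i] + [v] + row[i + 1:] for row, v in zip(data, column)]
    drop = baseline - agreement(perturbed)
    print(f"{name}: output change {drop:.2f}")  # larger = more influential input
```
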
Human questions

A chatbot responds to questions about history, science, and the arts instantly, and so delivers civilization’s accumulated knowledge with an efficiency that withers the ability to research and discover for ourselves. (Why exercise thinking when we have easy access to everything we want to know?) Is perfect knowledge worth intellectual stagnation?

Compared to deaths per car trip today, how great an increment would be acceptable to switch to all-driverless cars, ones prone to the occasional glitch and consequent, senseless wreck?

If an AI picks stocks, predicts satisfying career choices, or detects cancer, but only if no one can understand how the machine generates knowledge, should it be used?

What’s worth more, understanding or knowledge? (Knowing, or knowing why you know?)

Which is primary, making AI better, or knowing who to blame, and why, when it fails?

Critical ethics debates

What, and how much, will we risk for better accuracy and efficiency?

What counts as risk, and who takes it?

A driverless car AI system refines its algorithms by imitating the driving habits of the human owner (following distance between cars, acceleration, braking, turning radius). The car later crashes. Who is to blame?

Versus

Compared with other frameworks, our principles lean toward human freedom/libertarianism and are more streamlined. The differences are small.

Ethics Guidelines for Trustworthy AI
AI High Level Expert Group, European Commission
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment
European Commission for the Efficiency of Justice
https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c

Ethical and Societal Implications of Data and AI
Nuffield Foundation
https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Sheffield-Foundat.pdf

The Five Principles Key to Any Ethical Framework for AI
New Statesman, Luciano Floridi and Lord Clement-Jones
https://tech.newstatesman.com/policy/ai-ethics-framework

Postscript on Societies of Control
October 1997, Deleuze
/Library: Deleuze, Foucault, Discipline, Control.pdf

A Declaration of the Independence of Cyberspace
John Perry Barlow
https://www.eff.org/cyberspace-independence

 

 

Ethics Theories


Deontological

Guiding value

Tradition

Rules for action

A set of moral directives seems to recur through historical times and places, and in diverse religious, social, and political contexts. This endurance can be taken to legitimize the guidelines. While no single list perfectly contains the recurring imperatives, the typical entries are:

Duties to self:
• Preservation
• Develop my own talents
• Fidelity/Integrity (Be true to myself)

Duties to others:
• Honesty (Be true to others)
• Beneficence (Help others as reasonably possible)
• Reparation (Repair harm done to others)
• Gratitude

Advantages/Drawbacks

Because ethical legitimacy stands on widespread historical acceptance of the moral rules, the guidelines are familiar, commonly employed, and easily applied to experience. But, multiple duties may yield contradictory imperatives. For example, a student may have money to buy a new computer (develop own talents) or donate to a scholarship fund (beneficence), but not both. No formula has been discovered to reliably adjudicate these duty conflicts.


Deontological

Guiding value

Religious faith

Rule for action

Follow the command of God (or Gods in the case of polytheism, as in ancient Greece).

Advantages/Drawbacks

Divine sanction fortifies confidence in moral regulation, but difficulties remain in decoding how the regulation should be applied on the human level, as exemplified by conflicting interpretations of religious texts, and by the story of Job in the Bible.


Deontological

Guiding value

Equality

Rule for action (Aristotle version)

Treat people identically unless they differ in ways relevant to the situation. Differences between people that are relevant should yield proportionately unequal treatment. (Treat equals equally, and unequals unequally.)

Advantages/Drawbacks

Aristotelian fairness yields objectively correct responses to dilemmas. But, it can be difficult to define the “equal” and “unequal” in practice, especially in terms of what counts as relevant to a situation. For example, a five foot woman and a six foot man each pay the same price for an airplane ticket. Should they receive the same legroom?

Rule for action (John Rawls version)

Decide without regard for how your conclusion affects you personally. The theory can be presented as a thought experiment in which deciders know nothing about themselves (age, education, preferences, and so on) and after making a judgement, those qualities are assigned to them randomly. So, with respect to the airplane ticket and legroom, deciders must imagine that their height will be assigned by lottery after pronouncing their decision.

Advantages/Drawbacks

An effective strategy in some situations. For example, when sharing a cookie between two friends, one breaks it in half and the other chooses the side: the person breaking the cookie operates from behind the veil of ignorance in that they don’t know how they will be affected by their own portioning. But, in many situations it’s nearly impossible for deciders to blindfold themselves to their own reality within the decision being made.


Deontological

Guiding values

Rationality, Dignity

Rule for action (Rationality version)

Actions must be universalizable, meaning that it is possible to rationally conceive of everyone taking the action all the time. Lying, for example, cannot be universalized because if everyone lied all the time, no one would take anything seriously, so no one could successfully lie. Attempting to lie therefore contradicts itself. Restated, lying cannot make sense because universalizing the practice doesn’t make everything false; instead, it creates a reality like an adventure movie, which is neither honest nor dishonest: not true, but also not misleading, just entertaining.

Advantages/Drawbacks

Powerfully objective, but practically torturous: imagine never lying about anything, ever.

Rule for action (Dignity version)

Treat others as ends in themselves, and never only as means. Because others’ independent life projects must be respected, treating them as tools or instruments serving my own projects becomes inadmissible. The difference can be understood in the distinction between collaboration and exploitation: the first treats others as ends in themselves, the second treats them as tools for use.

Advantages/Drawbacks

The ideal of universal dignity as inherent to human beings is inspiring in the abstract, but does a remorseless murderer deserve to be treated as dignified? More practically, if humans may not be treated as mere instruments, what does that mean for our interactions with cashiers?


Deontological

Guiding value

Freedom

Rule for action

Do what you want, up to the point where you interfere with others doing the same. Libertarian models extend the expression of freedom from our minds and bodies to our possessions and the fruits of our labors. In every case, freedom means applying rules to yourself, and obeying them.

Advantages/Drawbacks

Freedom maximization empowers individual experience: we are liberated to choose our own identities and destinies. But, the theory does little to resolve conflicts between individuals or support collective wellbeing. Zoning laws, for example, conflict with libertarian thought.


Consequentialist

Guiding value

Happiness

Rule for action

Bring the greatest good and happiness to the greatest number. Total wellbeing is calculated by summing the condition of every member of society. Then, those actions raising the happiness count – or diminishing overall suffering – are implemented. Happiness can be defined hedonically (Bentham, physical pleasures), or idealistically (Mill, intellectual pleasures). In both cases, the happiness calculation must account for everyone, as far into the future as the effects of an action may be reasonably projected.
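
A minimal sketch of that calculus, with invented wellbeing scores for a toy scenario: sum everyone’s projected condition under each candidate action and implement the action with the largest total.

```python
# Hypothetical utilitarian calculus: all scenario names and scores are invented.
projected_wellbeing = {
    "deploy_ai_triage": {"patients": 40, "nurses": -5, "doctors": -5, "insurers": 10},
    "keep_human_triage": {"patients": 25, "nurses": 5, "doctors": 5, "insurers": 0},
}

# Sum every group's projected condition under each action, then pick the largest total.
totals = {action: sum(group.values()) for action, group in projected_wellbeing.items()}
print(max(totals, key=totals.get), totals)  # "deploy_ai_triage": 40 vs. 35, however distributed
```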

Advantages/Drawbacks

Prioritizing overall wellbeing and collective welfare is attractive in the abstract, but it balances against injustices to flesh-and-blood individuals: if a fatal disease can be cured with a lethal experiment on a human, and there are no volunteers, a pure utilitarian will coerce participation. Another drawback is the difficulty of accurately calculating happiness in a world of diverse people with unpredictable futures.


Consequentialist

Guiding value

Happiness

Rule for action

Bring the greatest good and happiness to the greatest number, not including the actor. Total wellbeing is calculated by summing the condition of every member of society except the person doing the calculating. Then, those actions raising the happiness count – or diminishing overall suffering – are implemented. Happiness can be defined hedonically (Bentham, physical pleasures), or idealistically (Mill, intellectual pleasures). In both cases, the happiness calculation must account for everyone except the actor, and the calculating must stretch as far into the future as the action’s effects may be reasonably projected.

Advantages/Drawbacks

Selflessly seeking collective wellbeing sounds noble, but is altruism based on generosity, or is it disguised self-abnegation?


Consequentialist

Guiding value

Happiness

Rule for action

Bring the greatest good and happiness to me. Happiness can be defined hedonically (Bentham, physical pleasures), or idealistically (Mill, intellectual pleasures). Some egoists view the theory as an inescapable psychological reality: we are all out for ourselves whether we admit it or not. Others support egoism as a rational choice, especially those promoting Enlightened Egoism, the view that acting to benefit others is desirable as an efficient strategy for self-service. Further, the best way to bring happiness to others may be to seek it for oneself (Adam Smith, the invisible hand).

Advantages/Drawbacks

No one knows my own happiness better than I do, so it makes sense that I hold the responsibility to seek it. Also, if the invisible hand idea is persuasive, then enlightened egoism becomes preferable to utilitarianism and altruism by default. But, egoism requires that others, even those closest to us, be categorized as unworthy of independent moral consideration.


Virtue

Guiding value

Good living (Eudaimonia)

Rule for action

As opposed to deontological and consequentialist theories, which attempt to form good rules for action, virtue ethics attempts to form good people, and then trusts that they will act civically in a complex world. Virtue is a skill, one that is acquired intellectually through study, and also practically as part of youthful development in social institutions: families, schools, churches, the military, workplaces, sports teams, civic associations. As an example of virtue in practice, a college student may attend an ethics lecture in the afternoon, and in the evening apply the lessons while participating in a water polo competition where the virtue of winning with humility (or losing with dignity) is practiced.

Advantages/Drawbacks

Because virtue is a skill, the attainment of mastery provides satisfaction, meaning virtue is its own reward: doing good feels good, and together they define a good life. However, exactly what counts as being virtuous is hard to define since different societies teach their youth different lessons and embody distinct practices for managing crime and punishment, vows of marriage and family responsibilities, the treatment of the vanquished in war and sporting competitions, and so on.


Post-Nietzschean (Nietzsche/Heidegger)

Guiding value

Authenticity

Rule for action

As opposed to the traditional ethical obligation for individuals to aspire to an ideal identity as defined by their society, the ethics of authenticity asks you to be true only to yourself, whoever you may be. The precedent requirement is to determine who, exactly, you are. Nietzsche proposed the Eternal Return thought experiment; Heidegger proposed anxiety in the face of death. In both cases, the result is an understanding of one’s unique life projects as distinct from broader social expectations. The subsequent ethical imperative is to engage those projects.

Advantages/Drawbacks

In a world without objective right and wrong, being true to myself provides a direction and use for my freedom. When the authentic person is an artist, the theory works well, but when the authentic person is a natural born murderer, the theory is less felicitous.


Post-Nietzschean

Guiding value

Nativity

Rule for action

Traditionally, ethicists have worked to escape the idiosyncrasies of particular times and places by developing theories sufficiently abstract to apply universally. Culturalism reverses the tradition by embracing the idiosyncrasies: a community’s native beliefs are accepted as their legitimate moral rules, and the task of ethics is to learn the local practices, customs, and traditions, and then fit into them.

Advantages/Drawbacks

Respect for distinct cultures and traditions is maximized, but hope for ethical progress recedes because respect for another culture’s moral rules applies equally whether those rules seem noble or barbaric.


Post-Nietzschean (Habermas)

Guiding value

Consensus

Rule for action

Gather those involved in a conflict and discuss until reaching a shared resolution. The discourse must be rational and peaceful: participants comprehend their own agreements, and arrive at them without coercion. There is a partial analogy to American courthouse jury decisions: agreement is by informed consent, and the fact of agreement is what confers the decision’s judicial/ethical legitimacy.

Advantages/Drawbacks

Provides a broad range of initially possible solutions since everything is on the table for discussion. But, everything on the table means a lot of talking since every conflict must be addressed and resolved from scratch.


Post-Nietzschean (Gilligan)

Guiding value

Care

Rule for action

As opposed to concentrating on individuals, care ethics focuses on the links uniting people in their social networks. The aim is to strengthen the web of bonds, especially with those who are nearest. Families are a commonly cited example. For instance, a relative suffering drug addiction may receive a disproportionately large share of resources and concern. Or, if the addict becomes dangerously toxic, links to the family may be severed. In both scenarios, fortifying the web of familial care is the guiding concern, not any particular member.

Advantages/Drawbacks

Fortifying our intimate social networks conforms to intuitive feelings: many of us would rescue a sister before a stranger if only one could be saved. But, the theory can lead to tribalism, a mafia-family approach to civil co-existence.


Post-Nietzschean (Deleuze)

Guiding value

Originality

Rule for action

The traditional ethical split between wrong and right is replaced by the split between stagnation and creation. Creativity as an ethics repurposes customary elements of experience for original uses. A common example is slang: redirecting a language’s standard words toward divergent meanings. Another example is the reorienting of web platforms, including the exploitation of LinkedIn as a dating site. As an ethics, creativity works within its native reality instead of coming from outside, it twists the elements of experience away from orthodox uses as opposed to destroying them, and it escapes conventions as opposed to overthrowing them.

Advantages/Drawbacks

Originality as the highest value can be individually invigorating. But, if the driving reason we innovate is to go on and create something else, the interminability is daunting, as in the endless “Yes, and…” of improvisation theater.

 

 

Syllabi

Updated 2021. Mostly curated from Casey Fiesler’s collection.

Human Centered Data Science – University of Washington, eScience Institute – Jonathan T. Morgan
Information Ethics and Privacy in the Age of Big Data – Montana State University, Library/Honors College – Scott W. H. Young and Sara Mannheimer
Ethical and Policy Dimensions of Information Technology and New Media – University of Colorado Boulder, Information Science – Casey Fiesler
Ethical Issues in Technology Design – Columbia University - Teachers College, Mathematics Science and Technology – Joey J. Lee
Data Stories – Carnegie Mellon University, English – Christopher Warren
The Ethics and Governance of Artificial Intelligence – Harvard University, Harvard Law School – Jonathan Zittrain
The Ethics and Governance of Artificial Intelligence – Massachusetts Institute of Technology and Harvard University, MIT Media Lab – Joi Ito and Jonathan Zittrain
Big Data, Big Responsibilities – University of Pennsylvania - The Wharton School, Legal Studies & Business Ethics – Kevin Werbach
Algorithms and Society – Northwestern University, Computer Science and Communication Studies – Brent Hecht
Ethics of Data Science – New York University, Steinhardt (Mike Cottrell College of Business) – Laura Norén
Introduction to Data Ethics – University of Milan, ICT – Marco Cremonini
Algorithms and Big Data – University of Southern California, Comm – Mike Ananny
Fairness in Machine Learning – UC Berkeley, Computer Science – Moritz Hardt

 

 

Texts

An Introduction to Data Ethics

Shannon Vallor, Ph.D.
William J. Rewak, S.J. Professor of Philosophy, Santa Clara University

Fairness and Machine Learning

Solon Barocas, Moritz Hardt, Arvind Narayanan

Ethics of Information

From: Oxford
Luciano Floridi

Ethics for the Information Age

From: Oxford
Michael J. Quinn

Ethics and Technology

From: Wiley
Herman T. Tavani

Introduction to Data Ethics

From: The Business Ethics Workshop, 3rd Edition. Boston, USA: Boston Academic Publishing / Flatworld Knowledge. pp. 349-376 (2018)
James Brusseau
Pace University, New York City

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence

Patrick Lin

 

 

Cases

 

Driverless Cars

What should be done on the road? When decisions go wrong, who shoulders responsibility?

 

 

AI Monitored Driver

Video and artificial intelligence monitoring of drivers promises safety, but what do we sacrifice when the machine stops us from ever breaking the law?

 

 

Explainability and Alienation

The explainability/interpretability distinction, and the question of how we relate to our environment when we can’t understand why decisions are made.


 

 

Authenticity: Do you really want to be who you are?

As opposed to 20th century marketing, which sought to create needs and then satisfy them with goods and services, it is now more cost-effective to discern consumers’ true desires (even when consumers don’t recognize them themselves). Predictive analytics combines with big data and market forces to satisfy consumers as they are. Is that satisfying?

 

 

Disease, Wearables, AI

Wearables and coronavirus, built from Twitter.


 

 

Tinder Live, Big Data, Privacy

The ethics of big data and privacy in the Tinder Live comedy show.

 

 

Google Glass: Ethics and Glassholes

A woman walks into a bar wearing Google Glass and begins filming. Ethical issues follow.

 

 

Black Mirror Writers' Room

Slides intended to show the relationship between current technologies/social implications/ethical controversies and real episodes of Black Mirror. (Casey Fiesler, CU Boulder)



Media

If your Shop Assistant was an App

Hidden camera video by Forbrugerrådet Tænk, with English captions.

 

Surveillance Camera Man and the Ethics of Public Filming

In Seattle, Surveillance Camera Man sticks his camera in people’s faces to find out what they will do. He ends up shedding light on surveillance society. (30 mins)

 

 

Tools

Moral Machine

Ethics, the Trolley Problem, Driverless Cars

Privacy Investigator

Personal Information Tradeoffs

 

 

Decks/Reports

How to Interview a Tech Company: The AI Now Guide with Philosophical Ethics Added

James Brusseau

Polite Technology as a path to morality?

Bruno Gransche

Ethical Considerations in Artificial Intelligence Courses (2017)

Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., & Walsh, T.

Building AI Data Ethics Committees: Report

Accenture & Northeastern University

Data Ethics, the New Competitive Advantage

Data Ethics EU

Data Ethics Canvas

Open Data Institute

Toolkit for creating an ethics roadmap for your AI project

Open Roboethics Institute

Big Data and Insurance: Implications for Innovation, Competition and Privacy

Benno Keller, Geneva Association

Conversations Towards Ethical AI: Views from Experts and Practitioners

Capgemini Research Institute

AI Index 2019 Report, Institute for Human-Centered Artificial Intelligence, Stanford University

Ray Perrault and Saurabh Mishra

From Principles to Practice: An interdisciplinary framework to operationalise AI ethics

VDE Association for Electrical, Electronic & Information Technologies and Bertelsmann Stiftung

Principled Artificial Intelligence: 2020

Berkman Klein Center, Harvard University

Everyday Ethics for Artificial Intelligence

IBM

Ethically Aligned Design

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal, Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge.

Round Up of Ethical Principles of Robotics and AI (2019)

Alan Winfield

Ethics Guidelines for Trustworthy AI

High-Level Expert Group on AI, European Commission

Opinion of the Data Ethics Commission

Germany’s Federal Government

 

 

Readings


Additions, updates, concerns: contact@aiethics.site

 

Other reading lists:
Ethics in AI research by Kathy Baxter

 

 

Postscript on Societies of Control
October, 1997, Deleuze
/Library: Deleuze, Foucault, Discipline, Control.pdf

 

A Declaration of the Independence of Cyberspace
John Perry Barlow
https://www.eff.org/cyberspace-independence

 

Ethics For The New Surveillance,
The Information Society, Vol. 14, No. 3, 1998. Gary T. Marx
http://web.mit.edu/gtmarx/www/ncolin5.html

 

What’s New About the New Surveillance? Classifying for Change and Continuity
Surveillance & Society 1(1):9-29 2002. Gary T. Marx
http://www.surveillance-and-society.org/articles1/whatsnew.pdf

 

Machine Ethics – Association for the Advancement of Artificial Intelligence (AAAI), Papers from the 2005 Fall Symposium
http://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-06/FS05-06-020.pdf

 

Ethical Issues in Advanced Artificial Intelligence (Paper clip thought experiment)
Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, 2003 Nick Bostrom
https://www.nickbostrom.com/ethics/ai.html

 

Responsible AI—Two Frameworks for Ethical Design Practice
IEEE, 2020, by Dorian Peters, Karina Vold, Diana Robinson, Rafael A. Calvo
https://ieeexplore.ieee.org/document/9001063

 

The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions
ACM AIES, 2019 by Jess Whittlestone, Rune Nyrup, Anna Alexandrova and Stephen Cave
https://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_188.pdf

 

A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics
UC-Berkeley, arXiv e-print by Roel Dobbe, Sarah Dean, Thomas Gilbert, Nitin Kohli
https://arxiv.org/pdf/1807.00553.pdf

 

A socio-technical framework for digital contact tracing
Cornell University, arXiv e-print by Ricardo Vinuesa, Andreas Theodorou, Manuela Battaglini, Virginia Dignum
https://arxiv.org/abs/2005.08370

 

The AI Black Box: Failure of Intent & Causation
Harvard Journal of Law & Technology 31,2: 2018, by Yavar Bathaee
https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf

 

How Netflix uses AI to Predict Your Next Series Binge - 2020
Anoop Deoras, ML Research Scientist, Netflix
https://blog.re-work.co/how-does-netflix-use-ai-to-predict-your-next-series-binge/

 

Configuring the Networked Self
Julie Cohen
http://juliecohen.com/configuring-the-networked-self/

 

Data Aggregators, Consumer Data, and Responsibility Online: Who is Tracking Consumers Online and Should They Stop?
Prepublication
/Library/LibContentAcademic/Martin-Tracking-Users-Online-Proof.pdf

 

You have one identity: performing the self on Facebook and LinkedIn
Media, Culture & Society, José van Dijck
/Library/LibContentAcademic/Dijk.pdf

 

When data is capital: Datafication, accumulation, and extraction
https://journals.sagepub.com/doi/full/10.1177/2053951718820549

 

How do data come to matter? Living and becoming with personal data
https://journals.sagepub.com/doi/full/10.1177/2053951718786314

 

Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics
https://datasociety.net/wp-content/uploads/2019/09/Owning-Ethics

 

Northpointe (COMPAS) versus ProPublica Recidivism Debate
No-nonsense version of the "racial algorithm bias", Yuxi Liu
https://www.lesswrong.com/posts/
A computer program used for bail and sentencing decisions, Washington Post/Stanford University
https://www.washingtonpost.com/

 

Foucault, Deleuze, and the Ethics of Digital Networks
The Quest for Intercultural Information Ethics, Bernd Frohmann
/Library/LibContentAcademic/ICIE IV Foucault Deleuze.pdf

 

The surveillant assemblage
British Journal of Sociology, 2000, Haggerty & Ericson
/Library/LibContentAcademic/The surveillant assemblage.pdf

 

Algorithmic paranoia and the convivial alternative
https://journals.sagepub.com/doi/full/10.1177/2053951716671340

 

Data cultures of mobile dating and hook-up apps: Emerging issues for social science research
Big Data & Society, 2017
/Library/LibContentAcademic/Data cultures of mobile dating.pdf

 

Romance Gets Georgia Tech Treatment With Machine Learning Algorithm
Georgia Tech, 2020
https://news.gatech.edu/2020/02/13/romance-gets-georgia-tech-treatment-machine-learning-algorithm

 

Cracking the Tinder Code: An Experience Sampling Approach to the Dynamics and Impact of Platform Governing Algorithms
Journal of Computer-Mediated Communication, 2018
https://academic.oup.com/jcmc/article/23/1/1/4832995

 

Algorithms & Agency
Colorado Tech LJ, 2018
Collection

 

Interpretable Machine Learning, A Guide for Making Black Box Models Explainable
2019, Christoph Molnar
https://christophm.github.io/interpretable-ml-book/index.html

 

Big Data Ethics,
Wake Forest Law Review, 2014, Richards, King
https://papers.ssrn.com/sol3/papers.cfm

 

Database of Ruin
Harvard Business Review
https://hbr.org/2012/08/dont-build-a-database-of-ruin

 

Iron Cagebook
Counter Punch
http://www.counterpunch.org/2013/12/03/iron-cagebook

 

Big Other: Surveillance Capitalism and the Prospects of an Information Civilization
Journal of Information Technology (2015) 30, 75–89. (Surveillance Capitalism)
https://poseidon01.ssrn.com/delivery.php

 

A Politics of Intensity: Some Aspects of Acceleration in Simondon and Deleuze
Deleuze Studies 11.4 (2017)
/Library/LibContentAcademic/AccelerationInSimondonAndDeleuze.pdf

 

Control Societies and Platform Logic
New Formations, 2015, Alex Williams
/Library/LibContentAcademic/ControlandPlatform.pdf

 

What is Privacy?
PowerPoint Presentation
https://eportfolio.pace.edu

 

Spinoza, Feminism and Privacy: Exploring an Immanent Ethics of Privacy
Feminist Legal Studies, 2014
/Library/LibContentAcademic/Spinoza,_Feminism_and_Privacy.pdf

 

“You Social Scientists Love Mind Games”: Experimenting in the “divide” between data science and critical algorithm studies
https://journals.sagepub.com/doi/full/10.1177/2053951719833404

 

Social media and the social sciences: How researchers employ Big Data analytics
https://journals.sagepub.com/doi/abs/10.1177/2053951716645828

 

Doing data differently? Developing personal data tactics and strategies amongst young mobile media users
https://journals.sagepub.com/doi/full/10.1177/2053951718765021

 

Algorithms as culture: Some tactics for the ethnography of algorithmic systems
https://journals.sagepub.com/doi/full/10.1177/2053951717738104

 

What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets
https://journals.sagepub.com/doi/full/10.1177/2053951716631130

 

Big Data Ethics
Big Data & Society, Zwitter, 2014
https://journals.sagepub.com/

 

Data cultures of mobile dating and hook-up apps: Emerging issues for critical social science research
https://journals.sagepub.com/doi/full/10.1177/2053951717720950

 

Critical data studies: An introduction
https://journals.sagepub.com/doi/full/10.1177/2053951716674238

 

Corporate Surveillance in Everyday Life: How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions
A Report by Cracked Labs, Vienna, June 2017
https://crackedlabs.org/dl/CrackedLabs_Christl_CorporateSurveillance.pdf

 

Automatic Society
Journal of Visual Art Practice, 2016, Bernard Stiegler
/Library/LibContentAcademic/AutomaticSociety.pdf

 

Datacrocy
Journal of Visual Art Practice, 2016
/Library/LibContentAcademic/Datacrocy.pdf

 

Ars and Organological Inventions in Societies of Hyper-Control
Leonardo, 2016
/Library/LibContentAcademic/HyperControl.pdf

 

Modulation after Control
New Formations, 2015, Hui
/Library/LibContentAcademic/ModulationAfterControl.pdf

 

Organized Autonomous Networks,
Cultural Politics, 2010, Milani
/Library/LibContentAcademic/O-A-N.pdf

 

Towards a Rhizomatic Technical History of Control
New Formations, 2015, Goffey
/Library/LibContentAcademic/RhizomeControl.pdf

 

The Production of Subjectivity: From Transindividuality to the Commons
New Formations, 2010
/Library/LibContentAcademic/SubectivityTransindividualityCommons.pdf

 

System Error: Complicity with Surveillance in Contemporary Workplace Documentaries
Seminar, 2016
/Library/LibContentAcademic/SurveillanceWorkplace.pdf

 

TransIndividualizations,
2006, Stiegler, Rogoff
/Library/LibContentAcademic/TransIndi.pdf

 

“Plausible Cause”: Explanatory Standards in the Age of Powerful Machines,
Vanderbilt Law Review, 2017
/Library/LibContentAcademic/“Plausible_Cause”_Explanatory.pdf

 

Exploring Information Ethics
Journal of Information Ethics, 2016
/Library/LibContentAcademic/Exploring_Information_Ethics_.pdf

 

Data barns, ambient intelligence and cloud computing: the tacit epistemology and linguistic representation of Big Data
Ethics of Information Technology, 2015
/Library/LibContentAcademic/Data_barns,

 

University in the Epoch of Digital Reason
Analysis and Metaphysics, 2015, Peters
/Library/LibContentAcademic/THE_UNIVERSITY

 

Everything counts in large amounts: a critical realist case study on data-based production
Journal of Information Technology, 2014
/Library/LibContentAcademic/Everything_counts

 

Big Data and the End of Theory
Wired Magazine, 2008, Anderson
https://www.wired.com/2008/06/pb-theory

 

Big Data and the End of Theory? No.
(Could Big Data be the end of theory in science? A few remarks on the epistemology of data-driven science, Science & Society, Fulvio Mazzocchi)
http://embor.embopress.org/content/16/10/1250

 

Big Data, New Epistemologies and Paradigm Shifts
Big Data & Society, 2014, Kitchin
/Library/LibContentAcademic/BigDataEpist.pdf

 

Small Data in the Era of Big Data,
GeoJournal, 2015, Kitchin
/Library/LibContentAcademic/Small_data

 

Big Data, epistemology and causality: Knowledge in and knowledge out in EXPOsOMICS
Big Data & Society, 2016, Canali
/Library/LibContentAcademic/BigDataEC.pdf

 

A World without Causation: Big Data and the Coming of Age of Posthumanism
Millennium: Journal of International Studies, 2015
/Library/LibContentAcademic/A World without Causation

 

The ethics of algorithms: Mapping the debate
Big Data & Society, 2016
/Library/LibContentAcademic/The ethics of algorithms

 

Beyond the Quantified Self: Thematic exploration of a dataistic paradigm
New Media and Society, 2015
/Library/LibContentAcademic/Beyond the Quantified Self

 

Can we trust Big Data? Applying philosophy of science to software
Big Data & Society, 2016
/Library/LibContentAcademic/Can we trust Big Data

 

Predictive Analytics and Incarceration, Prepublication

 

Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification,’
Journal of Strategic Information Systems, 2016
/Library/LibContentAcademic/STRATEGIC

 

Consumption as biopower: Governing bodies with loyalty cards
Journal of Consumer Culture, 2013
/Library/LibContentAcademic/Consumption as biopower

 

Antisocial media and algorithmic deviancy amplification: Analysing the id of Facebook’s technological unconscious
Theoretical Criminology, 2017
/Library/LibContentAcademic/Antisocial media

 

Bias in Algorithmic Filtering and Personalization
Ethics of Information Technology, 2013
/Library/LibContentAcademic/etin-final.pdf

 

Corporate Surveillance in Everyday Life: How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions
A Report by Cracked Labs, Vienna, June 2017
https://crackedlabs.org/dl/CrackedLabs_Christl_CorporateSurveillance.pdf

 

What's Up With Big Data Ethics?
By Jonathan H. King and Neil M. Richards
http://www.forbes.com/sites/oreillymedia/2014/03/28/whats-up-with-big-data-ethics/

 

What Big Data Needs: A Code of Ethical Practices
By Jeffrey F. Rayport
http://www.technologyreview.com/news/424104/what-big-data-needs-a-code-of-ethical-practices/

 

Big Data Is Our Generation’s Civil Rights Issue, and We Don’t Know It
By Alistair Croll
http://solveforinteresting.com/big-data-is-our-generations-civil-rights-issue-and-we-dont-know-it/

 

Injustice In, Injustice Out: Social Privilege in the Creation of Data
By Jeffrey Alan Johnson
http://the-other-jeff.blogspot.com/2013/03/injustice-in-injustice-out-social.html

 

Big Data and Its Exclusions
By Jonas Lerman
http://www.stanfordlawreview.org/online/privacy-and-big-data/big-data-and-its-exclusions

 

Big Data Are Made by (And Not Just a Resource for) Social Science and Policy-Making
By Solon Barocas
http://blogs.oii.ox.ac.uk/ipp-conference/2012/programme-2012/track-c-data-methods/panel-1c-what-is-big-data/solon-barocas-big-data-are-made-by-and.html

 

Big Data, Big Questions: Metaphors of Big Data
By Cornelius Puschmann and Jean Burgess
http://ijoc.org/index.php/ijoc/article/view/2169

 

View from Nowhere: On the cultural ideology of big data
By Nathan Jurgenson
http://thenewinquiry.com/essays/view-from-nowhere/

 

The Hidden Biases in Big Data
By Kate Crawford
https://hbr.org/2013/04/the-hidden-biases-in-big-data

 

The Anxieties of Big Data
By Kate Crawford
http://thenewinquiry.com/essays/the-anxieties-of-big-data/

 

How Big Data Is Unfair
By Moritz Hardt
https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de

 

Unfair! Or Is It? Big Data and the FTC’s Unfairness Jurisdiction
By Dennis Hirsch
https://privacyassociation.org/news/a/unfair-or-is-it-big-data-and-the-ftcs-unfairness-jurisdiction/

 

How Big Data Can be Used to Fight Unfair Hiring Practices
By Dustin Volz
http://www.nextgov.com/big-data/2014/09/how-big-data-can-be-used-fight-unfair-hiring-practices/95195/

 

Big Data’s Disparate Impact
By Solon Barocas and Andrew D. Selbst
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899

 

Big Data and the Underground Railroad
By Alvaro M. Bedoya
http://www.slate.com/articles/technology/future_tense/2014/11/big_data_underground_railroad_history_says_unfettered_collection_of_data.single.html

 

Punished for Being Poor: The Problem with Using Big Data in the Justice System
By Jessica Pishko
http://www.psmag.com/navigation/politics-and-law/punished-poor-problem-using-big-data-justice-system-88651/

 

The Ethics of Big Data in Higher Education
By Jeffrey Alan Johnson
http://www.i-r-i-e.net/inhalt/021/IRIE-021-Johnson.pdf

 

The Chilling Implications of Democratizing Big Data
By Woodrow Hartzog and Evan Selinger
http://www.forbes.com/sites/privacynotice/2013/10/16/the-chilling-implications-of-democratizing-big-data-facebook-graph-search-is-only-the-beginning/

 

Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights
https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf

 

Data and Discrimination: Collected Essays
Edited by Seeta Peña Gangadharan with Virginia Eubanks and Solon Barocas
http://www.ftc.gov/system/files/documents/public_comments/2014/10/00078-92938.pdf

 

Emerging Responsible Data Questions for Human Rights and Human Security
By Mark Latonero
https://responsibledata.io/emerging-responsible-data-questions-for-human-rights-and-human-security/

 

The Ethical Risks of Detecting Disease Outbreaks With Big Data
By Michael White
http://www.psmag.com/health-and-behavior/ethical-risks-of-detecting-disease-outbreaks-with-big-data

 

Framing the Big Data Ethics Debate for the Military
By Karl F. Schneider, David S. Lyle, and Francis X. Murphy
http://ndupress.ndu.edu/Media/News/NewsArticleView/tabid/7849/Article/581865/jfq-77-framing-the-big-data-ethics-debate-for-the-military.aspx

 

Ten Simple Rules for Responsible Big Data Research
By Matthew Zook, Solon Barocas, danah boyd, Kate Crawford, Emily Keller, Seeta Peña Gangadharan, Alyssa Goodman, Rachelle Hollander, Barbara A. Koenig, Jacob Metcalf, Arvind Narayanan, Alondra Nelson, and Frank Pasquale
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005399

 

Data Scientist Cathy O'Neil on the Cold Destructiveness of Big Data
By Nikhil Sonnad
https://qz.com/819245/data-scientist-cathy-oneil-on-the-cold-destructiveness-of-big-data/

 
The Affirmative Action of Vocabulary
By Alistair Croll
https://medium.com/@acroll/the-affirmative-action-of-vocabulary-c123e8196b36

 

'For These Times': Dickens on Big Data
By Irina Raicu
https://www.recode.net/2014/5/1/11626330/for-these-times-dickens-on-big-data
 

 

Metaphors of Big Data
By Irina Raicu
https://www.recode.net/2015/11/6/11620416/metaphors-of-big-data

 

Immersive Theater, Big Data, Identity
CHI, 2018
Collection

 

Theme, 4 Articles: Algorithmic Normativities (Including: Algorithms as Folds)
Big Data and Society, 2019
Collection

 

Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms
By Patricia de Vries and Willem Schinkel
https://journals.sagepub.com/doi/pdf/10.1177/2053951719851532

 

Human Rights in the Age of Platforms
Essay collection from MIT Press
https://mitpress.mit.edu/books/human-rights-age-platforms

 

Good Isn’t Good Enough
by Ben Green (CS)
https://www.benzevgreen.com/wp-content/uploads/2019/11/19-ai4sg.pdf

 

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
by Erico Tjoa and Cuntai Guan IEEE
https://arxiv.org/ftp/arxiv/papers/1907/1907.07374.pdf

 

Bottom-up data Trusts: disturbing the ‘one size fits all’ approach to data governance
International Data Privacy Law, 2019, Sylvie Delacroix, Neil D Lawrence
https://academic.oup.com/idpl/advance-article/doi/10.1093/idpl/ipz014/5579842

 

Privacy Law's False Promise
Washington University Law Review, Vol. 97, No. 2, 2020, Ari Ezra Waldman
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3499913

 

A Framework for Understanding Unintended Consequences of Machine Learning
Harini Suresh and John V. Guttag, MIT
https://arxiv.org/pdf/1901.10002.pdf

 

Please Stop Explaining Black Box Models for High-Stakes Decisions
Cynthia Rudin, Duke University
https://www.arxiv-vanity.com/papers/1811.10154/

 

 

Applications
