Ethical Cost Analysis of Algorithms

More industries are using data science to collect and manipulate information in search of solutions to a given problem. Solutions to simple or complex problems are built from algorithms: designated sets of rules that process information to achieve reliable and valid results. However, the context of the problem can make the impact of the solution more or less efficient with regard to ethical costs. Ethical costs are negative or undesirable outcomes that emanate from the solution.

For example, the intended purpose of a drone strike is to eliminate an enemy target; however, there is some probability that other casualties will occur. The ethical cost of a drone strike is therefore the number of casualties deemed acceptable in pursuit of eliminating the target.

How do you quantify the ethical costs in any problem?

Ethics are subjective and difficult to agree on, and as a result we do not have a consistent set of ethics that could be integrated into an algorithm. Should that stop us from using what we know about ethics and algorithm building to address the ethical risks our creative solutions carry? No!

For any problem there are undesirable outcomes that, if possible, we would choose to avoid. When the problem is defined, we should also constrain solutions to exclude these undesirable outcomes, such as unfair employment terminations or civilian deaths. Using statistical practices such as Bayesian inference, we can evaluate the likelihood of an outcome and then simulate the frequency of those events over time. The traditional costs (tc) could then be evaluated using legal definitions of what it takes to “make the person whole,” which include both physical and emotional costs. The ethical cost (ec) covers the loss of reputation, legal penalties imposed for violations of government regulation, and other related penalties depending on the context.

One last piece is the ethical opinion of the public, assuming the ethical issue remains a subjective one. An ethical coefficient (z) can be proportioned to match the share of the public that holds the ethical standard in question. Many companies are crowdsourcing ethical decision making to create a method and framework for calculating ethical decisions. For example, Alphabet is crowdsourcing ethical dilemmas to help machines using Artificial Intelligence (AI) with decision making (Berman, 2017).
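As a rough sketch of that statistical step, the likelihood of an undesirable outcome can be estimated with a simple Beta-Binomial Bayesian update and then projected forward by simulation. The function names, prior, and figures below are illustrative assumptions, not part of the framework itself:

```python
import random

# Illustrative sketch: estimate the probability of an undesirable outcome
# with a Beta-Binomial Bayesian update, then simulate how often the outcome
# would occur over repeated runs of the algorithm. All numbers are made up.

def posterior_mean(prior_a, prior_b, failures, trials):
    """Posterior mean of the failure rate under a Beta(prior_a, prior_b) prior."""
    return (prior_a + failures) / (prior_a + prior_b + trials)

def simulate_events(p, runs, seed=42):
    """Count undesirable outcomes across `runs` uses of the algorithm."""
    rng = random.Random(seed)
    return sum(1 for _ in range(runs) if rng.random() < p)

# Hypothetical data: 3 undesirable outcomes in 200 historical runs, uniform prior.
p_hat = posterior_mean(1, 1, failures=3, trials=200)
expected_per_year = p_hat * 10_000   # if the algorithm runs 10,000 times a year

print(round(p_hat, 4))           # ≈ 0.0198
print(round(expected_per_year))  # ≈ 198
```

The resulting event frequency is what would then be priced using the legal and reputational definitions of tc and ec above.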

We can define a simple function of net benefit (NB), benefits (B), and costs (C), such that costs include all traditional definitions of costs plus the proposed ethical costs: NB ~ f(B, C), the basic cost-benefit framework.

NB = B – C                                                        (1)

Benefits consist of all positive outcomes created by using the algorithm. Here, however, we focus on expanding the costs, which can be broken into traditional costs (tc) and an ethical stabilizer that balances the overall net benefit. The ethical stabilizer includes the ethical costs (ec) and an ethical coefficient (z), giving the general relationship NB ~ f(B, tc, z, ec).

NB = B – [ tc + z ( ec ) ]                                  (2)

One last modification is to consider time and how it might affect the ethical costs. There is likely a fixed or initial cost incurred by the negative output of the algorithm, which might not be enough to stop its use. However, there could also be a marginal cost that compounds over time. Search filter algorithms are an example: they are very good at filtering the results of a given search, and the benefit in time saved outweighs any initial ethical cost, if there is one. The long-term effect, however, is the narrowing of one’s perspective on the surrounding world. The results reinforce existing biases and further ground people in their current beliefs, which can lead to extreme views and a net negative outcome for society. To include this marginal cost, we can add a term to the ethical stabilizer, using α as the fixed portion of the ethical cost and β as the coefficient representing the compounding long-term cost.

NB = B – [ tc + α + (1 + β)^t · z ( ec ) ]                                         (3)
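Equation (3) is easy to put into code. The sketch below reads the time term as compounding, i.e. (1 + β) raised to the power t, consistent with the marginal cost being “compounded over time”; the numbers in the usage example are illustrative assumptions only:

```python
def net_benefit(B, tc, ec, z, alpha, beta, t):
    """Net benefit per equation (3): NB = B - [tc + alpha + (1 + beta)**t * z * ec].

    B     -- total benefits of the algorithm
    tc    -- traditional costs
    ec    -- ethical cost
    z     -- ethical coefficient (share of the public holding the standard)
    alpha -- fixed portion of the ethical cost
    beta  -- compounding rate of the long-term marginal cost
    t     -- periods elapsed
    """
    return B - (tc + alpha + (1 + beta) ** t * z * ec)

# Illustrative numbers only: a search-filter algorithm whose ethical cost
# compounds at 5% per period, evaluated after 10 periods.
print(net_benefit(B=100.0, tc=20.0, ec=10.0, z=0.6, alpha=5.0, beta=0.05, t=10))
```

Note that with α = 0 and β = 0 the expression reduces to equation (2), so the time-aware form generalizes rather than replaces the earlier one.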

The issue of responsibility is another open question, as it is becoming increasingly difficult to control the negative outcomes of these algorithms without an ethical stabilizer, z(ec). How do you seek resolution with a machine? Who is responsible for the actions or outputs the machine produces: the CEO, the politician, or the engineer? These questions are situational for obvious reasons, but responsibility must rest, to some degree, with an individual who signs off on the decision making. That requirement creates an incentive to get it right, to implement fairly, and to monitor outputs so that expected performance is met or improved. It also gives those affected by the algorithm a path to resolution.

The construct I have presented is not intended for academic use, as it is much too simple. The goal, rather, is to spark debate and to inspire those confronted with ethical dilemmas when building algorithms to integrate relevant ethical stabilizers, giving a more realistic and responsible analysis of their solutions.



Berman, Robby. “Can Crowdsourcing Teach AI to Do the Right Thing?” Big Think, 17 Oct. 2017, teach-ai-to-do-the-right-thing.

Mittelstadt, Brent Daniel, et al. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society, vol. 3, no. 2, 2016, doi:10.1177/2053951716679679.
