Ethics for Inhumans

What We Owe the Future
by William MacAskill
Basic Books, 2022; 333 pp.

William MacAskill, a philosophy professor at Oxford and a leading light of the effective altruism movement, has recently been in the news owing to the frenzied and fraudulent finance of his protégé Sam Bankman-Fried, who now awaits trial.

The “effective altruists” took seriously the implications Peter Singer drew from his famous thought experiment: Suppose you come across a small child who is drowning in a pond. You can easily rescue the child, but if you do so, you will ruin the expensive pair of shoes you are wearing. If you refuse to save the child, wouldn’t this show you are a heartless brute? But, Singer says, nothing in the moral point of the example depends on your close physical proximity to the child. If you had given the cost of the shoes to charity, you could have saved a child living in the third world from death. Singer, relying on a utilitarian framework, next argues that you are morally obliged to give all your income above subsistence to charity, though he recognizes that few will be willing to do so. Further, in order to maximize the effect of your donations, you should investigate which charities are most effective, a prescription the effective altruists enthusiastically embrace.

But they have done Singer one better. In order to maximize our charitable donations, we need to make as much money as possible, and that will often require us to seek employment in high-paying jobs and then give as much as we can to charity. Following this advice led Bankman-Fried to his career in investments.

It would be unfair to blame MacAskill for Bankman-Fried’s peculations, as there is no indication of MacAskill’s involvement in them, but his ethical manifesto merits attention in its own right. As its title suggests, it is a radical extension of effective altruism that emphasizes the future. To be “up-front” about it, What We Owe the Future takes a view of ethics detached from our common human lives and, in its endeavor to assume what Henry Sidgwick called “the point of view of the universe,” is utterly bizarre, much more in its theory than in its rather banal practical recommendations.

The key to MacAskill’s ethics is what he calls “longtermism, the idea that positively influencing the longterm future is a key moral priority of our time. Longtermism is about taking seriously just how big the future could be and how high the stakes are in shaping it. If humanity survives to even a fraction of its potential life span, then, strange as it may seem. . . . [w]hat we do now will affect untold numbers of future people” (pp. 4–5). If humanity lasts as long as a typical mammalian species, billions and billions of future people remain to be born, and their interests swamp our own.

If you object, “Why should I care about that? I care about my family and friends, not possible people in the far future,” MacAskill’s response is one of disarming moderation: “Special relationships and reciprocity are important. But they do not change the upshot of my argument. I’m not claiming that the interests of present and future people should always and everywhere be given equal weight. I’m just claiming that future people matter significantly” (p. 11).

If you adopt MacAskill’s standpoint, though, you will be unable to maintain the distinction he suggests here. Suppose you give the existence of each possible future person a minute weight compared to the persons you value. MacAskill takes utility to be additive; if there are enough future people, the sum of their utilities will outweigh the utility of those close to you. No matter how great the initial disparity between the utility of a person close to you and that of a future person, the numbers will render a verdict in favor of the future. And, judged from a commonsense standpoint, the situation is even worse. Given the vast numbers of future people, even a slight probability of improving their lot will outweigh the actual interests of those near and dear. MacAskill says that he does not demand that people sacrifice the interests of those close to them in this way, but the logic of his argument gives him no way to avert it. If he seeks to escape by contending that the utilities of all possible future people should be taken as an indivisible whole rather than as a sum of individual utilities, then he cannot block people from giving the interests of those in the present a virtually infinite weight, very much counter to the spirit of his approach.
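
To make the swamping arithmetic concrete, here is a minimal sketch of the additive logic described above; the weights, utilities, and population figure are illustrative assumptions, not numbers taken from the book:

```python
# Illustrative only: additive utility with a heavy discount on future people.
# The weights, utilities, and resulting threshold are assumptions, not the book's figures.

weight_near = 1.0        # full weight for a person close to you
weight_future = 1e-6     # a "minute" weight for each merely possible future person

utility_near_person = 100.0      # wellbeing you could secure for someone you love
utility_per_future_person = 1.0  # small wellbeing gain available to each future person

# How many future beneficiaries does it take for the weighted sum to win?
threshold = (weight_near * utility_near_person) / (weight_future * utility_per_future_person)
print(f"Future people needed to outweigh one loved one: {threshold:,.0f}")
# -> 100,000,000, a rounding error next to the trillions of possible future people MacAskill invokes.
```

However small the weight assigned to each future person, some finite population clears the bar; that is the swamping the paragraph above describes.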

It is worth looking further at MacAskill’s moral mathematics, which he draws from the great Oxford philosopher Derek Parfit, though MacAskill takes it to an extreme that Parfit sought to avoid. As MacAskill rightly says, population ethics is very difficult and technical, but, to simplify grossly, Parfit sought to show that, on certain plausible assumptions, a situation in which some people have very high utilities and others lower ones comes out inferior to an equal distribution of utilities if enough people are added to the distribution. (I ought to say that for this column, we must put aside the Austrian demonstrated preference notion of utility; more’s the pity.) If this process is repeated enough times, we will arrive at the “Repugnant Conclusion”:

Consider two worlds: we’ll call the first Big and Flourishing and the second Enormous and Drab. Big and Flourishing contains ten billion people, all at an extremely high level of wellbeing. Enormous and Drab has an extraordinarily large number of people, and everyone has lives that have only slightly positive wellbeing. If the total view is correct. . . . [t]he wellbeing from enough lives that have slightly positive wellbeing can add up to more than the wellbeing of ten billion people that are extremely well-off. Parfit himself thought this was a deeply unpalatable result, so unpalatable that he called it the Repugnant Conclusion. (p. 180)
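
The total view’s bookkeeping reduces to a single comparison of sums; the population sizes and wellbeing levels below are stand-ins chosen for illustration, not figures from the book:

```python
# Total-view bookkeeping behind the Repugnant Conclusion (all numbers illustrative).
big_and_flourishing = 10_000_000_000 * 100.0     # ten billion lives, each at a very high wellbeing of 100
enormous_and_drab   = 100_000_000_000_000 * 0.1  # a hundred trillion lives, each only barely worth living

# On the total view, the larger sum wins, however drab the individual lives.
print(enormous_and_drab > big_and_flourishing)   # True
```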

MacAskill argues that the most plausible way to avoid the Repugnant Conclusion, the critical level view, leads to equally counterintuitive results:

In the critical level view, adding lives that have low but positive wellbeing is a bad thing. . . . This view escapes the Repugnant Conclusion. . . . However, the critical level view has its own counterintuitive implications. . . . It leads to what’s called the Sadistic Conclusion: that it can be better to add to the world lives full of suffering than it is to add good lives. . . . The critical level view regards the addition of lives that only just have positive wellbeing as a bad thing; so adding enough such lives can result in worse overall wellbeing than adding a smaller number of lives that are full of suffering. (p. 185)
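
On one common formalization of the critical-level view, each added life contributes its wellbeing minus a fixed critical level; the level and the population figures below are illustrative assumptions, but they show how MacAskill’s Sadistic Conclusion worry falls out of the arithmetic:

```python
# One common formalization of the critical-level view: each life contributes (wellbeing - c).
# The critical level c and the population figures are illustrative assumptions.
c = 10.0

def added_value(wellbeings):
    """Total contribution a new group of lives makes to the world's ranking."""
    return sum(w - c for w in wellbeings)

barely_positive = [5.0] * 1_000    # many lives worth living, but below the critical level
full_of_suffering = [-50.0] * 10   # a handful of lives of outright suffering

print(added_value(barely_positive))    # -5000.0
print(added_value(full_of_suffering))  # -600.0: ranked "better" -- the Sadistic Conclusion
```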

This objection to the critical level view fails because it remains in the grip of utility maximization over total populations. The critical level view is best taken not as a way to compare populations below the critical level of well-being with other populations, as MacAskill does, but rather as a bar to making any such comparisons at all once the critical level is reached. This avoids the Sadistic Conclusion, since the comparisons in that scenario are not allowed. If MacAskill responds that this limit is arbitrary, the objection may be turned against him. Why should we assume that comparisons of populations’ utility levels are always allowable, an assumption all the more questionable because declining to make it permits us to avoid both the Repugnant and Sadistic Conclusions?

Impatient readers may long since have wanted to object, “Even if we were to accept MacAskill’s future-oriented ethics, we know little about what will happen hundreds of thousands of years from now. Of what use in day-to-day practice are MacAskill’s speculations?” Here, for once, we may come to our author’s defense. He is well aware of the uncertainty of the future, indeed insists on it, and the policies he recommends are hardly radical, putting aside a few issues, such as a more-than-mild mania about artificial intelligence taking over the world.

Though I fully recognize that this is not an argument, I confess to a strong aversion to this weird band of “effective altruists,” who devote their lives to “doing good,” while largely confining their human relationships to fellow members of the cult and thanking God “that they are not as other men are” (Luke 18:11, KJV). Let us leave them as they anxiously compute their “carbon footprints,” and seek the foundations of ethics in a more human way.
