I’ve gotten out of the habit of blogging. That’s probably best forgotten.
(ha.)

Lately I’ve been thinking a bit about calculations in terms of what is best for everyone. I associate these with statistical reasoning but I don’t know if that actually makes sense. What I have in mind are things like “if we do this thing we know there will be some suffering but there will be benefits that outweigh the suffering.” I have a vague recollection of utilitarian philosophy from stuff I read (or read about, it’s been so long I’m not sure) like 15 years ago; that vague recollection crops up here. I should make a mental note to add something to my metaphorical “read about this” list. Anyhow, the greatest good for the greatest number of people…

I thought of this recently (thought of it while driving in circles at the airport waiting for the delayed plane of a dear relative to finally arrive; thought of it out loud, talking to myself while driving – it’s basically the same as blogging in that no one’s listening so I can say whatever I like, it’s a bit of a silly/embarrassing thing to admit to doing, and it’s easy to forget that I did it or what I said), two hypothetical scenarios.

Imagine a loading dock at a warehouse. Two pallets of goods need to be shipped very rapidly. In the haste, each pallet falls on an employee. Is it worth it? Answering that requires more information. Let’s say the first pallet is full of some life-saving medicine, which arrives and saves a hundred lives, and the injured person has lost a toe. The second pallet is full of disposable cosmetic color contact lenses, and the injured worker was killed. It’s really easy to say that in the second case, it would have been better if the pallet wasn’t shipped, such that the injured person stayed alive. In the first case it may make some people uncomfortable but it seems fairly easy to say it’s still good that the pallet was shipped.

In both of these scenarios there’s a benefit to some large group of people and there’s a cost to a smaller group of people who didn’t consent to bearing that cost. I’m not against this kind of reasoning in all cases on principle. I think in our society we sometimes run into situations like this, where there are competing alternatives each of which will have some nonconsensually distributed cost. So in some instances some people may have to make decisions like this, and not making a decision isn’t *really* an option.

Still, I think this is a kind of reasoning that we should hesitate over and try to avoid when possible. I also think that when this kind of thing comes up it’s worthwhile and maybe even important to be clear about what’s actually going on. By which I mean, I think people shouldn’t say “this is best for everyone” if everyone doesn’t actually agree. The person who lost a toe in the one hypothetical, if asked, “hundreds of people will live if you let us chop your toe off,” might reasonably say “I’m sorry, but no.” And so they might also say “I know other people benefited, but I was, well, attached to my toe, and I wish this hadn’t happened, even if it would have meant that other people would die.” I could imagine an argument that says something like “you’re better off in this world where these people live and your toe is removed, because this is a better world,” but that strikes me as mostly just rhetorical, in the sense that I don’t think there’s a single clear perspective from which to say “here’s the best world.” I mean, I have a perspective on what would be a better world, and I’d like that perspective to win out. I don’t care so much if people agree with it, as long as it’s implemented. (In case anyone cares, in my vision of a better society point three is Dexy’s Midnight Runners playing free daily in the university library. Let it all come down.) To be clear, with regard to the people who don’t agree, I think they’re making a moral mistake, but, first, I have a hunch that “a mistake in moral reasoning” is pretty close to a synonym for “something I disagree with” and, second, that this kind of mistake is in at least some cases not going to be subject to resolution via argument.

I think we could say that someone who wasn’t willing to sacrifice a toe for the lives of hundreds, or who said “to keep my toe I will have hundreds die,” has less than laudable moral judgment. But what if we change the thing lost? What if it’s not a toe. What if it’s a child? Is someone who is willing to have their child die to save the lives of hundreds of others a better person than someone who is not willing to do so? Personally, I think just posing the question is uncomfortable, not least because it involves the horror of contemplating the death of a child, but also, I think, because it points out that there’s an implied equivalence or measuring going on – “one toe vs hundreds of lives, that’s worth it, fair trade” – and this kind of equivalence starts to break down, or at least get less comfortable, when we make the things held equivalent be something (or rather, someone) really, really important to us. When we do that, the comparisons and measurements start to feel much more uncomfortable, in part, I think, because the people measured start to seem incomparable, such that comparison and measurement seems inappropriate. Of course, in a way, this kind of calculation happens really often in our society. What’s a life worth? Depends on whose, lost in what way, but that’s a knowable quantity, in a way, a very limited way, and sometimes (some others’) lives are worth less to some actors than other things (like money, or oil). That’s one reason I think in response to “this is best for everyone” it’s sometimes a good idea to respond “how? and who is everyone? and who is this not best for?”
