“Since we are the mere toys of Satan, the truth about reality is extremely depressing.”
—Derek Parfit, Reasons and Persons, wildly out of context
It seems to me that people often expect too much out of moral systems, and are too quick to give up on applying them when it turns out they aren’t perfect.
First of all, life is complicated. The world is complicated. There are a lot of kinds of things worth valuing, and a lot worth avoiding, and they can interact with each other in intricate ways. It’s impossible to truly know what the absolute most moral action to take is in a world so complicated and fuzzy and convoluted.
We see the same kinds of problems even in less “fuzzy” contexts, though. Take decision theory, for example. Let’s set up a nice clean thought experiment where you simply have to decide between two options. There are many ways to decide between the two. You could decide randomly. You could go with your intuition, or how you feel at the moment. You could offload the problem onto somebody else. Or you could use some theory of morality or rationality or decision-making to pick between them.
However, further suppose you’re in a Satan-run universe. Whatever decision theory (call it T) you use to make your decision, it’s always possible for Satan to say something like “if you used T to make your decision, then I’ll give you the worst outcomes”. Even if T is some sort of “best possible theory”, which gets almost every other decision right, there’s no way for T to be correct in such a universe. All decision theories break down under these kinds of circumstances.
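The adversary’s move can be sketched in a few lines of Python. This is purely illustrative: the names, the payoff values, and the “best possible” rule are my own assumptions, not anything from a formal decision theory. The point is just that the punishment attaches to the *theory* you used, not to the choice it made:

```python
# Illustrative sketch: an adversary that punishes whichever decision
# procedure you used, regardless of what that procedure chose.
# (All names and payoffs here are assumptions made for the example.)

def satan(targeted_theory):
    """Return a payoff function that gives the worst outcome to anyone
    who decided via `targeted_theory` -- whatever that theory is."""
    def payoff(used_theory, choice):
        return -100 if used_theory is targeted_theory else 0
    return payoff

def best_possible_theory(options):
    """A rule that picks optimally in ordinary circumstances."""
    return max(options)

payoff = satan(best_possible_theory)   # Satan targets T itself...
choice = best_possible_theory([1, 2])  # ...so choosing "well" can't help
print(payoff(best_possible_theory, choice))  # -100
```

Since `satan` can be handed *any* theory before the game starts, no fixed `T` can come out ahead in every universe, which is exactly the breakdown described above.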
Circumstances of breakdown don’t necessarily depend on Satan. In math, there’s a loosely analogous idea of “incompleteness”, which comes from Gödel’s incompleteness theorems and applies to any sufficiently powerful proof system (roughly, any consistent, algorithmically axiomatized system that can express basic arithmetic). Within such a system, there are true statements that the system cannot prove, and the system cannot prove its own consistency (that is, it can’t prove that it has a total lack of logical contradictions).
This means we can’t find a system of proof covering all of mathematics that is simultaneously complete (proves every true statement it can express), consistent (free of contradictions), and computable (its axioms and rules can be enumerated by an algorithm). It can be fairly surprising to learn this, but once you’ve internalized the idea, it can also be fairly surprising to learn what kinds of systems defy it.
There are ways to axiomatize all of Euclidean geometry (Tarski’s, for example) such that it’s consistent and complete, and I wouldn’t call Euclidean geometry a small or useless system by any means. A common way to prove the incompleteness of a system is via arithmetic: encode statements and proofs as natural numbers (a “Gödel numbering”), then use that encoding to construct a true statement that, in effect, asserts its own unprovability. However, there are even ways to construct simplified forms of arithmetic that don’t fall prey to Gödel’s trap; Presburger arithmetic is one example: it keeps addition but drops multiplication, and it is consistent, complete, and decidable.
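The encoding trick is concrete enough to run. Here’s a toy Gödel numbering in Python; the symbol alphabet and the code assignments are illustrative choices of mine, not Gödel’s original assignment. Each symbol’s code becomes the exponent of the next prime, so an entire formula round-trips through a single natural number:

```python
# Toy Goedel numbering: encode a formula (a sequence of symbols) as one
# natural number via prime-power encoding, and decode it back.
# The symbol set and codes below are illustrative, not Goedel's own.

def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for a toy)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

CODE = {s: i + 1 for i, s in enumerate("0S+=()x")}  # symbol -> positive code
DECODE = {v: k for k, v in CODE.items()}

def godel_number(formula):
    """Map e.g. "0=0" to 2**CODE['0'] * 3**CODE['='] * 5**CODE['0']."""
    n = 1
    for p, sym in zip(primes(), formula):
        n *= p ** CODE[sym]
    return n

def decode(n):
    """Recover the formula by reading off prime exponents in order."""
    out = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(DECODE[e])
    return "".join(out)

g = godel_number("0=0")  # 2**1 * 3**4 * 5**1 = 810
print(g, decode(g))      # 810 0=0
```

Once statements are just numbers, claims *about* the system (“statement number n is provable”) become arithmetic claims *inside* the system, which is the self-reference Gödel’s construction exploits.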
Obviously, there is plenty of math that can be done even once you show that there’s no way it can “all” be done (or at least, all be done in a certain kind of systematized way). Math is still extremely useful, even though it too breaks down in these kinds of extreme cases. For analogous reasons, I’m not too worried about the obvious ways to break down moral systems. If even a system as rigorous and well understood as mathematics fails here, in my view it’s not really a fault of the system, but rather a fault in the extreme edge case that can show it breaking. There’s still plenty of useful moral (and mathematical) reasoning to be done.
All in all, I think there’s probably too much human effort spent on picking out a perfect moral system relative to trying to get ourselves to broadly apply at least a decently good one (although I do think the former is an incredibly important philosophical problem, and worth working on). It’s important not to act in total ignorance, but it’s just as important to act at all.