There is a popular meme about early human history: that the invention of agriculture was a huge mistake. Yuval Noah Harari popularized the idea in Sapiens. He pointed out that ancient foragers were “taller and healthier than their peasant descendants”. Foragers also suffered less from famine and plague, and had a “relatively short working week”. As a result, Harari calls the agricultural revolution “history’s biggest fraud”.
You make a good case that 'raising the average' seems a bad metric. But I don't see that 'total utilitarianism' is the only justifiable one. It does imply the repugnant conclusion (which you don't dispute, you just dispute the empirics) ... that there must be some number of people who 'barely refrain from suicide' that would be better than, say, 10 billion happy humans in perpetuity.
I don't see why there's anything "unreasonable" about having a person-affecting view or simply a welfare function that is some concave function of 'number of people ever or in the future' and 'how happy they are'.
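For concreteness, one such function might be (a made-up illustration, not one the argument depends on):

$$W(n, \bar{h}) = \bar{h} \cdot \sqrt{n},$$

where $n$ is the number of people who ever live and $\bar{h}$ is their average happiness. This is increasing in both arguments, so extra lives and happier lives both count for something, but it is concave in $n$: going from 10 billion to 1 trillion people multiplies the population term by only $\sqrt{100} = 10$, not 100, so sheer numbers face diminishing returns.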
I also think the "very few people commit suicide, therefore they must lead net-positive lives" claim/implication might be overstated. I think once you are alive it is perhaps very hard to commit suicide; there are some sort of 'biases' against doing so, not to mention the stigmas/taboos.
But it seems to me a very real thing that there are lots of people in the world today, as well as in the past, about whom a reasonable outsider behind the veil of ignorance would say "I'd vastly rather not be born than have that life."
I don't think I need to support total utilitarianism. Any view that makes it a good thing that good things happen to someone will be enough. In other words, if life X takes place, and we think that's better than it not taking place, then any state of affairs plus life X must be better than that state of affairs without life X. But we don't need pure additivity. (I guess we need something more than pure monotonicity; otherwise you might have something like "the value of each extra life is half the value of the previous one", and total value would tend to a constant.)
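To spell out the arithmetic in that parenthesis: if the first good life is worth $v$ and each extra life is worth half the previous one, the value of $n$ lives is a geometric series,

$$\sum_{k=1}^{n} v\left(\tfrac{1}{2}\right)^{k-1} = 2v\left(1 - 2^{-n}\right) \;\to\; 2v \quad \text{as } n \to \infty,$$

so each extra life still helps (monotonicity holds), but total value is bounded above by $2v$, and at some point even vast numbers of additional good lives add almost nothing. That is why pure monotonicity seems too weak a condition.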
But it's true that my argument about "how things count" points towards total utilitarianism, since it implies that the value of experienced life X is independent of what else is going on.
I'm a bit suspicious of the repugnant conclusion argument because it has a kind of Searle's-Chinese-room flavour, i.e. a very weird situation where our common-sense intuitions may not work right. I'm not sure I know any useful way to resolve the dilemma of 'exactly how happy do 1 trillion people have to be, to be as good as 10 billion reasonably happy people'...! I'm not sure our moral reasoning apparatus is able to cope with these kinds of questions....
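For what it's worth, under pure total utilitarianism the calculation itself is trivial: equal total welfare requires

$$10^{12} \cdot h' = 10^{10} \cdot h \;\Rightarrow\; h' = h/100,$$

so each of the trillion people would need to be just one-hundredth as happy as each of the 10 billion. The hard part is not the arithmetic but whether 'one-hundredth as happy' corresponds to anything our intuitions can actually evaluate.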
LessWrong has more on all these topics (https://www.lesswrong.com/posts/xdoJyqBBD4yidcpcG/no-human-life-was-not-a-misery-until-the-1950s).