
May 20, 2008

Comments

Psy-Kosh

Actually, I already emailed you before I saw this. Anyways, basically my email simply asked you to clarify what you meant, since I'm not actually clear as to what it is I said that you objected to. (Seriously, not trying to avoid the issue, just genuinely confused, unclear. So I can't quite bite the bait until I can see it? er... okay, I guess I just broke the metaphor there)

Hopefully Anonymous

This should get you caught up a bit. Apparently I originally wrote it in July 2007. I've been thinking about this since I read the Slate piece on the Nobel Prize sperm bank a few years ago.

Hopefully Anonymous

oops here it is:

http://hopeanon.typepad.com/my_weblog/2008/03/my-post-on-dysg.html

jim

Hi, Anon! Have you considered the possibility that your little Einsteins might not get on board with your agenda of 'maximizing the odds of living forever'? I mean, yours is a preference, not the necessary endpoint of a logical syllogism. Who knows; one of them might even get into its overly-productive head that life isn't worth the candle, and decide to blow it out with his 'Little Albert's Doomsday Machine Kit'.

Conversely, they might pick up the gauntlet of your mortality challenge, but deem that you and your ilk (however they decide to define you) aren't worthy of participation in their Brainy New World. Maybe you've got some unsavory genetic markers crawling around under your scalp, and are better off left on the trash heap of history with your unbred for braininess brethren...not to mention your sisthren! (Consider the film 'Gattaca').

The idea just doesn't seem to wash with your stated purposes, Anon. Or at least, it could certainly backfire on you.

Sister Y

Hi HA, nice to meet you! I posted, as a comment on jim's blog, the assumptions I see underlying your position:

1. A procedure can be perfected that will accurately reproduce the important aspects of a clone-subject's intelligence in a clone.
2. The clones will agree to work on the project of the cloners. (Jim's objection is to this, I think, above)
3. The cloners' project is feasible.
4. It's morally okay to make new people to serve the cloners' ends. The clones' suffering doesn't matter compared to their value to others. (Alien Resurrection . . . )

And while we're thinking about suffering, and about producing accurate clones, and perhaps about the projects of intelligent people, let's remember this:
http://www.cnn.com/2008/HEALTH/conditions/04/02/autism.sperm.donor/index.html

"Jackaway decided on "Donor X" because he appeared philosophical and intelligent on paper. He liked music, loved to travel and had a high IQ and a degree in economics.

What she couldn't know then is that her son would have autism. So she started to wonder whether Donor X might carry a gene that could have contributed."

Hopefully Anonymous

Jim,
I agree with your last sentence, not your penultimate sentence. The question, in my opinion, is which maximizes my persistence odds more: cloning/breeding our best existential risk minimizers, or NOT doing so? I intuit that the former does, but it's a question I'd like to see rigorously explored, rather than just ignored or dealt with in a flippant or cursorily dismissive manner (which I think is what you're doing).

As you'll see in this link, I have a proposed incentive mechanism:

http://hopeanon.typepad.com/my_weblog/2008/03/my-post-on-dysg.html

but there's risk in everything and I acknowledge that. I don't think the answer is "there's a risk that making thousands of clones/offspring of the best existential risk minimizers could harm us, therefore we shouldn't do it", but rather "we should weigh the adverse outcome probabilities from doing so vs. the adverse outcome probabilities from NOT doing so".

Sister Y, cloning isn't necessary to this proposal. Breeding/in vitro with surrogate pregnancy works too.

The concerns you raise are real (for example, the Donor X story), but once again, I think they should be weighed against the adverse outcome probabilities of NOT trying to quickly make many cloned/bred offspring of our best existential risk minimizers.

Thomas Themel

I think you want to elaborate a bit before this can turn into a serious discussion.

Assuming technical problems do not exist and disregarding nature vs nurture, the next thing I'm wondering about is the selection process for those "smartest existential risk minimizers" who would then be replicated en masse.

Psy-Kosh

Hopefully: I read over that other post... and I'm still not seeing what you meant by calling what I said a cop out or a fake nitpick.

On the other hand, I'm operating on lack of sleep, and it seems yesterday already my brain was doing stupid stuff...

Anyways, intelligence certainly has a strong genetic component, obviously. But does the whole "existential risk minimizer" thing, when controlled for intelligence, have any significant genetic component? (That is, is there any reason to even suspect that it does?)

jim

I'm not purposely dismissing your argument, Anon, as much as offering a counterargument to a position that I find less than thoroughly thought through. I understand your rationale, and am actually somewhat sympathetic to the motivations behind the idea; though, being an antinatalist, I think the goal is misplaced, for reasons I think you probably understand. But beyond that, I think you place far too much stock in where a little extra intelligence can and will take us. Iterated simply, despots can be smart, too; even ones raised in 'good' families. To be honest, you almost seem like a Lamarckist in your tacit assumptions that personal values flow down the same conduit as IQ points.

Having said that, I'll acknowledge that, because of my aforementioned bias, I find your casual willingness to breed and utilize waves of people as steppingstones toward your immortality schemes rather abhorrent, considering the costs involved. I realize I'm probably in a very small minority here, as most people seem to be willing to breed and use people to their own ends. Hell, I did it myself once upon a time.

But over and above all this, Anon, I think you're living in a risk-minimizing fantasy world. Oh, you might squeeze out another decade or two, if you're lucky. But you're gonna die, dude; and most likely, you'll suffer along the way (hopefully, not too much). I don't celebrate this reality, but reality it is, and all those super-clones and zygotes you're proposing to create will suffer as well. Possibly more, as intelligent people are probably more likely than most to seize upon the implications of what will happen to them, and to the people and other lifeforms they care about.

Dismissive? No. Flippant? Perhaps; probably a substitute for the anger I feel towards the whole process of existence, and towards a philosophy that's willing to utilize generations of people in a futile attempt at vicarious immortality. As far as the life extension thing goes...eat your Wheaties, don't smoke, get some exercise in, and learn to accept that you, too, will one day soon not exist. Don't like that idea? Then don't drag other people into the same dead-end position (Jesus, I wish I'd had someone to tell me this before I had kids!).

Take care, Anon...thanks for the space.

jim

Sorry if the above comes across as impolite, Anon. I just got home from work, it's 2 in the morning, and I'm afraid my diplomatic capacities might be waning a bit. Niters!

Sister Y

I suppose I'd also want to clarify your goals. If your goal is the long-term survival of humanity, perhaps the best existential-risk-minimization strategy is the strategy that worked for millions of years prior to 100,000 B.C. - namely, LOW average cognitive capacity and little technology. I doubt that's what you want. Clearly, you're interested in something else - an intelligent, developing, mysterious future of long (perhaps unlimited?) life spans and unimagined technology.

Perhaps you are the only one pushing for this intervention because you are one of the few who sees the projected benefits of the venture as exceeding its overall costs. Some of the costs and benefits are at least broadly theoretically calculable; they may be "undervalued" or "overvalued" in a way that's simply mistaken. Other aspects of the costs and benefits are more like ethical or aesthetic values - others may overvalue or undervalue compared to you, but neither of you is mistaken.

So I suppose my answer to your question is: either (1) you're mistaken in undervaluing the material costs, or mistaken in overvaluing the likelihood of success of the project; or (2) others are mistaken in doing the opposite. Or, your aesthetic and ethical values are different from those of the majority in evaluating the costs and benefits of the project.

I know that's not much of an answer, but it might help you sort through things.

By the way I wrote an essay largely for you (and partially with Caledonian in mind) on my blog, which I managed to spell correctly this time. It's "Unfriendliness is Unsolvable" if you want to read it.

Hopefully Anonymous

Sister Y, the tagline of my blog is
"trying to understand reality, to maximize my odds of living forever."

I'm not sure you've absorbed that. "The long-term survival of humanity", "an intelligent, developing, mysterious future of long (perhaps unlimited?) life spans and unimagined technology", for me that's all important only to the degree that it can maximize my persistence odds. Which may mean that it's not important to me at all, or that it's very important to me. I don't know yet.

If you wrote "Unfriendliness is Unsolvable" with me in mind, are you aware you may have been preaching to the choir?

I distinguish you and jim from critics like TGGP, in that you don't seem to have read much of what I've written before projecting certain points of view onto me, most of which I've been pretty clear in disavowing.

However, I look forward to reading your piece!

Hopefully Anonymous

"But beyond that, I think you place far too much stock in where a little extra intelligence can and will take us. Iterated simply, despots can be smart, too; even ones raised in 'good' families. To be honest, you almost seem like a Lamarckist in your tacit assumptions that personal values flow down the same conduit as IQ points."

I don't think I've ever stated "where a little extra intelligence can and will take us". Nor do I think I've ever even implied that "personal values flow down the same conduit as IQ points". I think there may be a black box element in terms of how our best existential risk minimizers arise. In that black box are all sorts of things, including genetic material. So as part of general strategy diversification, it makes sense to me to make lots more people with that same genetic material. We should make lots more people with similar educational backgrounds, etc., too. But I don't see it as either/or, the way you rather explicitly seem to. The flaw in your reasoning is pretty evident, but I think you have majoritarian social/ethical aesthetics on your side, probably giving you space to be this sloppy. I'm most sympathetic to what I think Andrew Sullivan calls Oakeshottian conservatism, the idea that we should be careful about altering social norms that developed over thousands of years and provide us social stability. However, given that we all seem to have less than 80 years of life left, those of us who want personal (as opposed to species-level) immortality seem to have a rational incentive to increase the risk portfolio of species-level survival to maximize personal persistence odds.

"Having said that, I'll acknowledge that, because of my aforementioned bias, I find your casual willingness to breed and utilize waves of people as steppingstones toward your immortality schemes rather abhorrent, considering the costs involved. I realize I'm probably in a very small minority here, as most people seem to be willing to breed and use people to their own ends. Hell, I did it myself once upon a time."

I hope you're in a very small minority. Because otherwise that will likely be a huge challenge to my ability to maximize my persistence odds.

"But over and above all this, Anon, I think you're living in a risk-minimizing fantasy world. Oh, you might squeeze out another decade or two, if you're lucky. But you're gonna die, dude; and most likely, you'll suffer along the way (hopefully, not too much). I don't celebrate this reality, but reality it is, and all those super-clones and zygotes you're proposing to create will suffer as well. Possibly more, as intelligent people are probably more likely than most to seize upon the implications of what will happen to them, and to the people and other lifeforms they care about."

Have I posted anything to indicate I disagree with this? Yup, I'll probably fail and die. And yup, some things that seem like they'd maximize my persistence odds would create numerically more suffering people than would exist otherwise. I'd love to have thousands (millions) of clones of myself to run all sorts of knockout experiments for my benefit. It would likely take a superpower dictatorship over many decades to pull that off (even the Nazis and the Japanese Imperialists only managed experiments on thousands of human subjects) and I don't have the chops for that. I'm satisfied with playing the long odds, and with doing what's within my resources to maximize my persistence odds.

"As far as the life extension thing goes...eat your Wheaties, don't smoke, get some exercise in, and learn to accept that you, too, will one day soon not exist. Don't like that idea? Then don't drag other people into the same dead-end position (Jesus, I wish I'd had someone to tell me this before I had kids!)."

I guess we'll have to agree to disagree? Your perspective is as absurd to me as mine is abhorrent and futile to you.

"Take care, Anon...thanks for the space."

You too, Jim!

jim

Anon, I'd just like to clear one thing up for my own satisfaction. You said...

"...my persistence odds would create numerically more suffering people than would exist otherwise. I'd love to have thousands (millions) of clones of myself to run all sorts of knockout experiments for my benefit. It would likely take a superpower dictatorship over many decades to pull that off (even the Nazis and the Japanese Imperialists only managed experiments on thousands of human subjects) and I don't have the chops for that."

Now, are you saying that you'd be willing to support and/or participate in Nazi-like experimentation on the order of thousands or millions of people for your own bid at life longevity/immortality, or is the last part meant as a disclaimer? I'm not trying to set you up here; after all, I'm positing the end of humankind, period, though by other means, and for other reasons. I'd imagine that, if push came to shove, you'd probably have somewhat more support from the general populace than I, especially if your solution promised the same payoff to others. I'd just like to know whether I'm misinterpreting you, or not.

Vox Day also seems to take this position, btw, in terms of committing seemingly questionable moral acts because God says so (though he'd counter that anything God says IS a moral act). And since he's admitted that he'd unquestioningly obey God's directives precisely because of God's threat of damnation for disobedience, his is an equally self-serving position (self-serving used in the descriptive sense, and not in the condemnatory one).

I'm just curious to know if we're destined to completely talk past each other on this issue. I mean, if I were to ask you "How do you sleep at night?", and your answer winds up being "On a pillow", I can't imagine where else the conversation could possibly go... different paradigms, and all that.

Anyhow, thanks for the responses up to this point, and if you're willing to answer this question, I'd appreciate it. Thanks in advance.

Hopefully Anonymous

"Now, are you saying that you'd be willing to support and/or participate in Nazi-like experimentation on the order of thousands or millions of people for your own bid at life longevity/immortality, or is the last part meant as a disclaimer?"

Yes. The Nazi and Imperial Japanese human experiment programs are really interesting because of their apparent historical uniqueness. As far as I can tell, the Allies took a substantial risk in not having reciprocally efficient medical and biowarfare research programs, particularly in comparison to Imperial Japan. However, the US apparently made up for it in part by adopting the Imperial Japanese biowarfare research program, and perhaps its chief scientist (according to Wikipedia), as the Cold War began.

Once again according to Wikipedia, almost everything we know about dealing with death due to freezing we learned from Nazi human medical experimentation. What was learned from Imperial Japanese medical experimentation is, as far as I know, not public; it's knowledge held by the biowarfare sections of the U.S. military.

It's a shame that during the Cold War the superpowers weren't as ethically flexible about human medical experimentation as they were about increasing existential risk with the mass production of nuclear weapons aimed at high-density urban populations. We'd know a lot more than we do today, and net human suffering (something you care about?) might be lower.

But I only really care to the degree my persistence odds would improve.

I hope that thoroughly answers your question?

jim

I think it does.

TGGP

Quite an interesting post and conversation. Off-topic, but I thought HA might be interested in this on Susan Sontag's repeated refusal to accept death. Keep Hope Alive[?]
Via the Hog.

Chip Smith

HA,

It surprises me somewhat that you describe Jim's position as absurd. The value you assign to maximizing your own persistence odds must derive from an acute abhorrence at the prospect of your own demise, no? Perhaps you have characterized it in other terms (I'm only a semi-regular visitor, so forgive me). Regardless, since you recognize that this is a game you are likely to lose, doesn't it piss you off that someone fired the starting gun to begin with? Perhaps the plight of all those hypothetical HA clones toiling against the clock, and under the same existential burden, seems like small potatoes when you consider the remotely possible payoff, but I feel for them.

You'll consider this meaningless, but I will nevertheless point out that had you never been hatched into this state of bedoomed urgency, there would be absolutely none of this to worry about. Tonight I will sleep better knowing that those clones will, in all rational probability, be forever spared the psychic burden of tilting against the reaper's windmill. For them, the problem of existential risk has already - hopefully - been solved.

BTW, I would really love to hear your thoughts on David Rieff's book. I read it a month ago, and I can't stop thinking about it.

Hopefully Anonymous

Chip,
I have no desire to be persuaded not to try to maximize my persistence odds. In fact, I consider attempted persuasions as threats to my persistence. The difference between Jim's anti-natalist comments in my thread and yours is that there's information in his comments which could be helpful (or good faith attempts to be helpful) in helping me clarify how to best maximize my persistence odds.

Chip Smith

HA,

I don't mean to waste your time, and I certainly don't mean to threaten your aims. I'm actually rooting for you to have the last laugh. But I do sincerely wonder about the underlying reasons for this battle you have chosen to wage against most odds. If you could take a pill that would numb the persistence-maximizing part of your brain while guaranteeing you would enjoy peace of mind until you die just like the billions before you, I'm guessing that you would be adamantine in declining such hemlock. But suppose one of your clones downed one by accident? Wouldn't it be maddening to see that other copy of your gene machine tweaked to contentment, perhaps chuckling over your resolve? In a way, despite the cognitive gulf that surely separates us, I've come to feel like that dissenting clone, cracking open another beer as the ship goes down.

I'm sorry I don't have anything more productive to contribute. If I were you, I suppose I would concentrate on making as much money as possible in the short term. Then, I would try to come up with a rational cut-off date when, assuming the persistence odds haven't dramatically improved due to some new and practical technology, I would pay the most reputable third party available to put me down easy into a deep freeze from which I might awake when the odds have tipped. In the meantime, I would concentrate on finding at least one high-IQ mate and having as many children as possible, in the hope that at least some of them will watch over your suspended remains and continue with the project.

Of course, this last option would not be available to me, since my death-loathing translates into a deep moral distaste toward the prospect of conscripting others into the same dire predicament. But the option is available to you, and it seems like the smartest way to play your remaining chips.

Another thought. It might be possible - somewhere in the world - to create a kind of sperm bank trust arrangement, ensuring that your scattered brood will be bequeathed a monetary reward in the event that they come up with a means of reanimating you. The donor-recipient would simply be required to sign off on the terms. And why wouldn't she? There would be a possible payoff for her kids, after all, and the gambit would constitute a strong incentive for her kids - your biological offspring - to cultivate a productive understanding of cutting-edge technology.

Wow. I think there might actually be some value in that last scenario. Is it something you've considered?

Chip

Hopefully Anonymous

Chip,
Thanks for the brainstorming on my behalf. I've considered variations of this. At this point I'd rather breed more Bostroms, de Greys, etc., than more of me, because they seem more likely to solve biological aging and minimize existential risk. However, what you suggest sounds worth exploring to me. I encourage you to think about it more and to elaborate on it further. :)

Chip Smith

HA,

Notwithstanding legal issues, you should encourage the most promising immortalist researchers to devise sperm trusts for their own projects. But you should absolutely set about creating one as well. If the Bostroms and de Greys have genetic heirs who work to claim the prize, it is likely that their work will be beneficial to your own genetic heirs, who will presumably be working toward the same end BUT with a legally entrusted incentive that attaches to your particular interests. Maximizing the number of sperm trusts in effect might also create a natural incentive for thus situated heirs to work in collaboration to solve the central problem, in order to increase their individual odds of cashing in by reanimating their respective trustors. Unless I'm missing something, the incentives would converge so that, at least potentially, everyone benefits - the donor-recipients, the conditional trustees (except for the fact that they might have been better off never having come into existence), and, most importantly from your perspective, the frozen trustor-donors.

It seems like a good first step would be to contact a lawyer who can draft a model trust agreement and advise as to what jurisdictions would be most friendly to the terms. Another good move would be for enterprising singularity-driven researchers to join together to establish a private sperm bank in the optimal jurisdiction and open the doors to donor recipients. With the reward attached, it seems plausible that such a bank could even exercise discretion in choosing which donor-recipients (or purchasers, if an open market should prove to be a better model) would be eligible to receive the premium goo. You could select mothers for IQ, and/or for their possibly genetically nested predisposition toward immortalism.

Finally, I see no reason why such an arrangement should be conditioned on the trustor-donor being frozen; the agreement could be drafted to stipulate that the monetary reward would be collectible upon the event that a practicable breakthrough is reached during your natural life, with the cryogenic scenario contemplated in the same general terms.

Best,

Chip

Hopefully Anonymous

Chip,
Interesting. I encourage you to keep brainstorming. Keep in mind that egg donorship is as technologically available and more or less as socially accepted as sperm donorship. With surrogate motherhood, a woman isn't limited by the number of offspring she can personally give birth to. That's without going into more controversial but now apparently technologically possible approaches such as human cloning.

Sister Y

HA my brother, re: preaching to the choir - I didn't write it to convince you of anything, just as a sort of tribute. I find it fascinating that someone would want to live forever, and while I understand that that's your goal, it's hard for me to accept that as being your only goal, since it's so foreign to me. I can't imagine wanting to go on living just for the sake of living. It's much easier for me to picture someone wanting to live long enough to experience particular things, like contact with aliens or brain meld or cognitive enhancements or whatever. But to live just for the sake of more living? Fascinating. I suppose that's why it's important to keep in contact with people with radically different intuitions and values from mine.

Re: Chip's suggestion of finding a high-IQ mate - I'm really amused by the Assortative Mating theory that traces higher rates of autism to us girls being allowed to go to engineering school (and meet engineer boys, and breed). It seems that the offspring of two high-IQ individuals with slightly autistic-spectrum traits often will have full-blown autism. So I'd say, go for an artist or an opera singer or something, rather than a chemist or a computer engineer.

future lung

it is the year 6,000,000,000,000,000,000,000,000 AD. everybody is gone.

lung is lonely.

but don't worry! lung found a copy of the pretty book. lung will bring everyone back!

"rise from your grave and serve lung again!"

tee-hee

lung (of the future)
