Christianity – Genetic Blowback?

Just a thought.

Just watched the excellent “Sex and the Church”.

Learned a lot. Highly recommended.

But although it explained – very well – what we know about the history of the Christian church’s embarrassing obsession with sex, it didn’t explain how or why the ideas which formed the core of the meme managed to survive past the “raised eyebrow” stage. And they are so psychotic that an explanation is required.

By the time Augustine had “clarified” the doctrine of sexual sin, the only logical conclusion to be drawn was that a truly sinless human race – which is, ostensibly, what the Christian church would have liked to achieve – would, by virtue of complete sexual abstinence, have made itself extinct within about a century.

Is it conceivable that no one understood that at the time? Not for me it ain’t. For me, it’s bleedin’ obvious that it would have been bleedin’ obvious to any sentient human hearing that proposal at any time. So how did it get past the snorts of ridicule? What on earth made so many meekly accept – at least in public – such a message as meaningful ethical guidance?

Not, of course, that they paid anything but lip service to the resultant edicts; or else there wouldn’t be so many of the buggers around today. So the first tactical error (in this context) made by the authoritarian church had the effect of making private disobedience an essential tool of survival. That’s not a good trait to encourage in a “subject.”

More significantly, if the tendencies to either disobedience or submission (to the demand for sexual abstinence) had any basis in genetic predisposition, their strategy also ensured the evolution of increasingly sceptical and disobedient Christians, who – inevitably – learned to value autonomy over authority and, eventually, to reject authority altogether. Delicious irony?

I know. It’s a fairy story. Nice one though.


Optional Mortality – The Informed Consent Protocol

It’s time we set the rules for reviving digitally stored humans, once the revival technology has become available. I’m sorry if you had other plans, but this is important.

I don’t usually post responses to my forum comments on this blog but, given my recent ramblings on our future as Digital Humans, it seems apt. First off, hat-tip to MrJSSmithy for the nudge. (Ah. If you get nagged about the unknown certificate on the way in to the forum, please allow the “security exception”. Oh, and don’t forget to wipe your feet.) His question forced me to accept that my assumptions (about when we might choose to be revived in digital form) were a) hidden, b) possibly unfounded, or at least not necessarily universally applicable, and c) needed to be made explicit.

There are multiple reasons we need to consider an Informed Consent Protocol, some of which are touched on in the play (Resurrection), where I introduce the notion of Omortality (optional mortality). Other reasons are touched on in my initial reply to Smithy.

While the arrival of the technology capable of sustaining our digital existence is obviously still speculative, it is certainly reasonable to assume that we’ll achieve the prerequisite storage capacity and brain reading techniques required to capture the human brain map well before we achieve the ability to revive that map as an autonomous human clone, psychologically identical to its source, but in a digital environment. Personally I reckon that gap (between the ability to store and the ability to revive) will be at least a few decades. Kurzweil is more optimistic.

When would Sir like to be revived?
In any case we can certainly anticipate that many bitizens will sign up for storage before they can ever know whether it will even be possible for them to be revived. Which means, if and when the revival technology is available, we’ll have a backlog – possibly millions, or even hundreds of millions – of dead but digitally stored humans available to be re-activated. One obvious potential ethical issue will be the question of whether, and in what circumstances, each relevant individual has consented to be revived.

This is the most important issue which I am proposing to tackle with the Informed Consent Protocol. The idea is to allow anyone who opts to be digitally preserved to record, for the benefit of the eventual Revival Team or Computer, the conditions under which they would like to be re-activated and, optionally, the extent of that re-activation. As you may have gathered, I do not regard it as a simple “Yes/No” question.

There are definitely conditions in which I, for one, would not wish to be revived. For instance, if the planet is about to be struck by a massive asteroid, or if the current batch of Islamic Terrorists have won their war against the modern world and humanity all lives under a new Caliphate – or any other form of Theocracy. Revival, Mr Stottle? Think I’ll pass on this occasion.

Yes, I know that even the option of Revival would almost certainly have disappeared under a Caliphate but, a) I’m merely illustrating the point that there are potential circumstances under which I’d prefer to stay in storage (try me again in a coupla hundred years); and b) even (or especially) under a Theocracy, there will be a Resistance movement, and it might be them trying to revive me.

So the Protocol needs to allow bitizens to set the parameters or conditions under which they would wish or not wish to be re-activated.

And how much of you shall we revive?
There are also potential levels of activation, short of full autonomy, which an individual may wish to accept in preference to full activation. The protocol needs to capture these preferences as well.

I’ve already made it clear that I wouldn’t wish my digital self to wake up in “the wrong sort of future”. But that doesn’t mean that no part of me could be revived without the full Stottle. In a digital environment the options are limited only by our imagination.

One such is a functional avatar, based on me but without the conscious spark (whatever that turns out to be) that makes it “me”. Such an avatar could serve two useful purposes. First, it could answer, on my behalf, any question that I’d be able to answer and could choose to answer or not based on its awareness of whether or not the full “me” would consent to answering. Second, it could identify the presence of the conditions in which I would be happy to be fully activated. And that possibility would make the protocol much easier to implement.

Instead of trying to describe all the possible reasons you may or may not wish to be revived, it would be much more straightforward if you could just say “Revive my Avatar to the point where it is capable of making the decision for me”.

Wake me up when I’m thirsty…
As well as deciding the moment of initial digital re-activation, I have predicted elsewhere that this (functional Avatars) is how future digital humans may well cope with living potentially eternal lives. Unlike some, I do not imagine that, after living a few million years, some individuals might become bored and choose voluntary personal extinction. But I can imagine circumstances (e.g. travelling to a distant galaxy, which might still take millions of years) in which individuals might choose to become dormant until or unless their permanently conscious Avatar wakes them up because something interesting is about to happen (or just has).

But even if such Avatars become possible, we still need the Informed Consent Protocol so that each digitally stored human can record their unequivocal consent to the revival of, first, the Avatar and second, subject to the Avatar’s judgement, the fully restored human mind.

The other reason we need the protocol is, of course, that such an Avatar may NOT be possible, so we have to be able to leave some kind of guide to the conditions which would meet our consent.

So with all that in mind, here’s my first stab at the kind of questions you’d have to record your answers to, in order to allow a future Revival Team/Computer to make a reasonable assessment of your willingness to rejoin the human race. I do not intend to design some kind of “form” we’d fill in. I’ll just describe the issues the “form” has to cover. I’ll leave it for the legal eagles to create the paperwork.
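To make the shape of the thing concrete, here’s a minimal sketch of how such a “form” might be captured as a simple data structure, covering the five sections that follow. Every field name here is my own invention, purely for illustration; the real protocol would need a far richer language for expressing conditions.

```python
# A speculative sketch only: all field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Section 1 - Identity: hash binding this consent to the digital store.
    store_hash: str = ""
    # Section 2 - Avatar consent: may a capable Avatar decide on my behalf?
    avatar_may_decide: bool = True
    # Section 3 - Unaided consent: e.g. "wake me only if I retain the option
    # to return to indefinite storage".
    require_return_to_storage_option: bool = True
    # Section 4 - Arbitrary conditions, to be evaluated at revival time.
    conditions: list = field(default_factory=list)
    # Section 5 - Simultaneous consciousness: activation while source lives?
    allow_activation_while_source_alive: bool = False
    clone_power_of_attorney_on_incapacity: bool = False
```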

Section 1 – Identity
Obviously the Revival Team will need a fool-proof way to identify you as the owner of the relevant digital store. That’ll almost certainly require a cryptographic proof. So a digital notary will verify your identity, record your consent and have it protected on an Immutable Audit Trail. That will include embedding the hash of the digital store (which we can assume to be itself unique) in the document which describes your consent to revival. This will tie the consent to the data. (It might even form part of the key which must be used to decrypt and unlock the data.) The crypto-geeks will no doubt improve on that outline as we get closer to needing to store the data.
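For what it’s worth, here’s a toy sketch of that hash-binding, assuming nothing fancier than SHA-256 and a JSON record; the notary’s signature and the Immutable Audit Trail itself are left, as promised, to the crypto-geeks.

```python
# A toy sketch of tying consent to the stored data. Function names are
# invented for illustration; a real scheme would add digital signatures.
import hashlib
import json

def store_fingerprint(store_bytes: bytes) -> str:
    """Hash of the digital store - assumed to be itself unique."""
    return hashlib.sha256(store_bytes).hexdigest()

def bind_consent(store_bytes: bytes, conditions: dict) -> dict:
    """Embed the store's hash in the consent document, tying consent to data."""
    record = {
        "store_hash": store_fingerprint(store_bytes),
        "conditions": conditions,  # the Section 2-5 preferences
    }
    # Hash of the whole record: this is what the notary would sign and
    # append to the Immutable Audit Trail.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_binding(store_bytes: bytes, record: dict) -> bool:
    """A future Revival Team/Computer checks the consent matches this store."""
    return record["store_hash"] == store_fingerprint(store_bytes)
```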

Section 2 – Avatar consent
Here we’d sign up to allowing an Avatar, judged – in the technical context of the time – capable of representing our wishes, to make the judgement on our behalf as to whether “now” is the right time to revive us. This is obviously a conditional consent, based on the existence of technology which makes such Avatars possible and capable of that level of functionality.

Section 3 – Unaided consent
This is the more difficult scenario, where we have to try to anticipate, today, all the possible reasons which might exist tomorrow to deter us from being revived; or an overriding positive condition which would authorise our revival regardless of any potential obstacles.

However, I don’t think it’s as difficult as it may first appear. Because, in short, you could always decide to go back into hibernation. So you could stipulate that you’ll act, in a sense, as your own Avatar. You’ll wake up, take a look around and decide whether to make the awakening permanent or hit the snooze button for another thousand years.

That would only require one condition to be true in order for your revival to be permitted, and that condition is simply that the newly awakened you will retain sole authority over whether, and for how long, you stay re-activated. You might even make that the ONLY condition for your revival: “Don’t wake me up until and unless, when I wake up, I can choose to return to indefinite storage”, or the more positive “Wake me up as soon as it becomes possible for me to exercise the option to return to storage”.

Section 4 – Arbitrary conditions
Where the first three sections really deal with the technical issues of identification and available functionality, this section needs to deal with non-technical issues which might affect the stored individual’s decision on revival. If (Section 2) Avatar consent is possible, then this section would be unnecessary; but if not, then the individual may need to list the conditions which they consider would block or permit their revival, or which should at least be present/absent before attempting revival under (Section 3) unaided consent.

For instance, someone might stipulate that they would only want to be revived if other named individuals had also chosen to be revived. Or, more negatively, if other named individuals had NOT chosen to be revived.
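A toy sketch of how such stipulations might be evaluated at revival time follows; the condition types and the WorldState fields are invented purely to illustrate the idea.

```python
# A speculative sketch: reduce each Section 4 condition to a predicate over
# the state of the world at revival time. All names invented for illustration.
from dataclasses import dataclass, field

@dataclass
class WorldState:
    revived: set = field(default_factory=set)  # individuals already revived
    theocracy: bool = False                    # the "wrong sort of future"?

@dataclass
class RevivalConditions:
    require_revived: set = field(default_factory=set)  # wake me only with these
    forbid_revived: set = field(default_factory=set)   # never alongside these
    allow_under_theocracy: bool = False

    def permits(self, world: WorldState) -> bool:
        if world.theocracy and not self.allow_under_theocracy:
            return False
        if not self.require_revived <= world.revived:
            return False
        return not (self.forbid_revived & world.revived)

# e.g. "revive me only once Alice is back, and never alongside Bob"
conds = RevivalConditions(require_revived={"Alice"}, forbid_revived={"Bob"})
print(conds.permits(WorldState(revived={"Alice"})))         # True
print(conds.permits(WorldState(revived={"Alice", "Bob"})))  # False
```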

Section 5 – Simultaneous Consciousness and the “Right to Murder”?
This section is the direct result of MrJSSmithy’s question. It is probably not going to be an issue for the first generation of digitally stored humans because, as mentioned above, it won’t be possible to re-activate your stored version until the technology has advanced sufficiently, and that is likely, in my view, to be a few decades after we’ve begun to store ourselves in digital form.

But step forward, say, a hundred years from now and there is no obvious reason why your digital clone could not be re-activated as soon as the backup is complete. As I said in the forum, I’ve always been conscious of the myriad awkward issues this would raise, and assumed that we’d avoid the problem by forbidding such activation while the “source” (or “Simulee”, as I’ve named it in the forum reply) remained alive. (See the reply for more detail.)

That, I now admit, was essentially a personal prejudice. I wouldn’t permit it for my clone, but I can’t think of any technical reason why it would not be possible to have multiple versions of yourself active at the same time. I’m quite sure we will do that deliberately when we ARE digital humans. For example, I can imagine sending a version of myself off to live on the plains of Africa to observe the wildlife in real time for periods of decades at a time. It might be an advanced Avatar or a full clone. It might have no physical form, or the form of an insect just large enough to fly around with an HD camera, or whatever, and it might link up with other versions of me, from time to time, to merge experiences.

The question is, would such an arrangement be feasible or “a good idea” while your organic self was still around, gathering experience and data in its own pedestrian organic fashion? The biggest single problem is that, whereas digital versions of yourself could easily choose to merge their experiences – and will thus always comprise the full organic you, plus any new experiences the clone/s gather in their new existence – the traffic is likely to remain very much one-way: the organic you will never be able to assimilate the experiences of your active digital clone/s…

… and a major consequence of that would be that the inevitable divergence between the personality of source and clone/s may quickly reach the point at which they can no longer be considered the “same person”. Indeed, as I suggest in the forum discussion, the clones might actually become antagonistic to their own source!

For me, therefore, simultaneous consciousness is a big “no no”. But others may be indifferent or even think it’s a good idea. So this final section of the protocol needs to spell out whether, while you remain alive, you would consent to the full activation of the clone. And even that, even for me, is not going to be a simple “yes/no” question.

Attending My Own Funeral
For example, as I say in the same place, I can well imagine circumstances in which my organic self deteriorates into the senescence of old age and dementia robs me of the ability to meaningfully consent to anything. At which point I would be happy for my digital clone to be activated and assume “Power of Attorney” over my organic shell until it shuffles off this mortal coil. Indeed it is the vision of that future which led to my saying somewhere in the distant past “I hope and intend to be one of the first humans to attend (perhaps even conduct!) their own funeral”

Actually I now recognise that to be a bit too optimistic. Although I hope and still expect to survive till the storage technology becomes available, hanging on till revival is also possible is probably a bit of a stretch given that I’m already in my sixties.

Nevertheless, this final section needs to allow the organic source to stipulate the conditions under which activation of the clone could take place during their organic lifetime. And it is actually the most potentially controversial component of the entire protocol.

Essentially, this section needs to cover the issue of whether or not the organic human can “murder” their own digital clone, and even, in the Power of Attorney scenario, permit almost the exact opposite – where the clone, for example, eventually gives the final authority to switch off the life support system for its organic source.

I point out, in the forum reply, that the ONLY reason I would want to activate my own clone while I was both alive and fully functional is that I would need to be convinced that the clone really was “me”. (I raised the point first, during a lengthy debate, on whether that was even conceivable.)

And that the only way I can currently imagine being sufficiently convinced would be to engage in a fairly lengthy and confidential conversation with my clone, to probe its conformance with me. For which reason it would obviously have to be activated.

When Does My Clone Achieve Normal “Human Rights”?
But that immediately raises the question of the legal basis on which I can then effectively say, “yup, you’ve convinced me, now go back to sleep”. That, of course, would NOT be murder (because the clone could eventually be revived again), but if we allow the more extended activation suggested by MrJSSmithy’s question, it raises the possibility, as I’ve already mentioned, of the clone becoming hostile to the source or, even without such hostility, developing characteristics which so horrify the source that the source decides s/he needs to terminate their own clone; i.e. wipe the storage, not just put the clone to sleep. Would we – COULD WE – ever permit that?

I think that’s likely to become a hotter topic once the technology exists and clones have started to be stored. But I can certainly imagine a rule which would encompass the simpler situation described by my own preference.

For a start, given that my own clone would start out as psychologically identical in all respects to me, I have no problem in stating, on behalf of my clone, that I am willing to be put back to sleep after I have convinced myself that, as a clone, I really am “me”. I have no problem further stipulating that, if my clone indicates during the persuasion period that it has changed its mind and now wishes to remain active, this should be taken as direct evidence that it is a faulty copy (because it clearly does not mirror acceptance of this crucial condition) and should thus not just be de-activated but destroyed.

The first question, if you like, for the newly activated clone, would thus be: “do you still accept these conditions?” If not, the clone is immediately destroyed, whereas, if it indicates it is still happy with the conditions, then it has already consented to de-activation after persuasion.
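In other words, the persuasion-period rule boils down to something like this little sketch (the function and its outcome labels are, of course, invented for illustration):

```python
# A toy sketch of the "period of persuasion" decision flow described above.
# "Destroy" here means wiping the store, not merely de-activating the clone.
def persuasion_period_outcome(clone_still_accepts: bool,
                              source_convinced: bool) -> str:
    if not clone_still_accepts:
        # A faulty copy: it no longer mirrors the consent it was stored with.
        return "destroy clone"
    if source_convinced:
        # The clone has already consented to this outcome.
        return "de-activate clone (revivable later)"
    return "continue the persuasion period"
```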

But that only really deals with the relatively simple scenario required for the short “period of persuasion” and I don’t anticipate that such periods will even be necessary once the technology has been running long enough for people to trust it without such tests.

So the really difficult question is whether and how we would frame rules to deal with de-activation or destruction after a clone has been allowed to develop its own new life during the lifetime of the organic source. My gut instinct is to avoid that problem by blocking the option, as I would do for my own clone. Once you’ve allowed the clone to become a “different person” you can no longer kill it. The only law I can imagine being consistent with our current notions of autonomy and “human rights” – once a clone has been permitted to diverge to the point where it no longer wishes to become dormant – is one that states, from that point on, the clone is one of us…

*********************************************************************************

(Feel free to discuss this here or on the forum. You have to be a WordPress member to post comments here and you need to join my forum to add comments there. Be seeing you…)

Digital Evolution – Another Step Closer

This is a key step towards our digital evolution and our migration from organic to digital lifeforms. Basically, if we can’t record the human brain in sufficient resolution, we can’t migrate. Period. No Omortality.

But this research looks like we’re poking our sticks into the right ant-nests! If we get this right then, sometime in the next 10-20 years, we’ll have the technology to record and store the information constituting a complete human brain, probably in a few 10-minute sequences, to the resolution required to preserve our entire personality, memory and neural matrix well enough to be re-animated later, when a digital substrate exists to house us.

Unfortunately, that might be MUCH later. Like another 50-100 years. So we might, I’m afraid, still have to spend a few years technically dead. Although, interestingly, along the way, technology should reach the point where the brain maps could be interacted with as a kind of “living in the permanent present” avatar, like Henry Molaison, who we’ve been hearing about only this last week…

This isn’t a breakthrough, but it is a major step in the direction we need to travel in order to achieve the breakthrough.

Oh, and along the way, it’s going to have some fascinating commercial and security spinoffs:

Ferinstance, I give you: the perfect authentication device. It not only verifies, unspoofably, unique individuals, but can even detect the absence of informed consent and thus block those attacks based on coercion. You couldn’t unlock the safe or file even if you did have a gun pointed at your head. And the attacker will know this, so they won’t even try. It will even enable version 1 of the Mindlock I mused on back in April.

And of course, it makes possible the Perfect communication and self-surveillance device I was fantasising about in the History of Digital Telepathy

…and think of the impact this is going to have on VR. I think we can bet that “Full Immersion” will come along shortly after the first wave of smart dust adopters have begun to appreciate the benefits of receiving data direct to the sense processing parts of the brain.

And obviously, whatever we record, subject to our informed consent, can be played back. Think what that’s going to do for the sex industry. Just a thought. Though I challenge you not to think about it.

It’s happening Reg! Something’s actually happening!
Just remember, you ‘eard it ‘ere first. Righ’!

More Support for “Early Use of Fire”

I’m sorry for the child, of course. But I’m still rather pleased to read this news that the kid probably died of malnutrition as a result of the withdrawal of meat from his diet.

It lends further support to my conjecture that we’ve been using Fire for more than 2 million years; in contrast to the orthodox archaeological view that half a million is more likely.

A key thing to consider is the “at least” in the article’s first sentence. For humans to have suffered the consequences of the withdrawal of meat implies an evolutionary adaptation which would itself have been the result of “at least” a couple of hundred thousand years of meat-eating. Put that together with Richard Wrangham’s observations about dramatic changes in skull shape and diet around 2 million years back (which he puts down to cooking meat) and it’s all increasingly consistent with my suggestion that we started using fire methodically (rather than opportunistically) more than 2 million years ago, as a direct result of the prolonged use and manufacture of flint tools – which are uncontroversially dated back “at least” 2.5 million years.

Anyone who’s attempted working with flint – or has a non-electronic lighter – knows how easily it produces sparks, and it has always seemed obvious to me that such sparks would occasionally have produced small fires in the dried brush of the savannah; and that, after a few hundred thousand years, it might well have occurred to even the most conservative Homo erectus to think “hey – wait a minute…”

Social Psychosis

Did I miss something? The headline “Why people believe undocumented immigrants cause more crime” suggests the author has found an answer to the question “why DO people believe (etc)?”, but according to this Physorg summary, all she seems to be doing is pointing out the evidence which challenges the belief, then reiterating the question “why the belief?”… Odd.

At the very least I’d have expected to see some mention of the most likely source of such ill-informed belief: viz. the disinformation provided, constantly and at high volume, by the tabloid “journalists” in print, visual and digital media.

In any case, the more interesting question, given that the culprits have access to the same evidence, is why, nevertheless, they choose to promote the disinfo, even though they can see their lies being dissected and revealed in public as easily as this story illustrates.

This behaviour is not, of course, limited to their treatment of the facts regarding levels of American crime committed by illegal aliens. They’re pretty similar with regard to their treatment of Climate Change, the War on Drugs, the causes of the Financial Meltdown in 2008, the prospects for the Global Economy and many, if not most, crucially important areas of human discourse.

A clue to their motivation comes in this paper, pithily entitled “A culture of mania: a psychoanalytic view of the incubation of the 2008 credit crisis” (pdf), in which the author “suggest(s) that a manic culture is one typified by denial, omnipotence, triumphalism and over-activity”; exactly what we see from the Authoritarians the world over in relation to those key issues. Their inflated conviction regarding their own infallibility is one of the most dangerous features of the modern world.

I’m also inclined to welcome this as reasonable academic support for my own amateur efforts to define “Social Psychosis”, which I first did back in 2005 in my attempt to answer the question as to whether, when Authoritarians lie about the evidence for WMD, or War Crimes, or Evolution, or whatever, they are Lying, Stupid or Blind.

I made the point that:

Psychologically, people who form firm beliefs – in the absence of the validated evidence we’ve discussed – are, essentially both irrational and gullible… If they continue to hold such beliefs when the relevant hypotheses have been falsified, then, I would argue, they are showing the early signs of psychosis. When groups of like-minded people share the challenged beliefs, it becomes a social psychosis in which members turn to each other for mutual validation of their shared and increasingly distorted world view.

What Mark Stein is helpfully doing is putting some serious meat on the bones of that conjecture.
