Optional Mortality – The Informed Consent Protocol

It’s time we set the rules for reviving digitally stored humans, once the revival technology has become available. I’m sorry if you had other plans, but this is important.

I don’t usually post responses to my forum comments on this blog, but given my recent ramblings on our future as Digital Humans, it seems apt. First off, hat-tip to MrJSSmithy for the nudge. (Ah. If you get nagged about the unknown certificate on the way in to the forum, please allow the “security exception”. Oh, and don’t forget to wipe your feet.) His question forced me to accept that my assumptions (about when we might choose to be revived in digital form) were a) hidden, b) possibly unfounded, or at least not necessarily universally applicable, and c) in need of being made explicit.

There are multiple reasons we need to consider an Informed Consent Protocol, some of which are touched on in the play (Resurrection), where I introduce the notion of Omortality (optional mortality). Other reasons are touched on in my initial reply to Smithy.

While the arrival of the technology capable of sustaining our digital existence is obviously still speculative, it is certainly reasonable to assume that we’ll achieve the prerequisite storage capacity and brain-reading techniques required to capture the human brain map well before we achieve the ability to revive that map as an autonomous human clone, psychologically identical to its source but living in a digital environment. Personally, I reckon that gap (between the ability to store and the ability to revive) will be at least a few decades. Kurzweil is more optimistic.

When would Sir like to be revived?
In any case, we can certainly anticipate that many bitizens will sign up for storage before they can ever know whether it will even be possible for them to be revived. Which means that, if and when the revival technology becomes available, we’ll have a backlog – possibly millions, or even hundreds of millions – of dead but digitally stored humans available to be re-activated. One obvious potential ethical issue will be the question of whether, and in what circumstances, each relevant individual has consented to be revived.

This is the most important issue which I am proposing to tackle with the Informed Consent Protocol. The idea is to allow anyone who opts to be digitally preserved to record, for the benefit of the eventual Revival Team or Computer, the conditions under which they would like to be re-activated and, optionally, the extent of that re-activation. As you may have gathered, I do not regard it as a simple “Yes/No” question.

There are definitely conditions in which I, for one, would not wish to be revived. For instance, if the planet is about to be struck by a massive asteroid, or if the current batch of Islamic Terrorists have won their war against the modern world and humanity all lives under a new Caliphate – or any other form of Theocracy. Revival, Mr Stottle? Think I’ll pass on this occasion.

Yes, I know that even the option of Revival would almost certainly have disappeared under a Caliphate but, a) I’m merely illustrating the point that there are potential circumstances under which I’d prefer to stay in storage. (Try me again in a coupla hundred years). And b) even (or especially) under a Theocracy, there will be a Resistance movement and it might be them who are trying to revive me.

So the Protocol needs to allow bitizens to set the parameters or conditions under which they would wish or not wish to be re-activated.

And how much of you shall we revive?
There are also potential levels of activation, short of full autonomy, which an individual may wish to accept in preference to full activation. The protocol needs to capture these preferences as well.

I’ve already made it clear that I wouldn’t wish my digital self to wake up in “the wrong sort of future”. But that doesn’t mean that no part of me could be revived without the full Stottle. In a digital environment the options are limited only by our imagination.

One such is a functional avatar, based on me but without the conscious spark (whatever that turns out to be) that makes it “me”. Such an avatar could serve two useful purposes. First, it could answer, on my behalf, any question that I’d be able to answer and could choose to answer or not based on its awareness of whether or not the full “me” would consent to answering. Second, it could identify the presence of the conditions in which I would be happy to be fully activated. And that possibility would make the protocol much easier to implement.

Instead of trying to describe all the possible reasons you may or may not wish to be revived, it would be much more straightforward if you could just say “Revive my Avatar to the point where it is capable of making the decision for me”.

Wake me up when I’m thirsty…
As well as deciding the moment of initial digital re-activation, I have predicted elsewhere that this (functional Avatars) is how future digital humans may well cope with living potentially eternal lives. Unlike some, I do not imagine that, after living a few million years, individuals might become bored and choose voluntary personal extinction. But I can imagine that, in some circumstances (eg travelling to a distant galaxy, which might still take millions of years), individuals might choose to become dormant until or unless their permanently conscious Avatar wakes them up because something interesting is about to happen (or just has).

But even if such Avatars become possible, we still need the Informed Consent Protocol so that each digitally stored human can record their unequivocal consent to the revival of, first, the Avatar and second, subject to the Avatar’s judgement, the fully restored human mind.

The other reason we need the protocol is, of course, that such an Avatar may NOT be possible, so we have to be able to leave some kind of guide to the conditions under which we would consent to revival.

So with all that in mind, here’s my first stab at the kind of questions you’d have to record your answers to, in order to allow a future Revival Team/Computer to make a reasonable assessment of your willingness to rejoin the human race. I do not intend to design some kind of “form” we’d fill in. I’ll just describe the issues the “form” has to cover. I’ll leave it for the legal eagles to create the paperwork.

Section 1 – Identity
Obviously the Revival Team will need a fool-proof way to identify you as the owner of the relevant digital store. That’ll almost certainly require a cryptographic proof. So a digital notary will verify your identity, record your consent and have it protected on an Immutable Audit Trail. That will include embedding the hash of the digital store (which we can assume to be unique in itself) in the document which describes your consent to revival, tying the consent to the data. (It might even form part of the key which must be used to decrypt and unlock the data.) The crypto-geeks will no doubt improve on that outline as we get closer to needing to store the data.
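Purely to make that outline concrete, here’s a minimal sketch (in Python, with entirely hypothetical field and function names) of the binding step: hash the digital store, embed that hash in the consent record, then hash the whole record so the notary has something to sign and anchor on the audit trail. It’s an illustration of the idea, not a proposed standard.

```python
# Minimal sketch of the Section 1 idea. All names here are hypothetical
# illustrations: the point is just that the consent record embeds the hash
# of the digital store, and the record itself is hashed so a notary can
# sign it and anchor it on an immutable audit trail.

import hashlib
import json

def hash_digital_store(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the (potentially enormous) brain-map file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_consent_record(store_path: str, subject_id: str, consent_terms: dict) -> dict:
    record = {
        "subject_id": subject_id,                      # however identity is established
        "store_hash": hash_digital_store(store_path),  # ties this consent to this exact store
        "consent_terms": consent_terms,                # answers to Sections 2-5 below
    }
    # Fingerprint of the whole record: this is what a digital notary would
    # sign and append to the audit trail (the signing itself is omitted here).
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return record
```

The signing and audit-trail anchoring themselves are left out; the point is simply that the consent terms and the stored brain map become cryptographically inseparable.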

Section 2 – Avatar consent
Here you’d sign up to allowing an Avatar, judged – in the technical context of the time – to be capable of representing your wishes, to make the judgement on your behalf as to whether “now” is the right time to revive you. This is obviously a conditional consent, based on the existence of technology which makes such Avatars possible and capable of that level of functionality.

Section 3 – Unaided consent
This is the more difficult scenario, where we have to try to anticipate, today, all the possible reasons which might exist tomorrow to deter us from being revived – or an overriding positive condition which would authorise our revival regardless of any potential obstacles.

However, I don’t think it’s as difficult as it may first appear, because, in short, you could always decide to go back into hibernation. So you could stipulate that you’ll act, in a sense, as your own Avatar. You’ll wake up, take a look around and decide whether to make the awakening permanent or hit the snooze button for another thousand years.

That would only require one condition to be true in order for your revival to be permitted, and that condition is simply that the newly awakened you will retain sole authority over whether, and for how long, you stay re-activated. You might even make that the ONLY condition for your revival: “Don’t wake me up until and unless, when I wake up, I can choose to return to indefinite storage”, or the more positive “Wake me up as soon as it becomes possible for me to exercise the option to return to storage”.

Section 4 – Arbitrary conditions
Whereas the first three sections really deal with the technical issues of identification and available functionality, this section needs to deal with non-technical issues which might affect the stored individual’s decision on revival. If the (Section 2) Avatar consent is possible, then this section would be unnecessary; but if not, then the individual may need to list the conditions which they consider should block or permit their revival, or which should at least be present or absent before revival is attempted under (Section 3) unaided consent.

For instance, someone might stipulate that they would only want to be revived if other named individuals had also chosen to be revived. Or, more negatively, if other named individuals had NOT chosen to be revived.
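To illustrate the shape of the preferences Sections 2 to 4 would need to capture (again, a sketch only, with invented field names rather than anything the legal eagles would recognise), something like this would do:

```python
# Illustrative only: the kind of record Sections 2-4 imply, with invented
# field names. A real protocol would need a far richer condition language.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RevivalCondition:
    description: str      # e.g. "named individual X has also chosen to be revived"
    must_be_true: bool    # True = required before revival, False = must be absent

@dataclass
class ConsentRecord:
    allow_avatar_consent: bool            # Section 2: delegate the decision to my Avatar
    allow_unaided_revival: bool           # Section 3: wake me and let me decide for myself
    retain_right_to_resume_storage: bool  # the "snooze button" guarantee
    arbitrary_conditions: List[RevivalCondition] = field(default_factory=list)  # Section 4

def unaided_revival_permitted(record: ConsentRecord, facts: Dict[str, bool]) -> bool:
    """Conservative check: if a fact is unknown, the condition fails and revival is blocked."""
    return (record.allow_unaided_revival
            and record.retain_right_to_resume_storage
            and all(facts.get(c.description) == c.must_be_true
                    for c in record.arbitrary_conditions))
```

The ‘facts’ mapping stands in for whatever the Revival Team/Computer knows about the world at the time; anything it doesn’t know counts against revival, which seems the safer default.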

Section 5 – Simultaneous Consciousness and the “Right to Murder”?
This section is the direct result of MrJSSmithy’s question. It is probably not going to be an issue for the first generation of digitally stored humans because, as mentioned above, it won’t be possible to re-activate your stored version until the technology has advanced far enough, and that is likely, in my view, to be a few decades after we’ve begun to store ourselves in digital form.

But step forward, say, a hundred years from now and there is no obvious reason why your digital clone could not be re-activated as soon as the backup is complete. As I said in the forum, I’ve always been conscious of the myriad awkward issues this would raise and assumed that we’d avoid the problem by forbidding such activation while the “source” (or “Simulee”, as I’ve named it in the forum reply) remained alive. (See the reply for more detail.)

That, I now admit, was essentially a personal prejudice. I wouldn’t permit it for my clone, but I can’t think of any technical reason why it would not be possible to have multiple versions of yourself active at the same time. I’m quite sure we will do that deliberately when we ARE digital humans. For example, I can imagine sending a version of myself off to live on the plains of Africa to observe the wildlife in real time for periods of decades at a time. It might be an advanced Avatar or a full clone. It might have no physical form, or the form of an insect just large enough to fly around with an HD camera, or whatever, and it might link up with other versions of me from time to time to merge experiences.

The question is, would such an arrangement be feasible or “a good idea” while your organic self was still around and gathering experience and data in its own pedestrian organic fashion? The biggest single problem is that, whereas digital versions of yourself could easily choose to merge their experiences – and will thus always comprise the full organic you, plus any new experiences the clone/s gather in their new existence – the traffic is likely to remain very much “one way”: the organic you will never be able to assimilate the experiences of your active digital clone/s…

… and a major consequence of that would be that the inevitable divergence between the personality of source and clone/s may quickly reach the point at which they can no longer be considered the “same person”. Indeed, as I suggest in the forum discussion, the clones might actually become antagonistic to their own source!

For me, therefore, simultaneous consciousness is a big “no no”. But others may be indifferent or even think it’s a good idea. So this final section of the protocol needs to spell out whether, while you remain alive, you would consent to the full activation of the clone. And even that, even for me, is not going to be a simple “yes/no” question.

Attending My Own Funeral
For example, as I say in the same place, I can well imagine circumstances in which my organic self deteriorates into the senescence of old age and dementia robs me of the ability to meaningfully consent to anything. At which point I would be happy for my digital clone to be activated and assume “Power of Attorney” over my organic shell until it shuffles off this mortal coil. Indeed it is the vision of that future which led to my saying somewhere in the distant past “I hope and intend to be one of the first humans to attend (perhaps even conduct!) their own funeral”

Actually I now recognise that to be a bit too optimistic. Although I hope and still expect to survive till the storage technology becomes available, hanging on till revival is also possible is probably a bit of a stretch given that I’m already in my sixties.

Nevertheless, this final section needs to allow the organic source to stipulate the conditions under which activation of the clone could take place during their organic lifetime. And it is actually the most potentially controversial component of the entire protocol.

Essentially, this section needs to cover the issue of whether or not the organic human can “murder” their own digital clone, and even, in the Power of Attorney scenario, permit almost the exact opposite – where the clone, for example, eventually gives the final authority to switch off the life support system for its organic source.

I point out, in the forum reply, that the ONLY reason I would want to activate my own clone while I was both alive and fully functional is that I would need to be convinced that the clone really was “me”. (I first raised the point during a lengthy debate on whether that was even conceivable.)

And the only way I can currently imagine being sufficiently convinced would be to engage in a fairly lengthy and confidential conversation with my clone, to probe its conformance with me. For which reason it would obviously have to be activated.

When Does My Clone Achieve Normal “Human Rights”?
But that immediately raises the question of the legal basis on which I can then effectively say, “yup, you’ve convinced me, now go back to sleep”. That, of course, would NOT be murder (because the clone could eventually be revived again), but if we allow the more extended activation suggested by MrJSSmithy’s question, it raises the possibility, as I’ve already mentioned, of the clone becoming hostile to the source, or even, without such hostility, developing characteristics which so horrify the source that the source decides s/he needs to terminate their own clone – i.e. wipe the storage, not just put the clone to sleep. Would we – COULD WE – ever permit that?

I think that’s likely to become a hotter topic once the technology exists and clones have started to be stored. But I can certainly imagine a rule which would encompass the simpler situation described by my own preference.

For a start, given that my own clone would start out as psychologically identical in all respects to me, I have no problem in stating, on behalf of my clone, that I am willing to be put back to sleep after I have convinced myself that, as a clone, I really am “me”. I have no problem further stipulating that, if my clone indicates during the persuasion period that it has changed its mind and now wishes to remain active, this should be taken as direct evidence that it is a faulty copy (because it clearly does not mirror acceptance of this crucial condition) and should thus not just be de-activated but destroyed.

The first question, if you like, for the newly activated clone would thus be: “Do you still accept these conditions?” If not, the clone is immediately destroyed, whereas, if it indicates that it is still happy with the conditions, then it has already consented to de-activation after persuasion.
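For what it’s worth, that rule reduces to a very small decision procedure. Here’s a sketch (Python again, names invented) of the persuasion-period logic just described:

```python
# Purely illustrative sketch of the persuasion-period rule described above.

from enum import Enum

class CloneOutcome(Enum):
    DESTROY = "destroy"                # conditions refused: treated as a faulty copy
    DEACTIVATE = "deactivate"          # persuasion complete: clone returns to storage
    CONTINUE_PERSUASION = "continue"   # conversation still in progress

def persuasion_period_outcome(still_accepts_conditions: bool,
                              persuasion_complete: bool) -> CloneOutcome:
    if not still_accepts_conditions:
        # The clone no longer mirrors the crucial stipulation it started with,
        # so under the rule above it is judged a faulty copy and destroyed.
        return CloneOutcome.DESTROY
    if persuasion_complete:
        # Consent to de-activation was given in advance, by accepting the conditions.
        return CloneOutcome.DEACTIVATE
    return CloneOutcome.CONTINUE_PERSUASION
```

Everything interesting, of course, hides inside how “still accepts conditions” is established; the sketch only captures the rule itself.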

But that only really deals with the relatively simple scenario required for the short “period of persuasion” and I don’t anticipate that such periods will even be necessary once the technology has been running long enough for people to trust it without such tests.

So the really difficult question is whether and how we would frame rules to deal with de-activation or destruction after a clone has been allowed to develop its own new life during the lifetime of the organic source. My gut instinct is to avoid that problem by blocking the option, as I would do for my own clone. Once you’ve allowed the clone to become a “different person”, you can no longer kill it. The only law I can imagine being consistent with our current notions of autonomy and “human rights” – once a clone has been permitted to diverge to the point where it no longer wishes to become dormant – is one that states that, from that point on, the clone is one of us…

*********************************************************************************

(Feel free to discuss this here or on the forum. You have to be a WordPress member to post comments here and you need to join my forum to add comments there. Be seeing you…)

US Man Raped By Police – Then Billed For It


You know what shocks me more than the “rape” itself? The fact that only 84 people had watched that video before me.

If stuff like that doesn’t go viral, it’s no wonder the Police State of America is becoming normalised. Mind you, that video has an awful lot of competition. Try googling “Police Brutality” youtube and you’ll get around 5 million hits (I just got 4,960,000), so I suppose the discerning observer of the Police State has their work cut out trying to keep up with it all.

Are all those videos about violent American Police? No, only about 90% are exclusively American and I do concede that the results are slanted by the fact that the technology (including access to youtube) is much more likely to be available to the American witnesses and victims than to, say, their Chinese equivalents. But you can also find (a handful of) examples from other “western” nations including the UK, France, Australia and even Sweden, where, of course, the technology is just as prevalent.

That handful of examples from other parts of the “free world” only serves to emphasise just how serious the problem has become in the “Land of The Free”. I’m sure there’s a PhD waiting for the first person to make a statistical comparison of the rates of Police Brutality and levels of Incarceration in and around the so-called “democratic” world.

Watching a random sample of the youtube videos is deeply depressing as well as promoting righteous anger (and occasionally incandescent rage), so I don’t recommend it if you have medical or psychological issues. But it is also profoundly educational.

After a while, you begin to recognise patterns. The first to strike me was how many of the state-employed thugs have shaven heads and look like regular users of steroids. I’d gamble a moderate sum on the outcome of a random drug test, should anyone dare to set one up. If my intuition is right, the steroids might have an important role in the level and prevalence of the aggressive attitudes and physical abuse. Steroids are well known to promote such attitudes in regular users.

Here’s a couple that illustrate the Steroid look…

First, this footage caught on CCTV within what I take to be police premises… (which means they knew they were being filmed, but even that didn’t deter them)

Jump forward to about 1 min 10 seconds to see the unprovoked attack by the steroidal cop on this teenage girl walking away from an incident in this one:

This one features another steroidal cop punching a mentally handicapped woman on a bus – again, despite full awareness that he was being filmed:

Perhaps the least steroidal ones (indicated by retention of hair?) retain some human-like intelligence. This one, for example, shows signs of understanding that performing his crime in front of a live camera is sub-optimal, and has even worked out how to switch it off before launching an attack, and then switch it back on! Like this:

It’s good to know, though, that the citizens aren’t as passive as the lack of public outrage implies. Check out these citizens’ resistance to the bullies at the illegal immigration checkpoints last year:

But the prize for the cleverest “resistance” (only just short of “These are not the Droids you’re looking for”) is this “threaten em with the bible” tactic:

All the above are excellent examples of why I’ve been banging on about Trusted Surveillance for the best part of a decade. The Police have definitely got the message. Which is why it’s hardly surprising that the Police State Bullies in some of the more primitive States have been doing their best to criminalise videos like the above. For example:

But elsewhere, Police are beginning to get the more positive message – that recording everything (deliberately rather than accidentally) both constrains police brutality and increases citizen compliance. In Rialto, California, where they’ve been trying this out for a year or so, complaints have already dropped by 88% and the use of force (by the cops) by 60%. Now that’s a real improvement in Homeland Security…