Authority V Liberty (Round 4,287,541)

Nobody would contest the desirability of knowing exactly what was in the killers’ heads and histories; preferably before they managed to gun down fourteen fellow American citizens in San Bernardino in December. The FBI obviously sees this as a poster child for its demand that American tech companies provide back doors into our encrypted gadgets.

If you’re remotely inclined to sympathise with the FBI, consider this.

It is not just conceivable but highly likely that within 10-20 years, we will have technology capable of ferreting that information out of anyone’s head. And if you think I’m exaggerating, take a look at this.

or this

or this

or this

or this

or this

or this (added 2016-06-15)

I could go on. The point is that those links illustrate the amount of effort being put into digital mind-reading and the extent to which it has already been achieved; and that some people are already fully aware of the potential threat, which makes ALL other privacy invasions pale into insignificance. My 10-20 year time-frame is probably conservative.

I’ve been taking a close personal interest in this technology since Dr Larry Farwell had his 15 minutes back in 2003, when he managed to get his Brain Fingerprinting evidence accepted by a court, which resulted in the release of Terry Harrington, who’d, by then, served 23 years after being wrongly convicted of murder.

I wrote to Farwell at the time, suggesting that his technology could offer the “perfect biometric”. I postulated, for example, that it could identify me, uniquely, by observing my neural reaction to seeing a photograph of my late father. No one else’s brain could simulate my reaction, so no one else could pretend to be me. I also suggested that another obvious benefit would be to solve the most intractable problem in secure authentication, viz: access under duress. “Yes, they are entering the correct password or revealing the correct retinal scan, but are they only doing that because someone is holding a gun to their head?”

I’m still waiting for a reply!
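
To make that suggestion concrete, here’s a minimal sketch (in Python, with every name and threshold invented for illustration – no such scanner API exists yet) of how a duress-aware neural biometric check might be structured:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two neural response vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def authenticate(response: list[float],
                 template: list[float],
                 duress_score: float,
                 match_threshold: float = 0.95,
                 duress_threshold: float = 0.2) -> bool:
    # No one else's brain can simulate my reaction to the photograph,
    # so a close match to the enrolled template identifies me uniquely...
    genuine = cosine(response, template) >= match_threshold
    # ...and even a genuine match fails if the same signal carries
    # elevated stress markers, so the gun to the head gains nothing.
    return genuine and duress_score < duress_threshold
```

The point of the second test is that the attacker, knowing coercion is detectable, has no incentive even to try it.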

But it’s obvious that, since then, the technology (and America’s military interest in it) has been marching on. So, whether you like it or not, it’s on its way. And the authoritarians who are funding the most meaningful research don’t share my views on using the technology to prevent privacy invasion. Quite the opposite: they see it as the greatest possible advance in privacy invasion, and you can expect laws to change to permit it as we get closer. In a sense, that’s exactly what’s happening today.

Once digital mind reading is possible, it will be plausible to argue that, for example, airlines should be allowed to put every passenger through such a mind scanner, in order to ensure that no-one with evil intent against the aircraft is permitted to board.

That’s not my fevered imagination either. It comes from the man himself, almost certainly, given the date of that article, as part of his personal reaction to 9/11.

A first reaction, given my fear of flying, is that I might even think it’s a good idea myself. Particularly if “duress protection” was mandated as part of the technology, so that no one could be coerced into having their mind read. And if there was a formally agreed set of questions to which our brain responses would be measured, with no recording of data, alarms raised only on appropriate warnings, etc, I’d certainly welcome the assurance that, provably, no one sharing that flight with me had any intention, when they boarded at least, of bringing the plane down.
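
Just to spell out those constraints – a hypothetical sketch of the gate check, in which the scanner itself is imaginary and every name is invented; the interesting part is what the code is forbidden to do:

```python
from dataclasses import dataclass

# The formally agreed question set, fixed in advance (by statute, say).
AGREED_QUESTIONS = ("Do you intend to harm this aircraft or its passengers?",)

@dataclass
class NeuralResponse:
    indicates_harmful_intent: bool  # the only signal the rules allow us to use

def screen_passenger(read_response) -> bool:
    """Return True if boarding may proceed. `read_response` stands in for
    the (still imaginary) mind scanner; responses are inspected once and
    immediately discarded - nothing is recorded."""
    for question in AGREED_QUESTIONS:
        response = read_response(question)
        if response.indicates_harmful_intent:
            return False  # raise the alarm; the only permitted output
    return True           # no record of any response survives the check
```

Note that the only observable output is pass/fail: no data retained, no profile built, alarms raised only on the agreed warnings.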

But as we’ve seen, in some detail, over the past decade, that’s not the way Authority works. Duress protection, independently citizen-audited surveillance of the process and strictly limited application are never on the authoritarian agenda. Instead, they demand back doors, weak encryption, surrender of passwords and so on.

Society is divided into two groups. The authoritarians and their followers form the first; they will argue in favour of allowing the mind-scanners and of insisting that we all step through them.

Once we’ve conceded that for something as serious as air travel, it will be only a matter of time before they mandate it for (in roughly descending order) weeding out paedophiles, rapists, tax dodgers, copyright cheats, trolls, recreational drug users and dissidents. Then, depending on which level of authoritarianism you live under, they’ll move on to apostates, homosexuals, marital cheats, speeding motorists and other ne’er-do-wells.

Those who understand Liberty and the nature of threats like the above will probably have to fight the authoritarians literally to the death in what may come to be known as Humanity’s Final War.

The current Apple battle is an early skirmish in that war.

Pick your sides now and be sure of a good seat…

Finally, if you want to hear an intelligent presentation of the current state of the relevant science, and some of the issues, check this out:


Optional Mortality – The Informed Consent Protocol

It’s time we set the rules for reviving digitally stored humans, once the revival technology has become available. I’m sorry if you had other plans, but this is important.

I don’t usually post responses to my forum comments on this blog but, given my recent ramblings on our future as Digital Humans, it seems apt. First off, hat-tip to MrJSSmithy for the nudge. (Ah. If you get nagged about the unknown certificate on the way in to the forum, please allow the “security exception”. Oh, and don’t forget to wipe your feet.) His question forced me to accept that my assumptions (about when we might choose to be revived in digital form) were a) hidden, b) possibly unfounded, or at least not necessarily universally applicable, and c) needed to be made explicit.

There are multiple reasons we need to consider an Informed Consent Protocol, some of which are touched on in the play (Resurrection), where I introduce the notion of Omortality (optional mortality). Other reasons are touched on in my initial reply to Smithy.

While the arrival of the technology capable of sustaining our digital existence is obviously still speculative, it is certainly reasonable to assume that we’ll achieve the prerequisite storage capacity and brain-reading techniques required to capture the human brain map well before we achieve the ability to revive that map as an autonomous human clone, psychologically identical to its source, but in a digital environment. Personally, I reckon that gap (between the ability to store and the ability to revive) will be at least a few decades. Kurzweil is more optimistic.

When would Sir like to be revived?
In any case, we can certainly anticipate that many bitizens will sign up for storage before they can ever know whether it will even be possible for them to be revived. Which means that, if and when the revival technology is available, we’ll have a backlog – possibly millions, or even hundreds of millions – of dead but digitally stored humans available to be re-activated. One obvious potential ethical issue will be the question of whether, and in what circumstances, each relevant individual has consented to be revived.

This is the most important issue which I am proposing to tackle with the Informed Consent Protocol. The idea is to allow anyone who opts to be digitally preserved to record, for the benefit of the eventual Revival Team or Computer, the conditions under which they would like to be re-activated and, optionally, the extent of that re-activation. As you may have gathered, I do not regard it as a simple “Yes/No” question.

There are definitely conditions in which I, for one, would not wish to be revived. For instance, if the planet is about to be struck by a massive asteroid, or if the current batch of Islamic Terrorists has won their war against the modern world and humanity all lives under a new Caliphate – or any other form of Theocracy. Revival, Mr Stottle? Think I’ll pass on this occasion.

Yes, I know that even the option of Revival would almost certainly have disappeared under a Caliphate but, a) I’m merely illustrating the point that there are potential circumstances under which I’d prefer to stay in storage. (Try me again in a coupla hundred years). And b) even (or especially) under a Theocracy, there will be a Resistance movement and it might be them who are trying to revive me.

So the Protocol needs to allow bitizens to set the parameters or conditions under which they would wish or not wish to be re-activated.

and how much of you shall we revive?
There are also potential levels of activation, short of full autonomy, which an individual may wish to accept in preference to full activation. The protocol needs to capture these preferences as well.

I’ve already made it clear that I wouldn’t wish my digital self to wake up in “the wrong sort of future”. But that doesn’t mean that no part of me could be revived without the full Stottle. In a digital environment the options are limited only by our imagination.

One such is a functional avatar, based on me but without the conscious spark (whatever that turns out to be) that makes it “me”. Such an avatar could serve two useful purposes. First, it could answer, on my behalf, any question that I’d be able to answer, choosing whether or not to answer based on its awareness of whether the full “me” would consent to answering. Second, it could identify the presence of the conditions in which I would be happy to be fully activated. And that second possibility would make the protocol much easier to implement.

Instead of trying to describe all the possible reasons you may or may not wish to be revived, it would be much more straightforward if you could just say “Revive my Avatar to the point where it is capable of making the decision for me”.

Wake me up when I’m thirsty…
As well as deciding the moment of initial digital re-activation, I have predicted elsewhere that this (functional Avatars) is how future digital humans may well cope with living potentially eternal lives. Unlike some, I do not imagine that, after living a few million years, some individuals might become bored and choose voluntary personal extinction. But I can imagine circumstances (eg travelling to a distant galaxy, which might still take millions of years) in which individuals might choose to become dormant until or unless their permanently conscious Avatar wakes them up because something interesting is about to happen (or just has).

But even if such Avatars become possible, we still need the Informed Consent Protocol so that each digitally stored human can record their unequivocal consent to the revival of, first, the Avatar and second, subject to the Avatar’s judgement, the fully restored human mind.

The other reason we need the protocol is, of course, that such an Avatar may NOT be possible, so we have to be able to leave some kind of guide to the conditions which would meet our consent.

So, with all that in mind, here’s my first stab at the kind of questions you’d have to record your answers to in order to allow a future Revival Team/Computer to make a reasonable assessment of your willingness to rejoin the human race. I do not intend to design some kind of “form” we’d fill in. I’ll just describe the issues the “form” has to cover and leave it to the legal eagles to create the paperwork.

Section 1 – Identity.
Obviously the Revival Team will need a fool-proof way to identify you as the owner of the relevant digital store. That’ll almost certainly require a cryptographic proof. So a digital notary will verify your identity, record your consent and have it protected on an Immutable Audit Trail. That will include embedding the hash of the digital store (which we can assume to be unique in itself) in the document which describes your consent to revival. This ties the consent to the data. (It might even form part of the key which must be used to decrypt and unlock the data.) The crypto-geeks will no doubt improve on that outline as we get closer to needing to store the data.
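
As a toy illustration of the hash-embedding idea (the names are mine, invented for the sketch; the crypto-geeks will do it properly):

```python
import hashlib
import json

def build_consent_record(store_path: str, conditions: dict) -> str:
    """Tie the consent document to the digital store it governs by
    embedding the store's hash in the document. Sketch only: a real
    scheme would be signed by the digital notary and anchored on an
    Immutable Audit Trail."""
    digest = hashlib.sha256()
    with open(store_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return json.dumps({
        "store_hash": digest.hexdigest(),  # assumed unique to this human
        "consent": conditions,             # Sections 2-5 below
    }, indent=2)
```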

Section 2 – Avatar consent
Here we’d sign up to allowing an Avatar – judged, in the technical context of the time, capable of representing your wishes – to make the judgement on your behalf as to whether “now” is the right time to revive you. This is obviously a conditional consent, based on the existence of technology which makes such Avatars possible and capable of that level of functionality.

Section 3 – Unaided consent
This is the more difficult scenario where we have to try to anticipate, today, all the possible reasons which might exist tomorrow to deter us from being revived. Or an overriding positive condition which will authorise our revival regardless of any potential obstacles.

However, I don’t think it’s as difficult as it may first appear because, in short, you could always decide to go back into hibernation. So you could stipulate that you’ll act, in a sense, as your own Avatar: you’ll wake up, take a look around and decide whether to make the awakening permanent or hit the snooze button for another thousand years.

That would require only one condition to be true in order for your revival to be permitted: that the newly awakened you retains sole authority over whether, and for how long, you stay re-activated. You might even make that the ONLY condition for your revival. “Don’t wake me up until and unless, when I wake up, I can choose to return to indefinite storage”; or, more positively, “Wake me up as soon as it becomes possible for me to exercise the option to return to storage.”

Section 4 – Arbitrary conditions
Where the first three sections really deal with the technical issues of identification and available functionality, this section needs to deal with non-technical issues which might affect the stored individual’s decision on revival. If (Section 2) Avatar consent is possible, then this section would be unnecessary; but if not, then the individual may need to list the conditions which they consider would block or permit their revival, or which should at least be present/absent before attempting revival under (Section 3) unaided consent.

For instance, someone might stipulate that they would only want to be revived if other named individuals had also chosen to be revived. Or, more negatively, if other named individuals had NOT chosen to be revived.
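
To show how simple the machine-readable form of such conditions could be, here’s a hypothetical sketch covering the Section 3 “snooze authority” condition and the Section 4 named-individual conditions (all the field names are invented for illustration):

```python
def may_revive(conditions: dict, world: dict) -> bool:
    """Evaluate a stored individual's revival conditions against the
    state of the world as known to the Revival Team/Computer."""
    revived = world.get("revived", set())
    # Section 3: don't wake me up unless, once awake, I retain the sole
    # authority to return to indefinite storage.
    if conditions.get("require_snooze_authority") and \
            not world.get("revived_can_return_to_storage", False):
        return False
    # Section 4: only revive me if these named individuals are also revived...
    if any(name not in revived
           for name in conditions.get("require_also_revived", [])):
        return False
    # ...and never if any of these individuals have been revived.
    if any(name in revived
           for name in conditions.get("block_if_revived", [])):
        return False
    return True
```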

Section 5 – Simultaneous Consciousness and the “Right to Murder”?
This section is the direct result of MrJSSmithy’s question. It is probably not going to be an issue for the first generation of digitally stored humans because, as mentioned above, it won’t be possible to re-activate your stored version until the technology has advanced far enough, and that is likely, in my view, to be a few decades after we’ve begun to store ourselves in digital form.

But step forward, say, a hundred years from now and there is no obvious reason why your digital clone could not be re-activated as soon as the backup is complete. As I said in the forum, I’ve always been conscious of the myriad awkward issues this would raise, and I assumed that we’d avoid the problem by forbidding such activation while the “source” (or “Simulee”, as I’ve named it in the forum reply) remained alive. (See the reply for more detail.)

That, I now admit, was essentially a personal prejudice. I wouldn’t permit it for my clone, but I can’t think of any technical reason why it would not be possible to have multiple versions of yourself active at the same time. I’m quite sure we will do that deliberately when we ARE digital humans. For example, I can imagine sending a version of myself off to live on the plains of Africa to observe the wildlife in real time for periods of decades at a time. It might be an advanced Avatar or a full clone. It might have no physical form, or the form of an insect just large enough to fly around with an HD camera, or whatever, and it might link up with other versions of me, from time to time, to merge experiences.

The question is: would such an arrangement be feasible, or “a good idea”, while your organic self was still around, gathering experience and data in its own pedestrian organic fashion? The biggest single problem is that the traffic is likely to remain very much “one way”. Digital versions of yourself could easily choose to merge their experiences, and will thus always comprise the full organic you plus any new experiences the clone/s gather in their new existence; but the organic you will never be able to assimilate the experience of your active digital clone/s…

… and a major consequence of that would be that the inevitable divergence between the personalities of source and clone/s may quickly reach the point at which they can no longer be considered the “same person”. Indeed, as I suggest in the forum discussion, the clones might actually become antagonistic to their own source!

For me, therefore, simultaneous consciousness is a big “no no”. But others may be indifferent or even think it’s a good idea. So this final section of the protocol needs to spell out whether, while you remain alive, you would consent to the full activation of the clone. And even that, even for me, is not going to be a simple “yes/no” question.

Attending My Own Funeral
For example, as I say in the same place, I can well imagine circumstances in which my organic self deteriorates into the senescence of old age and dementia robs me of the ability to meaningfully consent to anything. At which point I would be happy for my digital clone to be activated and assume “Power of Attorney” over my organic shell until it shuffles off this mortal coil. Indeed it is the vision of that future which led to my saying somewhere in the distant past “I hope and intend to be one of the first humans to attend (perhaps even conduct!) their own funeral”

Actually, I now recognise that to be a bit too optimistic. Although I hope and still expect to survive until the storage technology becomes available, hanging on until revival is also possible is probably a bit of a stretch, given that I’m already in my sixties.

Nevertheless, this final section needs to allow the organic source to stipulate the conditions under which activation of the clone could take place during their organic lifetime. And it is actually the most potentially controversial component of the entire protocol.

Essentially, this section needs to cover the issue of whether or not the organic human can “murder” their own digital clone, and even, in the Power of Attorney scenario, permit almost the exact opposite – where the clone, for example, eventually gives the final authority to switch off the life support system for its organic source.

I point out, in the forum reply, that the ONLY reason I would want to activate my own clone while I was both alive and fully functional is that I would need to be convinced that the clone really was “me”. (I first raised the point during a lengthy debate on whether that was even conceivable.)

And that the only way I can currently imagine being sufficiently convinced would be to engage in a fairly lengthy and confidential conversation with my clone, to probe its conformance with me. For which reason it would obviously have to be activated.

When Does My Clone Achieve Normal “Human Rights”?
But that immediately raises the question of the legal basis on which I can then effectively say, “Yup, you’ve convinced me, now go back to sleep.” That, of course, would NOT be murder (because the clone could eventually be revived again). But if we allow the more extended activation suggested by MrJSSmithy’s question, it raises the possibility, as I’ve already mentioned, of the clone becoming hostile to the source – or even, without such hostility, developing characteristics which so horrify the source that the source decides s/he needs to terminate their own clone, i.e. wipe the storage, not just put the clone to sleep. Would we – COULD WE – ever permit that?

I think that’s likely to become a hotter topic once the technology exists and clones have started to be stored. But I can certainly imagine a rule which would encompass the simpler situation described by my own preference.

For a start, given that my own clone would start out as psychologically identical to me in all respects, I have no problem in stating, on behalf of my clone, that I am willing to be put back to sleep after I have convinced myself that, as a clone, I really am “me”. I have no problem further stipulating that if my clone indicates, during the persuasion period, that it has changed its mind and now wishes to remain active, this should be taken as direct evidence that it is a faulty copy (because it clearly does not mirror my acceptance of this crucial condition) and should thus not just be de-activated but destroyed.

The first question, if you like, for the newly activated clone would thus be: “Do you still accept these conditions?” If not, the clone is immediately destroyed; whereas, if it indicates that it is still happy with the conditions, then it has already consented to de-activation after persuasion.
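
Reduced to its bare logic, the rule might look like this (a sketch, obviously; all the names are invented):

```python
from enum import Enum

class Outcome(Enum):
    DESTROY = "faulty copy: wipe the storage"
    PERSUADE_THEN_SLEEP = "proceed, then de-activate by prior consent"

def first_question(clone_accepts_conditions: bool) -> Outcome:
    # "Do you still accept these conditions?"
    if not clone_accepts_conditions:
        # A true copy would accept, so refusal is direct evidence
        # that the copy is faulty.
        return Outcome.DESTROY
    # Acceptance means the clone has already consented to de-activation
    # once the persuasion period ends.
    return Outcome.PERSUADE_THEN_SLEEP
```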

But that only really deals with the relatively simple scenario required for the short “period of persuasion” and I don’t anticipate that such periods will even be necessary once the technology has been running long enough for people to trust it without such tests.

So the really difficult question is whether and how we would frame rules to deal with de-activation or destruction after a clone has been allowed to develop its own new life during the lifetime of the organic source. My gut instinct is to avoid that problem by blocking the option, as I would do for my own clone. Once you’ve allowed the clone to become a “different person”, you can no longer kill it. The only law I can imagine being consistent with our current notions of autonomy and “human rights” – once a clone has been permitted to diverge to the point where it no longer wishes to become dormant – is one that states that, from that point on, the clone is one of us…

*********************************************************************************

(Feel free to discuss this here or on the forum. You have to be a WordPress member to post comments here and you need to join my forum to add comments there. Be seeing you…)

Digital Evolution – Another Step Closer

This is a key step towards our digital evolution and our migration from organic to digital lifeforms. Basically, if we can’t record the human brain at sufficient resolution, we can’t migrate. Period. No Omortality.

But this research looks like we’re poking our sticks into the right ant-nests! If we get this right then, sometime in the next 10-20 years, we’ll have the technology to record and store the information constituting a complete human brain, probably in a few ten-minute sequences, at the resolution required to preserve our entire personality, memory and neural matrix well enough to be re-animated later, when a digital substrate exists to house us.

Unfortunately, that might be MUCH later. Like another 50-100 years. So we might, I’m afraid, still have to spend a few years technically dead. Although, interestingly, along the way, technology should reach the point where the brain maps could be interacted with as a kind of “living in the permanent present” avatar – rather like Henry Molaison, whom we’ve been hearing about only this last week…

This isn’t a breakthrough, but it is a major step in the direction we need to travel in order to achieve the breakthrough.

Oh, and along the way, it’s going to have some fascinating commercial and security spinoffs:

Ferinstance, I give you: the perfect authentication device. It not only verifies, unspoofably, unique individuals, but can even detect the absence of informed consent and thus block those attacks based on coercion. You couldn’t unlock the safe or file even if you did have a gun pointed at your head. And the attacker will know this, so they won’t even try. It will even enable version 1 of the Mindlock I mused on back in April.

And of course, it makes possible the Perfect communication and self-surveillance device I was fantasising about in the History of Digital Telepathy.

…and think of the impact this is going to have on VR. I think we can bet that “Full Immersion” will come along shortly after the first wave of smart dust adopters have begun to appreciate the benefits of receiving data direct to the sense processing parts of the brain.

And obviously, whatever we record, subject to our informed consent, can be played back. Think what that’s going to do for the sex industry. Just a thought. Though I challenge you not to think about it.

It’s happening, Reg! Something’s actually happening!
Just remember, you ’eard it ’ere first. Righ’!

We’re Underestimating the Risk of Human Extinction

Why now?

I’m certainly not the first, but even I have been prattling on about extinction-level threats for over 15 years. This, after all, is what “god” was talking about when we met in 2000. And I painted some of the threats we can look forward to a few years back when I wrote “Reasons to be Fearful” (Ch 10 part 2) in 2005. And Resurrection was, in part, inspired by Nick Bostrom’s original essay on the Simulated Universe Argument.

So what’s new? Why has he surfaced here? Answers should be attached in the usual way (or, if you’re not a WordPress blogger, email me).

Full article at The Atlantic.

What If Humans Were Twice as Intelligent? | Digg Topnews

The major benefit of increased intelligence will be that the implementation of true democracy (which is critically dependent on intelligent debate) will become much easier.

The two biggest obstacles to democracy at the moment are the vested interests who are currently in control – who show no signs of wishing to relinquish or even dilute that control – and the generally high levels of ignorance and low levels of intelligence in the population at large. The widespread inability to understand and adopt the principles of rational epistemology makes it very difficult to promote or even defend true democracy.

The web is slowly helping to overcome the ignorance problem, but it can’t do much to improve actual processing power. For that I think we’re going to have to wait a couple of decades for the first brain implants to arrive which begin to enhance our intellectual abilities. And the problem with that is that the early adopters will likely be those who need it least, thus widening the gap between top and bottom of the social ladder…


Superhydrophobic spray means no more washing clothes – among others | ZME Science

Bugger clothing, I want it on my windscreen. Where can I buy it?


DARPA boffins develop unfeasibly light metal fluff-structure • The Register

DARPA boffins develop unfeasibly light metal fluff-structure • The Register.