michael vassar vs. weird things, round two

Michael Vassar is back for the second round of our debate about Ray Kurzweil and The Technological Singularity.


Here it is, the second episode of my exchange with Michael Vassar, the president of the Singularity Institute. If you caught the first edition, get ready to switch gears a bit and dive into theoretical computer science as we explore the ideas of mind uploading, simulating human brains with supercomputers and what that may mean for scientists while encountering a scenario that could make Descartes shudder in terror…

This round picks up from my recent article in Forbes online, Please Leave Your Brain Where It Is.

The post itself dealt with Ray Kurzweil's idea of uploading human minds to a computer by mid-century, which he expressed in The Singularity Is Near with a description very reminiscent of the classic anime Ghost In The Shell. In order to upload a brain anywhere, you need a system that actually works like a brain, so whatever you upload will actually function. Since computers don't meet that criterion, an upload seems like an idea that's completely unrealistic in implementation.

Ray Kurzweil's Digital Pipe Dreams

So is Ray alone in making this claim? Here's Wikipedia's Cliff's Notes take on his aforementioned book, freely available for public reference. In the description of Ray's vision for the 2030s, we find the mention of mind uploading, the exact idea that post and its follow-up tackle. Let's remember that Kurzweil's goal is to cheat death with cutting edge technology, so simply copying his mind to a supercomputer wouldn't get the job done. He'd need a full-blown brain-to-machine transfer.

People do propose simulating brains, which will of course require gathering a lot of information from them. The first two talks at the 2009 Singularity Summit will discuss technical details relating to brain simulations, but the important claim is simply that the brain is a physical system and it is possible for a computer to simulate any physical system. If a physical brain is interested in steak or in sex, as in your examples, a simulation of that brain which produces brain-like behavior will also be interested in steak or sex, or at least will transform inputs into outputs as if it were.
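To make the "simulate a physical system" claim concrete, here's a minimal sketch of the kind of thing computational neuroscientists actually simulate: a leaky integrate-and-fire neuron, a standard textbook simplification. This is purely illustrative of transforming inputs into outputs; it is not a model either debater proposes, and all parameter values are generic placeholders.

```python
# Toy leaky integrate-and-fire neuron: a standard textbook simplification,
# illustrating input-to-output transformation, not an actual brain upload.
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
                 tau=10.0, resistance=10.0, dt=1.0):
    """Integrate membrane voltage over time; record a spike when it crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:
            spikes.append(t)  # the neuron "fires"
            v = v_reset       # and its voltage resets
    return spikes

# A steady input drive produces regular spiking; no drive produces no spikes.
print(simulate_lif([2.0] * 100))
print(simulate_lif([0.0] * 100))
```

The point of the toy is that nothing in the loop cares whether the voltage belongs to a cell or a float in memory; the simulation just maps input currents to spike times. Whether stacking billions of such units reproduces a mind is exactly what's contested below.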

Ah, but it's not that simple. You need stimuli and a way to virtually control those urges. In effect, you would be a puppet master running through the brain's routine as understood by the developers who write the software by which it functions. But really, that's beside the point when it comes to mind uploading, because the way a simulated brain works will be very different from the way a human brain does, as we both agree.

In general, we currently lack a robust theory of consciousness. Most Singularitarians do think that a simulation that behaves exactly like them must be conscious, but the truth […] of this claim doesn't have any bearing on the practical impact of simulated humans.

Actually it does. If a simulated human brain is conscious, aware and capable of reasoning, anything you do to it must follow the same ethical guidelines as apply to any other person. If you were to run a test on a conscious brain in a computer and your test caused a critical system crash, you would have technically committed homicide. The laws and rights for human beings are based on our ability to reason and our sapience. If your creation has a consciousness and is aware of the environment around it, it should have the same legal rights as a human, and the testing and experiments you go on to mention may be severely restricted by ethical guidelines, rightfully so. However, I don't see any reason why a simulated brain would be capable of consciousness, since it would simply model the chemical and electrical signals in our brains by solving equations.

And this is where I have to put up another objection when it comes to using this simulated brain in a medical experiment or for clinical research. The brain would be built by software designers and developers, and thus would be based on their understanding of how the brain works. But that understanding might be wrong, so a doctor trying to figure out something about the brain would of course be skeptical and would need to confirm that a real brain truly works the way it does in the simulation. Even if everything works fine, the fact that it's a simulation rather than an actual brain would restrict the applicability of the research, much as a cosmological model has to be supported by astronomical observations before it becomes a full-blown theory.

  archived from wowt
              
# tech // computer science / computers / technological singularity

