March for Science Melbourne

Join us in Melbourne on April 22nd to champion science as a pillar of human prosperity!  This will be huge – invite everyone to come – yes, everyone!

WHEN: EARTH DAY, 22nd April 2017
WHERE: Melbourne (Schedule & Location TBA)
WHY: A global event bringing together people from all walks of life who believe we need more evidence and reason in our political process.

“The March for Science champions publicly funded and publicly communicated science as a pillar of human freedom and prosperity. We unite as a diverse, nonpartisan group to call for science that upholds the common good, and for political leaders and policymakers to enact evidence-based policies in the public interest.”

The March for Science in Melbourne will be on the 22nd of April.  Please join the meetup group and tweet about it – push it out on social media!


SCIENCE, NOT SILENCE

Recent world events have inspired us to march in our cities to ask our leaders to use science to make decisions through evidence, not ignorance, and to ensure that science and scientific literacy are accessible and achievable to all.

Robert S. Young thinks it’s a bad idea – what do you think?

Check out the main Facebook page (connected to the March in Washington DC) and the main website!

The Simulation Argument – How likely is it that we are living in a simulation?

The simulation hypothesis doesn’t seem to be a terse, parsimonious explanation for the universe we live in. If what is most important is to simulate ancestors, what’s the motivation for all the hugely detailed rendering of space? Why not just simulate Earth, or our solar system, or our galaxy?

People often jump to conclusions and assume* that the great simulators have infinite computing power. Infinity – another thing we have never been able to measure 🙂 Max Tegmark wrote an interesting piece about why infinity is probably not real. Until we have evidence of infinities in the real world, I believe we should treat all thought experiments that rely on infinities as mere intuition pumps.

If we drop the assumption that potential simulators have infinite computing power and instead assume they have a finite amount, it seems logical that there would be a cost/benefit trade-off between computation spent and the detail and number of simulations run. Limits to available computation would decrease the motivation for building huge numbers of simulations and/or highly detailed simulations.

People think their way around the astronomical computational waste by adding yet another assumption*: that the simulation may grow to fill all the spaces we probe and interact with – though this would still increase the computational requirements to run the simulation. With this assumption, if we are in a simulation, it must be costing the simulators a whole lot more to run now than it did just 500 years ago, now that we can stare into the depths of physics and peer about the universe. It has been argued that we should avoid building big computers or performing certain experiments, because the simulators may decide to turn off our simulation once it begins costing them too much to run.

If we are in a simulation, many argue that for the most part it probably doesn’t matter. Based on Newcomb’s problem, even if we are in an elegant simulation, the simulated laws of physics will behave just as they would if they were actual laws.
If we feel compelled to put an estimate on it: the more we develop empirically informed naturalistic explanations for the universe, the lower our estimate should be that we are in a simulation.

If there are considerable costs to creating simulations with the detail of our universe – why simulate ancestors if it costs so much?
What is so important about ancestor simulations to justify the expense?

* the more assumptions we add to a hypothesis, the less certain we should be about it
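
The footnote can be made precise with elementary probability (my illustration, with made-up numbers): a conjunction of assumptions is never more probable than its least probable conjunct, and if the assumptions are roughly independent their probabilities multiply:

P(A₁ ∧ … ∧ Aₙ) ≤ min over i of P(Aᵢ),  and under independence  P(A₁ ∧ … ∧ Aₙ) = P(A₁) · … · P(Aₙ)

So a hypothesis resting on three auxiliary assumptions, each held at 0.8 credence, deserves at most 0.8³ ≈ 0.51 credence as a package.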

The Seminal Nick Bostrom Interview

Here is the interview I did with Bostrom in 2012:

Why so much confidence that we are in a simulation?

I hear reports that Bostrom’s confidence that we are in a simulation has decreased over the years (less than 10% I heard recently – I can’t find a direct reference right now) – while others, after he wrote the seminal paper, have increased their confidence quite dramatically. Based on various article headlines I am fairly certain that many latch onto a surface-level understanding of the arguments that supports their existing biases. So it’s probably best to read the paper and understand the Simulation Hypothesis and the Simulation Argument before hand-waving about what Bostrom thinks.
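
For orientation, the quantity at the heart of the paper is the fraction of all observers with human-type experiences who are simulated. Roughly, in the paper’s notation (this is my paraphrase – see the original paper linked in the Resources below for the exact setup), with f_p the fraction of civilizations that reach a posthuman stage, f_I the fraction of those interested in running ancestor-simulations, and N_I the average number of ancestor-simulations run by the interested ones:

f_sim = (f_p · f_I · N_I) / (f_p · f_I · N_I + 1)

The trilemma follows: unless f_p ≈ 0 or f_I ≈ 0, the product in the numerator is huge and f_sim ≈ 1 – that is, almost everyone with experiences like ours would be simulated.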

How much credence should we give sound arguments that are empirically unfalsifiable?

I’d say some – not everything can be falsified – but generally I rank arguments backed by empirical evidence higher than those without it.

I wonder what the Intelligent Design movement thinks of this?

Some atheists may be worried by such philosophical implications – but most seem to think the Simulation Argument is cool.


Resources

Various links on the simulation argument and hypothesis curated by Bostrom – including the original paper: http://www.simulation-argument.com/

All Aboard The Ship of Theseus with Keith Wiley

An exploration of the philosophical concept of metaphysical identity, using numerous variations on the famous Ship of Theseus thought experiment.

Video interview with Keith Wiley

Note: a separate text interview is below.

Keith Wiley is the author of A Taxonomy and Metaphysics of Mind-Uploading, available on Amazon.

The ship of Theseus, also known as Theseus’ paradox, is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The paradox is most notably recorded by Plutarch in Life of Theseus from the late first century. Plutarch asked whether a ship that had been restored by replacing every single wooden part remained the same ship.

The paradox had been discussed by other ancient philosophers such as Heraclitus and Plato prior to Plutarch’s writings, and more recently by Thomas Hobbes and John Locke. Several variants are known, including the grandfather’s axe, which has had both head and handle replaced.
See more at Wikipedia…

Text Interview

Note this is not a transcription of the video/audio interview.

The Ship of Theseus Metaphor

Adam Ford: Firstly, what is the story or metaphor of the Ship of Theseus intended to convey?

Keith Wiley: Around the first century AD, Plutarch wrote several biographies, including one of the legendary king Theseus, entitled Life of Theseus, in which he wrote the following passage:

The ship on which Theseus sailed with the youths and returned in safety, the thirty-oared galley, was preserved by the Athenians down to the time of Demetrius Phalereus. They took away the old timbers from time to time, and put new and sound ones in their places, so that the vessel became a standing illustration for the philosophers in the mooted question of growth, some declaring that it remained the same, others that it was not the same vessel. – Plutarch

People sometimes erroneously believe that Plutarch presents the scenario (replacing a ship piecemeal until all original material is absent) with a conclusion or judgment, i.e., that it prescribes the “correct” way to interpret the scenario (as to whether, yes or no, the ship’s identity is preserved). However, as you can see from the passage above, this is not the case. Plutarch left the question open. He merely poses the question and leaves it to the reader to ruminate on an actual answer.

The specific questions in that scenario are:

  • Does identity require maintaining the same material components? That is, is identity tied to and indicated by specific sets of atoms?
  • If not, then does preservation of identity require some sort of temporally overlapping sequence of closely connected parts?

The more general question being asked is: What is the nature of identity? What are its properties? What are its requirements (to claim preservation under various circumstances)? What traits specify identity and indicate the transformations under which identity may be preserved and under which it is necessarily lost?

Here is a video explainer by Keith Wiley (intended to inspire viewers to think about identity preservation)

Adam Ford: How does this story relate to mind uploading?

Keith Wiley: The identity of relatively static objects, and of objects not possessing minds or consciousness, is an introduction to the thornier question of metaphysical personal identity, i.e., the identity of persons. The goal in considering how various theories of identity describe what is happening in the Ship of Theseus is to prime our thinking about what happens to personal identity of people in analogous scenarios. For example, in a most straightforward manner, the Ship of Theseus asks us to consider how our identity would be affected if we replaced, piecemeal style, all the material in our own bodies. The funny thing is, this is already the case! It is colloquially estimated that our bodies turn over their material components approximately every seven years (whether this is precisely accurate is beside the point). The intent is not that a conclusion drawn from the Ship of Theseus definitively resolves the question concerning personal identity, because the former is a much simpler scenario. The critical distinction is that people are more obviously dynamic across time than static physical objects because our minds undergo constant psychological change. This raises the question of whether some sort of “temporal continuity” is at play in people that does not take effect in ships. There is also the question of whether consciousness somehow changes the discussion in radical ways. So the Ship of Theseus is not conclusive on personal identity. It is just a way to get us started in thinking about such issues.

Adam Ford: Fishing for clarification on how you use the term ‘identity’, Robin Hanson (whose Age of Em presents a scenario of uploads in the future) enquired about what kind of identity concept you are interested in. That is, what function do you intend this concept to serve?

Keith Wiley: Sure. First, and this might not be what Robin meant, there are different fundamental kinds of identity, two big ones being quantitative and numerical. Two things quantitatively identical possess the same properties, but are not necessarily “the same entity”. Two things numerically identical are somehow “the same thing”, which is problematic in its phrasing since they were admitted to be “two things” to begin with. The crucial distinction is whether numerical identity makes any difference, or whether quantitative identity is all that fundamentally matters.

For me, I phrase the crucial question of personal identity relative to mind uploading in the following way: Do we grant equal primacy to claims to the original single identity to all minds (people) who psychologically descend from that common ancestral mind (person)? I always phrase it this way: granting primacy in claims to a historical identity. Do we tolerate the metaphysical interpretation that all descendant minds are equal in the primacy of their claim to the identity they perceive themselves to be? Alternatively, do we disregard such claims, dictating to others that they are not, in fact, who they believe themselves to be, and that they are not entitled to the rights of the people they claim to be? My concern is of:
bias (differing assignments of traits to various people),
prejudice (differing assignments of values, claims, or rights resulting from bias),
and discrimination (actions favoring and dismissing various people, resulting from prejudices).

Adam Ford: Is ‘identity’ the most appropriate word to be using here?

Keith Wiley: Well, identity certainly doesn’t seem to fully “work”. There’s always some boundary case or exception that undermines any identity theory we attempt to assign. My primary concern, such as it is in an entirely abstract philosophical musing (at this point in history, when mind uploading isn’t remotely possible yet), is only secondarily the nature of identity. The primary concern, informed by those secondary aspects of identity, is whether we should regard uploads in some denigrated fashion. Should we dismiss their claims that they are the original person, that they should be perceived as the original person, that they should be treated and entitled and “enrighted” as the original person? I don’t just mean from a legal standpoint. We can pass all sorts of laws that force people to be respectful, but that’s an uninteresting question to me. I’m asking if it is fundamentally right or wrong to regard an upload in a denigrated way when judging its identity claims.

Ontology, Classification & Reality

Adam Ford: As we move forward the classification of identity will likely be fraught with struggle. We might need to invent new words to clarify the difference between distinct concepts. Do you have any ideas for new words?

Keith Wiley: The terminology I generally use is that of mind descendants and mind ancestors. In this way we can ask whether all minds descending from a common ancestral mind should be afforded equal primacy in their claim to the ancestral identity, or alternatively, whether there is a reasonable justification to exhibit biases, prejudices, and discriminations against some minds over such questions. Personally, I don’t believe any such asymmetry in our judgment of persons and their identity claims can be grounded on physical or material traits (such as whose brain is composed of more matter from the ancestral brain, which comes up when debating nondestructive uploading scenarios).

Adam Ford: An appropriate definition for legal reasons?

Keith Wiley: I find legal distinctions to be uninteresting. It used to be illegal for whites and blacks to marry. Who cares what the law says from a moral, much less metaphysical, perspective. I’m interested in finding the most consistent, least arbitrary, and least paradoxical way to comprehend reality, including the aspect of reality that describes how minds relate to their mental ancestors.

Adam Ford: For scientific reasons?

Keith Wiley: I don’t believe this is a scientific question. How to procedurally accomplish uploading is a scientific question. Whether it can be done in a nondestructive way, leaving the original body and brain unharmed, is a scientific question. Whether multi-uploading (producing multiple uploads at once) is technically possible is a scientific question, say via an initial scan that can be multi-instantiated. I think those are crucial scientific endeavors that will be pursued in the future, and I participate in some of the discussions around that research. But at this point in history, when nothing like mind uploading is possible yet, I am pursuing other aspects, nonscientific aspects, namely the philosophical question of whether we have the correct metaphysical notion of identity in the first place, and whether we are applying identity theories in an irrational, or even discriminatory, fashion.

Implications for Brain Preservation

Adam Ford: Potential brain preservation (including cryonics) customers may be interested in knowing the likely science of reanimation (which, it has been suggested, includes mind uploading) – and the type of preservation most likely to achieve the best results. Even though we don’t have mind uploading yet, people are committing their brains to preservation strategies that are to some degree based on strategies for revival. Mummification? No – that probably won’t work. Immersion in a saline-based solution? Yes – for short periods of time. Plastination? Yes, but only if it’s the connectome we are after… And then there are different methods of cryonic suspension that may be tailored to different intended outcomes – do you want to destructively scan the brain layer by layer and be uploaded in the future? Do you want to be able to fully revive the actual brain (potentially in a longer-term future)?

Keith Wiley: People closer to the cryonics community than myself, such as some of my fellow BPF board members, claim that most current cryonics enthusiasts (and paying members or current subjects) are not of the mind uploading persuasion, preferring biological revival instead. Perhaps because they tend to be older (baby boomer generation) they have not bought into computerization of brains and minds. Their passion for cryonics is far more aligned with the prospect of future biological revival. I suspect there will be a shift toward those of a mind uploading persuasion as the newer generations, more comfortable with computers, enter the cryonics community.

As you described above, there are a few categories of preservation and a few paths of potential revival. Preservation is primarily of two sorts: cryogenic and at least conceivably reversible, or room temperature and inconceivably reversible. The former is amenable to both biological revival and mind uploading. The latter is exclusively amenable to mind uploading. Why would one ever choose the latter option then? Simple: it might be the better method of preservation! It might preserve the connectome in greater detail for longer periods of time with lower rates of decay — or it might simply be cheaper or otherwise easier to maintain over the long term. After all, cryonic storage requires cryonic facilities and constant nitrogen reintroduction as it boils off. Room-temperature storage can be put on the shelf and forgotten about for millennia.

Adam Ford: What about for social (family) reasons?

Keith Wiley: This is closer to the area where I think and write, although not necessarily in a family-oriented way. But social in terms of whether our social contracts with one another should justify treating certain people in a discriminatory fashion and whether there is a rational basis for such prejudices. Not that any of this will be a real-world issue with which to tackle for quite some time. But perhaps some day…

Adam Ford: If the intended outcomes of BP are for subjective personal reasons?

Keith Wiley: I would admit that much of my personal interest here is to try to grind out the absolutely most logical way to comprehend minds and identity relative to brains, especially under the sorts of physical transformations that brains could hypothetically experience (Parfit’s hemispherical fission, teleportation, gradual nanobot replacement, freeze-slice-scan-and-emulate, etc.).

Philosophy

Adam Ford: In relation to appropriate definitions of ‘identity’ for scientific reasons – what are your thoughts on the whole map/territory ‘is science real’ debate? Where do you sit – scientific realism, anti-realism, or structural realism (epistemic or ontic)? What’s your favorite?

Keith Wiley: I suppose I lean toward scientific realism (to my understanding: scientific claims and truths hold real truth value, not just current societal “perspective”, and further they can be applied to yet-to-be-observed phenomena), although antirealism is a nifty idea (scientific truths are essentially those which we have yet to disprove, but expect to overturn in the future; furthermore, unobserved phenomena are not reasonable subjects of scientific inquiry). The reason I don’t like the latter is that it leads to anti-intellectualism, which is a huge problem for our society. Rather than overturning or disregarding scientific theories, I prefer to say that we refine them, with new theories applying in corners where the old ones didn’t fit well (Newton’s laws are fine in many circumstances, but are best appended by quantum mechanics at the boundaries of their applicability). Structural and ontic realism are currently vague to me. I’ve read about them but haven’t really ground through their implications yet.

Adam Ford: If we are concerned about our future and the future of things we value, we perhaps should ask a fundamental question: How do things actually persist? (Whether you’re a perdurantist or an endurantist – this is still a relevant question – see 5.2 ‘How Things Persist?’ in ‘Endurantism and Perdurantism’)

Keith Wiley: Perdurantism and endurantism are not terms I have come across before. I do like the idea of conceptualizing objects as 4D temporal “worms”. I describe brains that way in my book, for example. If this is the “right” way (or at least a good way) to conceive of the existence of physical objects, then it partially solves the persistence or preservation-of-identity problem: preservation of identity is the temporal stream of physical continuity. The problem is, I reject any physical requirement for explicitly *personal* identity of minds, because there appears to be no associated physical trait — plus that would leave open how to handle brain fission, à la Parfit — so worms just *can’t* solve the problem of personal identity, only of physical objects.

Adam Ford: Cybernetics – signal is more important than substrate – has cybernetics influenced your thinking? If so, how?

Keith Wiley: If by signal, you mean function, then I’ve always held that the functional traits of the brain are far more important (if not entirely more important) than mere material components.

Adam Ford: “Signal is more important than substrate” – yet the signal quality depends on the substrate. Surely a ship’s substrate is not as tightly coupled to its function of moving across a body of water (wood, fiberglass, even steel will work) as a conscious human mind is to its biological brain. In terms of the granularity of replacement parts – how much detail is needed?

Keith Wiley: Good question. I have no idea. I tend to presume the requisite level is action potential processing and generation, which is a pretty popular assumption I think. We should stay open on this question at this point in history, given the current state of scientific knowledge.

Adam Ford: What level of functional representation is needed in order to preserve ‘selfhood’?

Keith Wiley: Short answer: we don’t know yet. Long answer: it is widely presumed that the action-potential patterns of the connectome are where the crucial stuff is happening, but this is a supposition. We don’t know for sure.

Adam Ford: A Trolley Problem applied to Mind Uploaded Clones: As with the classic trolley problem, a trolley is hurtling down a track towards 5 people. As in the classic case, you can divert it onto a separate track by pulling a nearby lever. However, suddenly 5 functionally equivalent carbon copies* of the original 5 people appear on the separate track. Would you pull the lever to save the originals but kill the copies? Or leave the originals to die, saving the copies? (*assume you just know the copies are functionally equivalent to the originals)

Keith Wiley: Much of my writing focuses on mind uploading and the related question of what minds are and what personal identity is. My primary claim is that uploads are wholly human in their psychological traits and human rights, and furthermore that they have equal primacy in their claim to the identity of the person who preceded an uploading procedure — even if the bio-original body and brain survive! The upload is still no less “the original person” than the person housed in the materially original body, precisely because bodies and material preservation are irrelevant to who we are, by my reckoning. If this is not the case, then how can we solve the fission paradox? Who gets to “be the original” if we split someone in two? The best solution is that only psychological traits matter and material traits are simply irrelevant.

So, for those reasons, I would rephrase your trolley scenario thusly: track one has five people, track two has five other people. Coincidentally, pairs of people from each track have very recently diverging memories, but the scenario is psychologically symmetrical between the two tracks even if there is some physical asymmetry in terms of how old the various material compositions (bodies) are. So we can disregard notions of asymmetry for the purpose of analyzing the moral or identity-preserving-killing implications of the trolley problem. It is simply “Five people on one track, five on another. Should you pull the lever, killing those on the diverted track to save those on the initial track?” That’s how I rephrase it.

Adam Ford: I wonder if the experiment would yield different results if there were 5 individuals on one track and 6 copies of 1 person on the other? (As some people suggest that copies are actually identical to the original – eg for voting purposes)

Keith Wiley: But they clearly aren’t identical in the scenario you described. The classic trolley problem has always implied that the subjects are reasonably alert and mentally dynamic (thinking). It isn’t carefully described so as to imply that the people involved are explicitly unconscious, to say nothing of the complexities involved in rendering them as physically static objects (preserved brains undergoing essentially no metabolic or signal-processing (action potential) activity). The problem is never posed that way. Consequently, they are all awake and therefore divergent from one another, distinct individuals with all the rights of individual personhood. So it’s just five against six in your example. That’s all there is to it. People might suggest, as you said above, that copies are identical to each other (or to the original), but those people are just wrong.

So an interesting question then is: what if the various subjects involved actually are unconscious, or even rigidly preserved? Can we say their psychological sequences have not diverged and that they therefore represent redundant physical instantiations of a given mind? I explore this exact question in my book, by the way. I think a case could be made that until psychological divergence (until the brains are rolling forward through time, accumulating experiences and memories) we can say they are redundant in terms of identity and associated person-value. But to be clear, if the bio-original was statically preserved, then uploaded or duplicated, and then both people were put on the train tracks in their preserved state, physically identical, frozen with no ongoing psychological experience, then I would be clear in stating that while it might not matter if we kill the upload, it *also* doesn’t matter if we choose the other way and kill the bio-original! That is the obvious implication of my reasoning here. And in your case above, if we have five distinct people on one track (let’s say everyone involved is statically preserved) and six uploads of one of those people on the other track, then we could recast the problem as “five on one track and one on the other”. The funny thing is, if we save the six and revive them, then, after the fact, we have granted life to six distinct individuals, but we can only say that after we revive them, not at the time of the trolley experiment when they are statically preserved. So now we are speculating on the “tentative” split personhood of a set of identical but static minds based on a later time when they might be revived. Does that tentative individuality grant them individuality while they are still preserved? Does the mere potential to diverge and individualize grant them full-blown distinct identity before the divergence has occurred? I don’t know. Fascinating question. I guess the anti-abortion-choice and pro-abortion-choice debate has been trying to sort out the personhood of tentative, potential, or possible persons for a long time (and by extension, whether contraception is acceptable hits the same question). We don’t seem to have agreed on a solution there yet, so we probably won’t agree in this case either.

Philosophy of identity

Adam Ford: Retention of structure across atomic change – is identity the structure, the atomic composition, the atomic or structural continuum through change, or a mixture?

Keith Wiley: Depends on one’s chosen theory of identity, of course. Body theory, psychological theory, psychological branching theory, closest continuer theory, 4D spacetime “worm” theory. There are several to choose from — but I find some more paradox-prone than others, and I generally take that as an indication of a weak theory. I’m a branchist, although on some accounts branching theory is virtually indistinguishable from worm theory.

Adam Ford: Leibniz thought about the identity of indiscernibles (the principle in ontology that no two distinct things can have all the same properties) – if objX and objY share all the same properties, are they the same thing? If KeithX and KeithY share the same functional characteristics are they the same person?

Keith Wiley: But do they really share the same properties to begin with, or is the premise unfounded? When people casually analyze these sorts of scenarios, the two people are standing there, conscious, wondering if someone is about to pass judgment on them and kill them. They are experiencing the world from slightly different sensorial vantage points (vision, sound, etc.). Their minds almost certainly diverge in their psychological state within fractions of a second of regaining consciousness. So they aren’t functionally identical in the first place. Thus the question is flawed, right? The question can only be applied if they are unconscious and rigidly preserved (frozen perhaps). Although I believe a case could be made that mere lack of consciousness is sufficient to designate them *psychologically* identical even if they are not necessarily physically identical due to microscopic metabolic variations — but I leave that subtlety as an open question for the time being.

Adam Ford: Here is a Symmetric Universe counterexample – Max Black – two distinct perfect spheres (or two Ship of Theseuses) are two separate objects even though they share all the same properties – but don’t share the same space-time. What are your thoughts?

Keith Wiley: This is very close to worm theory. It distinguishes seemingly identical entities by considering their spacetime worms, which squiggle their way through different spacetime paths and are therefore not identical in the first place. They never were. The reason they appeared to be identical is that we only considered the 3D spatial projection of their truly 4D spacetime structure. You can easily alias pairs of distinct higher-dimensional entities by looking only at their projections onto lower dimensions and thereby wrongly conclude that they are identical when, in fact, they never were in their true higher-dimensional structure. For example, consider two volumes, a sphere and a cylinder. They are 3D. But project them onto a 2D plane (at the right angle) and you get two circles. You might wrongly conclude they are identical, but they weren’t to begin with! You simply ignored an entire dimension of their nature. That’s what the 4D spacetime worm says about the identity of physical objects.

However, once we dismiss any relevance or importance of physical traits (because I reject body identity on the matter of personal identity, favoring psychological identity), the 4D worm becomes more convoluted. The question then becomes: what sort of “time worm” describes psychological changes over time instead of physical, structural, and material changes? I think it’s as simple as: take an information pattern instantiated in a physical system (a brain), produce a second physical instantiation, and readily conclude that the psychological temporal worm (just a temporal sequence of psychological states, frankly) has diverged.

Adam Ford: Nice answer! – I’m certainly interested in hearing more about worm theory – I think this Wikipedia entry is about the same thing: https://en.wikipedia.org/wiki/Perdurantism
Do you have any personal writings I can point to in the text form of the interview?

Keith Wiley: Ah, I hadn’t heard that term before. Thanks for the reference. Well, I always refer to my book of course, and more recently Randal Koene and I published a paper in the Journal of Consciousness Studies this past March.

(See the free near-final version on arXiv.)

Adam Ford: David Pearce is skeptical that we – as in our subjects of experience – are actually enduring metaphysical egos. He seems more of a stage theorist – each moment of subjective experience is fleeting, persisting only through one cycle of quantum coherence delimited by decoherence.

Keith Wiley: Hmmm, I see the distinction in the link to stage theorist you provided above, and I do not believe I am committed to a position on that question. I go both ways in my own writing, sometimes describing things as true 4D entities (I describe brains that way in my book) but also writing quite frequently in terms of “mind descendants of mind ancestors”. That phrasing admits that perhaps identity does not span time in a temporal worm, but rather that it consists of instantaneous time slices of momentary identity connected in a temporal sequence. Like I said, I am uncommitted on this distinction, at least for now.

Identity: Accidental properties vs Essential properties

Adam Ford: Is the sense of an enduring metaphysical ego really an ‘accidental property’ (based on our intuitions of self) rather than an ‘essential property’ of identity?

Keith Wiley: It is possible we don’t yet know what a mind is in sufficient detail to answer such a question. I confess to not being entirely sure what the question is asking. That said, it is possible that conscious and cognitively rich aliens have come up with a fairly different way of comprehending what their minds actually are, and consequently may also have rather bizarre notions of what personal identity is.

Note that in the video, I sometimes offer an answer to the question “Did we preserve the ship in this scenario?” and I sometimes don’t, simply asking the viewer “So did we preserve it or not? What do you think?” This is because I’m certainly not sure of all the answers to this question in all the myriad scenarios yet.

Adam Ford: This argument is criticized by some modern philosophers on the grounds that it allegedly derives a conclusion about what is true from a premise about what people know. What people know or believe about an entity, they argue, is not really a characteristic of that entity.
There may be a problem in that what is true about a phenomenon or object (like identity) shouldn’t be derived from how we label or what we know about it – the label or description isn’t a characteristic of the identity (map not the territory etc).

Keith Wiley: I would essentially agree that identity shouldn’t merely be a convention of how we arbitrarily label things (i.e., that labeling grants or determines identity), but rather the reverse, that we are likely to label things so as to indicate how we perceive their identity. The question is, does our perception of identity indicate truth, which we then label, or does our perception determine or choose identity, which we then label? I would like to think reality is more objective than that – that there are at least some aspects of identity that aren’t merely our choices, but rather traits of the world that we discover, observe, and finally label.

References

A Taxonomy and Metaphysics of Mind-Uploading: https://www.amazon.com/dp/0692279849
The Fallacy of Favouring Gradual Replacement Mind Uploading Over Scan-and-Copy: https://arxiv.org/abs/1504.06320 (ResearchGate: https://www.researchgate.net/publication/299820458_The_Fallacy_of_Favouring_Gradual_Replacement_Mind_Uploading_Over_Scan-and-Copy)

The Endurance/Perdurance Distinction by Neil McKinnon: http://www.tandfonline.com/doi/pdf/10.1080/713659467
Endurantism and Perdurantism, for a discussion of three different things these terms have been taken to mean: http://www.nikkeffingham.com/resources/Endurantism+and+Perdurantism.pdf
Plutarch: http://penelope.uchicago.edu/Thayer/E/Roman/Texts/Plutarch/Lives/Theseus*.html

Definitions


Perdure – to remain in existence throughout a substantial period of time; persisting in virtue of having both temporal and spatial parts (alternatively, the thesis that objects are four-dimensional and have temporal parts)
Endure – to be wholly present at all times at which it exists (endurance is distinct from perdurance in that endurance involves strict identity while perdurance involves a looser unity relation (genidentity))
Genidentity – an existential relationship underlying the genesis of an object from one moment to the next.
Gunk – In mereology, an area of philosophical logic, the term gunk applies to any whole whose parts all have further proper parts. That is, a gunky object is not made of indivisible atoms or simples. Because parthood is transitive, any part of gunk is itself gunk.

Bio

Keith Wiley has a Ph.D. in Computer Science from the University of New Mexico and was one of the original members of MURG, the Mind Uploading Research Group, an online community dating to the mid-90s that discussed issues of consciousness with an aim toward mind-uploading. He has written multiple book chapters, peer-reviewed journal articles, and magazine articles, in addition to several essays on a broad array of topics, available on his website. Keith is also an avid rock-climber and a prolific classical piano composer.


Also see Jennifer Wang’s (Stanford University) video as she introduces us to the Ship of Theseus puzzle that has bedeviled philosophy since the ancient Greeks. She tells the Ship of Theseus story, and draws out the more general question behind it: what does it take for an object to persist over time? She then breaks this ancient problem down with modern clarity and rigor.

Narratives, Values & Progress – Anders Sandberg

Anders Sandberg discusses ideas and values and where we get them from, mindsets for progress, and the fact that we are living in a unique era of technological change – and, importantly, that we are aware we are living in an era of great change. Is there a direction in ethics? Is morality real? If so, how do we find it? What will our descendants think of our morals today – will they be weird to future generations?

One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run if there is some kind of ultimate sensible moral – we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out actually when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe. – Anders Sandberg

Points covered:
– Technologies of the Future
– Efficient sustainability, in-vitro meat
– Living in an era of awareness of change
– Values have changed over time
– Will our morals be weird to future generations?
– Where is ethics going?
– Does moral relativism adequately explain reductions in violence?
– Is there an ideal ‘best moral system’? and if so, how do we find it?

Transcript

I grew up reading C.S. Lewis and his Narnia stories. And at that time I didn’t get what was going on – I think it was when I was finally reading one that I started thinking ‘this seems like an allegory’, and then sort of realized ‘a Christian allegory’, and then I felt ‘oh dear!’. I had to of course read all of them. In the end I was quite cross at Lewis for trying to foist that kind of stuff on children. He of course was unashamed – he was arguing in his letters ‘of course, if you are a Christian you should make Christian stories and try to tell them’ – but then of course he hides everything – so instead of having Jesus he turns him into a lion and so on.
But there’s an interesting problem in general of course ‘where do we get our ideas from?’. I grew up in boring Sweden in the 70’s so I had to read a lot of science fiction in order to get excited. That science fiction story reading made me interested in the technology & science and made it real – but it also gave me a sort of libertarian outlook accidentally. I realised that well, maybe our current rules for society are arbitrary – we could change them into something better. And aliens are people too, as well as robots. So in the end that kind of education also set me on my path.
So in general what we read as children affects us in sometimes very subtle ways – I was reading one book about technologies of the future by a German researcher – today of course it is very laughably 60s-ish – very much thinking about cybernetics and the big technologies, fusion reactors and rockets – but it also got me thinking that we can change the world completely – there is no reason to think that only 700 billion people can live on earth – we could rebuild it to house trillions – it wouldn’t be a particularly nice world, it would be nightmarish by our current standards – but it would actually be possible to do. It’s rather that we have a choice of saying ‘maybe we want to keep our world rather small-scale with just a few billion people on it’. Others would say ‘we can’t even sustain a few billion people on the planet – we’re wearing out the biosphere’ – but again that’s based on a certain assumption about how the biosphere functions – we can produce food more efficiently than we currently do. If we went back to being primitive hunter-gatherers we would need several hundred earths to sustain us all, simply because hunter-gatherers need enormous areas of land in order to get enough prey to hunt down in order to survive. Agriculture is much more effective – and we can go far beyond that – things like hydroponics and in-vitro meat might actually mean that in the future we would say it’s absolutely disgusting, or rather weird, to cultivate farmland or eat animals! ‘Why would you actually eat animals? Well, only disgusting people back in the stone age did that.’ In that stone age they were using silicon, of course.
Dividing history into ages is very fraught because when you declare that ‘this is the atomic age’ you make certain assumptions – so the atomic age didn’t turn out so well because people lost their faith in their friend the atom – the space age didn’t turn out to be a space age because people found better ways of using the money – in a sense we went out into space prematurely before there was a good business case for it. The computer age on the other hand – well now computers are so everywhere that we could just as well call it the air age – it’s everywhere. Similarly the internet – that’s just the latest innovation – probably as people in the future look back we’re going to call it something completely different – just like we want to divide history into things like the Medieval age, or the Renaissance, which are not always more than just labels. What I think is unique about our era in history is that we’re very aware that we are living in a changing world; that is not going to be the same in 100 years, that is going to be utterly utterly different from what it was 100 years back. So many historical eras people have been thinking ‘oh we’re on the cusp of greatness or a great disaster’. But we actually have objective good reasons for thinking things cannot remain as they were. There are too many people, too many brains, too much technology – and a lot of these technologies are very dangerous and very transformative – so if we can get through this without too much damage to ourselves and the planet, I think we are going to have a very interesting future. But it’s also probably going to be a future that is somewhat alien from what we can foresee.
If we took an ancient Roman and put him into modern society he would be absolutely shocked – not just by our technology, but by our values. We are very clear that compassion is a good virtue, and he would say the opposite – ‘compassion is for old ladies’ – and of course a medieval knight would say ‘you have no honor in the 21st century’, and we’d say ‘oh yes, honor killings and all that – that’s bad; actually a lot of those medieval honorable ideals are immoral by our standards’. So we should probably expect that our moral standards are going to be regarded by the future as equally weird and immoral – and this is of course a rather chilling thought, because our personal information is going to be available in the future to our descendants, or even to ourselves as older people with different values – a lot of the advanced technologies we are worrying about are going to be wielded by our children, or by older versions of ourselves, in ways we might not approve of – but they’re going to say ‘yes, but we’ve actually figured out the ethics now’.
The problem of where ethics is going is a really interesting question in itself – people say it’s just relative, just societies making up rules to live by – but I do think we have learned a few things – the reduction in violence over historical eras shows that we are getting something right. I don’t think that moral relativists could just say that ‘violence is arbitrarily sometimes good and sometimes bad’ – I think it’s very clearly a bad thing. So I think we are making moral progress in some sense – we are figuring out better ways of thinking about morality. One of the interesting things about our current world is that we are aware that a lot of ideas about morality are things going on in our culture and in our heads – and are not just the laws of nature – that’s very useful. Some people of course think that there is some ideal or best moral system – and maybe there is – but we’re not very good at finding it. It might turn out that in the long run, if there is some kind of ultimate sensible morality, we’re going to find it – but that might take a very long time and might take brains much more powerful than ours – it might turn out that all sufficiently advanced alien civilizations eventually figure out the right thing to do – and do it. But it could also turn out that when we meet real advanced aliens they’re going to be as confused about philosophy as we are – that’s one of the interesting things to find out about the universe.


Longevity Day with Aubrey de Grey!

“Longevity Day” (based on the UN International Day of Older Persons – October 1) is a day of support for biomedical aging and longevity research. It has been a worldwide campaign successfully adopted by many longevity activist groups. In this interview Aubrey de Grey lends support to Longevity Day and covers a variety of points, including:
– Updates: on progress at SENS (achievements, and predictions based on current support), funding campaigns, the recent Rejuvenation Biotechnology conference, and exciting news in health and medicine as it applies to longevity
– Advocacy: What advocates for longevity research need to know
– Effective Altruism and Science Philanthropy – giving with impact – cause prioritization and uncertainty – how to go about measuring estimates on impacts of dollars or units of effort given to research organizations
– Action: High impact areas, including more obvious steps to take, and some perhaps less obvious/underpopulated areas
– Leveraging Longevity Day: What to do in preparation to leverage Longevity Day? Once one has celebrated Longevity Day, what to do next?


Here is the Longevity Day Facebook Page.


Can We Improve the Science of Solving Global Coordination Problems? Anders Sandberg

Anders Sandberg discusses solving coordination problems:

Includes discussion of game theory, including: the prisoner’s dilemma (and its iterated form), the tit-for-tat strategy, and reciprocal altruism (a toy simulation follows the list below). He then discusses politics, and why he considers himself a ‘heretical libertarian’ – then contrasts the benefits and risks of centralized planning vs distributed trial & error, and links this in with discussion of existential risk – centralizing very risky projects at the risk of disastrous coordination failures. He discusses groupthink and what forms of coordination work best. Finally he emphasises the need for a science of coordination – a multidisciplinary approach including:

  1. Philosophy
  2. Political Science
  3. Economics
  4. Game Theory
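
To make the game-theoretic vocabulary above concrete, here is a minimal sketch (my illustration, not code from the talk) of an iterated prisoner’s dilemma in which tit-for-tat plays itself and an unconditional defector; the payoff values and round count are conventional assumptions:

```python
# Toy iterated prisoner's dilemma. Payoffs follow the conventional
# temptation/reward/punishment/sucker ordering (T=5 > R=3 > P=1 > S=0).
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's history
        b = strategy_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

The scores show the point usually made about tit-for-tat: it never beats its opponent head-to-head, but its willingness to cooperate, and to retaliate, is what makes reciprocal altruism stable across repeated encounters.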

Also see the tutorial on the Prisoner’s Dilemma:

And Anders’ paper on AGI models.

A metasystem transition is the evolutionary emergence of a higher level of organisation or control in a system. A number of systems become integrated into a higher-order system, producing a multi-level hierarchy of control. Within biology such evolutionary transitions have occurred through the evolution of self-replication, multicellularity, sexual reproduction, societies etc., where smaller subsystems merge without losing differentiation yet often become dependent on the larger entity. At the beginning of the process the control mechanism is rudimentary, mainly coordinating the subsystems. As the whole system develops further the subsystems specialize and the control systems become more effective. While metasystem transitions in biology are seen as caused by biological evolution, other systems might exhibit other forms of evolution (e.g. social change or deliberate organisation) to cause metasystem transitions. Extrapolated to humans, future transitions might involve parts or the whole of the human species becoming a super-organism. – Anders Sandberg

Anders discusses similar issues in ‘The thermodynamics of advanced civilizations‘ – Is the current era the only chance at setting up the game rules for our future light cone? (Also see here)


Further reading
The Coordination Game: https://en.wikipedia.org/wiki/Coordination_game

Heavy-Tailed Distributions: What Lurks Beyond Our Intuitions?

Understanding heavy-tailed distributions is important for assessing likelihoods and impact scales when thinking about possible disasters – especially relevant to existential risk and global catastrophic risk analysis. How likely is civilization to be devastated by a large-scale disaster, or even to go extinct?
In this video, Anders Sandberg discusses (with the aid of a whiteboard) how heavy-tailed distributions account for more than our intuitions tell us.

Considering large-scale disasters may be far more important than we intuit.

Transcript of dialog

So typically when people talk about probability they think about a nice probability distribution like the bell curve or the Gaussian curve. This means that it’s most likely that you get something close to zero, and then less and less likely that you get very positive or very negative things – a rather nice-looking curve.

However, many things in the world turn out to have much nastier probability distributions. A lot of disasters, for example, have a power law distribution. So if this is the size of a disaster and this is the probability, they fall off like this. This doesn’t look very dangerous at first. Most disasters are fairly small: there’s a high probability of something close to zero and a low probability of something large. But it turns out that the probability of getting a really large one can become quite big.

So suppose this one has alpha equal to 1 – that means the chance of getting a disaster of size 10 is proportional to 1 in 10; a disaster 10 times as large has just a 10th of that probability, and one 10 times larger still has a 10th of that again.

That means there is quite a lot of probability of getting very, very large disasters – in the Gaussian case, getting something very far out here is exceedingly unlikely, but in the case of power laws you can actually expect to see some very, very large events.

So if you think about the times at which various disasters happen – they happen irregularly, and occasionally one goes through the roof, and then another one. You can’t of course tell when they will happen – that’s random. And you can’t really tell how big they are going to be, except that they are going to be distributed in this way.

The real problem is that when something is bigger than any threshold you imagine… well, it’s not just going to be a little bit taller, it’s going to be a whole lot taller.

So if we’re going to see a war, for example, as large as or even larger than the Second World War, we shouldn’t expect it to kill just a million more people. We could expect it to kill tens or, most likely, hundreds of millions, or even a billion more people – which is a rather scary prospect.

So the problem here is that disasters seem to have these heavy tails. A heavy tail, in probability slang, means that the probability mass over here – the chance that something very large happens – falls off very slowly. And this is of course a big problem, because we tend to think in terms of normal distributions.

Normal distributions are nice. We say they’re normal because a lot of things in our everyday life are distributed like this. The tallness of people, for example – very rarely do we meet somebody who’s a kilometre tall. However, when we think about how much money people make or have – well, Bill Gates. He is far, far richer than just ten times you and me – he’s from far out here.

So when we get to the land of these fat, heavy tails, both the richest (if we are talking about rich people) and the dangers (if we are talking about disasters) tend to be much bigger than we can normally conceive of.

Adam: Hmm, yes, definitely unintuitive.

Mmm, and the problem is of course that our intuitions are all shaped by what’s going on here in the normal realm. We have experience of what has happened so far in our lives, and once we venture out here and talk about very big events, our intuitions suddenly become very bad. We make mistakes. We don’t really understand the consequences, cognitive biases take over, and this can of course completely mess up our planning.

So we invest far too little in handling the really big disasters and we’re far too uninterested in going for the big wins in technology and science.

We should pay more attention to probability theory (especially heavy-tailed distributions) in order to discover and avoid disasters that lurk beyond our intuitions.
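
As a concrete illustration of the whiteboard’s alpha = 1 example, here is a short sketch (my toy comparison, not from the talk; the Pareto scale x_min = 1 is an arbitrary assumption) contrasting the tail of a standard normal distribution with a power law:

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pareto_tail(x, x_min=1.0, alpha=1.0):
    """P(X > x) for a Pareto power law: (x_min / x) ** alpha."""
    return (x_min / x) ** alpha

for x in (10, 100, 1000):
    print(f"x={x:5d}  normal tail={normal_tail(x):.3g}  power-law tail={pareto_tail(x):.3g}")

# The normal tail collapses super-exponentially (~7.6e-24 at x=10, underflowing
# to 0 beyond that), while the alpha=1 power law loses only a factor of 10 per
# decade: 0.1, 0.01, 0.001.
```

Under a Gaussian mindset, an event 100 times the typical scale is effectively impossible; under this power law it is merely ten times rarer than an event 10 times the typical scale – which is the sense in which heavy tails lurk beyond our intuitions.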


Also see –
– Anders Sandberg: The Survival Curve of Our Species: Handling Global Catastrophic and Existential Risks

Anders Sandberg on Wikipedia: https://en.wikipedia.org/wiki/Anders_Sandberg


Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Sam Harris on AI Implications – The Rubin Report

A transcription of Sam Harris' discussion of the implications of Strong AI during a recent appearance on the Rubin Report. Sam contrasts narrow AI with strong AI, discusses AI Safety and the possibility of rapid AI self-improvement, notes that AI superintelligence may seem alien to us, and brings up the idea that it is important to solve consciousness before superintelligence arrives (especially if superintelligence wipes us out), in the hope of a future that includes the value conscious experience entails – instead of a mechanized future with no consciousness to experience it.
I explored the idea of consciousness in artificial intelligence in ‘The Knowledge Argument Applied to Ethics‘ – which deals with whether an AI will act differently if it can experience ‘raw feels’ – and this seems to me to be important to AI Safety and (if we are ethically serious, and also assume there is value in ‘raw feels’) to preserving a future of value.

Dave Rubin asks the question: “If we get to a certain point with Artificial Intelligence and robots become aware and all that stuff… this can only end horribly right? …it will be pretty good for a while, but then at some point, by their own self-preservation basically, they will have to turn on their masters… I want the answer right now…”

Sam Harris responds: “..I worry about it [AI] to that degree but not quite in those terms. The concern for me is not that we will build superintelligent AI or superintelligent robots which initially seem to work really well and then, by some process we don’t understand, will become malevolent and kill us – you know, the Terminator movies. That’s not the concern… Most people who are really worried about this – that’s not really what they are worried about. Although that’s not inconceivable – it’s almost worse than that. What’s more reasonable is that, as we’re building right now, we’re building machines that embody intelligence to an increasing degree – but it’s narrow AI. So the best chess player on earth is a computer, but it can’t play tic-tac-toe – it’s narrowly focused on a specific kind of goal – and that’s broadening more and more as we get machines that can, for instance, play many different kinds of games well. So we’re creeping up on what is now called ‘general intelligence’ – the ability to think flexibly in multiple domains, where your learning in one domain doesn’t cancel your learning in another – something more like how human beings can acquire many different skills and engage in many different modes of cognition and not have everything fall apart. That’s the Holy Grail of artificial intelligence: we want ‘general intelligence’ and something that’s robust – not brittle… something where, if parts of it fail, it’s not catastrophic to the whole enterprise… and I think there is no question that we will get there, but there are many false assumptions about the path ahead. One is that what we have now is not nearly as powerful as the human mind, and we’re just going to incrementally get to something that is essentially a human equivalent. Now I don’t see that as the path forward at all… much of our narrow intelligence, insomuch as we find it interesting, is already superhuman – the calculator on your phone is superhuman for arithmetic, and the chess-playing program is superhuman – it’s not almost as good as a human, it’s better than any human on earth and will always be better than any human on earth, right? And more and more we will get that piecemeal effort of superhuman narrow AIs, and when this is ever brought together in a general intelligence, what you’re going to have is not just another ordinary human-level intelligence – you’re going to have something that in some ways may be radically foreign. It’s not going to be everything about us emulated in the system, but whatever is intelligent there is going to be superhuman almost by definition – and if it isn’t at t=0, it’s going to be the next day. It’s just going to improve so quickly. And when you talk about a system that can improve itself – if we ever build intelligent AI that then becomes the best source of its own improvement, something that can improve its source code better than any human could improve its source code – once we start that process running, and the temptation to do that will be huge, then we have what has been worried about now for 75 years: the prospect of an intelligence explosion, where the birth of this intelligence could get away from us – it’s now improving itself in a way that is unconstrained.
So people talk about ‘the Singularity’ now, which is what happens when that takes off – it’s a horizon line in technological innovation that we can’t see beyond, and can’t predict beyond, because it’s now just escaping – you’re getting thousands of years of progress in minutes, if in fact this process gets initiated. And so it’s not that we have superhuman robots that are just well behaved, and it goes on for decades, and then all of a sudden they get quirky and take their own interests to heart more than they take ours to heart and… you know, the game is over. I think what is more likely is that we’ll build intelligent systems that are so much more competent than we are that even the tiniest misalignment between their goals and our own will ultimately become completely hostile to our well-being and our survival.”

The video of the conversation is here; more of the transcription is below the video.

Dave Rubin: “That’s scarier, pretty much, than what I laid out, right? I laid out sort of a futuristic… ahh, they’re going to turn on us and start shooting us one day, maybe because of an error or something – but you’re laying out really that, if they could become aware enough, at some point they simply wouldn’t need us – because they would become ‘super-humans’ in effect – and what use would we serve for them at some point, right? (maybe not because of consciousness…)”

Sam Harris: “I would put consciousness and awareness aside because – I mean it might be that consciousness comes along for the ride – it may be the case that you can’t be as intelligent as a human and not be conscious – but I don’t know if that’s right…”

Dave Rubin: “That’s horizon mind stuff right?”

Sam Harris: “Well I just don’t know if that’s actually true – it’s quite possible that we could build something as intelligent as we are – in the sense that it can meet any kind of cognitive or perceptual or logical challenge we would pose it better than we can – but there is nothing that it is like to be that thing. If the lights aren’t on, it doesn’t experience happiness, though it might say it experiences happiness, right? I think what will happen is that we will definitely – you know the notion of a Turing Test?”

Dave Rubin: “This is like, if you type – it seems like it’s responding to you but it’s not actually really…”

Sam Harris: “Well, Alan Turing, the person who is more responsible than anyone else for giving us computers, once thought about what it would mean to have intelligent machines – and he proposed what has come to be known as the ‘Turing Test’.”

Dave Rubin: “It’s like the chat right?”

Sam Harris: “Yeah, but… when you can’t tell whether you’re interacting with a person or a computer, the computer in that case is passing the Turing Test – and as a measure of intelligence that’s certainly a good proxy for a more detailed analysis of what it would mean to have machine intelligence… if I’m talking to something at length about anything that I want, and I can’t tell it’s not a person, and it turns out it’s somebody’s laptop – that laptop is passing the Turing Test. It may be that you can pass the Turing Test without even the subtlest glimmer of consciousness arising – so that laptop is no more conscious than that glass of water is, right? That may in fact be the case; it may not be, though – so I just don’t know there. If that’s the case, for me that’s the scariest possibility… I even heard this from at least one computer scientist, and it was kind of alarming, but I don’t have a deep argument against it: if you assume that consciousness comes along for the ride – if you assume that anything more intelligent than us, arising either intentionally or by happenstance, is more conscious than we are, experiences a greater range of creative states, of well-being, and can suffer more – then by definition, in my view ethically, it becomes more important. If we’re more important than Cocker Spaniels or ants or anything below us, then if we create something that’s obviously above us in every conceivable way, and it’s conscious – right?”

Dave Rubin: “It would view us in the same way we view anything that [???] us”

Sam Harris: “It’s more important than us, right? And I’d have to grant that, even though I’d not be happy about it deciding to annihilate us… I don’t have a deep ethical argument against it… I can’t say from a god’s-eye view that it’s bad that we gave birth to super-beings that then trampled on us but went on to become super in ways we can’t possibly imagine – just as, you know, bacteria can’t imagine what we’re up to, right. So there are some computer scientists who kind of solve, or silence, the fears with this idea – they say: just listen, if we build something that’s god-like in that respect, we will have given birth to – our descendants will not be apes, they will be gods, and that’s a good thing – it’s the most beautiful thing. I mean, what could be more beautiful than us creating the next generation of intelligent systems – that are infinitely profound and wise and knowledgeable from our point of view, and are just improving themselves endlessly up to the limit of the resources available in the galaxy – what could be more rewarding than that?”

Dave Rubin: “Sounds pretty good”

Sam Harris: “And the fact that we all destroyed ourselves in the process, because we were the bugs that hit their windshield as they were driving off – that’s just the price you pay. Well, OK, that’s possible – but it’s also conceivable that all that could happen without consciousness, right? That we could build mere mechanism that is competent in all the ways needed to plow us under, but with no huge benefit on the side of deep experience and well-being and beauty and all that – it’s all just blind mechanism, albeit intelligent mechanism… in the same way as the best chess-playing program, which is highly intelligent with respect to chess but which nobody thinks is conscious. So that’s the theory… but on the way there, there are many weird moments where I think we will build machines that will pass the Turing Test – which is to say that they will seem conscious to us. They will seem able to detect our emotions and respond to our emotions – one will say ‘you know what, you look tired, maybe you should take a nap’ – and it will be right; it will be a better judge of your emotions than your friends are, right? And at a certain point, if you emulate this in a system – whether it’s an avatar online or an actual robot – that has a face that can display its own emotions, and we get out of the uncanny valley where it just looks creepy and it begins to look actually beautiful and rewarding and natural, then our intuitions that we are in dialogue with a conscious other will be played upon perfectly, right?… And I think we will lose sight of it being an interesting problem – it will no longer be interesting to wonder whether our computers are conscious, because they will be demonstrating it as much as any person has ever demonstrated it – in fact, even more, right? And unless we understand exactly how consciousness emerges in physical systems, at some point along the way of developing that technology we won’t actually know whether they’re conscious – and that will be interesting, because we will have successfully fooled ourselves into just assuming it. It will seem totally unethical to kill your robot off – it will be a murder worse than killing a person, because at a certain point it will be the most competent person – you know, the wisest person.”

Dave Rubin: “Sam, I don’t know if you’re writing a book about this – but you clearly should write a book about this – I’ll write one of the intros or something, there you go. Well listen, we did two hours here – so I’m not going to give you the full Rogan treatment.”

Sam Harris: “We did a half Rogan”

Dave Rubin: “We did a half Rogan – but you know, you helped me launch the first season, you’re launching the second season – legally you have to launch every season now…”

* Some breaks in conversation (sentences, words, ums and ahs) have been omitted to make it easier to read

AI: The Story So Far – Stuart Russell

Awesome to have Stuart Russell discussing AI Safety – a very important topic. For too long people have associated AI safety concerns with the Terminator – unfortunately the human condition seems such that people often don’t give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.

During the presentation Stuart brings up a nice quote by Norbert Wiener:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it. – Norbert Wiener

P.S. Stuart Russell co-authored ‘Artificial Intelligence: A Modern Approach’ with Peter Norvig – arguably the most popular textbook on AI.

The lecture was presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford’s Future of Humanity Institute (FHI).

What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better. – Stuart Russell

Video of presentation:

 

The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:

1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.

Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.

– Stuart Russell (Quote Source)

 

UPDATE – Interview
I got to meet Stuart Russell at IJCAI in 2017, and he agreed to do an interview, which turned out very nicely. Here are the results:

Suffering, and Progress in Ethics – Peter Singer

Suffering is generally bad – Peter Singer (who is a Hedonistic Utilitarian) and most Effective Altruists would agree with this. Though in addressing the need for suffering today, Peter acknowledges that, as we are presently constituted, suffering is useful as a warning sign (e.g. against further injury). But what about the future?
What if we could eliminate suffering?
Perhaps in the future we will have advanced technological interventions to warn us of danger that will be functionally similar to suffering, but without the nasty raw feels.
Peter Singer, like David Pearce, suggests that if we could eliminate the suffering of non-human animals capable of suffering – perhaps in some way that is difficult to imagine now – this would be a good thing.

Video Interview:

I would see no reason to regret the absence of suffering. – Peter Singer
Peter can’t see any reason to lament the disappearance of suffering, though perhaps people may say it would be useful for understanding the literature of the past. Perhaps there are some indirect uses for suffering – but on balance Peter thinks that the elimination of suffering would be an amazingly good thing to do.

Singer thinks it is interesting to speculate about what might be possible for the future of human beings, if we do survive over the longer term. To what extent will we be able to enhance ourselves? In particular, to what extent will we become more ethical human beings – which raises the question of ‘Moral Enhancement’.

Have we made progress in ethics? Peter argues the case that our species has expanded the circle of its ethical concern in his book ‘The Expanding Circle‘, and more recently Steven Pinker took up this idea in ‘The Better Angels of Our Nature’. This expansion has happened over the millennia – beyond the initial tribal group, to the national level, beyond ethnic groups to all human beings, and now we are starting to extend moral concern to non-human sentient beings as well.

Steven Pinker thinks that increases in our ethical consideration are bound up with increases in our intelligence (as proposed by James Flynn – the Flynn Effect – though this research is controversial: the measured gains could reflect actual increases in intelligence or just a greater capacity for abstract reasoning) and in our ability to reason abstractly.

As mentioned earlier, there are other ways in which we may increase our ability and tendency to be moral (see Moral Enhancement), and in the future we may discover genes that influence us to think more about others and to dwell less on negative emotions like anger or rage. It is hard to say whether people will use these kinds of moral enhancers voluntarily, or whether we will need state policies to encourage their use in order to produce better communities – and there are many legitimate concerns people may have about how the moral enhancement project takes place. Peter sees this as a fascinating prospect and thinks it would be great to be around to see how things develop over the next couple of centuries.

Note Steven Pinker said of Peter’s book:

Singer’s theory of the expanding circle remains an enormously insightful concept, which reconciles the existence of human nature with political and moral progress. It was also way ahead of its time. . . . It’s wonderful to see this insightful book made available to a new generation of readers and scholars. – Steven Pinker

The Expanding Circle

Abstract: What is ethics? Where do moral standards come from? Are they based on emotions, reason, or some innate sense of right and wrong? For many scientists, the key lies entirely in biology–especially in Darwinian theories of evolution and self-preservation. But if evolution is a struggle for survival, why are we still capable of altruism?

In his classic study The Expanding Circle, Peter Singer argues that altruism began as a genetically based drive to protect one’s kin and community members but has developed into a consciously chosen ethic with an expanding circle of moral concern. Drawing on philosophy and evolutionary psychology, he demonstrates that human ethics cannot be explained by biology alone. Rather, it is our capacity for reasoning that makes moral progress possible. In a new afterword, Singer takes stock of his argument in light of recent research on the evolution of morality.

References:
The Expanding Circle book page at Princeton University: http://press.princeton.edu/titles/9434.html

The Flynn Effect: http://en.wikipedia.org/wiki/Flynn_effect

Peter Singer – Ethics, Evolution & Moral Progress – https://www.youtube.com/watch?v=91UQAptxDn8

For more on Moral Enhancement see Julian Savulescu’s and others writings on the subject.

Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

Science, Technology & the Future: http://scifuture.org