Uncovering the Mysteries of Affective Neuroscience – the Importance of Valence Research with Mike Johnson

Valence in overview

Adam: What is emotional valence (as opposed to valence in chemistry)?

Mike: Put simply, emotional valence is how pleasant or unpleasant something is. A somewhat weird fact about our universe is that some conscious experiences do seem to feel better than others.

 

Adam: What makes things feel the way they do? What makes some things feel better than others?

Mike: This sounds like it should be a simple question, but neuroscience just doesn’t know. It knows a lot of scattered facts about which kinds of experiences, and which kinds of brain activation patterns, feel good, and which feel bad, but it doesn’t have anything close to a general theory here.


And the way affective neuroscience talks about this puzzle sometimes sort of covers this mystery up, without solving it. For instance, we know that certain regions of the brain, like the nucleus accumbens and ventral pallidum, seem to be important for pleasure, so we call them “pleasure centers”. But we don’t know what makes something a pleasure center. We don’t even know how common painkillers like acetaminophen (paracetamol) work! Which is kind of surprising.

In contrast, the hypothesis about valence I put forth in Principia Qualia would explain pleasure centers and acetaminophen and many other things in a unified, simple way.

 

Adam: How does the hypothesis about valence work?

Mike: My core hypothesis is that symmetry in the mathematical representation of an experience corresponds to how pleasant or unpleasant that experience is. I see this as an identity relationship which is ‘True with a capital T’, not merely a correlation.  (Credit also goes to Andres Gomez Emilsson & Randal Koene for helping explore this idea.)

What makes this hypothesis interesting is that
(1) On a theoretical level, it could unify all existing valence research, from Berridge’s work on hedonic hotspots, to Friston & Seth’s work on predictive coding, to Schmidhuber’s idea of a compression drive;

(2) It could finally explain how the brain’s so-called “pleasure centers” work– they function to tune the brain toward more symmetrical states!

(3) It implies lots and lots of weird, bold, *testable* hypotheses. For instance, we know that painkillers like acetaminophen, and anti-depressants like SSRIs, actually blunt both negative *and* positive affect, but we’ve never figured out how. Perhaps they do so by introducing a certain type of stochastic noise into acute & long-term activity patterns, respectively, which disrupts both symmetry (pleasure) and anti-symmetry (pain).

 

Adam: What kinds of tests would validate or dis-confirm your hypothesis? How could it be falsified and/or justified by weight of induction?

Mike: So this depends on the details of how activity in the brain generates the mind. But I offer some falsifiable predictions in PQ (Principia Qualia):

  • If we control for degree of consciousness, more pleasant brain states should be more compressible;
  • Direct, low-power stimulation (TMS) in harmonious patterns (e.g. 2hz+4hz+6hz+8hz…160hz) should feel remarkably more pleasant than stimulation with similar-yet-dissonant patterns (2.01hz+3.99hz+6.15hz…).

Those are some ‘obvious’ ways to test this. But my hypothesis also implies odd things, such as that chronic tinnitus (ringing in the ears) should produce affective blunting (a lessened ability to feel strong valence).
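The compressibility prediction above can be sketched in a few lines of code. This is purely illustrative: the synthetic sine-wave “signals”, the frequencies (borrowed from the TMS example above), and the use of zlib as a stand-in compressibility measure are my assumptions for demonstration, not part of the hypothesis itself.

```python
# Illustrative sketch only: synthetic signals standing in for brain states.
# The frequencies mirror the TMS example above; zlib as the "compressibility"
# measure is an assumption made for demonstration purposes.
import math
import zlib

def signal_bytes(freqs, seconds=2.0, rate=1000):
    """Quantize a sum of sines at the given frequencies to one byte per sample."""
    n = int(seconds * rate)
    out = bytearray()
    for i in range(n):
        t = i / rate
        v = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        out.append(int(128 + 100 * v / len(freqs)))  # scale into 0..255
    return bytes(out)

harmonic = signal_bytes([2, 4, 6, 8])               # exactly periodic pattern
dissonant = signal_bytes([2.01, 3.99, 6.15, 8.07])  # similar but detuned

size_h = len(zlib.compress(harmonic, 9))
size_d = len(zlib.compress(dissonant, 9))
print(size_h, size_d)
```

The exactly-periodic harmonic pattern should compress to noticeably fewer bytes than its detuned counterpart, which is the kind of measurable asymmetry the prediction says should track valence.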

Note: see https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/ and http://opentheory.net/2018/08/a-future-for-neuroscience/ for a more up-to-date take on this.

 

Adam: Why is valence research important?

Mike: Put simply, valence research is important because valence is important. David Chalmers famously coined “The Hard Problem of Consciousness”, or why we’re conscious at all, and “The Easy Problem of Consciousness”, or how the brain processes information. I think valence research should be called “The Important Problem of Consciousness”. When you’re in a conscious moment, the most important thing to you is how pleasant or unpleasant it feels.

That’s the philosophical angle. We can also take the moral perspective, and add up all the human and non-human animal suffering in the world. If we knew what suffering was, we could presumably use this knowledge to more effectively reduce it and make the world a kinder place.

We can also take the economic perspective, and add up all the person-years, capacity to contribute, and quality of life lost to Depression and chronic pain. A good theory of valence should allow us to create much better treatments for these things. And probably make some money while doing it.

Finally, a question I’ve been wondering for a while now is whether having a good theory of qualia could help with AI safety and existential risk. I think it probably can, by helping us see and avoid certain failure-modes.

 

Adam: How can understanding valence help make future AIs safer? (How could it help define how an AI should approach making us happy, or serve as a reinforcement mechanism for AI?)

Mike: Last year on my blog, I noted a few ways a better understanding of valence could help make future AIs safer. I’d point out a few notions in particular:

  • If we understand how to measure valence, we could use this as part of a “sanity check” for AI behavior. If some proposed action would cause lots of suffering, maybe the AI shouldn’t do it.
  • Understanding consciousness & valence seems important for treating an AI humanely. We don’t want to inadvertently torture AIs – but how would we know?
  • Understanding consciousness & valence seems critically important for “raising the sanity waterline” on metaphysics. Right now, you can ask 10 AGI researchers about what consciousness is, or what has consciousness, or what level of abstraction to define value, and you’ll get at least 10 different answers. This is absolutely a recipe for trouble. But I think this is an avoidable mess if we get serious about understanding this stuff.

 

Adam: Why the information theoretical approach?

Mike: The way I would put it, there are two kinds of knowledge about valence: (1) how pain & pleasure work in the human brain, and (2) universal principles which apply to all conscious systems, whether they’re humans, dogs, dinosaurs, aliens, or conscious AIs.

It’s counter-intuitive, but I think these more general principles might be a lot easier to figure out than the human-specific stuff. Brains are complicated, but it could be that the laws of the universe, or regularities, which govern consciousness are pretty simple. That’s certainly been the case when we look at physics. For instance, my iPhone’s processor is super-complicated, but it runs on electricity, which itself actually obeys very simple & elegant laws.

Elsewhere I’ve argued that:

>Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction.

 

Adam: What do you think of Thomas A. Bass’s view of information theory – he thinks that (at least in many cases) it has not been easy to turn data into knowledge, and that there is a pathological attraction to information which is making us ‘sick’ – he calls it Information Pathology. If his view offers any useful insights to you concerning avoiding ‘Information Pathology’, what would they be?

Mike: Right, I would agree with Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked whether the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since it’s a lot smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for, but it turned out that the methods they were using couldn’t cut it. Which really raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

 

Adam: Information Theory frameworks – what is your “Eight Problems” framework, and how does it contrast with Giulio Tononi’s Integrated Information Theory (IIT)? How might IIT help address valence in a principled manner? What is lacking in IIT – and how does your ‘Eight Problems’ framework address this?

Mike: IIT is great, but it’s incomplete. I think of it as *half* a theory of consciousness. My “Eight Problems for a new science of consciousness” framework describes what a “full stack” approach would look like, what IIT will have to do in order to become a full theory.

The two biggest problems IIT faces are that (1) it’s not compatible with physics, so we can’t actually apply it to any real physical systems, and (2) it says almost nothing about what its output means. Both of these are big problems! But IIT is also the best and only game in town in terms of quantitative theories of consciousness.

Principia Qualia aims to help fix IIT, and also to build a bridge between IIT and valence research. If IIT is right, and we can quantify conscious experiences, then how pleasant or unpleasant this experience is should be encoded into its corresponding mathematical object.

 

Adam: What are the three principles for a mathematical derivation of valence?

Mike: First, a few words about the larger context. Probably the most important question in consciousness research is whether consciousness is real, like an electromagnetic field is real, or an inherently complex, irreducible linguistic artifact, like “justice” or “life”. If consciousness is real, then there’s interesting stuff to discover about it, like there was interesting stuff to discover about quantum mechanics and gravity. But if consciousness isn’t real, then any attempt to ‘discover’ knowledge about it will fail, just like attempts to draw a crisp definition for ‘life’ (élan vital) failed.

If consciousness is real, then there’s a hidden cache of predictive knowledge waiting to be discovered. If consciousness isn’t real, then the harder we try to find patterns, the more elusive they’ll be- basically, we’ll just be talking in circles. David Chalmers refers to a similar distinction with his “Type-A vs Type-B Materialism”.

I’m a strong believer in consciousness realism, as are my research collaborators. The cool thing here is, if we assume that consciousness is real, a lot of things follow from this– like my “Eight Problems” framework. Throw in a couple more fairly modest assumptions, and we can start building a real science of qualia.

Anyway, the formal principles are the following:

  1. Consciousness can be quantified. (More formally, that for any conscious experience, there exists a mathematical object isomorphic to it.)
  2. There is some order, some rhyme & reason & elegance, to consciousness. (More formally, the state space of consciousness has a rich set of mathematical structures.)
  3. Valence is real. (More formally, valence is an ordered property of conscious systems.)

 

Basically, they combine to say: this thing we call ‘valence’ could have a relatively simple mathematical representation. Figuring out valence might not take an AGI several million years. Instead, it could be almost embarrassingly easy.

 

Adam: Do Qualia Structuralism, Valence Structuralism and Valence Realism relate to the philosophy-of-physics principles of realism and structuralism? If so, is there an equivalent ontic Qualia Structuralism and Valence Structuralism?

Mike: “Structuralism” means many things in many contexts. I use it in a specifically mathematical way, to denote that the state space of qualia quite likely embodies many mathematical structures, or properties (such as being a metric space).

Re: your question about ontics, I tend to take the empirical route and evaluate claims based on their predictions whenever possible. I don’t think predictions change if we assume realism vs structuralism in physics, so maybe it doesn’t matter. But I can get back to you on this. 🙂

 

Adam: What about the Qualia Research Institute I’ve also recently heard about? :D It seems both you (Mike) and Andrés Gómez Emilsson are doing some interesting work there.

Mike: We know very little about consciousness. This is a problem, for various and increasing reasons– it’s upstream of a lot of futurist-related topics.

But nobody seems to know quite where to start unraveling this mystery. The way we talk about consciousness is stuck in “alchemy mode”– we catch glimpses of interesting patterns, but it’s unclear how to systematize this into a unified framework. How to turn ‘consciousness alchemy’ into ‘consciousness chemistry’, so to speak.

Qualia Research Institute is a research collective which is working on building a new “science of qualia”. Basically, we think our “full-stack” approach cuts through all the confusion around this topic and can generate hypotheses which are novel, falsifiable, and useful.

Right now, we’re small (myself, Andres, and a few others behind the scenes) but I’m proud of what we’ve accomplished so far, and we’ve got more exciting things in the pipeline. 🙂

Also see the 2nd part, and the 3rd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Antispeciesism & Compassionate Stewardship – David Pearce

I think our first ethical priority is to stop doing harm, and right now in our factory farms billions of non-human animals are being treated in ways that, if the victims were human, would get the perpetrators locked up for life. And the sentience (and, for what it’s worth, the sapience) of a pig compares with that of a pre-linguistic toddler. A chicken is perhaps no more intellectually advanced or sentient than a human infant. But before considering the suffering of free-living animals we need to consider, I think, the suffering we’re causing our fellow creatures.

Essentially it’s a lifestyle choice – do we want to continue to exploit and abuse other sentient beings because we like the taste of their flesh, or do we want to embrace the cruelty-free vegan lifestyle? Some people would focus on treating other sentient beings less inhumanely. I’d say that we really need an ethical revolution in which our focus is: how can we help other sentient beings rather than harm them?

It’s very straightforward indeed to be a vegetarian. Statistically, vegetarians tend to live longer, record higher IQ scores, and tend to be slimmer – it’s very easy to be a vegetarian. A strict vegan lifestyle requires considerably more effort. But over the medium to long run I think our focus should be on going vegan.

In the short run I think we should be closing factory farms and slaughterhouses. And given that factory farming and slaughterhouses are the greatest source of severe, chronic, readily avoidable suffering in the world today, until they are gone any talk of compassionate stewardship of the rest of the living world is fanciful.

Will ethical argument alone persuade us to stop exploiting & killing other non-human beings because we like the taste of their flesh? Possibly not. I think realistically one wants a twin track strategy that combines animal advocacy with the development of in-vitro meat. But I would strenuously urge anyone watching this program to consider giving up meat and animal products if you are ethically serious.

The final strand of the Abolitionist Project on earth however is free-living animals in nature. And it might seem ecologically illiterate to argue that it is going to be feasible to take care of elephants, zebras, and free living animals. Because after all – let’s say there is starvation, it’s in winter, if you start feeding a lot of starving herbivores – all this does is lead the next spring to a population explosion followed by ecological collapse & more suffering than before.

However what is potentially feasible, if we’re ethically serious, is to micromanage the entire living world – now this sounds extremely far fetched and utopian, but I’ll sketch how it is feasible. Later this century and beyond, every cubic meter of the planet is going to be computationally accessible to surveillance, micro-management and control. And if we want to, we can use fertility regulation & immuno-contraception to regulate population numbers – cross-species fertility control. Starting off presumably with higher vertebrates – elephants for instance – already now – in the Kruger National Park for example – in preference to the cruel practice of culling, population numbers are controlled by immuno-contraception.

So starting off with higher vertebrates but eventually in our wildlife parks, then across the phylogenetic tree, it will be possible to micromanage the living world.

And just as right now if you were to stumble across a small child who is drowning in a pond – you would be guilty of complicity in that child’s drowning if you didn’t pull the child out – exactly the same intimacy over the rest of the living world is going to be feasible later this century and beyond.

Now what about obligate carnivores – predators? Surely it’s inevitable that they’re going to continue to prey on herbivores, so one might intuitively suppose that the abolitionist project could never be completed. But even there, if we’re ethically serious, there are workarounds – in-vitro meat, for instance. Big cats, if they are offered catnip-flavored in-vitro meat, are not going to be tempted to chase after herbivores.

Alternatively, a little bit of genetic tweaking, and you no longer have an obligate carnivore.

I’m supposing here that we do want to preserve recognizable approximations of today’s so-called charismatic megafauna – many people are extremely unhappy at the idea that lions or tigers or snakes or crocodiles should go extinct. I’m not personally persuaded that the world would be a worse place without crocodiles or snakes, but if we do want to preserve them it’s possible genetically to treat them or provide in vitro meat so that they don’t actually do any harm to sentient beings.

Some species essentialists would respond that a lion that is no longer chasing, asphyxiating, and disemboweling zebras is no longer truly a lion. But one might make the same argument that a Homo sapiens who is no longer beating his rivals over the head, waging war, or practicing infanticide, slavery, and all the other ghastly practices of our evolutionary past – or, for that matter, who is wearing clothes – that someone who adopts a more civilized lifestyle is no longer truly human. To which I can only say: good.

And likewise, if there is a living world in which lions are pacifistic, if a lion so to speak is lying down with the lamb I would say that is much more civilized.

Compassionate Biology

See this excerpt from The Antispeciesist Revolution:
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants, for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming” carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler – and may well be as sentient if not sapient as adult humans. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions.

http://www.hedweb.com/transhumanism/antispeciesist.html

Subscribe to the YouTube Channel

Science, Technology & the Future

One Big Misconception About Consciousness – Christof Koch

Christof Koch (Allen Institute for Brain Science) discusses Shannon information and its theoretical limitations in explaining consciousness –

Information Theory misses a critical aspect of consciousness – Christof Koch

Christof argues that we don’t need observers (other people, God, etc.) in order to have conscious experiences. Traditional information theory rests on Shannon information, and a big misconception about the structure of consciousness stems from this idea – assuming that Shannon information is enough to explain consciousness. Shannon information is about “sending information from a channel to a receiver – consciousness isn’t about sending anything to anybody.” So what other kind of information is there?

The ‘information’ in Integrated Information Theory (IIT) does not refer to Shannon information.  Etymologically, the word ‘information’ derives from ‘informare’ – “it refers to information in the original sense of the word ‘Informare’ – to give form to” – that is to give form to a high dimensional structure.

 

 

It’s worth noting that many disagree with Integrated Information Theory – including Scott Aaronson – see here, here and here.

 

See interview below:

“It’s a theory that proceeds from phenomenology to as it were mechanisms in physics”.

IIT is also described in Christof Koch’s ‘Consciousness: Confessions of a Romantic Reductionist’.

Axioms and postulates of integrated information theory

Five axioms / essential properties of conscious experience form the foundation of IIT – the intent is to capture the essential aspects of all conscious experience. Each axiom should apply to every possible experience.

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
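The Integration axiom can be given a toy numerical flavour. The example below is emphatically not IIT’s Φ (which minimises over all partitions and perturbs individual mechanisms); it is just a minimal sketch, with a made-up two-node system, of the underlying intuition that a whole can carry information which vanishes when the system is cut into independent parts.

```python
# Toy "integration" example: NOT IIT's phi, just the intuition behind the
# Integration axiom. Two binary nodes swap states each tick (A' = B, B' = A).
from itertools import product
from math import log2

def mutual_information(joint):
    """Mutual information (bits) for a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

states = list(product([0, 1], repeat=2))

# Whole system: uniform prior over the 4 states, deterministic swap dynamics.
# Knowing the next state tells you the entire 2-bit previous state.
whole = {(s, (s[1], s[0])): 0.25 for s in states}
mi_whole = mutual_information(whole)

# Cut the system into parts {A} and {B}: A's next state is B's old state,
# so within the isolated part A, the input predicts nothing about the output.
part_a = {}
for s in states:
    key = (s[0], s[1])  # (A's previous state, A's next state)
    part_a[key] = part_a.get(key, 0.0) + 0.25
mi_part_a = mutual_information(part_a)

print(mi_whole, mi_part_a)
```

Here the whole system’s dynamics carry 2 bits of information, while each severed part carries none, which is the sense in which the experience-level analogue (“seeing ‘BECAUSE’ is not seeing ‘BE’ plus seeing ‘CAUSE’”) is meant to be irreducible.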

So, does IIT solve what David Chalmers calls the “Hard Problem of consciousness”?

Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

This interview is a short section of a larger interview which will be released at a later date.

Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical. – Anders Sandberg
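The paper’s observation that even small increasing returns tend to produce radical growth can be seen in a minimal numerical sketch. The growth law, the exponent 1.1 (a 10% increasing return), and the Euler integration below are my illustrative choices, not the paper’s actual endogenous-growth models.

```python
# Toy comparison (illustrative parameters, not from the paper):
# exponent 1.0 gives ordinary exponential growth; exponent 1.1 adds a small
# increasing return and blows up in finite time (analytically at t = 10 for
# x0 = 1), which the Euler integration below reproduces approximately.
def integrate(exponent, x0=1.0, dt=0.001, t_max=20.0, cap=1e12):
    """Euler-integrate dx/dt = x**exponent until t_max or until x exceeds cap."""
    x, t = x0, 0.0
    while t < t_max and x < cap:
        x += dt * x ** exponent
        t += dt
    return x, t

x_exp, t_exp = integrate(1.0)  # stays finite over the whole interval
x_sup, t_sup = integrate(1.1)  # hits the cap well before t_max

print(f"exponent 1.0: x = {x_exp:.3g} at t = {t_exp:.2f}")
print(f"exponent 1.1: x = {x_sup:.3g} at t = {t_sup:.2f}")
```

The point is qualitative: the plain exponential ends the run at a large but finite value, while the barely-superexponential variant races past any fixed cap in finite time.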

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economical growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
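The qualitative differences between several of these models come down to the shape of the underlying growth curve. The following is a minimal numerical sketch (my own illustration, not code from Sandberg's paper) contrasting three regimes from the list above: exponential growth (model A), logistic growth with an inflexion point (model H), and hyperbolic growth, which hits a mathematical singularity in finite time (model I).

```python
# Forward-Euler sketches of three growth regimes discussed above.
# All parameter values are arbitrary, chosen only to make the shapes visible.

def exponential(x, dt, r):
    # dx/dt = r*x : unbounded, but never infinite in finite time
    return x + r * x * dt

def logistic(x, dt, r, K):
    # dx/dt = r*x*(1 - x/K) : accelerates, then decelerates past the
    # inflexion point at x = K/2, saturating at the carrying capacity K
    return x + r * x * (1 - x / K) * dt

def hyperbolic(x, dt, r):
    # dx/dt = r*x**2 : exact solution blows up at finite time t* = 1/(r*x0)
    return x + r * x * x * dt

def simulate(step, x0, steps, dt, **params):
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1], dt, **params))
    return xs

# Integrate each regime from x0 = 1 over t = 0..10 with dt = 0.01.
exp_path = simulate(exponential, 1.0, 1000, 0.01, r=1.0)
log_path = simulate(logistic, 1.0, 1000, 0.01, r=1.0, K=100.0)
hyp_path = simulate(hyperbolic, 1.0, 1000, 0.01, r=1.0)

# Exponential growth is large but finite; logistic growth levels off just
# below K; hyperbolic growth numerically diverges before t = 10, the
# discrete analogue of a finite-time singularity.
```

The point of the contrast: only the hyperbolic regime corresponds to a literal mathematical singularity, which is why Sandberg notes that few hold model I to be plausible, while models A and H describe dramatic but finite change.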


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

I asked AI researcher Juergen Schmidhuber for his thoughts on progress at DeepMind and on the AlphaGo vs Lee Sedol Go tournament; he provided some initial comments. I will update this post with further interview material.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab, one of them co-founder, one of them first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player Lee Sedol in the near future.

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.
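Schmidhuber's distinction can be made concrete with a toy "T-maze" task (my own sketch, not his code): a cue shown only at the start of an episode tells the agent which way to turn at the end. A policy that sees only the current observation faces the same corridor at the junction regardless of the cue, so it cannot beat chance; a policy that carries state forward, a stand-in for a recurrent network's hidden state, recalls the cue and always turns correctly.

```python
import random

def run_episode(policy, length=5):
    """One T-maze episode: cue at step 0, corridor until the final turn."""
    cue = random.choice(["left", "right"])  # observable only at step 0
    memory = None
    for step in range(length + 1):
        obs = cue if step == 0 else "corridor"
        action, memory = policy(obs, memory)
    return 1.0 if action == cue else 0.0    # reward for turning toward the cue

def memoryless_policy(obs, memory):
    # Reacts only to the current observation (the Markov assumption).
    # At the junction obs is just "corridor", so it must guess.
    if obs != "corridor":
        return obs, None
    return random.choice(["left", "right"]), None

def recurrent_policy(obs, memory):
    # Stores the cue in its hidden state and acts on it later.
    if obs != "corridor":
        memory = obs
    return memory, memory

random.seed(0)
n = 2000
chance = sum(run_episode(memoryless_policy) for _ in range(n)) / n
recall = sum(run_episode(recurrent_policy) for _ in range(n)) / n
# `chance` hovers around 0.5; `recall` is exactly 1.0
```

In Go the board state plays the role of the cue at every step, so the memoryless policy suffices; in partially observable worlds like football, only the second kind of agent can succeed, which is the case for recurrent-network RL.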

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:
http://people.idsia.ch/~juergen/deep-learning-overview.html

Also worth seeing Juergen’s AMA here.

Juergen Schmidhuber’s website.

The Simpsons and Their Mathematical Secrets with Simon Singh

You may have watched hundreds of episodes of The Simpsons (and its sister show Futurama) without ever realizing that cleverly embedded in many plots are subtle references to mathematics, ranging from well-known equations to cutting-edge theorems and conjectures. That they exist, Simon Singh reveals, underscores the brilliance of the shows’ writers, many of whom have advanced degrees in mathematics in addition to their unparalleled sense of humor.

A mathematician is a machine for turning coffee into theorems. Simon Singh, The Simpsons and Their Mathematical Secrets

While recounting memorable episodes such as “Bart the Genius” and “Homer³,” Singh weaves in mathematical stories that explore everything from π to Mersenne primes, Euler’s equation to the unsolved riddle of P vs. NP; from perfect numbers to narcissistic numbers, infinity to even bigger infinities, and much more. Along the way, Singh meets members of The Simpsons’ brilliant writing team—among them David X. Cohen, Al Jean, Jeff Westbrook, and Mike Reiss—whose love of arcane mathematics becomes clear as they reveal the stories behind the episodes.
With wit and clarity, displaying a true fan’s zeal, and replete with images from the shows, photographs of the writers, and diagrams and proofs, The Simpsons and Their Mathematical Secrets offers an entirely new insight into the most successful show in television history.

Buy the book on amazon

An astronomer, a physicist, and a mathematician (it is said) were holidaying in Scotland. Glancing from a train window, they observed a black sheep in the middle of a field. “How interesting,” observed the astronomer, “all Scottish sheep are black!” To which the physicist responded, “No, no! Some Scottish sheep are black!” The mathematician gazed heavenward in supplication, and then intoned, “In Scotland there exists at least one field, containing at least one sheep, at least one side of which is black.” Simon Singh, The Simpsons and Their Mathematical Secrets

 

 

Simon Singh is a British author who has specialised in writing about mathematical and scientific topics in an accessible manner. His written works include Fermat’s Last Theorem (in the United States titled Fermat’s Enigma: The Epic Quest to Solve the World’s Greatest Mathematical Problem), The Code Book (about cryptography and its history), Big Bang (about the Big Bang theory and the origins of the universe), Trick or Treatment? Alternative Medicine on Trial (about complementary and alternative medicine) and The Simpsons and Their Mathematical Secrets (about mathematical ideas and theorems hidden in episodes of The Simpsons and Futurama).

Singh has also produced documentaries and works for television to accompany his books, is a trustee of NESTA and the National Museum of Science and Industry, and co-founded the Undergraduate Ambassadors Scheme.

Subscribe to the Sci Future Channel

As a society, we rightly adore our great musicians and novelists, yet we seldom hear any mention of the humble mathematician. It is clear that mathematics is not considered part of our culture. Instead, mathematics is generally feared and mathematicians are often mocked. Simon Singh, The Simpsons and Their Mathematical Secrets

Science, Technology & the Future

Julian Savulescu – Government & Surveillance

If you increase the altruistic motivation of people, you decrease the risk that they will negligently fail to consider the possible harmful effects of their behaviour on their fellow-beings. Being concerned about avoiding such risks is part of what having altruistic concern for these beings consists in. Moreover, the advance of technology will in all probability bring along more effective mechanisms of surveillance, and it is easier for these to pick up people who are negligent rather than evil-doers who are intent on beating them.

“The nutshell: Human societies have grown larger, more diverse, and more technologically complex, and as a result, our moral compasses are no longer up to the task of guiding us, argue Oxford University’s Persson (a philosopher) and Savulescu (an ethicist)—and we’re in danger of destroying ourselves. The severity of the problem demands an equally severe solution: biomedical moral enhancement and increased government surveillance of citizens.” – Slate

Julian Savulescu (born December 22, 1963) is an Australian philosopher and bioethicist. He is Uehiro Professor of Practical Ethics at the University of Oxford, Fellow of St Cross College, Oxford, Director of the Oxford Uehiro Centre for Practical Ethics, Sir Louis Matheson Distinguished Visiting Professor at Monash University, and Head of the Melbourne–Oxford Stem Cell Collaboration, which is devoted to examining the ethical implications of cloning and embryonic stem cell research. He is the editor of the Journal of Medical Ethics, which is ranked as the #1 journal in bioethics worldwide by Google Scholar Metrics as of 2013. In addition to his background in applied ethics and philosophy, he also has a background in medicine and completed his MBBS (Hons) at Monash University. He completed his PhD at Monash University, under the supervision of renowned bioethicist Peter Singer. Published Jan 30, 2014.

Science, Technology & the Future

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman


This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, between the earliest days of the planet with no life, only physical and chemical structures, including volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes (or perhaps starting even earlier, when there was only a dust cloud in this part of the solar system?).

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development
– evolution of new types of evolution
– learning new ways to learn
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence)
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?)
– new forms of learning supporting new forms of evolution, and new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate selection, mating and nurturing can be influenced by much learning)
…and ways in which social and cultural evolution add to the mix.

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor
morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.

Blockbuster Science! Tech investors reward ‘Breakthough Science’

Blockbuster Science! It’s an awesome approach to incentivizing scientists – it’s great that people are applauding for stuff that really matters! People cheer at the most ridiculous and inconsequential things – why not funnel this energy into science?

Next step: create high-production shorts for real-world advances in science (with a tinge of flair), much like those used to promote blockbuster movies. The NY Times stated: “Scientists don’t have the power of celebrities in American society. The Breakthrough Prize tries to change that.”

Anne Wojcicki

Biologist Anne Wojcicki attends the 2016 Breakthrough Prize Ceremony

Yuri Milner

Entrepreneur and Investor Yuri Milner

“Yuri Milner, the Russian billionaire, and his high-tech Silicon Valley friends have awarded $29.5 million to seven scientists, a high school student, and a huge team of physics researchers for their varied science achievements.

Milner’s third annual Breakthrough Prizes were financed by his foundation with contributions from Sergey Brin of Google and his wife, 23&Me founder Anne Wojcicki; Mark Zuckerberg of Facebook; and Jack Ma of China’s e-commerce giant Alibaba.

Other prizes went to Ed Boyden, now at MIT, who was Deisseroth’s partner at Stanford developing optogenetics; Helen Hobbs, a University of Texas physician who discovered the roles that variant genes play in cholesterol and lipid levels leading to heart disease; John Hardy, a neuroscientist at University College London, who discovered genetic mutations in the amyloid genes causing Alzheimer’s disease; and Svante Pääbo, the famed anthropologist at Germany’s Max Planck Institute, who sequenced the genes of Neanderthals and discovered traces of the vanished humans called Denisovans,” said David Perlman at SF Gate.

Yuri Milner gave an inspiring interview to New Scientist on the positively huge impacts of fundamental research in science on society. “If you go far enough into the future, a fundamental discovery leads to some new technology,” said Yuri Milner.

Ed Boyden develops new strategies for analyzing and engineering brain circuits, using synthetic biology, nanotechnology, chemistry, electrical engineering, and optics to develop broadly applicable methodologies that reveal fundamental mechanisms of complex brain processes. A major goal of his current work is the development of technologies for controlling nerve cells using light – a powerful new technology known as optogenetics that is opening the door to new treatments for conditions such as epilepsy, Parkinson’s disease, and mood disorders.


Ed Boyden is on the closing Breakthrough Prize Panel Discussion hosted by Yuri Milner.

Will the breakthrough accomplishments in science one day outshine a season winning slam dunk?

Athletic heroes loom large in our imagination – though how often do we stop to think about brilliant scientists and the wonderful things they have achieved that make positive tractable difference in our lives and the world around us?

Elon Musk, founder of Tesla and SpaceX, said: “It is important to celebrate science and to create role models for science that kids want to emulate… For the benefit of humanity, we want breakthroughs in science that help us improve standards of living, cure disease, make life better… I’d rather a super-smart, creative kid went into developing breakthrough technologies that improve the world rather than, say, went to Wall Street.”

I see this as a positive sign of a general warming to Enlightenment values and to the idea that science can drive significant civilizational progress in improving the human condition.


What is the Philosophy of Science All About?

Slides [here]. See this post by John Wilkins at Evolving Thoughts; the video is a talk John gave at the Philosophy of Science conference in Melbourne in 2014.

Every so often, somebody will attack the worth, role or relevance of philosophy on the internets, as I have discussed before. Occasionally it will be a scientist, who usually conflates philosophy with theology. This is as bad as someone assuming that because I do some philosophy I must have the Meaning of Life (the answer is, variously, 12 year old Scotch, good chocolate, or dental hygiene).

But it raises an interesting question or two: what is the reason to do philosophy in relation to science? being the most obvious (and thus set up the context in which you can answer questions like: are there other ways to find truth than science?). So I thought I would briefly give my reasons for that.

When philosophy began around 500 BCE, there was no distinction between science and philosophy, nor, for that matter, between religion and philosophy. Arguably, science began when the pre-Socratics started to ask what the natures of things were that made them behave as they did, and equally arguably the first actual empirical scientist was Aristotle (and, I suspect, his graduate students).

But a distinction between science and philosophy began with the separation between natural philosophy (roughly what we now call science) and moral philosophy, which dealt with things to do with human life and included what we should believe about the world, including moral, theological and metaphysical beliefs. The natural kind was involved in considering the natures of things. A lot gets packed into that simple word, nature: it literally means “in-born” (natus) and the Greek physikos means much the same. Of course, something can be in-born only if it is born that way (yes, folks, she’s playing on some old tropes here!), and most physical things aren’t born at all, but the idea was passed from living to nonliving things, and so natural philosophy was born. That way.

In the period after Francis Bacon, natural philosophy was something that depended crucially on observation, and so the Empiricists arose: Locke, Berkeley, Hobbes, and later Hume. That these names are famous in philosophy suggests something: philosophy does best when it is trying to elucidate science itself. And when William Whewell in 1833 coined the term scientist to denote those who sought scientia or knowledge, science had begun its separation from the rest of philosophy.

Or imperfectly, anyway. For a start the very best scientists of the day, including Babbage, Buckland and Whewell himself wrote philosophical tomes alongside theologians and philosophers. And the tradition continues until now, such as the recent book by Stephen Hawking in which he declares the philosophical enterprise is dead, a decidedly philosophical claim to make. Many scientists seem to find the doing of philosophy inevitable.

So why do I do philosophy of science? Simply because it is where the epistemic action is: science is where we do get knowledge, and I wish to understand how and why, and the limitations. All else flows from this for me. Others I know (and respect) do straight metaphysics and philosophy of language, but I do not. It only has a bite if it gives some clarity to science. I think this is also true of metaphysics, ethics and such matters as philosophy of religion.

Philosophy of Science 2014