Posts

Exciting progress in Artificial Intelligence – Joscha Bach

Joscha Bach discusses progress made in AI so far, what’s missing in AI, and the conceptual progress needed to achieve the grand goals of AI.
Discussion points:
0:07 What is intelligence? Intelligence as the ability to be effective over a wide range of environments
0:37 Intelligence vs smartness – interesting models vs intelligent behavior
1:08 Models vs behaviors – e.g. DeepMind – solving goals over a wide range of environments
1:44 Starting from a blank slate – how does an AI see an Atari game compared to a human? Pac-Man analogy
3:31 Getting the narrative right as well as the details
3:54 Media fear mongering about AI
4:43 Progress in AI – how revolutionary are the ideas behind the AI that led to commercial success? There is a need for more conceptual progress in AI
5:04 Mental representations require probabilistic algorithms – to make further progress we probably need different means of functional approximation
5:33 Many of the new theories in AI are currently not deployed – because of this, we can expect a tremendous shift in everyday use of technology in the future
6:07 It’s an exciting time to be an AI researcher

 

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon.

 

Ethical Progress, AI & the Ultimate Utility Function – Joscha Bach

Joscha Bach on ethical progress and AI – it’s fascinating to ask ‘What’s the ultimate utility function?’ Should we seek the answer in our evolved motivations?

Discussion points:
0:07 Future directions in ethical progress
1:13 Pain and suffering – concern for things we cannot regulate or change
1:50 Reward signals – we should only get them for things we can regulate
2:42 As soon as minds become mutable, ethics changes dramatically – an artificial mind may be like a Zen master on steroids
2:53 The ultimate utility function – how can we maximize the neg-entropy in this universe?
3:29 Our evolved motives don’t align well to this ultimate utility function
4:10 Systems which only maximize what they can consume – humans are like yeast

 


The Grand Challenge of Developing Friendly Artificial Intelligence – Joscha Bach

Joscha Bach discusses problems with achieving AI alignment, the current discourse around AI, and inefficiencies of human cognition & communication.

Discussion points:
0:08 The AI alignment problem
0:42 Asimov’s Laws: problems with giving AI rules to follow – it’s a form of slavery
1:12 The current discourse around AI
2:52 Ethics – where do they come from?
3:27 Human constraints don’t apply to AI
4:12 Human communication problems vs AI – communication costs between minds are much larger than within minds
4:57 AI can change its preferences


Cognitive Biases & In-Group Convergences – Joscha Bach

Joscha Bach discusses biases in group think.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group


AI, Consciousness, Science, Art & Understanding – Joscha Bach

Here Joscha Bach discusses consciousness, its relationship to qualia, and what an AI or a utility maximizer would do with it.

What is consciousness? “I think under certain circumstances being conscious is an important part of a mind; it’s a model of a model of a model, basically. What it means is our mind (our neocortex) produces this dream that we take to be the world based on the sensory data – so it’s basically a hallucination that predicts what next hits your retina – that’s the world. Out there, we don’t know what this is… The universe is some kind of weird pattern generator with some quantum properties. And this pattern generator throws patterns at us, and we try to find regularity in them – and the hidden layers of this neural network amount to latent variables that are colors, people, sounds, ideas and so on… And this is the world that we subjectively inhabit – that’s the world that we find meaningful.”

… “I find theories [about consciousness] that make you feel good very suspicious. If there is something that is like my preferred outcome for emotional reasons, I should be realising that I have a confirmation bias towards this – and that truth is a very brutal vector.”

OUTLINE:
0:07 Consciousness and its importance
0:47 Phenomenal content
1:43 Consciousness and attention
2:30 When AI becomes conscious
2:57 Mary’s Room – the Knowledge Argument, art, science & understanding
4:07 What is understanding? What is truth?
4:49 What interests an artist? Art as a communicative exercise
5:48 Thomas Nagel: What is it like to be a bat?
6:19 Feel good theories
7:01 Raw feels or not? Why did nature endow us with raw feels?
8:29 What are qualia, and are they important?
9:49 Insight addiction & the aesthetics of information
10:52 Would a utility maximizer care about qualia?


Posthumanism – Pramod Nayar

Interview with Pramod K. Nayar on #posthumanism ‘as both a material condition and a developing philosophical-ethical project in the age of cloning, gene engineering, organ transplants and implants’. The book ‘Posthumanism’ by Pramod Nayar: https://amzn.to/2OQEA8z Rise of the posthumanities article: https://bit.ly/32Q67Pm
This time, I decided to itemize the interview so you can find sections via the time-signature links:
0:00 Intro / What got Pramod interested in posthuman studies?
04:16 Defining the terms – what is posthumanism? Cultural framing of natural vs unnatural. Posthumanism is not just bodily or mental enhancement, but involves changing the relationship between humans, non-human lifeforms, technology and non-living matter. Displacement of anthropocentrism. 
08:01 Anthropocentric biases inherited from enlightenment humanist thinking and human exceptionalism. The formation of the transhumanist declaration, with point 7 of the declaration focusing on the well-being of all sentience. The important question of empathy – not limiting it to the human species. The issue of empathy being a good launching pad for further conversations between the transhumanists and the posthumanists. https://humanityplus.org/philosophy/t… 
11:10 Difficulties in getting everyone to agree on cultural values. Is a utopian ideal posthumanist/transhumanist society possible? 
13:25 Collective societies, hive minds, borganisms. Distributed cognition, the extended mind hypothesis, cognitive assemblages, traditions of knowledge sharing. 
16:58 Do the humanities need some form of reconfiguration to shift them towards something beyond the human? Rejecting some of the value systems that enlightenment humanism claimed to be universal. Julian Savulescu’s work on moral enhancement 
20:58 Colonialism – what is it? 
21:57 Aspects of enlightenment humanism that the critical posthumanists don’t agree with. But some believe the posthumanists to be enlightenment haters who reject rationality – is this accurate? 
24:33 Trying to achieve agreement on shared human values – is vulnerability rather than dignity a usable concept that different groups can agree with? 
26:37 The idea of the monster – people’s fear of what they don’t understand. Thinking past disgust responses to new wearable technologies and more radical bodily enhancements. 
29:45 The future of posthuman morphology and posthuman rights – how might emerging means of upgrading our bodies / minds interfere with rights or help us re-evaluate rights? 
33:42 Personhood beyond the human
35:11 Should we uplift non-human animals? Animals as moral patients becoming moral actors through uplifting? Also once Superintelligent AI is developed, should it uplift us? The question of agency and aspiration – what are appropriate aspirations for different life forms? Species enhancement and Ian Hacking’s idea of ‘Making up people’ – classification and how people come to inhabit the identities that exist at various points in history, or in different environments. https://www.lrb.co.uk/the-paper/v28/n… 
38:10 Measuring happiness – David Pearce’s idea of eliminating suffering and increasing happiness through advanced technology. What does it mean to have welfare or to flourish? Should we institutionalise wellbeing, a gross domestic happiness, world happiness index? 
40:27 Anders Sandberg asks: Transhumanism and posthumanism often do not get along – transhumanism commonly wears its enlightenment roots on its sleeve, and posthumanism often spends more time criticising the current situation than suggesting a way out of it. Yet there is no fundamental reason both perspectives could not simultaneously get what they want: a post-human posthumanist concept of humanity and its post-natural environment seems entirely possible. What is Nayar’s perspective on this win-win vision? 
44:14 The postmodern play of endless difference and relativism – what is the good and bad of postmodernism on posthumanist thinking? 
47:16 What does postmodernism have to offer both posthumanism and transhumanism? 
49:17 Thomas Kuhn’s idea of paradigm changes in science happening funeral by funeral. 
58:58 How has the idea of the singularity influenced transhumanist and posthumanist thinking? Shifts in perspective to help us ask the right questions in science, engineering and ethics in order to achieve a better future society. 
1:01:55 What AI is good and bad at today. Correlational thinking vs causative thinking. Filling in the gaps as to what’s required to achieve ‘machine understanding’. 
1:03:26 Influential literature on the idea of the posthuman – especially that which can help us think about difference and ‘the other’ (or the non-human) 

Judith Campisi – Senolytics for Healthy Longevity

I had the absolute privilege of interviewing Judith Campisi at the Undoing Aging conference in Berlin.  She was so sweet and kind – it was really a pleasure to spend time with her discussing senolytics, regenerative medicine, and the anti-aging movement.


Judith Campisi was humble, open-minded, and careful not to overstate the importance of senolytics, and of rejuvenation therapy in general – though she really is someone who has made an absolutely huge impact in anti-aging research. I couldn’t have said it better than Reason at Fight Aging!

As one of the authors of the initial SENS position paper, published many years ago now, Judith Campisi is one of the small number of people who is able to say that she was right all along about the value of targeted removal of senescent cells, and that it would prove to be a viable approach to the treatment of aging as a medical condition. Now that the rest of the research community has been convinced of this point – the evidence from animal studies really is robust and overwhelming – the senescent cell clearance therapies known as senolytics are shaping up to be the first legitimate, real, working, widely available form of rejuvenation therapy.

Reason – Philosophy Of Anti Aging: Ethics, Research & Advocacy

Reason was interviewed at the Undoing Aging conference in Berlin 2019 by Adam Ford – focusing on philosophy of anti-aging, ethics, research & advocacy. Here is the audio!

And the video:

Topics include philosophical reasons to support anti-aging, high impact research (senolytics etc), convincing existence proofs that further research is worth doing, how AI can help and how human research (bench-work) isn’t being replaced by AI atm or in the foreseeable future, suffering mitigation and cause prioritization in Effective Altruism – how the EA movement sees anti-aging and why it should advocate for it, population effects (financial & public health) of an aging population and the ethics of solving aging as a problem…and more.

Reason is the founder and primary blogger at FightAging.org

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, convergence within a group is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – and humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.

Joscha highlights the controversy of James Damore being fired for circulating a memo about biological differences between men and women affecting their abilities as engineers – where Damore’s arguments may be correct. Regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because it judged that supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’ – on imparting ideas and facts that everyone can judge autonomously and use to form their own opinions, in the view that crafting the best solutions requires having the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations etc.) – the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently, making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – it’s very difficult to get people to agree to things that are not in their own interests, including, as Joscha points out, the idea that truth matters.


Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Uncovering the Mysteries of Affective Neuroscience – the Importance of Valence Research with Mike Johnson

Valence in overview

Adam: What is emotional valence (as opposed to valence in chemistry)?

Mike: Put simply, emotional valence is how pleasant or unpleasant something is. A somewhat weird fact about our universe is that some conscious experiences do seem to feel better than others.

 

Adam: What makes things feel the way they do? What makes some things feel better than others?

Mike: This sounds like it should be a simple question, but neuroscience just doesn’t know. It knows a lot of scattered facts about which kinds of experiences, and which kinds of brain activation patterns, feel good, and which feel bad, but it doesn’t have anything close to a general theory here.

And the way affective neuroscience talks about this puzzle sometimes sort of covers this mystery up, without solving it. For instance, we know that certain regions of the brain, like the nucleus accumbens and ventral pallidum, seem to be important for pleasure, so we call them “pleasure centers”. But we don’t know what makes something a pleasure center. We don’t even know how common painkillers like acetaminophen (paracetamol) work! Which is kind of surprising.

In contrast, the hypothesis about valence I put forth in Principia Qualia would explain pleasure centers and acetaminophen and many other things in a unified, simple way.

 

Adam: How does the hypothesis about valence work?

Mike: My core hypothesis is that symmetry in the mathematical representation of an experience corresponds to how pleasant or unpleasant that experience is. I see this as an identity relationship which is ‘True with a capital T’, not merely a correlation.  (Credit also goes to Andres Gomez Emilsson & Randal Koene for helping explore this idea.)
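As a purely illustrative toy (my own construction, not the actual formalism in Principia Qualia), one crude way to get a feel for "symmetry of a mathematical representation" is to score a state vector by how closely it matches its own mirror image:

```python
def symmetry_score(state):
    """Toy symmetry measure: normalized inner product between a state
    vector and its mirror image. 1.0 means perfectly palindromic;
    0.0 means the state is orthogonal to its own reflection."""
    mirrored = state[::-1]
    dot = sum(a * b for a, b in zip(state, mirrored))
    norm_sq = sum(a * a for a in state)  # |state| == |mirrored|
    return dot / norm_sq if norm_sq else 0.0

# A palindromic ("symmetric") state scores maximally...
print(symmetry_score([1, 2, 3, 2, 1]))  # 1.0
# ...while a lopsided state scores minimally.
print(symmetry_score([5, 1, 0, 0, 0]))  # 0.0
```

On this toy measure, the hypothesis would read: higher symmetry, more pleasant. The real proposal concerns the full mathematical object corresponding to an experience, not a 1-D vector, so this is only a mnemonic for the shape of the claim.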

What makes this hypothesis interesting is that
(1) On a theoretical level, it could unify all existing valence research, from Berridge’s work on hedonic hotspots, to Friston & Seth’s work on predictive coding, to Schmidhuber’s idea of a compression drive;

(2) It could finally explain how the brain’s so-called “pleasure centers” work – they function to tune the brain toward more symmetrical states!

(3) It implies lots and lots of weird, bold, *testable* hypotheses. For instance, we know that painkillers like acetaminophen, and anti-depressants like SSRIs, actually blunt both negative *and* positive affect, but we’ve never figured out how. Perhaps they do so by introducing a certain type of stochastic noise into acute & long-term activity patterns, respectively, which disrupts both symmetry (pleasure) and anti-symmetry (pain).

 

Adam: What kinds of tests would validate or dis-confirm your hypothesis? How could it be falsified and/or justified by weight of induction?

Mike: So this depends on the details of how activity in the brain generates the mind. But I offer some falsifiable predictions in PQ (Principia Qualia):

  • If we control for degree of consciousness, more pleasant brain states should be more compressible;
  • Direct, low-power stimulation (TMS) in harmonious patterns (e.g. 2 Hz + 4 Hz + 6 Hz + 8 Hz … 160 Hz) should feel remarkably more pleasant than stimulation with similar-yet-dissonant patterns (2.01 Hz + 3.99 Hz + 6.15 Hz …).

Those are some ‘obvious’ ways to test this. But my hypothesis also implies odd things such as that chronic tinnitus (ringing in the ears) should produce affective blunting (lessened ability to feel strong valence).
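The compressibility side of this is easy to illustrate in miniature. The sketch below is my own toy demonstration of the *logic* of the test (not of any real neuroimaging analysis, and the frequencies and byte quantization are arbitrary choices): a harmonically structured signal repeats exactly and so compresses better than a similar-yet-dissonant one.

```python
import math
import zlib

def quantized_signal(freqs, n_samples=4000, rate=1000):
    """Sum of sine waves at the given frequencies (Hz), sampled at
    `rate` Hz and quantized to one byte per sample."""
    samples = []
    for k in range(n_samples):
        t = k / rate
        v = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        # map the range [-len(freqs), +len(freqs)] onto 0..255
        samples.append(int(round((v / len(freqs) + 1) * 127.5)))
    return bytes(samples)

harmonic = quantized_signal([2, 4, 6, 8])            # exact 0.5 s period
dissonant = quantized_signal([2.01, 3.99, 6.15, 8])  # never quite repeats

# The harmonic signal compresses into fewer bytes than the dissonant one.
print(len(zlib.compress(harmonic)), len(zlib.compress(dissonant)))
```

The analogy to the actual prediction is loose – the claim in PQ concerns brain states controlled for degree of consciousness, which a sum of sinusoids obviously does not model – but it shows how "more harmonious" can cash out as "more compressible".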

Note: see https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/ and http://opentheory.net/2018/08/a-future-for-neuroscience/ for a more up-to-date take on this.

 

Adam: Why is valence research important?

Mike: Put simply, valence research is important because valence is important. David Chalmers famously coined “The Hard Problem of Consciousness”, or why we’re conscious at all, and “The Easy Problem of Consciousness”, or how the brain processes information. I think valence research should be called “The Important Problem of Consciousness”. When you’re in a conscious moment, the most important thing to you is how pleasant or unpleasant it feels.

That’s the philosophical angle. We can also take the moral perspective, and add up all the human and non-human animal suffering in the world. If we knew what suffering was, we could presumably use this knowledge to more effectively reduce it and make the world a kinder place.

We can also take the economic perspective, and add up all the person-years, capacity to contribute, and quality of life lost to Depression and chronic pain. A good theory of valence should allow us to create much better treatments for these things. And probably make some money while doing it.

Finally, a question I’ve been wondering for a while now is whether having a good theory of qualia could help with AI safety and existential risk. I think it probably can, by helping us see and avoid certain failure-modes.

 

Adam: How could understanding valence help make future AIs safer? (e.g. in defining how an AI should approach making us happy, or as a reinforcement mechanism for AI?)

Mike: Last year, I noted a few ways a better understanding of valence could help make future AIs safer on my blog. I’d point out a few notions in particular though:

  • If we understand how to measure valence, we could use this as part of a “sanity check” for AI behavior. If some proposed action would cause lots of suffering, maybe the AI shouldn’t do it.
  • Understanding consciousness & valence seems important for treating an AI humanely. We don’t want to inadvertently torture AIs – but how would we know?
  • Understanding consciousness & valence seems critically important for “raising the sanity waterline” on metaphysics. Right now, you can ask 10 AGI researchers about what consciousness is, or what has consciousness, or what level of abstraction to define value, and you’ll get at least 10 different answers. This is absolutely a recipe for trouble. But I think this is an avoidable mess if we get serious about understanding this stuff.

 

Adam: Why the information theoretical approach?

Mike: The way I would put it, there are two kinds of knowledge about valence: (1) how pain & pleasure work in the human brain, and (2) universal principles which apply to all conscious systems, whether they’re humans, dogs, dinosaurs, aliens, or conscious AIs.

It’s counter-intuitive, but I think these more general principles might be a lot easier to figure out than the human-specific stuff. Brains are complicated, but it could be that the laws of the universe, or regularities, which govern consciousness are pretty simple. That’s certainly been the case when we look at physics. For instance, my iPhone’s processor is super-complicated, but it runs on electricity, which itself actually obeys very simple & elegant laws.

Elsewhere I’ve argued that:

>Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction.

 

Adam: What do you think of Thomas A. Bass’s view of information theory – he thinks that (at least in many cases) it has not been easy to turn data into knowledge, and that there is a pathological attraction to information which is making us ‘sick’ – he calls it Information Pathology. If his view offers any useful insights to you concerning avoiding ‘Information Pathology’, what would they be?

Mike: Right, I would agree with Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked whether the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since the chip is much smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for – but it turned out that the methods they were using couldn’t cut it. This raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

 

Adam: Information-theory frameworks – what is your “Eight Problems” framework, and how does it contrast with Giulio Tononi’s Integrated Information Theory (IIT)? How might IIT help address valence in a principled manner? What is lacking in IIT, and how does your ‘Eight Problems’ framework address this?

Mike: IIT is great, but it’s incomplete. I think of it as *half* a theory of consciousness. My “Eight Problems for a new science of consciousness” framework describes what a “full stack” approach would look like, what IIT will have to do in order to become a full theory.

The two biggest problems IIT faces are that (1) it’s not compatible with physics, so we can’t actually apply it to any real physical systems, and (2) it says almost nothing about what its output means. Both of these are big problems! But IIT is also the best and only game in town in terms of quantitative theories of consciousness.

Principia Qualia aims to help fix IIT, and also to build a bridge between IIT and valence research. If IIT is right, and we can quantify conscious experiences, then how pleasant or unpleasant this experience is should be encoded into its corresponding mathematical object.

 

Adam: What are the three principles for a mathematical derivation of valence?

Mike: First, a few words about the larger context. Probably the most important question in consciousness research is whether consciousness is real, like an electromagnetic field is real, or an inherently complex, irreducible linguistic artifact, like “justice” or “life”. If consciousness is real, then there’s interesting stuff to discover about it, like there was interesting stuff to discover about quantum mechanics and gravity. But if consciousness isn’t real, then any attempt to ‘discover’ knowledge about it will fail, just like attempts to draw a crisp definition for ‘life’ (elan vital) failed.

If consciousness is real, then there’s a hidden cache of predictive knowledge waiting to be discovered. If consciousness isn’t real, then the harder we try to find patterns, the more elusive they’ll be – basically, we’ll just be talking in circles. David Chalmers refers to a similar distinction with his “Type-A vs Type-B Materialism”.

I’m a strong believer in consciousness realism, as are my research collaborators. The cool thing here is, if we assume that consciousness is real, a lot of things follow from this– like my “Eight Problems” framework. Throw in a couple more fairly modest assumptions, and we can start building a real science of qualia.

Anyway, the formal principles are the following:

  1. Consciousness can be quantified. (More formally, that for any conscious experience, there exists a mathematical object isomorphic to it.)
  2. There is some order, some rhyme & reason & elegance, to consciousness. (More formally, the state space of consciousness has a rich set of mathematical structures.)
  3. Valence is real. (More formally, valence is an ordered property of conscious systems.)

 

Basically, they combine to say: this thing we call ‘valence’ could have a relatively simple mathematical representation. Figuring out valence might not take an AGI several million years. Instead, it could be almost embarrassingly easy.
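Written out compactly (my paraphrase in notation of my own choosing, not the exact statement in Principia Qualia), the three principles amount to something like:

```latex
\begin{enumerate}
  \item \textbf{Quantifiability:} for every conscious experience $e$
        there exists a mathematical object $M(e)$ isomorphic to it,
        $e \cong M(e)$.
  \item \textbf{Qualia structuralism:} the state space
        $Q = \{\, M(e) \,\}$ carries rich mathematical structure,
        e.g.\ a metric $d : Q \times Q \to \mathbb{R}_{\ge 0}$.
  \item \textbf{Valence realism:} valence is an ordered property of
        conscious systems, i.e.\ there is a function
        $v : Q \to \mathbb{R}$ whose ordering reflects
        ``more pleasant than''.
\end{enumerate}
```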

 

Adam: Do Qualia Structuralism, Valence Structuralism and Valence Realism relate to the philosophy-of-physics principles of realism and structuralism? If so, is there an equivalent ontic Qualia Structuralism and Valence Structuralism?

Mike: “Structuralism” is many things to many contexts. I use it in a specifically mathematical way, to denote that the state space of qualia quite likely embodies many mathematical structures, or properties (such as being a metric space).

Re: your question about ontics, I tend to take the empirical route and evaluate claims based on their predictions whenever possible. I don’t think predictions change if we assume realism vs structuralism in physics, so maybe it doesn’t matter. But I can get back to you on this. 🙂

 

Adam: What about the Qualia Research Institute, which I’ve also recently heard about? :D It seems both you (Mike) and Andrés Gómez Emilsson are doing some interesting work there.

Mike: We know very little about consciousness. This is a problem, for various and increasing reasons – it’s upstream of a lot of futurist-related topics.

But nobody seems to know quite where to start unraveling this mystery. The way we talk about consciousness is stuck in “alchemy mode”– we catch glimpses of interesting patterns, but it’s unclear how to systematize this into a unified framework. How to turn ‘consciousness alchemy’ into ‘consciousness chemistry’, so to speak.

Qualia Research Institute is a research collective which is working on building a new “science of qualia”. Basically, we think our “full-stack” approach cuts through all the confusion around this topic and can generate hypotheses which are novel, falsifiable, and useful.

Right now, we’re small (myself, Andres, and a few others behind the scenes) but I’m proud of what we’ve accomplished so far, and we’ve got more exciting things in the pipeline. 🙂

Also see the 2nd part, and the 3rd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it on Patreon.