Gero, Singapore AI startup bags $2.2m to create a drug that helps extend human life

Congrats to Gero for the $2.2m of funding to create a drug that helps extend human life!

I did two interviews with Gero in 2019 at Undoing Aging – here, with Peter Fedichev on Quantifying Aging in Large Scale Human Studies:

And here with Ksenia Tsvetkova on Data Driven Longevity:

Doris Yu at Tech In Asia said:

The company observed that as population growth slows down, the average lifespan increases. For example, there will only be 250 million people older than 65 by the end of the decade in China. Countries like Singapore, meanwhile, are not able to attract enough migrants to help offset the aging population.

Gero then wants to provide a medical solution to help extend healthspan as well as improve the overall well-being and productivity of its future customers.

It’s trying to do so by collecting medical and genetic data via a repository of biological samples and creating a database of blood samples collected throughout the last 15 years of patients’ lives. Its proprietary AI platform was able to determine a type of protein that could help with rejuvenation if blocked or removed.

What problem is it solving? “Aging is the most important single risk factor behind the incidence of chronic diseases and death. […] We are ready to slow down – if not reverse – aging with experimental therapies,” Peter Fedichev, co-founder and CEO of Gero, told Tech in Asia.

The funding announcement reads:

Gero, a Singapore-based company that develops new drugs for ageing and other complicated disorders using its proprietary artificial intelligence (AI) platform, secured $2.2m in Series A funding.

The round, which brought total capital raised since founding to over $7.5m, was led by Bulba Ventures with participation from previous investors and serial entrepreneurs in the fields of pharmaceuticals, IT, and AI. The co-founder of Bulba Ventures Yury Melnichek joined Gero’s Board of Directors. The company will use the funds to further develop its platform.

Led by founder Peter Fedichev, Gero provides an AI-based platform for analyzing clinical and genetic data to identify treatments for some of the most complicated diseases, such as chronic aging-related diseases, mental disorders, and others. The company’s experts used large datasets of medical and genetic information from hundreds of thousands of people acquired via biobanks and created a proprietary database of blood samples collected throughout the last 15 years of the patients’ lives.

Using this data, the platform determined the protein that circulates in people’s blood whose removal or blockage should lead to rejuvenation. Subsequent experiments at National University of Singapore involved aged animals and demonstrated mortality delay (life-extension) and functional improvements after a single experimental treatment. In the future, this new drug could enable patients to recover after a stroke and could help cancer patients in their fight against accelerated ageing resulting from chemotherapy.

The platform is currently also being utilized to develop drugs in other areas: for example, the group’s efforts to find potential therapies for COVID-19, including those that could reduce mortality from complications related to ageing, have already attracted a great deal of attention from large pharmaceutical companies and leading global media organizations.

How science fails

There is a really interesting Aeon article on what bad science is, and how it fails.

What is Bad Science?
According to Imre Lakatos, science degenerates unless it is both theoretically and experimentally progressive. Can Lakatos’s ‘scientific programme’ approach, which incorporates merits of both Kuhnian and Popperian ideas, help solve this problem?

Is our current research tradition adequate and effective enough to solve seemingly intractable scientific problems in a timely manner (e.g. in foundational theoretical physics or climate science)?
Ideas are cheap, but backing them up _is expensive_: formulating sound hypotheses (main and auxiliary) that predict novel facts, and gathering experimental evidence aimed at confirming them, takes time and resources – which means, among other things, that ideal experimental progressiveness is sometimes not achievable.

A scientific programme is considered ‘degenerating’ if:
1) it’s theoretically degenerating because it doesn’t predict novel facts (it just accommodates existing facts) – no new forecasts
2) it’s experimentally degenerating because none of the predicted novel facts can be tested (e.g. string theory)

Lakatos’s ideas (that good science is both theoretically and experimentally progressive) may serve as groundwork for further maturing what it means to ‘do science’ when an existing dominant programme is no longer able to respond to accumulating anomalies – the situation that prompted Kuhn to write about changing scientific paradigms. Unlike Kuhn, however, Lakatos believed that a ‘gestalt-switch’ or scientific revolution should be driven by rationality rather than mob psychology.
Though a scientific programme which looks like it is degenerating may be just around the corner from a breakthrough…

For anyone seeking an unambiguously definitive demarcation criterion, this is a death-knell. On the one hand, scientists doggedly pursuing a degenerating research programme are guilty of an irrational commitment to bad science. But, on the other hand, these same scientists can legitimately argue that they’re behaving quite rationally, as their research programme ‘might still be true’, and salvation might lie just around the next corner (which, in the string theory programme, is typically represented by the particle collider that has yet to be built). Lakatos’s methodology doesn’t explicitly negate this argument, and there is likely no rationale that can.

Lakatos argued that it is up to individual scientists (or their institutions) to exercise some intellectual honesty, to own up to their own degenerating programmes’ shortcomings (or, at least, not ‘deny its poor public record’) and accept that they can’t rationally continue to flog a horse that appears, to all intents and purposes, to be quite dead. He accepted that: ‘It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk.’ He was also pretty clear on the consequences for those indulging in such self-deception: ‘Editors of scientific journals should refuse to publish their papers … Research foundations, too, should refuse money.’

This article is totally worth a read…

The Problem of Feral Cats

Feral cats kill about 1 million native animals per day in ecosystems which didn’t evolve to cope with cats. How should we deal with the problem of feral cats? I hear a lot of ‘kill ’em all’ [1]. When in HK I noticed a lot of cats with one ear slightly smaller – then found out that there were vans of vets capturing cats, de-sexing them, marking them by taking a small slice of their ear, then releasing them. I thought that this was a compassionate approach, though it may have cost more than just killing the cats.
This issue raises some interesting fundamental questions that humans often seem all too ready to answer with our amygdalas – it’s hard not to, it’s in our nature. Though we do realize that we humans have had the largest impact on the ecology – and that it’s our own fault feral cats are here. Despite it being humanity’s fault, the feral cat problem still remains. As long as there is a population of pet owners who won’t be 100% responsible for their cats, the feral cat problem will always exist. A foolproof morality pill for humans and their pets seems quite far off – so in the meantime, we can’t depend on changing cat and human behaviour.

To date, feral cat eradication has only been successful on small islands – not on mainlands. Surprisingly, it was accidentally found that low-level culling of feral cats may increase their numbers, based on observation in the forests of southern Tasmania – “Increases in minimum numbers of cats known to be alive ranged from 75% to 211% during the culling period, compared with pre- and post-cull estimates, and probably occurred due to influxes of new individuals after dominant resident cats were removed.”

A study by CSIRO, which advocates considering researching and eventually using gene drives, says:

So far, traditional controls like baiting have not been effective on cats. In fact, the only way land managers have been able to stop cats from getting at our native animals is to construct cat-proof fencing around reserve areas, like those managed by Australian Wildlife Conservancy, then removing all the cats inside and allowing native mammals to flourish. This isn’t considered sustainable in the long term and, outside the fences, this perfect storm of predatory behaviour has continued to darken our biodiversity landscape.

The benefit of gene drives is that they can reduce and eventually even eradicate feral cat populations without killing the cats, essentially by making it so that all feral cat offspring end up male.

…there is hope on the horizon—gene drive technology. Essentially, gene drives are systems that can bias genetic inheritance via sexual reproduction and allow a particular genetic trait to be passed on from a parent organism to all offspring, and therefore the ability of that trait to disperse through a population is greatly enhanced… Using this type of genetic modification (GM) technology, it becomes theoretically possible to introduce cats into the feral populations to produce only male offspring. Over time, the population would die out due to lack of breeding partners.
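The population dynamics behind this are easy to sketch in a toy model. The simulation below is purely illustrative – all parameters (litter size, carrying capacity, starting numbers) are made up by me, not drawn from real feral-cat demography or the CSIRO study – but it shows the basic mechanism: males carrying the drive sire only drive-carrier males, so the female population collapses over generations.

```python
import random

def simulate(generations=30, n_females=500, n_males=500, drive_males=50,
             litter=4, carrying_capacity=2000, seed=0):
    """Toy model of a male-biasing gene drive in a cat population.

    Wild-type matings produce ~50/50 male/female kittens; matings with a
    drive-carrier male produce only drive-carrier males. All parameters
    are illustrative, not real feral-cat demography.
    """
    rng = random.Random(seed)
    wt_males, females, carriers = n_males, n_females, drive_males
    history = []
    for gen in range(generations):
        total_males = wt_males + carriers
        if females == 0 or total_males == 0:
            history.append((gen, females, wt_males, carriers))
            break
        new_f = new_wt_m = new_carrier_m = 0
        for _ in range(females):
            # each female mates with a randomly chosen male
            with_carrier = rng.random() < carriers / total_males
            for _ in range(litter):
                if with_carrier:
                    new_carrier_m += 1   # drive: every kitten is a carrier male
                elif rng.random() < 0.5:
                    new_wt_m += 1
                else:
                    new_f += 1
        # crude density dependence: scale the litter down to carrying capacity
        total = new_f + new_wt_m + new_carrier_m
        if total > carrying_capacity:
            scale = carrying_capacity / total
            new_f = int(new_f * scale)
            new_wt_m = int(new_wt_m * scale)
            new_carrier_m = int(new_carrier_m * scale)
        females, wt_males, carriers = new_f, new_wt_m, new_carrier_m
        history.append((gen, females, wt_males, carriers))
    return history

hist = simulate()
print(hist[-1])  # females collapse as carrier males spread through the population
```

Even with only ~9% of males initially carrying the drive, the carrier fraction compounds each generation, and once most matings involve a carrier, female births dry up and the population can no longer sustain itself.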

Research into gene drives and broader genetics can help solve a lot of other related problems. I don’t assume that future tech will be able to solve all our problems, but if we sequenced as many species as possible and kept highly accurate and detailed records of ecosystems, this may help to rejuvenate or even revive species and their habitats at some time in the future. Genetics research (especially gene drives and CRISPR) has proven to be very powerful – so from the point of view of wildlife / ecosystem preservation, a catalog-and-revive strategy is surely worthy of serious consideration. One might see it as restoration ecology + time travel.

There are a myriad of considerations but what are the fundamental, ultimate goals of mitigating the negative impacts of feral cats? Two goals may conflict – species preservation and overall suffering reduction. Should we see single goals as totalizing narratives – in practice perhaps not – but great fodder for thought experiments:
1) Species preservation: If this is the ultimate goal, then, acknowledging that the most upstream cause of feral cats is humans, we could impose staggeringly huge fines on people for not being responsible pet owners – and use that to fund studies and programs for ecosystem preservation. Given current technology we can’t resurrect long-gone species, though we can try to more deeply catalog species genomes and ecosystem configurations, in the hope that one day, once we solve human irrationality, we may be in a position to engage in efficient, comprehensive re-wilding programs. Incidentally, we may wish to curb the population of pet lovers (for the record, that’s a joke :))
2) Suffering reduction: If this is the ultimate goal, then that really changes things up – there is a ridiculous amount of suffering in the wild, as both David Pearce and Richard Dawkins show. Should we eradicate nature? I’ll stop there.

The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.
– Richard Dawkins, River Out of Eden: A Darwinian View of Life

Interview with David Pearce on ‘Wild animal suffering – Ethics of Wildlife Management and Conservation Biology’

David Pearce advocates for a benign compassionate stewardship of nature, alleviating suffering in the near and long term futures using high technology (assuming that ultimately the whole world will be computationally accessible to the micromanagement needed for benign hyper-stewardship of nature).

[1] A discussion in a FB group ‘Australian Freethinkers’ – the OP was “What do you think about the feral cats in Australia?

I hear farmers shoot them. They are huge.

They can’t be doing anything good for small rare marsupials.

Should we be aiming to kill them all?”

Event: Stelarc – Contingent & Contestable Futures


Synopsis: In the age of the chimera, uncertainty and ambivalence generate unexpected anxieties. The dead, the near-dead, the brain dead, the yet to be born, the partially living and synthetic life all now share a material and proximal existence, with other living bodies, microbial life, operational machines and executable and viral code. Digital objects proliferate, contaminating the human biome. Bodies become end effectors for other bodies in other places and for machines elsewhere, generating interactive loops and recursive choreographies. There was always a ghost in the machine, but not as a vital force that animates but rather as a fading attestation of the human.


5.45 – Meet, greet, and eat… pub food – it’s actually not bad! Feel free to come early to take advantage of the $8.50 pints from 4.00–6.00.
6.40 – Adam Ford – Introduction
6.50 – Stelarc – Talk: Contingent & Contestable Futures

Where: The Clyde Hotel (upstairs in function room) 385 Cardigan St, Carlton VIC 3053 – bring your appetite, there is a good menu:
When: Thursday July 25th – 5.45 onwards, though a few of us will be there earlier (say 5pm) to take advantage of the $8.50 pints (from 4pm onwards – if you say you are with STF you will get $8.50 pints all night)

*p.s. the event will likely be videoed – if you have any issues with being seen or heard on YouTube, please let us know.


Stelarc experiments with alternative anatomical architectures. His performances incorporate Prosthetics, Robotics, VR and Biotechnology. He is presently surgically constructing and augmenting an ear on his arm. In 1996 he was made an Honorary Professor of Art and Robotics, Carnegie Mellon University and in 2002 was awarded an Honorary Doctorate of Laws by Monash University. In 2010 he was awarded the Ars Electronica Hybrid Arts Prize. In 2015 he received the Australia Council’s Emerging and Experimental Arts Award. In 2016 he was awarded an Honorary Doctorate from the Ionian University, Corfu. His artwork is represented by Scott Livesey Galleries.

Judith Campisi – Senolytics for Healthy Longevity

I had the absolute privilege of interviewing Judith Campisi at the Undoing Aging conference in Berlin.  She was so sweet and kind – it was really a pleasure to spend time with her discussing senolytics, regenerative medicine, and the anti-aging movement.




Judith Campisi was humble, open minded, and careful not to overstate the importance of senolytics, and rejuvenation therapy in general.  Though she really is someone who has made an absolutely huge impact in anti-aging research.  I couldn’t have said it better than Reason at Fight Aging!

As one of the authors of the initial SENS position paper, published many years ago now, Judith Campisi is one of the small number of people who is able to say that she was right all along about the value of targeted removal of senescent cells, and that it would prove to be a viable approach to the treatment of aging as a medical condition. Now that the rest of the research community has been convinced of this point – the evidence from animal studies really is robust and overwhelming – the senescent cell clearance therapies known as senolytics are shaping up to be the first legitimate, real, working, widely available form of rejuvenation therapy.

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, convergence in groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting stuff wrong – and humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.
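The last point can be made concrete with a toy estimation model (the setup is my own illustration, not Joscha’s): agents with different skill levels estimate a quantity, and ‘convergence’ is modeled crudely as the group adopting the mean estimate. On average, that consensus is worse than the most skilled individual acting alone.

```python
import random

def group_vs_best(n_agents=25, truth=100.0, trials=1000, seed=1):
    """Toy model: agents estimate a quantity with different skill levels.

    'Convergence' is modeled as the group adopting the mean estimate.
    We compare the consensus error against the best individual's error.
    Assumptions (mine): Gaussian noise, skill = per-agent noise std.
    """
    rng = random.Random(seed)
    consensus_err = best_err = 0.0
    for _ in range(trials):
        skills = [rng.uniform(1.0, 30.0) for _ in range(n_agents)]  # noise std
        estimates = [rng.gauss(truth, s) for s in skills]
        consensus = sum(estimates) / n_agents
        consensus_err += abs(consensus - truth)
        # the most skilled agent (lowest noise) typically beats the consensus
        best_agent = min(range(n_agents), key=lambda i: skills[i])
        best_err += abs(estimates[best_agent] - truth)
    return consensus_err / trials, best_err / trials

c_err, b_err = group_vs_best()
print(c_err, b_err)  # mean consensus error exceeds the best agent's error
```

The wisdom-of-crowds effect does beat the *average* individual here, but when skill varies widely, equal-weight convergence drags the group below its smartest member – which is the trade-off the talk points at.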


Joscha highlights the controversy of James Damore being fired for circulating a memo about biological differences between men and women affecting their abilities as engineers – where some of the memo’s arguments may be correct. Regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because they thought supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’, on imparting ideas and facts which everyone can judge autonomously to form their own opinions – in the view that in order to craft the best solutions we need to have the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently – making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who worked and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

The Generative Universe Hypothesis

Remembering Lee Smolin’s theory of the dynamical evolution of the universe, where through a form of natural selection black holes spawn new universes, I thought that if a superintelligent civilization understood its mechanics, they may try to control it, engineering or biasing the physics in the spawned universe – and possibly migrating to this new universe. Say they found out how to communicate along the parent/child relations between universes – it may be an energy-efficient way to achieve some of the outcomes of simulations (as described in Nick Bostrom’s Simulation Hypothesis).

The idea of moving to a more hospitable universe could be such a strong attractor to post-singularity civs that, once discovered, it may be an obvious choice for a variety of reasons:
A) Better computation by faster/easier networking – say, for instance, that the speed of light were a lot faster, and information could travel over longer distances than in this universe – then network speed may not be as much of a hindrance to developing larger civs, distributed computation, and mega-scale galactic brains.
B) As a means of escape – if it so happened that neighbouring alien civs were close enough to pose a threat, then escaping this universe to a newly generated universe could be ideal – especially if one could close the door behind, or lay a trap at the opening to the generated universe to capture probes or ships that weren’t one’s own.
C) Mere curiosity – it may not be full-blown utility maximization that is the lone object of the endeavor; it could be simple curiosity about how (stable) universes might operate if fine-tuned differently. (How far can you take simulations in this universe to test how hypothetical universes could operate, without actually generating and testing the universes?)
D) To escape the ultimate fate of this universe – according to the most popular current estimates, we have about 10^100 years until the heat death of this universe.
E) Better computation via a ‘cooler’ environment – a colder yet stable universe to compute in (similar to the previous point and the first point). Some hypothesise that civs may sleep until the universe gets colder, when computation can be done far more efficiently – these civs long for the heat death so that they can really get started on whatever projects they have in mind that require the computing power only made possible by the extremely low temperatures abundantly available at or near the heat death. Well, what if you could engineer a universe to achieve temperatures far lower than those available in this universe, while also keeping the universe relatively steady (say that’s something that’s needed)? If that could be achieved sooner by a generative universe solution than by waiting around for this universe’s heat death, then why not?
F) Fault tolerance – distributing a civ across (generated) universes may preserve the civ against the risk of the current one going unexpectedly pear-shaped – the more fault tolerance the merrier.
G) Load balancing – if it’s possible to communicate across parent/child relationships, then civs may generate universes merely to act as containers for computation, helping solve really, really big problems far faster, or scaffolding extremely detailed virtual realities far more efficiently – less lag, less jitter – deeper immersion!
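The intuition behind point E is usually framed via Landauer’s principle: the minimum energy needed to erase one bit of information scales linearly with temperature (E = kT ln 2), so a colder environment makes computation fundamentally cheaper. A quick back-of-the-envelope check:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temp_kelvin):
    """Minimum energy (J) to erase one bit at temperature T, per Landauer's principle."""
    return K_B * temp_kelvin * math.log(2)

# The limit scales linearly with T: a given energy budget erases
# 100x more bits at 3 K than at 300 K.
print(landauer_limit(300.0))  # ~2.9e-21 J per bit at room temperature
print(landauer_limit(3.0))
```

This is the physics behind the ‘aestivation’ idea mentioned above – the linear scaling means every order-of-magnitude drop in temperature buys an order of magnitude more erasures per joule, at least down to wherever the idealized limit stops being the binding constraint.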

Perhaps we will find evidence of alien civs around black holes generating and testing new universes before taking the leap to transcend, so to speak.

Why leave the future evolution of universes up to blind natural selection? Advanced post-singularity alien civs might hypothesize an extremely strict set of criteria to allow for the formation of the right kinds of matter and energy in child universes – to either mirror our own universe or, more likely, take it up a notch or two, to new levels of interestingness. While computational capacity is limited when constrained by the laws of this containing universe, spawning a new universe could allow for more interesting and efficient computation.

It may also be a great way to escape the heat death of the universe 🙂

I spoke about the idea with Andrew Arnel a while ago while out for a drink, where I came up with a really cool name for this idea – though I can’t remember what it was 🙂  perhaps it only sounds good after a few beers – perhaps it was something like the ‘generative’, spawnulation or ‘genulation’ hypothesis…


Update: also more recently I commented about this idea on a FB post by Mike Johnson:
I may have a similar idea relating to Smolin’s Darwinistic black-hole universe generation. Why build simulations when it would be more efficient to actually generate new universes not computationally bounded by, or contained in, the originating universe – by nudging the physics that would emerge in the new universe to be more able to support flourishing life, more computation, and wider novelty possibility spaces.

Furthermore, I spoke to Sundance Bilson-Thompson (a physicist in Australia who was supervised by Lee Smolin) about whether the physics in the child universes is influenced by local phenomena surrounding the black hole in the parent universe, or by global phenomena of the parent universe. He said it was global phenomena, based on something to do with the way stars are formed. This might lower my credence in the Generative Universe hypothesis as it pertains to Lee Smolin’s idea – though I need to find out whether the nature of the generated child universes could still be nudged or engineered.

Why Technology Favors a Singleton over a Tyranny

Is democracy losing its credibility – will it cede to dictatorship? Will AI out-compete us in all areas of economic usefulness, making us the future useless class?

It’s difficult to get around the bottlenecks of networking and coordination in distributed democracies. In the past, naturally distributed systems, being scattered, were more redundant and in many ways fault tolerant and adaptive – though these payoffs may dwindle for most of us if humans become less and less able to compete with Ex Machina. If the relative efficiency of democracies vs dictatorships tips towards the latter, nudging a transition to centralized dictatorships, then while some distribution & coordination problems get solved, the concentration of resource allocation may be exaggerated beyond historical examples of tyranny. Where the ‘once was proletariat’ – now the new ‘useless class’ – has little to no utility to the concentration of power (the top 0.001%), the would-be tyrants will likely give up on ruling and tyrannizing, and instead find it easier to cull the resource-hungry and rights-demanding horde – more efficient that way. Ethics is fundamental to fair progress – ethics is philosophy with a deadline creeping closer. What can we do to increase the odds of a future where the value of life is evaluated beyond its economic usefulness?
I found ‘Why Technology Favors Tyranny’ by Yuval Noah Harari a good read – I enjoy his writing, and it provokes me to think. About 5 years ago I did the ‘A Brief History of Humankind’ course via Coursera – urging my friends to join me. Since then Yuval has taken the world by storm.
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. […] We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. […]
– Why Technology Favors Tyranny
I assume AI superintelligence is highly probable if we don’t go extinct first. For the same reason that the proletariat becomes useless, I think the AI–human combination will ultimately become useless too, and cede to superintelligent AI – so all humans become useless. The bourgeoisie elite may initially feel safe in the idea that they don’t need to be useful, they just need to maintain control of power. Though the sliding relative dumbness of the bourgeoisie compared to superintelligence will worry them – perhaps not long after wiping out the useless class, the elite bourgeoisie will see the importance of the AI control problem, and that their days are numbered too. At which point, will they see ethics and the value of life beyond economic usefulness as important?
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
– Why Technology Favors Tyranny
Yuval Noah Harari believes that we could be heading for a technologically enabled tyranny as AI automates all jobs away – and we become the useless class. Though if superintelligence is likely, then humans will likely be a bottleneck in any AI/human hybrid use case – if tyranny happens, it won’t last for long. What use is a useless class to the elite?

Technology without ethics favors singleton utility monsters – not a tyranny. What use is it to tyrannize over a useless class?

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter, energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (as we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicist would likely think that even the mind operates according to physical rules.  Being a physicalist according to John means you think everything is governed by rules, physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other form of public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests with an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism, defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness: the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning: why should we think numbers are entities in the natural world? He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism – a “one substance” view of the nature of reality, as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of “physical” and the meaning of physicalism have been debated. Physicalism is closely related to materialism; it grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include the philosophical zombie argument and the multiple observers argument – that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence, defining their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true – and it’s hard to construct a coherent theory of consciousness without allowing panpsychism – then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world. First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made. Second, it would obviously have huge economic & ethical uses. Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’. Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andrés) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully. A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work. One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible – I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on. Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethicsethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either.
My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂 One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t – and literally can’t, from a competitive standpoint – care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.
Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.

Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong- that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful. Second, people don’t realize how important a good understanding of qualia & valence are. They’re upstream of basically everything interesting and desirable. Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says

historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities .. naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’

‘Don’t go here!’ – But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂
Also see the 1st part and the 2nd part of this interview series. This interview with Christof Koch will also likely be of interest.
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.