How science fails

There is a really interesting Aeon article on what bad science is, and how it fails.

What is Bad Science?
According to Imre Lakatos, science degenerates unless it is both theoretically and experimentally progressive. Can Lakatos’s ‘scientific programme’ approach, which incorporates merits of both Kuhnian and Popperian ideas, help solve this problem?

Is our current research tradition adequate and effective enough to solve seemingly intractable scientific problems in a timely manner (e.g. in foundational theoretical physics or climate science)?
Ideas are cheap, but backing them up is expensive: formulating sound hypotheses (main and auxiliary) that predict novel facts, and gathering the experimental evidence to confirm those predictions, takes time and resources. Among other things, this means that ideal experimental progressiveness is sometimes not achievable.

A scientific programme is considered ‘degenerating’ if:
1) it’s theoretically degenerating if it doesn’t predict novel facts (it merely accommodates existing facts) – no new forecasts;
2) it’s experimentally degenerating if none of the predicted novel facts can be tested (e.g. string theory).

Lakatos’s ideas (that good science is both theoretically and experimentally progressive) may serve as groundwork for further maturing what it means to ‘do science’ when an existing dominant programme is no longer able to respond to accumulating anomalies – which is why Kuhn wrote about changing scientific paradigms. But unlike Kuhn, Lakatos believed that a ‘gestalt-switch’ or scientific revolution should be driven by rationality rather than mob psychology.
Though a scientific programme which looks like it is degenerating may be just around the corner from a breakthrough…

For anyone seeking an unambiguously definitive demarcation criterion, this is a death-knell. On the one hand, scientists doggedly pursuing a degenerating research programme are guilty of an irrational commitment to bad science. But, on the other hand, these same scientists can legitimately argue that they’re behaving quite rationally, as their research programme ‘might still be true’, and salvation might lie just around the next corner (which, in the string theory programme, is typically represented by the particle collider that has yet to be built). Lakatos’s methodology doesn’t explicitly negate this argument, and there is likely no rationale that can.

Lakatos argued that it is up to individual scientists (or their institutions) to exercise some intellectual honesty, to own up to their own degenerating programmes’ shortcomings (or, at least, not ‘deny its poor public record’) and accept that they can’t rationally continue to flog a horse that appears, to all intents and purposes, to be quite dead. He accepted that: ‘It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk.’ He was also pretty clear on the consequences for those indulging in such self-deception: ‘Editors of scientific journals should refuse to publish their papers … Research foundations, too, should refuse money.’

This article is totally worth a read…

John Wilkins – Comprehension and Compression

“In short, data is not knowledge; knowledge is not comprehension; comprehension is not wisdom”

The standard account of understanding has been, since Aristotle, knowledge of the causes of an event or effect. However, this account fails in cases where the subject understood is not causal. In this paper I offer an account of understanding as pattern recognition in large sets of data without the presumption that the patterns indicate causal chains.

All nervous systems by nature desire to process information. Consequently, entities with nervous systems tend to find information everywhere, and on the principle that if some is good a lot is better, we have come up with “Big Data”, which is often suggested as the solution to the problems of one science or another, although it is unclear exactly what counts as big data and how it is supposed to do this. In this paper I will argue (i) that understanding does not and cannot come from larger and higher dimensionality data sets, but from structure in the data that can be literally comprehended; and (ii) that big data multiplies uncertainties unless it can be summarized. In short, data is not knowledge; knowledge is not comprehension; comprehension is not wisdom.

Slides can be found here:

Event was held at Melbourne Uni in 2019:


Consider supporting SciFuture by Subscribing to the SciFuture YouTube channel:


Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter, energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicist would likely think that even the mind operates according to physical rules.  Being a physicalist according to John means you think everything is governed by rules, physical rules – and that there is an ideal language that can be used to describe all this.

Note that John is also a deontologist. Perhaps there should exist an ideal language that can fully describe ethics – does this mean that, ideally, there is no need for utilitarianism? I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests with an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of his reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism, defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness: the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly small. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning: why should we think numbers are entities in the natural world? He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism – a “one substance” view of the nature of reality, as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of ‘physical’ and the meaning of physicalism have been debated. Physicalism is closely related to materialism, and grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include the philosophical zombie argument and the multiple observers argument – that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience – the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation. It is an important aspect of conscious experience, always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least… THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section covers Consciousness, Qualia, Valence & Intelligence).


Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.


Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.


Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.


Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure – but we don’t know why; we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.


Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it – it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong than ways things could go well.


Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really, really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assumes this.
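The geometric intuition behind the fragility claim can be made concrete with a toy calculation (my own illustration, with made-up numbers, not a model from the interview): if the ‘special zone’ of human values is a small hypercube inside a unit hypercube of possible futures, the chance that an arbitrary point lands inside it collapses exponentially with the number of value dimensions.

```python
import random

# Toy illustration of the Complexity of Value Thesis (hypothetical
# numbers): "human values" occupy a hypercube of side 0.1 centred in a
# unit hypercube of possible futures. The probability that a randomly
# chosen point (a random mind/future) lands inside it is 0.1 ** dims,
# which collapses exponentially as the number of dimensions grows.

def hit_rate(dims: int, side: float = 0.1, trials: int = 100_000,
             seed: int = 0) -> float:
    """Monte Carlo estimate of the chance a random future is 'valuable'."""
    rng = random.Random(seed)
    hits = sum(
        all(abs(rng.random() - 0.5) <= side / 2 for _ in range(dims))
        for _ in range(trials)
    )
    return hits / trials

for dims in (1, 2, 5, 10):
    print(f"{dims:>3} dimensions: estimated hit rate {hit_rate(dims):.6f} "
          f"(exact: {0.1 ** dims:.1e})")
```

At the 1000 dimensions of the thought experiment the exact probability is 10^-1000, effectively zero – which is the quantitative sense in which ‘one wrong step’ is claimed to leave nothing of value.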

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.


Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. I would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard-optimize on current human values.
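As a loose sketch (my own toy proxy, not Schmidhuber’s actual formulation), one can approximate how much a stimulus relates to an agent’s ‘compression library’ as the bytes saved when the stimulus is compressed together with prior experience rather than on its own: input that shares learnable structure with what the agent has already seen compresses almost for free, while patternless noise yields essentially nothing.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def shared_structure(history: bytes, stimulus: bytes) -> int:
    # Bytes saved by compressing the stimulus jointly with prior
    # experience, versus compressing each separately. A crude proxy
    # for how much the stimulus overlaps the "compression library".
    joint = compressed_size(history + stimulus)
    return compressed_size(history) + compressed_size(stimulus) - joint

history = b"the cat sat on the mat. " * 40       # prior experience
familiar = b"the cat sat on the mat. " * 4       # already-compressed pattern
noise = bytes(range(256))                        # incompressible, unrelated

print("familiar stimulus, bytes saved:", shared_structure(history, familiar))
print("pure noise, bytes saved:", shared_structure(history, noise))
```

Schmidhuber’s point is that the most interesting inputs sit between these two extremes: neither already compressed nor incompressible, but offering fresh structure the compressor can still learn.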

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.


Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, and parachute out of planes, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, like we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.


Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns” – we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.



Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So – I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said, I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.


The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).


Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approach to pain management and treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.


Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).


If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd part and 3rd part of this interview series (conducted by Andrés Gómez Emilsson); this interview with Christof Koch will also likely be of interest.


Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Marching for Science with John Wilkins – a perspective from Philosophy of Science

Recent video interview with John Wilkins!

  • What should marchers for science advocate for (if anything)? Which way would you try to bias the economy of attention to science?
  • Should scientists (as individuals) be advocates for particular causes – and should the scientific enterprise advocate for particular causes?
  • The popular hashtag #AlternativeFacts and Epistemic Relativism – How about an #AlternativeHypotheses hashtag (#AltHype for short 😀 ?)
  • Some scientists have concerns about being involved directly – other scientists say they should have a voice and be heard on issues that matter, and should stand up and complain when public policy is based on erroneous logic, faulty assumptions, or bad science. What’s your view? What are the risks?

John Wilkins is a historian and philosopher of science, especially biology. Apple tragic. Pratchett fan. Curmudgeon.

We will cover scientific realism vs structuralism in another video in the near future!
Topics will include:

  • Scientific Realism vs Scientific Structuralism (or Structuralism for short)
  • Ontic (OSR) vs Epistemic (ESR)
  • Does the claim that one can know only the abstract structure of the world trivialize scientific knowledge? (Epistemic Structural Realism and Ontic Structural Realism)
  • If we are in principle happy to accept scientific models (especially those that have graduated from hypothesis to theory) as structurally real – does this give us reason never to be overconfident about our assumptions?

Come to the Science March in Melbourne on April 22nd 2017 – bring your friends too 😀

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales – starting from the earliest days of the planet, when there was no life, only physical and chemical structures and processes (volcanic eruptions, asteroid impacts, solar and stellar radiation, and many others), or perhaps even earlier, when there was only a dust cloud in this part of the solar system.

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development
– evolution of new types of evolution
– learning of new ways to learn
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence)
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?)
– new forms of learning supporting new forms of evolution, and new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate-selection, mating and nurturing can be influenced by much learning)
…. and ways in which social cultural evolution add to the mix

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor
morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods are the most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables, showing causal relationships
– probabilifying relationships – tests whose evidence can raise the probability of a hypothesis
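The “probabilifying” idea – evidence raising the probability of a hypothesis through a causal Bayesian network – can be sketched in a few lines. This is a toy rain/sprinkler/wet-grass network with made-up numbers, not anything from the talk:

```python
# Toy causal Bayesian network: Rain -> WetGrass <- Sprinkler.
# All probabilities below are illustrative, made-up numbers.

P_RAIN = 0.2
P_SPRINKLER = 0.3
P_WET = {  # P(wet | rain, sprinkler)
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Chain rule factored along the network's causal structure."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER if sprinkler else 1 - P_SPRINKLER
    pw = P_WET[(rain, sprinkler)]
    return p * (pw if wet else 1 - pw)

# Observing 'the grass is wet' raises the probability of rain:
prior = P_RAIN
posterior = (
    sum(joint(True, s, True) for s in (True, False))
    / sum(joint(r, s, True) for r in (True, False) for s in (True, False))
)
print(round(prior, 3), round(posterior, 3))
```

Here the evidence “wet grass” is a test whose observation raises P(rain) from 0.2 to roughly 0.46 – exactly the probabilifying relationship the notes describe.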

05:23 Does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification.) Sure, we can discuss the difference between the concepts – but what is the difference in implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

9:38 Does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent; until it’s done it’s an open question – Kevin/John: logically possible, the question is whether we will do it – Greg Restall: don’t know; can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required, the system will need to represent itself, etc. (Gödel ≠ mysticism; automation ≠ representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to any formal representation for automating science – meaning the formal representation can’t do everything – so what is the scope of a formal system that can automate science? What will the formal representation and the automated-science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it “logic” (if logic means deduction) is perhaps misleading – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in humans reasoning probabilistically – it introduced the concept of not seeing everything in black and white. People often get statistical problems wrong when asked to answer intuitively. Is the technology likely to have a broad impact?

26:26 Human thinking and subjective statistical reasoning – the mismatch between the public communicative act, which often sounds like Boolean logic, and our internal representation: a mismatch between how we reason internally and the tools we have for externally representing likelihoods

29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can’t be automated. There are contrastive explanations and conceptual-space theory, by which you can characterise a question – causal explanation using causal Bayesian networks (and a proposed explanation must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science, you can automate philosophy.

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts, going back to Leibniz: complex notions can be boiled down to simpler primitive notions, and by grinding out these primitive notions computationally they are making genuine discoveries.
Weak Reading: can some philosophy be automated – yes
Strong reading: can all of philosophy be automated? There seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years.

35:41 If what we’re interested in is representing and automating the production of reasoning formally (not only evaluating it), then as long as the domain is one where we are making claims and are interested in the inferential connections between them, a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding Creationism: is it better to think of it as a poor hypothesis or as non-science? – Not an exclusive disjunction: something can start as a poor hypothesis and later become non-science, or science – it depends on the stage at the time. Science rules things out of contention, and at some point creationism had not yet been ruled out.

38:16 (Rohan McLeod) Is economics a science, does it have the potential to be one, or is it intrinsically not possible for it to be a science – and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses (“h1 is good, h2 is bad”) – economics may have irreducible normative components but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements – “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – incredibly complex and hard to model; without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is that it gets treated like a natural system (as in physics or chemistry), which hides all the values being smuggled in – commitments and values which are operative and contribute to the configuration of the system. A contention: should economics be a science? (Kevin: yes; Stephen: no) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science & philosophy? – We need intuitions to drive things forward – intuition in the abduction area, driving inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to process how to drive a car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – Where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved near or far. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek-style transporters: the lack of physical continuity. Greg discusses theories of intelligence – planes fly as birds do, but planes are not birds; they are different.

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding, etc., the organism will do as it damn well pleases. Biology will defeat simple models. Also Hull’s rule: no matter what the law in biology is, there is an exception (including Hull’s rule itself) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t even simulate a cell. Kevin objects.

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is inductive projectability: we’ve observed it in these cases, therefore we expect it in this one – and it fades out relatively rapidly as the degree of relatedness decreases

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious, because there is no such thing as consciousness. Kevin is with Dennett: information-processing function is what consciousness supervenes upon.
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency; if the machine is worried about being turned off, or complains about getting bored, then we are onto something.

Bayeswatch – The Pitfalls of Bayesian Reasoning – Chris Guest

Bayesian inference is a useful tool in solving challenging problems in many fields of uncertainty. However, inferential arguments presented with a Bayesian formalism should be subject to the same critical scrutiny that we give to informal arguments. After an introduction to Bayes’ theorem, some examples of its misuse in history and theology will be discussed.
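For readers new to it, Bayes’ theorem itself fits in one line of code. The disease-screening numbers below are the standard textbook illustration, not figures from Chris’s talk:

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]"""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A disease with 1% prevalence; a test with 90% sensitivity and a
# 5% false-positive rate. A positive result is weaker evidence than
# intuition suggests:
posterior = bayes_posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))
```

Even a “positive” test here leaves the posterior around 0.15 – the base-rate neglect that makes informal Bayesian-sounding arguments so easy to misuse.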

Chris is a software developer with an academic background in Philosophy, Mathematics and Machine Learning. He is also President of the Australian Skeptics Victorian Branch. Chris is interested in applying critical reasoning to boundary problems in skepticism and is involved in consumer complaints and skeptical advocacy.


Talk was held at the Philosophy of Science Conference in Melbourne 2014

Video can be found here.

Science vs Pseudoscience – Kevin Korb

Science has a certain common core, especially a reliance on empirical methods of assessing hypotheses. Pseudosciences have little in common but their negation: they are not science.
They reject meaningful empirical assessment in some way or another. Popper proposed a clear demarcation criterion for Science v Rubbish: Falsifiability. However, his criterion has not stood the test of time. There are no definitive arguments against any pseudoscience, any more than against extreme skepticism in general, but there are clear indicators of phoniness.


Science v Non-science – what’s the point? Possible goals for distinguishing between them: rhetorical, political, social, or methodological – aiming at identifying methodological virtues and vices, and improving practice. How to proceed? Traditional: propose and test necessary and sufficient conditions for being science. Less ambitious: collect prominent characteristics that support a “family resemblance”.

What is Science?

Science is something like the organized (social, intersubjective) attempt to acquire knowledge about the world through interacting with the world. In the Western tradition, this began with the pre-Socratic philosophers and is especially associated with Aristotle.

Nature of Science – science contrasts with: Learning: individuals learn about the world; their brains are wired for that. Mathematics/deduction: a handmaid to science, but powerless to teach us about the world on its own. Dogma, ideology, faith: these may be crucial to driving even scientific projects forward (as are good meals, sleep, etc.), but since they are by definition not tested by evidence, they are not themselves science.

A Potted History of the Philosophy of Science

Wissenschaftsphilosophie – the Vienna Circle. Early 20th-century major scientific success stories: Charles Darwin (evolutionary biology), Gottlob Frege (formal logic), Albert Einstein (physics). The sciences were showing themselves to be the most successful human project ever undertaken. In Vienna a group of great philosophers asked themselves: Why? How did this happen? With the Vienna Circle, philosophy of science became a discipline attempting to answer these questions.

The Vienna Circle & Logical Positivism: the beginning was the appointment of Ernst Mach as Professor of the Philosophy of the Inductive Sciences at the University of Vienna in 1895. Thereafter, Moritz Schlick founded the Vienna Circle (and Logical Positivism) in 1922. Through the helpful activities of Adolf Hitler, the leading philosophers of science introduced the Vienna Circle’s ideas throughout the English-speaking world.
Vienna Circle and successors: Ernst Mach, Moritz Schlick, Rudolf Carnap, Hans Reichenbach, Karl Popper, Paul Feyerabend, Noretta Koertge – Positivismus, Falsifikationismus, Anarchismus (positivism, falsificationism, anarchism).
The Vienna Circle – Basic Principles: philosophy as logical analysis; the logical foundation of science lies in observation & experiment – e.g., Rudolf Carnap’s 1928 title: The Logical Construction of the World! Key: the Verifiability Criterion of Meaning – what cannot be proven empirically is meaningless (e.g., metaphysics, religion, superstition). {h, b ⊢ e1, …, en; e1, …, en} verifies h.
Karl Popper objects: many scientific hypotheses are universal, e.g. “light always bends near large masses”. But {h, b ⊢ e1, …, e∞; e1, …, e∞} is not even a possible state of affairs. Aside from that, metaphysics is an ineliminable part of science; all science has fundamental presuppositions.
Karl Popper – Falsificationism. Key: a demarcation criterion for science – what cannot be falsified empirically is unscientific (e.g., Marxism, religion, psychoanalysis). {h, b ⊢ e; ¬e} falsifies h. Theses: we can make scientific (or social) progress by alternating between bold conjectures and refutations. The ideal (severe) test is guaranteed to falsify one of two (or more) alternative conjectures. Progress: refuting more and more theories, not accumulating more and more knowledge.
Imre Lakatos – Sophisticated Falsificationism. {h, b ⊢ e; ¬e} falsifies (h & b). Hypotheses stand or fall in networks – networked to each other and to theories of measurement, etc. – i.e. research programmes. If a research programme makes novel predictions that come up true, it is progressive; if it lies in a sea of anomalies and is dominated by ad hoc saving manoeuvres, it is degenerating. Unfortunately, there is no definite point at which a degenerating research programme rationally needs to be abandoned.
Thomas Kuhn Scientific Revolutions In The Structure of Scientific Revolutions (1962) he introduced the idea that science moves (not: progresses) from “normal science” through a sea of anomalies to “revolutionary science” to a new “normal science” – from “paradigm” to “paradigm”. According to Kuhn, the process is not rational, but explained in terms of psychology, social processes and power relationships.
Paul Feyerabend Epistemic Anarchy In 1958 Feyerabend went to Berkeley, where he turned against Popper, promoting “Epistemological Anarchism” instead (Against Method, 1974). He embraced the inability to reject research programmes, promoting methodological pluralism instead. Denunciations of witchcraft, pseudosciences, etc. are mere expressions of prejudice.
Ludwig Wittgenstein – Open Concepts: natural-language concepts have an “open structure”, based on family resemblance, not definition. One of Wittgenstein’s examples: define “game” in terms of necessary and sufficient conditions; now let’s play a game involving changing those conditions… Socrates’ game of taking some sophist’s definition of “love”, “knowledge” or “good” and poking holes in it could be played forever – hence Socrates’ phony humility in claiming that he knew nothing. The reality is that our understanding and use of language doesn’t depend on definitions.
“Science” is an Open Concept. Instead of assembling inadequate necessary and sufficient conditions, let’s collect examples of science and non-science and see what the former share in family resemblances (leaving problematic cases for later): Physics, Mathematics, Epidemiology, Medicine, Paleontology, Religion, Climatology, Mining, Evolution Theory, Creationism, Economics, Politics, Political Science, Fox News.
“Science” is an Open Concept. I’d like to suggest the key family resemblances are: empiricism – insistence on an empirical base, versus ideological dominance; abstraction (generalization) and mathematization (when possible), versus anecdotal evidence; social processes encouraging objectivity, intersubjectivity, peer review and Popperian critical rationality, versus authoritarianism.
Some Pseudoscientific Arguments: “AGW/ecology/genetic-regulatory/etc. models are highly abstract, lose track of detailed reality and so are not scientific.” George Box: “All models are wrong, but some are useful.” Any computer model will misrepresent continuity, but does it matter? The question is whether the property of the model we care about (its mapping to reality) is preserved under the model dynamics, not whether irrelevant details are carried along. The demand for “proof” in science is a good indicator of dishonesty.
Similarly: “the model predicts the overall process OK, but omits some really tiny details and is therefore wrong.” Here’s an example I gave a data-mining class: 120 years of data on business profits. It looks like three different trends concatenated. Let’s regress just the points from years 80–120.
Not bad. But some ornery shareholder says: let’s just try years 109–120 instead.
As we can all see, profits are hardly moving; let’s turf out the board!!
NB: profit = global surface temperature; competitiveness = solar energy.
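The window-picking fallacy in the slides is easy to reproduce. The code below uses synthetic data (a steady upward trend plus a periodic wiggle – not the actual series from the talk): regressing over years 80–120 recovers a rising trend, while cherry-picking years 109–120 lands on a down-swing and yields a falling one.

```python
import math

# Synthetic 'profit' series: steady upward trend + periodic wiggle.
# (Illustrative stand-in for the talk's 120-year dataset.)
years = list(range(120))
profit = [0.5 * t - 10 * math.sin(t / 6) for t in years]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

long_trend = ols_slope(years[80:120], profit[80:120])     # years 80-119
short_trend = ols_slope(years[109:120], profit[109:120])  # years 109-119
print(round(long_trend, 2), round(short_trend, 2))
```

The long window averages the wiggle out and shows the underlying rise; the short window shows “profits falling” – the ornery shareholder’s argument, and the climate version of it, in miniature.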
Some References on Scientific Method:
F. Bacon (1620) Novum Organum Scientiarum.
J.S. Mill (1843) System of Logic.
M. Gardner (1957) Fads and Fallacies in the Name of Science. Dover.
T. Kuhn (1962) The Structure of Scientific Revolutions.
K. Popper (1963) Conjectures and Refutations.
R. Carnap (1966) An Introduction to the Philosophy of Science.
C. Hitchcock (2004) Contemporary Debates in Philosophy of Science.

Slides can be found here.


Kevin Korb: My research is in machine learning, artificial intelligence, philosophy of science, scientific method, Bayesian inference and reasoning, Bayesian networks, artificial life, computer simulation, epistemology and evaluation theory.

See his page – it is out of date, but accurate as far as it goes.

Email: kbkorb [at] gmail {dot} com. Twitter: @kbkorb

Panel on Skepticism & Science

Panelists: Terry Kelly (Former president of Vic Skeptics), Chris Guest (Current president of Vic Skeptics), Bill Hall (Researcher at the Kororoit Institute)

Discussion includes the history of skepticism, what skepticism is today, the culture of skepticism as a movement and how skepticism relates to broader philosophy.

00:26 Terry discusses Active Skepticism – where science, skepticism & consumer rights overlap – he brings up hypnotism

01:26 Skepticism does not equal cynicism – including some cool observations about the difference between empiricism and plausibility arguments. Some claims might seem implausible, and some are so implausible that they have to be addressed on that basis – but some things that seem counter-intuitive end up being likely after empirical observation.

4:14 Chris Guest discusses his passion for critical thinking – it’s not so much what skeptics believe, it’s the approach to arguments.

4:42 Historical definitions of skepticism – relating to cynicism (the ancient Greeks). Though skepticism is not considered cynicism today, ideally they are treated as separate concepts. There are a lot of magicians in the skeptics movement – they have a trained eye and intuitively see past common blind spots and cognitive biases – whereas scientists often take things at face value.

6:22 Bill Hall discusses his background in Popperianism – and pseudoscience and belief vs rational thinking (NOTE: Contrast with Kevin Korb’s presentation on Pseudoscience vs Science – Kevin isn’t a Popperian and thinks that falsificationism is flawed).  The demarcation problem between science and mysticism.   Bill says falsification is part of skepticism – part of debunking false claims.

08:55 Chris Guest discusses group dynamics and belief systems – people reinforce each other’s beliefs – so Chris tries to be tougher on people he agrees with than on those he disagrees with, demanding a higher standard of argument. Straw-man arguments: someone sets up a really bad representation of an opponent’s argument rather than engaging with its specifics. Steel-man arguments: roughly the opposite – rather than constructing an easily refuted form of the opponent’s argument, put together the best possible representation of it, even better than the one being presented to you – take on the most charitable version. There is value in moving beyond conflicts based on group identity.

11:00 Terry Kelly discusses disproving a person’s beliefs – though this often results in them going away and believing harder than before. Ashley Barnett brought up an example earlier that intelligent people are easier to fool because they pay stronger attention – James Randi says academics are easier to fool because they believe that if they, being so smart, can’t work out the trick, it must be a special power. Intelligent people will find smart ways to justify their beliefs. So sometimes it’s not easy to change people’s minds even when you have good evidence.


14:36 Chris Guest discusses approaches to debating climate change deniers – using existing models that make predictions, find out which assumptions the deniers disagree with, and ask for an alternative model that gives better predictions. The deniers might then claim that climate alarmists get more funding to create models, as an explanation of why they have the more robust models.

15:35 Q: How do people assess the nature of evidence?
Chris Guest: Instead of going head to head with someone who believes in homeopathy, say ‘let’s go to a homeopathy open day and listen to the talks’ – then let people go through their own process of discovery.


17:37 How do people become rational – how do people go from magical thinking to being rational? Is it a turning point, or do they slowly drift into it?


Acoustics made it difficult to hear people asking questions

“Where skeptics get interested is whether people are getting what they paid for” – Terry Kelly




Many thanks for watching!
– Support Scifuture via Patreon
– Please subscribe to the SciFuture Channel:
Science, Technology & the Future website: