Philosophy & Effective Altruism – Peter Singer, David Pearce, Justin Oakley, Hilary Greaves

Panelists ([from left to right] Hilary Greaves, Peter Singer, Justin Oakley & David Pearce) discuss what they believe are important philosophical aspects of the Effective Altruism movement – from practical philosophy we can use today to possible endpoints implied by various frameworks in applied ethics. The panelists navigate a wide range of fascinating and important topics that aspiring effective altruists, and anyone who is philosophically inclined, will find both useful and enjoyable.

Panel moderated by Kerry Vaughan.

Panel Transcript

(in progress)

0:35 Question “What are the hot topics in philosophy that might change what effective altruists might focus on?”
Hilary Greaves – So, my answer to that – the one I'm most directly familiar with is the one I already mentioned in my talk earlier. I think that population ethics can make a massive difference to a significant proportion of the things we should worry about as EAs. In particular, the thing that gives rise to this is the situation where – at the moment we have lots of moral philosophers who really like their ivory-tower abstract theorising – those people have done a lot of discussing this abstract question of 'ok, what is my theory of population ethics' – then at the real-world extreme we have lots of people engaging directly with real-world issues thinking, ok, how should we do our cost-benefit analysis, for example on family planning. We have a big gap in the middle – we don't really have a well-developed community of people who are both in touch with the background moral philosophy and who are interested in applying it to the real world. So because there is that gap I think there's a lot of low-hanging fruit at the moment for people who have a background in moral philosophy and who are also plugged into the EA community to build these bridges from theory to practice and see what it all means for the real world.

01:56 Peter Singer – I actually agree with that – that population ethics is an important area – and another place where it connects to what Hilary was talking about earlier is the existential risk questions. Because we need to think about – suppose that the world were destroyed – is what's so bad about that the fact that 7.5 billion people have lost their lives, or is it the loss of the untold billions of possible future lives that Nick Bostrom has estimated (10^56 or something, I don't know – some vastly unimaginable number) – lives that could have been good, and that would be lost? So that seems to me to be a real issue. If you want something that's a little more nitty-gritty, closer to what we are talking about today – another issue is: how do we get a grip on the nature and extent of animal suffering? (something that we will be talking a bit about in a moment) It's really just hard to say – David just talked about factory farming and the vast billions of animals suffering in factory farms – and I totally agree that this is a top-priority issue – but in terms of assessing priorities, how do we compare the suffering of a chicken in a factory farm to, let's say, a mother who has to watch her child dying of malaria? Is there some way we can get a better handle on that?

03:23 Justin Oakley – For me, I think one of the key issues in ethics at the moment that bears on Effective Altruism is what's known as the 'situationist critique of virtue ethics' – trying to understand not only how having a better character helps people to act well, but also what environment they are in. Subtle environmental influences might either support or subvert a person acting well – in particular someone having the virtue, perhaps, of liberality – so there is lots of interesting work being done on that. Some people think that debate is beginning to die down – but it seems to be just starting up again, with a couple of new books coming out that look at a new twist on it. So for me, I'm keen to work on that – in my own work at Monash I teach a lot of health professionals, so I'm keen to look at what environmental influences there are on doctors that impede them from having a therapeutic relationship with patients – not only thinking about how to help them be more virtuous – which is not the only thing I aim to do with the doctors that I teach, but I hope to have that influence to some extent.


The Utilitarianism at the End of the Universe – Panelists Hilary Greaves, Peter Singer, Justin Oakley & David Pearce laugh about the possible ‘end games’ of classical utilitarianism.

04:25 David Pearce – Yes, well, I'm perhaps slightly out of touch now with analytic philosophy – but one issue that I haven't really seen tackled by analytic philosophy is this disguised implication of classical utilitarianism about what we ought to be doing, which is essentially optimising matter and energy throughout the world – and perhaps the accessible universe – for maximum bliss. A questioner earlier was asking, 'Well, as a negative utilitarian, do you accept this apparently counter-intuitive consequence that one ought to wipe out the world to prevent the suffering of one person?' But if one is a classical utilitarian then it seems to be a disguised consequence that it's not good enough to aim merely for a civilization in which all sentient beings could flourish and enjoy gradients of intelligent bliss – instead one must go on remorselessly until matter and energy are nothing but pure orgasmic bliss.
05:35 Peter Singer – I find 'remorselessly' an unusual term to describe it
[laughter…] 5:40 David Pearce – Well, I think this is actually rather an appealing idea to me but I know not everyone shares this intuition
[laughter…] 5:50 Question "So Peter, I'd be interested to know if you have thoughts on whether you think that's an implication of classical utilitarianism" – Peter Singer – Whether I accept that implication? Well – David and I talked about this a little bit earlier over lunch – I sort of, I guess, maybe I accept it, but I have difficulty grasping what it is to talk about converting matter and energy into bliss – unless we assume that there are going to be conscious minds that are going to be experiencing this bliss. And of course David would very strongly agree that conscious minds not only have to experience bliss but also certainly not experience any suffering, and presumably minimize anything that they experience other than bliss (because that's not converting matter and energy into bliss) – so if what I'm being asked to imagine is a universe with a vast number of conscious minds that are experiencing bliss – yeah, maybe I do accept that implication.

06:43 Question “So this is a question mostly for Justin – effective altruists often talk about doing the ‘most good’ – should EAs be committed to doing ‘enough good’ instead of the ‘most good’?”

Justin Oakley – Yeah, that's a good question to ask – one of the things I didn't emphasize in my talk on virtue ethics is that virtue ethics standardly holds that we should strive to be an excellent human being, which can fall a little way short of producing the maximum good. So if you produce an excellent level of liberality, or of good or benefit to someone else, then that's enough for virtue. In some of the examples I was giving in my talk, you might choose a partner where – although you're not the ultimate power couple (you are the sub-optimal power couple) – you are nonetheless attracted to that other person very strongly. From the perspective of effective altruism it might sound like you are doing the wrong thing – but intuitively it doesn't seem to be wrong. That's one example.

07:51 Hilary Greaves – Sure – I can say a bit about what a similar issue looks like from a more consequentialist perspective. When people think of consequentialism they sometimes assume that consequentialists think there is a moral imperative to maximize the good – you absolutely have to do the most good, and anything less than that is wrong. But it's worth emphasising that not all consequentialists hold that at all – not all consequentialists think that it's even helpful to buy into this language of right and wrong. So you don't have to be a virtue ethicist to feel somewhat alienated from a moral imperative to do the absolute most good – you could just think something like the following: never mind right and wrong, never mind what I should vs am not allowed to be doing. I might just want to make the world better – I might just think I could order all the things I could possibly do in terms of better or worse. And then, you know, if I give away 1% of my income, that's better than giving away nothing – if I give 5%, that's better than giving away 1% – if I give away 50%, that's better than anything less – but I don't have to impose some sharp cutoff and say that I'm doing something morally wrong if I give less than that. I think if we think in this way then we tend to alienate both ourselves and other people less – there's something very alienating about holding up a very high standard and saying that anybody, including ourselves, who falls short of this very high standard is doing something Wrong with a capital 'W'.

Panel including Peter Singer

09:14 Peter Singer – So, in a way I agree – you are talking about a spectrum view (where we have a spectrum from black to white – or maybe we don't want to use those terms for it) from one end to the other, and you're somewhere on the spectrum and you try to work your way further up it, perhaps – and I'm reasonably comfortable with that. Another way of looking at it (and this goes back to something that Sidgwick also said) is that we ought to be clearer about distinguishing when we regard an act as the right act or the wrong act, and when we regard it as appropriate to praise or blame people for doing it. And these are really separate things – especially if you are a consequentialist, because praising or blaming someone is itself an act – and you ought to only do that if it will have good consequences. So suppose we think that somebody in particular personal circumstances ought to be giving 50% of his earnings away – but he is only giving 10% – yet he is living in a society like ours in which, by giving 10%, he is giving more than 99.99% of people. Well, to blame him, saying 'oh, you're only giving 10% – you should be giving more', looks like it's going to be very counter-productive – you really want to praise him in front of other people so that more people will give 10%. So I think if we understand it that way, that's another way of looking at it – I'm not sure if it's necessarily better than the spectrum view that you [Hilary] were suggesting – but it is another way of, if you like, softening this black-and-white morality idea that an act is either right or wrong.

10:54 Question "A question for Hilary – You mention that one might find the 'uncertainty' that your talk generates kind of paralyzing – but you mentioned that wasn't your conclusion – can you expand on why paralysis is not your conclusion?"
Hilary Greaves


  • Transcribed by Adam Ford


Hilary Greaves is an Associate Professor in Philosophy at Somerville College in the University of Oxford. Her current research focuses on various issues in ethics. Hilary's interests include: foundational issues in consequentialism ('global' and 'two-level' forms of consequentialism), the debate between consequentialists and contractualists, aggregation (utilitarianism, prioritarianism and egalitarianism), moral psychology and selective debunking arguments, population ethics, the interface between ethics and economics, the analogies between ethics and epistemology, and formal epistemology. Hilary currently (2014-17) directs the project Population Ethics: Theory and Practice, based at the Future of Humanity Institute and funded by The Leverhulme Trust.

Peter Singer is an Australian moral philosopher. He is currently the Ira W. DeCamp Professor of Bioethics at Princeton University, and a Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne. He specializes in applied ethics and approaches ethical issues from a secular, utilitarian perspective. He is known in particular for his book Animal Liberation (1975), a canonical text in animal rights/liberation theory. For most of his career he supported preference utilitarianism, but in later years he became a classical or hedonistic utilitarian while co-authoring The Point of View of the Universe with Katarzyna de Lazari-Radek.

Justin Oakley is an Associate Professor at Monash University – in the School of Philosophical, Historical & International Studies, and the Centre for Human Bioethics. Justin has been part of the revival of the ethical doctrine known as virtue ethics, an Aristotelian doctrine which has received renewed interest in the past few decades. Oakley is particularly well known for his work on professional ethics and also the so-called 'problem' of friendship, which looks at how a strict application of impartialist ethical doctrines, such as utilitarianism and Kantianism, conflicts with our notions of friendship or 'true friendship'.

David Pearce is a British philosopher who promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being – a project he refers to as "paradise engineering".

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation: a broad-level roadmap for simulation. The bottleneck is a lack of imaging technology – we don't know what level of precision we need to reverse-engineer biological intelligence. Ed Boyden – optical brain imaging.
Not by brain emulation (the engineering/comp-sci/cognitive-sci approach): the bottleneck is funding. People in the field believe they know how to do it. To prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don't know which of the two approaches will succeed, though what form the singularity takes will likely depend on the approach we use to build AGI. We don't understand the theory yet. Most don't think we will have a perfect molecular scanner that scans the brain and its chemical constituents. 25 years ago David Chalmers worked in Douglas Hofstadter's AI lab, but his expertise in AI is now out of date. Anyone trying to get to human-level AI by brute force or through cognitive psychology knows that the cog-sci is not in very good shape. A third approach is a hybrid of rough brain augmentation (through technology we are already using, like iPads and computers), technological extension, and uploading. If brain augmentation through technology and uploading is the first step towards a Singularity, then it includes humans in the equation, along with humanity's values, which may help shape a Singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction: the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT AI Lab) believed in exploring neural nets and self-organising systems – the approach of throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach, and the probabilistic, self-organising, fuzzy, learning approach of the scruffies. They have to come together. Theorem proving without any explorative aspect probably won't succeed. Purely neural-net-based simulations can't represent semantics well; we need to combine systems with full semantics with systems that can adapt to complex environments.

James Newton-Thomas (time 09:57)
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of individuals. There is an impediment – we are already being dehumanised with our iPads: the reason we have a conversation with others is a fact about our being part of a group, not about information that could be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with the tech in our world; the more literal cyborg approach is what we are working on now, though the technology is not yet commercialized enough to allow, in principle, a strong literal cyborg approach. Ben Goertzel: We could progress with some form of brain vocalization (picking up words directly from the brain), allowing us to think a Google query and have the results added directly to our minds – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest to help with the Quantified Self – brain interfaces to help measure things about their body (though the hardware is not that good yet).
Ben Goertzel: Use of BCIs for video games – they can detect whether you are aroused and paying attention, though the resolution is very coarse – it's hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Alan Snyder is using time-varying magnetic fields in helmets that shut down certain areas of the brain, which effectively makes people smarter in narrower domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalize. A brain that becomes too specific at one task does so at the cost of others – the process of generalization.


Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods are the most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables showing causal relationships
– probabilifying relationships – tests whose evidence can raise the probability
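The causal Bayesian networks mentioned in the notes above can be sketched in a few lines of plain Python. The network structure (Rain → WetGrass ← Sprinkler) and all the probability numbers here are illustrative assumptions, not from the talk:

```python
# A minimal causal Bayesian network: Rain -> WetGrass <- Sprinkler.
# All probabilities are made up for illustration.

P_rain = 0.2       # P(Rain = true)
P_sprinkler = 0.3  # P(Sprinkler = true), independent of Rain

# Conditional probability table: P(WetGrass = true | Rain, Sprinkler)
P_wet = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.05,
}

def prob(value, p):
    """Probability of a boolean value under a Bernoulli(p) variable."""
    return p if value else 1.0 - p

# Marginalise P(WetGrass) by enumerating every joint assignment of parents.
p_wet_grass = sum(
    prob(r, P_rain) * prob(s, P_sprinkler) * P_wet[(r, s)]
    for r in (True, False)
    for s in (True, False)
)
print(round(p_wet_grass, 4))  # prints 0.4054
```

Brute-force enumeration like this only scales to toy networks; libraries such as pgmpy implement the same semantics with efficient exact and approximate inference.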

05:23 does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper's view on the difference between the context of discovery and the context of justification.) Sure, we discuss the difference between the concepts – but what is the difference in the implementation?

08:42 Automation of science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

09:38 – does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent – until it's done it's an open question – Kevin/John: logically possible, the question is whether we will do it – Greg Restall: don't know; can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required, the system will need to represent itself, etc. (Gödel ≠ mysticism; automation ≠ representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to a formal representation for automating science – meaning the formal representation can't do everything – so what is the scope of a formal system that can automate science? What will the formal representation and the automated-science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – Popper's pessimism about a logic of discovery has no foundation – where does it come from? Calling it logic (if logic means deduction) is perhaps misleading – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google's concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in humans reasoning probabilistically – introduced the concept of not seeing everything in black and white. People often get statistical problems wrong when they are asked to answer intuitively. Is the technology likely to have a broad impact?
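As a concrete instance of the kind of statistical problem people get wrong intuitively, here is the standard base-rate example worked through with Bayes' theorem (the disease prevalence and test accuracy figures are invented for illustration):

```python
# Base-rate problem: a disease affects 1% of people; a test is 90% sensitive
# and has a 9% false-positive rate. What is P(disease | positive test)?
# Intuition often says ~90%; Bayes' theorem says under 10%.

prior = 0.01        # P(disease)
sensitivity = 0.90  # P(positive | disease)
false_pos = 0.09    # P(positive | no disease)

# Law of total probability: P(positive)
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(disease | positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # prints 0.092
```

The counter-intuitive result (about 9%) comes from the low base rate: false positives from the large healthy population swamp the true positives.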

26:26 Human thinking, subjective statistical reasoning – and the mismatch between the public communicative act often sounding like Boolean logic – a mismatch between our internal representation and the tools we have for externally representing likelihoods
29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can't be automated – there are contrastive explanations and conceptual-space theory, where you can characterise a question – causal explanation using causal Bayesian networks (and when proposing an explanation, it must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science you can automate philosophy –

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz – complex notions can be boiled down to simpler primitive notions, and these primitive notions can be ground out computationally – they are making genuine discoveries
Weak Reading: can some philosophy be automated – yes
Strong Reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don't look like they will be automated in the next 10 years

35:41 If what we're interested in is representing and automating the production of reasoning formally (not only evaluating it), then as long as the domain is one where we are making claims and we are interested in the inferential connections between the claims, a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding creationism: is it better to think of it as a poor hypothesis or as non-science? – It's not an exclusive disjunction; something can start as a poor hypothesis and later become non-science or science – it depends on the stage at the time – science rules things out of contention – and at some point creationism had not yet been ruled out

38:16 (Rohan McLeod) Is economics a science or does it have the potential to be (or is it intrinsically not possible for it to be a science) and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses: "h1 is good, h2 is bad" – economics may have irreducible normative components but physics doesn't (electrons aren't the kinds of things that economies are) – Michael ??? paper on value judgements – "there is no such thing as a factual judgement that does not involve value" – while there are normative components to economics, it is studied from at least one remove – the problem is economists try to make normative judgements like "a good economy/market/corporation will do X"

42:22 Problems with economics – it's incredibly complex and hard to model, and without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is it gets treated like a natural system (in physics or chemistry) which hides all the values which are getting smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: Yes, Stephen: No) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what's the role of intuition in automating science & philosophy? – We need intuitions to drive things forward – intuition in the abduction area – to drive inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn't have to process how to drive a car) – Louis Pasteur's prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – Where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved near or far. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly as do birds, but planes are not birds – they are different

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding, etc., the organism will do as it damn well pleases. Biology will defeat simple models. Also Hull's rule – no matter what the law in biology is, there is an exception (including Hull's law itself) – so simulated biology may be difficult. We won't simulate an entire organism – we can't even simulate a cell. Kevin objects.

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is inductive projectability – we've observed it in these cases, therefore we expect it in this case – and that fades out relatively rapidly in inverse proportion to the degree of relatedness

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious, because there is no such thing as consciousness. Kevin is with Dennett – information-processing functioning is what consciousness supervenes upon.
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off, or complains about getting bored, then we are onto something

Panel on Skepticism & Science

Panelists: Terry Kelly (Former president of Vic Skeptics), Chris Guest (Current president of Vic Skeptics), Bill Hall (Researcher at the Kororoit Institute)

Discussion includes the history of skepticism, what skepticism is today, the culture of skepticism as a movement and how skepticism relates to broader philosophy.

00:26 Terry discusses active skepticism – where science, skepticism & consumer rights overlap – he brings up hypnotism

01:26 Skepticism does not equal cynicism – including some interesting observations about the difference between empiricism and plausibility arguments. The issue of plausibility vs empiricism: some claims might seem implausible – some are so implausible that they have to be addressed on that basis – but some people point out that some things may seem counter-intuitive yet end up being likely after empirical observation.

4:14 Chris Guest – discusses his passion for critical thinking – it's not so much what skeptics believe, it's their approach to arguments

4:42 Historical definitions of skepticism – relating to cynicism (the ancient Greeks). Though skepticism is not considered cynicism today – ideally they are treated as separate concepts. There are a lot of magicians in the skeptics movement – they have a trained eye and intuitively see past common blind spots and cognitive biases, whereas scientists often take things at face value.

6:22 Bill Hall discusses his background in Popperianism – pseudoscience, and belief vs rational thinking (NOTE: contrast with Kevin Korb's presentation on Pseudoscience vs Science – Kevin isn't a Popperian and thinks that falsificationism is flawed). The demarcation problem between science and mysticism. Bill says falsification is part of skepticism – part of debunking false claims.

08:55 Chris Guest discusses group dynamics and belief systems – people reinforce each other's beliefs – so Chris tries to be tougher on people he agrees with than on those he disagrees with, demanding a higher standard of argument. Straw-man arguments – where someone sets up a really bad representation of an opponent's arguments rather than going into the specifics of those arguments. Steel-man arguments – roughly the opposite of straw-man arguments – rather than trying to create an easily refutable form of an opponent's arguments, try to put together the best possible representation of their arguments, even better than the one they are presenting to you – take on the best possible, most charitable arguments. There is value in moving beyond conflicts based on group identity.

11:00 Terry Kelly discusses disproving a person's beliefs – though this often results in them going away and believing harder than before. Ashley Barnett brought up an example earlier that intelligent people are easier to fool because they pay stronger attention – James Randi says academics are easier to fool because they believe that if they can't work something out, since they are so smart, it must be a special power. Intelligent people will find smart ways to justify their irrational beliefs. So sometimes it's not so easy to change people's minds even though you have good evidence.


14:36 Chris Guest discusses approaches to debating climate change deniers – using existing models that make predictions, find out which assumptions the climate change deniers disagree with, and ask for an alternative model that gives better predictions. The deniers might then claim that the 'climate alarmists' get more funding to create the models, as an explanation of why they have the more robust models.

15:35 Q: How do people assess the nature of evidence?
Chris Guest: Instead of going head to head with someone who believes in homeopathy, say ‘let’s go to a homeopathy open day and listen to the talks’ – then let people go through their own process of discovery.


17:37 How do people become rational – how do people go from magical thinking to rational thinking? Is there a turning point, or do they slowly drift into it?


[Note: acoustics made it difficult to hear people asking questions]

“Where skeptics get interested is whether people are getting what they paid for” – Terry Kelly





Panel: The Demarcation Problem – What is Science, and What Isn’t Science?

The demarcation problem in the philosophy of science is about how to distinguish between science and nonscience, including between science, pseudoscience, other activities, and beliefs. The debate continues after over a century of dialogue among philosophers of science and scientists in various fields, and despite broad agreement on the basics of scientific method.

For Popper, the distinguishing characteristic of science is that it seeks to falsify, not to confirm, its hypotheses.  Do you agree with Popper?

Will the specter of metaphysics continue to haunt the quest for a sound demarcation criterion?

Can we distinguish, in a principled way, between sciences and pseudosciences?