Posts

Marching for Science with John Wilkins – a perspective from Philosophy of Science

Recent video interview with John Wilkins!

  • What should marchers for science advocate for (if anything)? Which way would you try to bias the economy of attention to science?
  • Should scientists (as individuals) be advocates for particular causes – and should the scientific enterprise advocate for particular causes?
  • The popular hashtag #AlternativeFacts and Epistemic Relativism – How about an #AlternativeHypotheses hashtag (#AltHype for short 😀 ?)
  • Some scientists have concerns about being involved directly – others say scientists should have a voice on issues that matter, and should stand up and object when public policy is based on erroneous logic, faulty assumptions or bad science. What’s your view? What are the risks?

John Wilkins is a historian and philosopher of science, especially biology. Apple tragic. Pratchett fan. Curmudgeon.

We will cover scientific realism vs structuralism in another video in the near future!
Topics will include:

  • Scientific Realism vs Scientific Structuralism (or Structuralism for short)
  • Ontic (OSR) vs Epistemic (ESR)
  • Does the claim that one can know only the abstract structure of the world trivialize scientific knowledge? (Epistemic Structural Realism and Ontic Structural Realism)
  • If we are in principle happy to accept scientific models (especially those that have graduated from hypothesis to theory) as structurally real – then does this give us reasons never to be overconfident about our assumptions?

Come to the Science March in Melbourne on April 22nd 2017 – bring your friends too 😀

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, starting from the earliest days of the planet, when there was no life, only physical and chemical structures (volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes) – or perhaps even earlier, when there was only a dust cloud in this part of the solar system.

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (one way of thinking about symbiogenesis)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)
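
A minimal toy sketch of this analogy (my own illustration, not Sloman's code or formalism): a blind, mutation-and-recombination search over bit-string "genomes" that "proves" possibility theorems simply by exhibiting instances satisfying a predicate, with recombination standing in for the sharing of partial results between lineages.

```python
# Toy illustration (not Sloman's): evolution as a blind "theorem prover".
# A "theorem" here is just the claim "a genome with property P is possible";
# the blind search "proves" it by exhibiting an instance.
import random

random.seed(0)

def mutate(genome):
    """Flip one random bit - blind variation."""
    i = random.randrange(len(genome))
    return genome[:i] + (1 - genome[i],) + genome[i + 1:]

def crossover(a, b):
    """Combine two separately evolved genomes (a crude symbiogenesis analogy)."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Some "possibility theorems" stated as predicates over genomes.
theorems = {
    "at least 6 ones are possible": lambda g: sum(g) >= 6,
    "an alternating pattern is possible": lambda g: all(x != y for x, y in zip(g, g[1:])),
    "all-ones is possible": lambda g: all(g),
}

population = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(20)]
proved = {}

for generation in range(2000):
    # Blind variation: mutate everyone, occasionally recombine two lineages.
    population = [mutate(g) for g in population]
    if random.random() < 0.3:
        population.append(crossover(random.choice(population), random.choice(population)))
        population.pop(0)
    # Record any "theorem" first witnessed in this generation.
    for name, predicate in theorems.items():
        if name not in proved:
            for g in population:
                if predicate(g):
                    proved[name] = (generation, g)
                    break

for name, (generation, witness) in proved.items():
    print(f"{name}: first witnessed at generation {generation} by {witness}")
```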

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development
– evolution of new types of evolution
– learning new ways to learn
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence)
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?)
– new forms of learning supporting new forms of evolution
– new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate-selection, mating and nurturing can be influenced by much learning)
…and ways in which social and cultural evolution add to the mix.

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even for microbes I suspect there’s much still to be learnt about the varying challenges and opportunities they faced at various stages in their evolution, including new challenges produced by environmental changes, new opportunities (e.g. for control) produced by previously evolved features and competences, and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)
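
A minimal sketch of the “external memory” idea (my own illustration, not from Sloman's notes), in the spirit of the classic double-bridge ant experiments: route information accumulates as pheromone in the environment, so the colony converges on the shorter path without any individual agent remembering it.

```python
# Toy stigmergy sketch (double-bridge style): two routes from nest to food,
# one short and one long. Ants choose a route in proportion to its pheromone,
# and the pheromone laid on the shorter route builds up faster, so the colony
# "remembers" the better route in the environment rather than in any one ant.
import random

random.seed(1)

routes = {"short": 5, "long": 12}       # route lengths in steps
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.98

def choose_route():
    """Pick a route with probability proportional to its current pheromone."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    return "short" if r < pheromone["short"] else "long"

for trip in range(500):
    route = choose_route()
    # Deposit is inversely proportional to route length: quicker trips
    # reinforce their trail more often per unit time.
    pheromone[route] += 1.0 / routes[route]
    for k in pheromone:
        pheromone[k] *= EVAPORATION
    if trip % 100 == 0:
        share = pheromone["short"] / sum(pheromone.values())
        print(f"trip {trip}: short-route pheromone share = {share:.2f}")
```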

More examples to be collected here.

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods – most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables showing causal relationships
– probabilifying relationships – tests whose evidence can raise the probability
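
As a concrete, hedged illustration of the object being described (a standard textbook-style toy example of my own, not the panel's): a three-node causal Bayesian network in which evidence about an effect raises the probability of its cause – the "probabilifying" of relationships mentioned above. The network structure and probability tables are made up for illustration.

```python
# Toy causal Bayesian network: Rain -> WetGrass <- Sprinkler.
# Exact inference by brute-force enumeration over the joint distribution.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}

def p_wet(rain, sprinkler):
    """P(WetGrass = True | Rain, Sprinkler) - an invented conditional table."""
    if rain and sprinkler:
        return 0.99
    if rain:
        return 0.90
    if sprinkler:
        return 0.80
    return 0.05

def joint(rain, sprinkler, wet):
    """Joint probability factorised along the causal graph."""
    p_w = p_wet(rain, sprinkler)
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

def prob(query, evidence):
    """P(query | evidence); both are dicts over {'rain', 'sprinkler', 'wet'}."""
    num = den = 0.0
    for rain, sprinkler, wet in product([True, False], repeat=3):
        world = {"rain": rain, "sprinkler": sprinkler, "wet": wet}
        if all(world[k] == v for k, v in evidence.items()):
            p = joint(rain, sprinkler, wet)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

print("P(rain) =", prob({"rain": True}, {}))                         # prior: 0.20
print("P(rain | wet grass) =", prob({"rain": True}, {"wet": True}))  # evidence raises it (~0.65)
```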

05:23 does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification.) Sure, we discuss the difference between the concepts – but what is the difference in implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

9:38 – does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent – until it’s done it’s an open question. Kevin/John: logically possible, the question is whether we will do it. Greg Restall: don’t know – can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required; the system will need to represent itself, etc. (Gödel ≠ mysticism; automation ≠ representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to a formal representation for automating science – meaning the formal representation can’t do everything – so what is the scope of a formal system that can automate science? What will the formal representation and automated-science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it logic is perhaps misleading (if logic means deduction) – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in human probabilistic reasoning – it introduced the idea of not seeing everything in black and white. People often get statistical problems wrong when asked to answer intuitively (a worked example follows below). Is the technology likely to have a broad impact?
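
Here is the kind of worked example the point above gestures at (my own illustration, not one used by the panel): the classic base-rate problem, where intuition badly overestimates what a positive result from a fairly reliable test means.

```python
# Base-rate worked example: a test with 95% sensitivity and 95% specificity
# applied to a condition with 1% prevalence. Intuition says a positive result
# means ~95% chance of having the condition; Bayes' theorem says otherwise.
prevalence = 0.01
sensitivity = 0.95          # P(positive | condition)
false_positive_rate = 0.05  # 1 - specificity = P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2%}")  # ~16%
```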

26:26 Human thinking and subjective statistical reasoning – the mismatch between the public communicative act, which often sounds like Boolean logic, and our internal representation – a mismatch between how we represent likelihoods internally and the tools we have for representing them externally

29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can’t be automated – there are contrastive explanations and conceptual-space theory, with which you can characterise a question – causal explanation using causal Bayesian networks (and a proposed explanation must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science you can automate philosophy –

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz: complex notions can be boiled down to simpler primitive notions, and these primitive notions can be ground through computationally – they are making genuine discoveries
Weak reading: can some philosophy be automated? – yes
Strong reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years

35:41 If what we’re interested in is representing and automating the production of reasoning formally (not only evaluating it), then as long as the domain is one in which we are making claims and are interested in the inferential connections between those claims, a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding creationism: is it better to think of it as a poor hypothesis or as non-science? – not an exclusive disjunction; something can start as a poor hypothesis and later become non-science or science – it depends on the stage at the time – science rules things out of contention, and at some point creationism had not yet been ruled out

38:16 (Rohan McLeod) Is economics a science or does it have the potential to be (or is it intrinsically not possible for it to be a science) and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements about hypotheses (“h1 is good, h2 is bad”) – economics may have reducible normative components, but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements: “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is that economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – it is incredibly complex and hard to model, and without a model there exists a vacuum that gets filled with ideology (are ideologies normative?)

42:56 One of the problems with economics is it gets treated like a natural system (in physics or chemistry) which hides all the values which are getting smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: Yes, Stephen: No) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science & philosophy? – we need intuitions to drive things forward – intuition in the abduction area, to drive inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to consciously process how to drive a car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved in the near or the far future. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end, within thermodynamic limits. Kevin Korb explains the intelligence explosion (a toy sketch follows below).
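
A toy numerical sketch of the two positions being contrasted (my own illustration, with made-up parameters): Good-style recursive improvement, where each generation's gain is proportional to current capability, alongside the same process damped by a hard ceiling standing in for physical or thermodynamic limits.

```python
# Toy model of recursive self-improvement: capability c grows by a fraction of
# itself each generation (an intelligence-explosion caricature), optionally
# damped by a hard ceiling standing in for physical/thermodynamic limits.
def run(generations, gain=0.5, ceiling=None):
    c = 1.0
    history = [c]
    for _ in range(generations):
        delta = gain * c
        if ceiling is not None:
            # Logistic-style damping: improvements shrink as the ceiling nears.
            delta *= (1 - c / ceiling)
        c += delta
        history.append(c)
    return history

unbounded = run(20)
bounded = run(20, ceiling=1000.0)
print("unbounded growth:", [round(x, 1) for x in unbounded[::5]])
print("bounded growth:  ", [round(x, 1) for x in bounded[::5]])
```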

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly as birds do, but planes are not birds – they differ.

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding etc., the organism will do as it damn well pleases. Biology will defeat simple models. Also Hull’s rule – no matter what the law in biology is, there is an exception (including Hull’s rule itself) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t simulate a cell. Kevin objects.

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is inductive projectability – we’ve observed it in these cases, therefore we expect it in this one – and it fades out relatively rapidly in direct disproportion to the degree of relatedness

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious, because there is no such thing as consciousness. Kevin is with Dennett – information-processing function is what consciousness supervenes upon.
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off or complains about getting bored, then we are onto something

Bayeswatch – The Pitfalls of Bayesian Reasoning – Chris Guest

Bayesian inference is a useful tool in solving challenging problems in many fields of uncertainty. However, inferential arguments presented with a Bayesian formalism should be subject to the same critical scrutiny that we give to informal arguments. After an introduction to Bayes’ theorem, some examples of its misuse in history and theology will be discussed.
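
One way to see the kind of pitfall the talk is concerned with (my own illustration, not Chris Guest's example): the same evidence, summarised as a likelihood ratio, yields wildly different posteriors depending on the prior, so a Bayesian-looking argument is only as persuasive as the prior and likelihoods fed into it.

```python
# Posterior sensitivity to the prior: identical evidence (likelihood ratio 10:1
# in favour of hypothesis H) pushed through Bayes' theorem with different priors.
def posterior(prior, likelihood_ratio):
    """Posterior P(H | E) from a prior P(H) and the ratio P(E|H)/P(E|~H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

for prior in (0.5, 0.1, 0.01, 0.0001):
    print(f"prior {prior:>7}: posterior = {posterior(prior, 10):.4f}")
```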

Chris is a software developer with an academic background in Philosophy, Mathematics and Machine Learning. He is also President of the Australian Skeptics Victorian Branch. Chris is interested in applying critical reasoning to boundary problems in skepticism and is involved in consumer complaints and skeptical advocacy.

 

Talk was held at the Philosophy of Science Conference in Melbourne 2014

Video can be found here.

Science vs Pseudoscience – Kevin Korb

Science has a certain common core, especially a reliance on empirical methods of assessing hypotheses. Pseudosciences have little in common but their negation: they are not science.
They reject meaningful empirical assessment in some way or another. Popper proposed a clear demarcation criterion for Science v Rubbish: Falsifiability. However, his criterion has not stood the test of time. There are no definitive arguments against any pseudoscience, any more than against extreme skepticism in general, but there are clear indicators of phoniness.

Demarcation

Science v Non-science – what’s the point? Possible goals for distinguishing between them: rhetorical, political, social, methodological (aiming at identifying methodological virtues and vices, and improving practice). How to proceed? Traditional: propose and test necessary and sufficient conditions for being science. Less ambitious: collect prominent characteristics that support a “family resemblance”.

What is Science?

Science is something like the organized (social, intersubjective) attempt to acquire knowledge about the world through interacting with the world. In the Western tradition, this began with the pre-Socratic philosophers and is especially associated with Aristotle.

Nature of Science – science contrasts with: Learning: individuals learn about the world; their brains are wired for that. Mathematics/deduction: a handmaid to science, but powerless to teach us about the world on its own. Dogma, ideology, faith: these may be crucial to driving even scientific projects forward (as are good meals, sleep, etc.), but as they are by definition not tested by evidence, they are not themselves science.

A Potted History of the Philosophy of Science

Wissenschaftsphilosophie – the Vienna Circle. Early 20th-century major scientific success stories: Charles Darwin (evolutionary biology), Gottlob Frege (formal logic), Albert Einstein (physics). The sciences were showing themselves to be the most successful human project ever undertaken. In Vienna a group of great philosophers asked themselves: Why? How did this happen? With the Vienna Circle, philosophy of science became a discipline attempting to answer these questions.

The Vienna Circle & Logical Positivism: the beginning was the appointment of Ernst Mach as Professor of the Philosophy of the Inductive Sciences at the University of Vienna in 1895. Thereafter, Moritz Schlick founded the Vienna Circle (and Logical Positivism) in 1922. Through the helpful activities of Adolf Hitler, the leading philosophers of science introduced the Vienna Circle’s ideas throughout the English-speaking world.
Vienna Circle and its lineage: Ernst Mach, Moritz Schlick, Rudolf Carnap, Hans Reichenbach, Karl Popper, Paul Feyerabend, Noretta Koertge – Positivismus, Falsifikationismus, Anarchismus.
The Vienna Circle – basic principles: philosophy as logical analysis; the logical foundation of science lies in observation & experiment (e.g., Rudolf Carnap’s 1928 title: The Logical Construction of the World!!). Key: the Verifiability Criterion of Meaning – what cannot be proven empirically is meaningless, e.g., metaphysics, religion, superstition. Schema: {h, b ⊢ e1, …, en; e1, …, en} verifies h.
Karl Popper objects: many scientific hypotheses are universal, e.g., light always bends near large masses. But {h, b ⊢ e1, …, e∞; e1, …, e∞} is not even a possible state of affairs. Aside from that, metaphysics is an ineliminable part of science; all science has fundamental presuppositions.
Karl Popper – Falsificationism. Key: a demarcation criterion for science – what cannot be falsified empirically is unscientific, e.g., Marxism, religion, psychoanalysis. Schema: {h, b ⊢ e; ¬e} falsifies h. Theses: we can make scientific (or social) progress by alternating between bold conjectures and refutations. The ideal test (severe test) is guaranteed to falsify one of two (or more) alternative conjectures. Progress: refuting more and more theories, not accumulating more and more knowledge.
Imre Lakatos – Sophisticated Falsificationism: {h, b ⊢ e; ¬e} falsifies (h & b). Hypotheses stand or fall in networks, networked to each other and to theories of measurement, etc. (= research programmes). If a research programme makes novel predictions that come up true, it is progressive; if it lies in a sea of anomalies and is dominated by ad hoc saving manoeuvres, it is degenerating. Unfortunately, there’s no definite point at which a degenerating research programme rationally needs to be abandoned.
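
The logical schemas on these slides are garbled in transcription; the following is a hedged LaTeX reconstruction of how they presumably read (the turnstile and conjunction symbols are my reading, not a verbatim copy of the slides):

```latex
\begin{align*}
\text{Verification (Vienna Circle):}\quad
  & \{\, h, b \vdash e_1, \ldots, e_n ;\ e_1, \ldots, e_n \,\}\ \text{verifies}\ h \\
\text{Popper's objection:}\quad
  & \{\, h, b \vdash e_1, \ldots, e_\infty ;\ e_1, \ldots, e_\infty \,\}\ \text{is not even a possible state of affairs} \\
\text{Falsification (Popper):}\quad
  & \{\, h, b \vdash e ;\ \neg e \,\}\ \text{falsifies}\ h \\
\text{Sophisticated falsification (Lakatos):}\quad
  & \{\, h, b \vdash e ;\ \neg e \,\}\ \text{falsifies}\ (h \wedge b)
\end{align*}
```
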
Thomas Kuhn Scientific Revolutions In The Structure of Scientific Revolutions (1962) he introduced the idea that science moves (not: progresses) from “normal science” through a sea of anomalies to “revolutionary science” to a new “normal science” – from “paradigm” to “paradigm”. According to Kuhn, the process is not rational, but explained in terms of psychology, social processes and power relationships.
Paul Feyerabend Epistemic Anarchy In 1958 Feyerabend went to Berkeley, where he turned against Popper, promoting “Epistemological Anarchism” instead (Against Method, 1974). He embraced the inability to reject research programmes, promoting methodological pluralism instead. Denunciations of witchcraft, pseudosciences, etc. are mere expressions of prejudice.
Ludwig Wittgenstein – Open Concepts: natural-language concepts have an “open structure”, based on family resemblance, not definition.
One of Wittgenstein’s examples: define “game” in terms of necessary and sufficient conditions; now let’s play a game involving changing those conditions… Socrates’ game of taking some sophist’s definition of “love”, “knowledge” or “good” and poking holes in it could be played forever – hence Socrates’ phony humility in claiming that he knew nothing. The reality is that our understanding and use of language doesn’t depend on definitions.
“Science” is an Open Concept: instead of assembling inadequate necessary and sufficient conditions, let’s collect examples of science and non-science and see what the former share in family resemblances, leaving problematic cases for later. Examples: Physics, Mathematics, Epidemiology, Medicine, Paleontology, Religion, Climatology, Mining, Evolution Theory, Creationism, Economics, Politics, Political Science, Fox News.
“Science” is an Open Concept – I’d like to suggest the key family resemblances are: empiricism (insistence on an empirical base) versus ideological dominance; abstraction (generalization) and mathematization (when possible) versus anecdotal evidence; social processes encouraging objectivity, intersubjectivity, peer review and Popperian critical rationality versus authoritarianism.
Some Pseudoscientific Arguments: “AGW/ecology/genetic-regulatory/etc. models are highly abstract, lose track of detailed reality and so are not scientific.” George Box: “All models are wrong, but some are useful.” Any computer model will misrepresent continuity, but does it matter? The question is whether the property of the model we are interested in (its mapping to reality) is preserved under the model dynamics, not whether irrelevant details are carried along. The demand for “proof” in science is a good indicator of dishonesty.
Some Pseudoscientific Arguments – similarly: “the model predicts the overall process OK, but omits some really tiny details and therefore is wrong.” Here’s an example I gave a data-mining class: 120 years of data on business profits. It looks like three different trends concatenated. Let’s regress just the points from years 80–120.
Some Pseudoscientific Arguments Not bad. But some ornery shareholder says, let’s just try years 109-120 instead.
Some Pseudoscientific Arguments As we can all see profits are hardly moving; let’s turf out the board!!
Some Pseudoscientific Arguments NB: profit = global surface temperature; competitiveness = solar energy.
Some References on Scientific Method:
– F. Bacon (1620) Novum Organum Scientiarum.
– J.S. Mill (1843) A System of Logic.
– M. Gardner (1957) Fads and Fallacies in the Name of Science. Dover.
– T. Kuhn (1962) The Structure of Scientific Revolutions.
– K. Popper (1963) Conjectures and Refutations.
– R. Carnap (1966) An Introduction to the Philosophy of Science.
– C. Hitchcock (2004) Contemporary Debates in Philosophy of Science.

Slides can be found here:

 

My research is in: machine learning, artificial intelligence, philosophy of science, scientific method, Bayesian inference and reasoning, Bayesian networks, artificial life, computer simulation, epistemology, evaluation theory.

See http://www.csse.monash.edu.au/~korb/ The page is out of date, but accurate as far as it goes.

http://theconversation.com/is-passing-a-turing-test-a-true-measure-of-artificial-intelligence-27801

Email: kbkorb [at] gmail {dot} com – Twitter: @kbkorb
http://theconversation.com/profiles/kevin-korb-115721

Panel on Skepticism & Science

Panelists: Terry Kelly (Former president of Vic Skeptics), Chris Guest (Current president of Vic Skeptics), Bill Hall (Researcher at the Kororoit Institute)

Discussion includes the history of skepticism, what skepticism is today, the culture of skepticism as a movement and how skepticism relates to broader philosophy.

00:26 Terry discusses Active Skepticism – where science, skepticism & consumer rights overlap – he brings up hypnotism

01:26 Skepticism does not equal cynicism – including some cool observations about the difference between empiricism and plausibility arguments. The issue of plausibility vs empiricism: some claims might seem implausible – some are so implausible they have to be addressed on that basis – but some people point out that claims which seem counter-intuitive can end up being likely after empirical observation.

4:14 Chris Guest discusses his passion for critical thinking – it’s not so much what skeptics believe, it’s the approach to arguments

4:42 Historical definitions of skepticism – relating to cynicism (the ancient Greeks). Though skepticism is not considered cynicism today – ideally they are treated as separate concepts. There are a lot of magicians in the skeptics movement – they have a trained eye and intuitively see past common blind spots and cognitive biases – whereas scientists often take things at face value.

6:22 Bill Hall discusses his background in Popperianism – and pseudoscience and belief vs rational thinking (NOTE: Contrast with Kevin Korb’s presentation on Pseudoscience vs Science – Kevin isn’t a Popperian and thinks that falsificationism is flawed).  The demarcation problem between science and mysticism.   Bill says falsification is part of skepticism – part of debunking false claims.

08:55 Chris Guest discusses group dynamics and belief systems – people reinforce each other’s beliefs – so Chris tries to be tougher on people he agrees with than on those he disagrees with, demanding a higher standard of argument. Straw-man arguments: someone sets up a really bad representation of an opponent’s argument rather than going into its specifics. Steel-man arguments: roughly the opposite – rather than constructing an easily refutable form of the opponent’s argument, try to put together the best possible representation of it, even better than the one being presented to you – take on the best possible, most charitable arguments. There is value in moving beyond conflicts based on group identity.

11:00 Terry Kelly discusses disproving a person’s beliefs – though this often results in them going away and believing harder than before. Ashley Barnett brought up an example earlier that intelligent people are easier to fool because they pay stronger attention – James Randi says academics are easier to fool because they believe that if they can’t work it out, then, since they are so smart, it must be a special power. Intelligent people will find smart ways to rationally justify their beliefs. So sometimes it’s not easy to change people’s minds even when you have good evidence.

 

14:36 Chris Guest discusses approaches to debating climate change deniers – using existing models that make predictions, find out which assumptions the deniers disagree with, and ask for an alternative model that gives better predictions. The deniers might then claim that the “climate alarmists” get more funding to create the models, as an explanation of why they have the more robust models.

15:35 Q: How do people assess the nature of evidence?
Chris Guest: Instead of going head to head with someone who believes in homeopathy, say ‘let’s go to a homeopathy open day and listen to the talks’ – then let people go through their own process of discovery.

 

17:37 How do people become rational – how do people go from magical thinking to being rational? Is it a turning point, or do they slowly drift into it?

 

Acoustics made it difficult to hear people asking questions

“Where skeptics get interested is whether people are getting what they paid for” – Terry Kelly

 

 


Many thanks for watching!
– Support SciFuture via Patreon
– Please subscribe to the SciFuture Channel:
Science, Technology & the Future website:

Philosophy of Science – What & Why?

Interview with John Wilkins:

Every so often, somebody will attack the worth, role or relevance of philosophy on the internets, as I have discussed before. Occasionally it will be a scientist, who usually conflates philosophy with theology. This is as bad as someone assuming that because I do some philosophy I must have the Meaning of Life (the answer is, variously, 12 year old Scotch, good chocolate, or dental hygiene).

But it raises an interesting question or two, the most obvious being: what is the reason to do philosophy in relation to science? (This also sets up the context in which you can answer questions like: are there other ways to find truth than science?) So I thought I would briefly give my reasons for that.

When philosophy began around 500BCE, there was no distinction between science and philosophy, nor, for that matter, between religion and philosophy. Arguably, science began when the pre-Socratics started to ask what the natures of things were that made them behave as they did, and equally arguably the first actual empirical scientist was Aristotle (and, I suspect, his graduate students).

But a distinction between science and philosophy began with the separation between natural philosophy (roughly what we now call science) and moral philosophy, which dealt with things to do with human life and included what we should believe about the world, including moral, theological and metaphysical beliefs. The natural kind was involved in considering the natures of things. A lot gets packed into that simple word, nature: it literally means “in-born” (natus) and the Greek physikos means much the same. Of course, something can be in-born only if it is born that way (yes, folks, she’s playing on some old tropes here!), and most physical things aren’t born at all, but the idea was passed from living to nonliving things, and so natural philosophy was born. That way.

In the period after Francis Bacon, natural philosophy was something that depended crucially on observation, and so the Empiricists arose: Locke, Berkeley, Hobbes, and later Hume. That these names are famous in philosophy suggests something: philosophy does best when it is trying to elucidate science itself. And when William Whewell in 1833 coined the term scientist to denote those who sought scientia or knowledge, science had begun its separation from the rest of philosophy.

Or imperfectly, anyway. For a start, the very best scientists of the day, including Babbage, Buckland and Whewell himself, wrote philosophical tomes alongside theologians and philosophers. And the tradition continues to the present day, as in the recent book by Stephen Hawking in which he declares the philosophical enterprise is dead, a decidedly philosophical claim to make. Many scientists seem to find the doing of philosophy inevitable.

So why do I do philosophy of science? Simply because it is where the epistemic action is: science is where we do get knowledge, and I wish to understand how and why, and the limitations. All else flows from this for me. Others I know (and respect) do straight metaphysics and philosophy of language, but I do not. It only has a bite if it gives some clarity to science. I think this is also true of metaphysics, ethics and such matters as philosophy of religion.

Now there are those who think that science effectively exhausts our knowledge-gathering. This, too, is a philosophical position, which has to be defended, and elaborated (thus causing more philosophy to be done). I don’t object to that view, but for me, it is better to be positive (say that science gives us knowledge even if other activities may do) than to be negative (deny that anything but science gives us knowledge). It may be that we get to the latter position after considering the former; if so, that would be a philosophical result.

I am fascinated by science. It allows us to do things no ancient Greek (or West Semitic) thinker would have been even able to conceive of. It means we make fewer mistakes. Philosophy is, and ought only to be, in the service of knowledge (I’m sure somebody has said that before). Science is a good first approximation of that.

But scientists who reject philosophy, as if that very rejection is not a philosophical stance (probably taken unreflectively or on the basis of half-digested emotive appeals), them I have no time for as philosophers. They should perhaps stick to their last and not make fools of themselves.

Not, of course, that every philosopher is worth reading. Sturgeon’s Law (90% of everything is crap) applies here too. But lest any scientist get too smug, recall that a great many scientific papers are never cited again. In philosophy, that ratio is perhaps lower… probably almost down to the Sturgeon limit.

See this post by John Wilkins at Evolving Thoughts: http://evolvingthoughts.net/2011/07/why-do-philosophy-of-science.

Life, Knowledge and Natural Selection – How Life (Scientifically) Designs its Future – Bill Hall

Studies of the nature of life, evolutionary epistemology, anthropology and the history of technology lead me reluctantly to the conclusion that Moore’s Law is taking us towards some kind of post-human singularity. The presentation explores fundamental aspects of life and knowledge, based on a fusion of Karl Popper’s (1972) evolutionary epistemology and Maturana and Varela’s (1980) autopoietic theory of life, to show that knowledge and life must co-evolve, and that this co-evolution leads to exponential growth of knowledge and of capabilities to control a planet (and the Universe???). The initial pace, based on changes to genetic heredity, is geologically slow. The addition of living cognition’s capacity for cultural heredity changes the pace of significant change from millions of years to millennia. Externalization of cultural knowledge to writing and printing increases the pace to centuries and decades. Networking virtual cultural knowledge at light speed via the internet increases the pace to years or even months. In my lifetime I have seen first-generation digital computers evolve into the Global Brain.

As long as the requisites for life are available, competition for limiting resources inevitably leads to increasing complexity. Through most of the history of life, a species’/individual’s knowledge was embodied in its dynamic structure (e.g., of the nervous system) and in the genetic heritage that controls the development and regulation of that structure. Some vertebrates evolved sufficient neural complexity to support the development of culture and cultural heredity. A few lineages, such as corvids (crows and their relatives) and two largely arboreal primate lineages (African apes and South American capuchin monkeys), independently evolved cultures able to transmit the knowledge to make and use increasingly complex tools from one generation to the next. Hominins, a lineage of tool-using apes forced by climate change around 4-5 million years ago to learn how to survive by extractive foraging and hunting on grassy savannas, developed increasingly complex and sophisticated tool-kits for hunting and gathering, such that by around 2.5 million years ago our ancestors had replaced most species of what was originally a substantial ecological guild of large carnivores.

Tools extend the physical and cognitive capabilities of the tool-users. In an ecological sense, hominin groups are defined by their shared survival knowledge, and inevitably compete to control limiting resources. Competition among groups led to the slow development of increasingly better stone and organic tools, and of a genetically based cognitive capacity to make and use tools. Homo heidelbergensis, which split into African (H. sapiens), European (Neanderthal) and Asian (Denisovan) lineages some 200,000 years ago, evolved complex linguistic capabilities that greatly increased the bandwidth for transmitting cultural knowledge. Some 70,000 years ago H. sapiens (“humans”) exited Africa to spread throughout Eurasia and quickly replace all other surviving hominin lineages. By ~50,000 years ago humans were making complex tools like bows and arrows, which put a premium on the capacity to remember the rapidly increasing volume of survival knowledge. At some point before the end of the last Ice Age, mnemonic tools were developed (“method of loci”, “songlines”) to extend the capacity of living memory by at least one order of magnitude; and some 10,000 years ago, as agriculture became practical in the “Fertile Crescent”, monumental theaters of the mind (such as Göbekli Tepe and Stonehenge) and specialized knowledge-management guilds such as the Masons provided the cultural capacity to enable the Agricultural Revolution. Between 7,000 and 4,000 years ago, technologies for writing and the use of books and libraries enabled the storing and sharing of cultural knowledge in external material form, facilitating the emergence of empires and nation-states.
Around 550 years ago printing enabled the mass production of books and the widespread dissemination of bodies of knowledge, fuelling the Reformation and the Scientific and Industrial revolutions. Around 60 years ago the invention of the digital computer increasingly externalized cognitive processes and controls over other kinds of tools. Databases, word processing and the internet, developed over the last ~30 years, enabled knowledge to be created in the virtual world and then shared globally at light speed. Personal technologies developed in the last 10 years (e.g., smartphones) are allowing the emergence of post-human cyborgs. Moore’s Law of exponential growth suggests the capacity for a few orders of magnitude more before we reach the outer limits of quantum computing.

What happens next is anyone’s guess.

Slides available here:

 

 

The Shaky Foundations of Science: An Overview of the Big Issues – James Fodor

Many people think about science in a fairly simplistic way: collect evidence, formulate a theory, test the theory. By this method, it is claimed, science can achieve objective, rational knowledge about the workings of reality. In this presentation I will question the validity of this understanding of science. I will consider some of the key controversies in philosophy of science, including the problem of induction, the theory-ladenness of observation, the nature of scientific explanation, theory choice, and scientific realism, giving an overview of some of the main questions and arguments from major thinkers like Popper, Quine, Kuhn, Hempel, and Feyerabend. I will argue that philosophy of science paints a much richer and messier picture of the relationship between science and truth than many people commonly imagine, and that a familiarity with the key issues in the philosophy of science is vital for a proper understanding of the power and limits of scientific thinking.

Slides to the presentation available here: