
Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods as the most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables that represent causal relationships
– probabilifying relationships – tests whose evidence can raise the probability of a hypothesis (see the sketch below)
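A minimal sketch of the idea (mine, not the panel's – the network and numbers are hypothetical) in plain Python: a three-node causal Bayesian network, Rain → WetGrass ← Sprinkler, where conditioning on evidence (wet grass) raises the probability of a cause (rain).

```python
from itertools import product

# Hypothetical CPTs for the causal graph Rain -> WetGrass <- Sprinkler.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass = true | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, factored along the causal graph."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# P(Rain = true | WetGrass = true) by exact enumeration over Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))

print(f"P(rain)             = {P_rain[True]:.2f}")  # 0.20 before evidence
print(f"P(rain | wet grass) = {num / den:.2f}")     # ~0.65 after evidence
```

This is the "probabilifying" move in miniature: the evidence does not deduce rain, it raises its probability.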

05:23 Does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification.) We can certainly distinguish the concepts – but is there any difference at the level of implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

9:38 Does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent; until it’s done it’s an open question – Kevin/John: logically possible, the question is whether we will do it – Greg Restall: don’t know; can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required, and the system will need to represent itself, etc. (Gödel ≠ mysticism; automation ≠ representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to any formal representation for automating science – meaning the formal representation can’t do everything – so what is the scope of a formal system that can automate science? What would the formal representation and an automated-science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the grounds that formal representations are not universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it a logic is perhaps misleading (if logic means deduction) – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in humans reasoning probabilistically – it introduced the idea of not seeing everything in black and white. People often get statistical problems wrong when asked to answer them intuitively. Is the technology likely to have a broad impact?
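A standard illustration of the kind of problem people get wrong intuitively (mine, not from the talk; the numbers are hypothetical): intuition says a positive result from a fairly accurate test makes the condition likely, but with a low base rate Bayes’ theorem says otherwise.

```python
# Hypothetical numbers: 1% base rate, 90%-sensitive test, 5% false positives.
prior = 0.01           # P(condition)
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

# Total probability of a positive result, then Bayes' theorem.
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.2f}")  # ~0.15, far from the intuitive ~0.9
```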

26:26 Human thinking and subjective statistical reasoning – the public communicative act often sounds like Boolean logic – a mismatch between our internal representation and the tools we have for externally representing likelihoods

29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets supply strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions, and there is no reason why asking questions can’t be automated – there are contrastive explanations and conceptual-space theory, where you can characterise a question – causal explanation using causal Bayesian networks (and a proposed explanation must be supported by some explanatory context)
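One very simple way question-asking could be mechanised, following the contrastive-explanation idea above (a sketch under my own assumptions, not anything the panel specified): characterise a question as a contrast over a variable’s outcome space – “why this value rather than that one?”.

```python
def contrastive_questions(variable, observed, outcomes):
    """Turn an observation into 'why X rather than X-prime?' questions."""
    return [
        f"Why is {variable} = {observed} rather than {alt}?"
        for alt in outcomes
        if alt != observed
    ]

# An observed outcome posed against its unrealised alternatives:
for q in contrastive_questions("litmus paper", "red", ["red", "blue"]):
    print(q)  # Why is litmus paper = red rather than blue?
```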

32:29 Automating Philosophy – if you can automate science you can automate philosophy

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz: complex notions can be boiled down to simpler primitive notions, and those primitives ground through computationally – they are making genuine discoveries
Weak reading: can some philosophy be automated? – yes
Strong reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years

35:41 If what we’re interested in is to represent and automate the production of reasoning formally (not only its evaluation), then, so long as the domain is one in which we make claims and care about the inferential connections between those claims, a lot of the properties of reasoning are subject-matter agnostic.
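A minimal sketch of that subject-matter agnosticism (the rules here are invented placeholders): a forward-chaining engine that sees only claims and the inferential connections between them, and works unchanged whatever the claims are about.

```python
def forward_chain(facts, rules):
    """Close a set of claims under rules, each rule a (premises, conclusion) pair."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# The labels could be physics, economics, or metaphysics; the engine doesn't care.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(sorted(forward_chain({"A", "B"}, rules)))  # ['A', 'B', 'C', 'D']
```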

36:46 (Rohan McLeod) Regarding creationism: is it better to think of it as a poor hypothesis or as non-science? – not an exclusive disjunction; something can start as a poor hypothesis and later become non-science or science – it depends on the stage at the time – science rules things out of contention, and at some point creationism had not yet been ruled out

38:16 (Rohan McLeod) Is economics a science, does it have the potential to become one, or is it intrinsically impossible for it to be a science – and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses – “h1 is good, h2 is bad” – economics may have reducible normative components but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements – “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is that economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – it is incredibly complex and hard to model, and without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is that it gets treated like a natural system (as in physics or chemistry), which hides all the values being smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: yes, Stephen: no) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science and philosophy? – intuitions are needed to drive things forward – intuition in the abduction area, driving inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to think about how to drive the car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved sooner or later. Kevin agrees with I.J. Good: recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds that it requires a commitment to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly, as do birds, but planes are not birds – they differ

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding, etc., the organism will do as it damn well pleases. Biology will defeat simple models. Also Hull’s rule – no matter what the law in biology is, there is an exception (including Hull’s law itself) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t even simulate a cell. Kevin objects.

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is inductive projectability – we’ve observed it in these cases, therefore we expect it in this one – and that projectability fades out relatively rapidly as the degree of relatedness falls

1:01:35 Colin Kline – PSYCHE and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious, because there is no such thing as consciousness. Kevin is with Dennett: information-processing function is what consciousness supervenes on.
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off, or complains about getting bored, then we are onto something

The Revolutions of Scientific Structure – Colin Hales

“The Revolutions of Scientific Structure” reveals an empirically measured discovery, by science, about the natural world that is the human scientist. The book’s analysis places science at the cusp of a major developmental transformation caused by science targeting the impossible: the science of consciousness, which was started in the late 1980s by a science practice that cannot, in principle, ever succeed. This impossible science must fail, not because it is malformed, but because it cannot deliver to engineers what is needed to build artificial consciousness.

The book formally reveals how fully expressed scientific behaviour actually has two faces, like the Roman god Janus. Currently we use only one face, the ‘Appearance-Aspect’, and it is measured and properly documented by the book for the first time. Where some scientists accidentally use the other, the two faces are shown to be confused as one. There are actually two fundamental kinds of ‘laws of nature’ that jointly account for the one underlying natural world. The recognition and addition of the second kind, the ‘Structure-Aspect’, is the book’s proposed transformation of science.

The upgraded framework is called ‘Dual Aspect Science’ and is posited as the adult form of science, one that had to wait for computers before it could emerge as a fully formed butterfly from its millennial larval form, single (appearance)-aspect science. Only ‘Structure-Aspect’ computation can scientifically reveal the principles underlying the nature of consciousness — in the form of the consciousness that is/underlies scientific observation. While this outcome ultimately affects all scientists, initially it is neuroscience and physics that together have responsibility for the empirical work needed to introduce Dual-Aspect Science. This is not philosophy. This is empirical science.

More information on this title can be found at: http://www.worldscientific.com/worldscibooks/10.1142/9211#t=aboutBook

Document of presentation available here:

Life, Knowledge and Natural Selection – How Life (Scientifically) Designs its Future – Bill Hall

Studies of the nature of life, evolutionary epistemology, anthropology and the history of technology lead me reluctantly to the conclusion that Moore’s Law is taking us towards some kind of post-human singularity. The presentation explores fundamental aspects of life and knowledge, based on a fusion of Karl Popper’s (1972) evolutionary epistemology and Maturana and Varela’s (1980) autopoietic theory of life, to show that knowledge and life must co-evolve, and that this co-evolution leads to exponential growth of knowledge and of capabilities to control a planet (and the Universe???). The initial pace, based on changes to genetic heredity, is geologically slow. The addition of living cognition’s capacity for cultural heredity changes the pace of significant change from millions of years to millennia. Externalization of cultural knowledge in writing and printing increases the pace to centuries and decades. Networking virtual cultural knowledge at light speed via the internet increases the pace to years or even months. In my lifetime I have seen first-generation digital computers evolve into the Global Brain.

As long as the requisites for life are available, competition for limiting resources inevitably leads to increasing complexity. Through most of the history of life, a species’ or individual’s knowledge was embodied in its dynamic structure (e.g., of the nervous system) and in the genetic heritage that controls the development and regulation of that structure. Some vertebrates evolved sufficient neural complexity to support the development of culture and cultural heredity. A few lineages, such as corvids (crows and their relatives) and two largely arboreal primate lineages (African apes and South American capuchin monkeys), independently evolved cultures able to transmit the knowledge to make and use increasingly complex tools from one generation to the next. Hominins, a lineage of tool-using apes forced by climate change around 4-5 million years ago to learn how to survive by extractive foraging and hunting on grassy savannas, developed increasingly complex and sophisticated tool-kits for hunting and gathering, such that by around 2.5 million years ago our ancestors had replaced most species of what was originally a substantial ecological guild of large carnivores.

Tools extend the physical and cognitive capabilities of the tool-users. In an ecological sense, hominin groups are defined by their shared survival knowledge, and inevitably compete to control limiting resources. Competition among groups led to the slow development of increasingly better stone and organic tools, and of a genetically based cognitive capacity to make and use tools. Homo heidelbergensis, which split into African (H. sapiens), European (Neanderthal), and Asian (Denisovan) lineages some 200,000 years ago, evolved complex linguistic capabilities that greatly increased the bandwidth for transmitting cultural knowledge. Some 70,000 years ago H. sapiens (“humans”) exited Africa to spread throughout Eurasia and quickly replace all other surviving hominin lineages. By ~50,000 years ago humans were making complex tools like bows and arrows, which put a premium on the capacity to remember the rapidly increasing volume of survival knowledge. At some point before the end of the last Ice Age, mnemonic tools were developed (“method of loci”, “songlines”) to extend the capacity of living memory by at least one order of magnitude, and some 10,000 years ago, as agriculture became practical in the “Fertile Crescent”, monumental theaters of the mind (such as Göbekli Tepe and Stonehenge) and specialized knowledge-management guilds such as the Masons provided the cultural capacity to enable the Agricultural Revolution. Between 7,000 and 4,000 years ago, technologies for writing and the use of books and libraries enabled the storing and sharing of cultural knowledge in external material form, facilitating the emergence of empires and nation-states.
Around 550 years ago printing enabled the mass production of books and the widespread dissemination of bodies of knowledge, fuelling the Reformation and the Scientific and Industrial revolutions. Around 60 years ago the invention of the digital computer began increasingly to externalize cognitive processes and controls over other kinds of tools. Databases, word processing and the internet, developed over the last ~30 years, enabled knowledge to be created in the virtual world and then shared globally at light speed. Personal technologies developed in the last 10 years (e.g., smartphones) are allowing the emergence of post-human cyborgs. Moore’s Law of exponential growth suggests capacity for a few more orders of magnitude before we reach the outer limits of quantum computing.

What happens next is anyone’s guess.

Slides available here: