Posts

The Singularity & Prediction – Can there be an Intelligence Explosion? – Interview with Marcus Hutter

Can there be an intelligence explosion – can intelligence explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. What could it mean for intelligence to explode?
The paper provides a more careful treatment of what intelligence actually is, separates speed from intelligence explosion, compares what super-intelligent participants and classical human observers might experience and do, discusses immediate implications for the diversity and value of life, considers possible bounds on intelligence, and contemplates intelligences right at the singularity.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. – Irving John Good, ‘Speculations Concerning the First Ultraintelligent Machine’ (1965)

Paper: M. Hutter, ‘Can Intelligence Explode?’, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf
http://arxiv.org/abs/1202.6177

See also:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/

High Impact Technologies with Andrew Barron

Well that’s an open-ended question: What technologies will have high impact in the future?
I think what we are seeing at the moment – and we are seeing it quite rapidly – is the fusion of the biological sciences and the information sciences, period – so it goes beyond AI.
We’re seeing a capacity to manipulate and rewrite genomes – again, we actually need the involvement of AI to do that properly – but we really are truly seeing a fusion of the biological and information sciences which is opening up absolutely transformative technologies – that we probably can’t quite properly predict or name currently – but I imagine that the future will see explosive growth in this area and the emergence of disciplines that we can’t even imagine currently.

Andrew Barron – High Impact Technology: New Capacities to Manipulate Genomes
Yep.. exactly… exactly – so new capacities to manipulate genomes – and equally we are getting much smarter about the risks of that and much more careful with that – but we are realizing also that the genome itself is highly self-organized and massively data-heavy – and yet this fusion of biology and information sciences is liberating entirely new disciplines – and it’s happening at such a pace.  I mean, just in terms of my life as a scientist – we’ve gone from – when I was at school we were told that the human genome was impossible – informatically impossible – there would never be enough computing power in the world to sequence the human genome.  Now we’re sequencing genomes for just over $1000 very very quickly and easily – our challenge now is to be intelligent in what to do with that data.

Andrew Barron – High Impact Technology: Significant Strides in Technology
What’s interesting about the iPhone is not the technology itself – it’s the way that it has changed human behavior. So we’ve suddenly adapted very, very rapidly to a carryable device that enables us to have immediate communication, immediate access to databases and reference libraries, and the capacity to store endless amounts of images if we choose to do so – and we’ve adapted to that seamlessly, to the point where people feel lost and panic if their phone is broken or taken away from them. That’s the more interesting impact of the iPhone – and I think what that says is that we’re going to see humans adapt very quickly and easily to other forms of wearable or insertable technologies. I think we’ve shown by the iPhone example that we have a capacity to embrace that kind of change – if it offers convenience and ease and improves our connectivity and quality of life.
In terms of technology though, I think that the biggest strides will come from biology – from research that fuses biology and technology. I think that we are on the cusp of that – the more interesting technological changes will come not through simple technology but through an understanding of how our brains work – by understanding the human brain. If we can actually crack that and then interface it with technology, that will yield completely transformative technological solutions.
So in the far future could we see humans being a mixture of technology and organic solutions – would we basically see a re-imagining of humanity? Again, I see no reason why not in a far future.

Andrew Barron is an Associate Professor in the Department of Biological Sciences at Macquarie University. He and his team at Macquarie are exploring the neurobiology of major behavioural systems such as memory, goal-directed behaviour and stress from a comparative and evolutionary perspective. In 2015 Andrew was awarded an ARC Future Fellowship to develop a computational model of the honey bee brain.

Andrew’s PhD (Department of Zoology, University of Cambridge 1999) considered the possibility of the retention of memory through metamorphosis in Drosophila. Prior to his move to Macquarie in 2007 Andrew had the opportunity to work with and be mentored by Prof. Ben Oldroyd (University of Sydney), Prof. Gene Robinson (University of Illinois), Prof. Mandayam Srinivasan and Prof. Ryszard Maleszka (Australian National University).

Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Panel – Ray Kurzweil, Stuart Russell, Max Tegmark, Harry Shum; moderator: Margaret Boden

Should science and society welcome ‘the singularity’ – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?
The discussion has been growing over decades, institutes dedicated to solving AI friendliness have popped up, and more recently the ideas have found popular advocates. Certainly super-intelligent machines could help solve classes of problems that humans struggle with, but if not designed well they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity. We shouldn’t just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. out-of-control fires) but with AI we may only get one chance.

Harry Shum says we should be focussing on what we believe we can develop with AI in the next few decades. We find it difficult to talk about AGI. Most of the social fears are around killer robots.

Maggie Boden

Maggie Boden poses an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t get human values well enough they may end up cooking their owner’s cat – this could irreparably tarnish the AI and home robot industry.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so we can’t ignore the question of what goals to give the system if we are building highly intelligent AI. We need to make AI systems understand what humans really want, not what they say they want.

Harry Shum says that an important ethical question for AI systems to address is data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT) Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 Dec.

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation: a broad-level roadmap simulation; the bottleneck is the lack of imaging technology – we don’t know what level of precision we need to reverse engineer biological intelligence. Ed Boyden – optimal brain imaging.
Not by brain emulation (engineering/computer science/cognitive science): the bottleneck is funding. People in the field believe they know how to do it. To prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will win out, though what form the singularity takes will likely depend on the approach we use to build AGI. We don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that scans the brain and its chemical constituents. 25 years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. As for getting to human-level AI by brute force or through cognitive psychology – the cognitive science is not in very good shape. A third approach is a hybrid of rough brain augmentation (through technology we are already using, like iPads and computers etc), technological extension and uploading. If brain augmentation through technology and uploading are the first step in a singularity, then humans are included in the equation along with humanity’s values, which may help shape a singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction: the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT Lab) believed in exploring neural nets and self-organising systems – the approach of throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach, and the probabilistic, self-organising, fuzzy, learning approach – the scruffy. They have to come together. Theorem proving without any explorative aspect probably won’t succeed. Purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics and systems with the ability to adapt to complex environments.

James Newton-Thomas (time 09:57)
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of the individuals. There is an impediment – we are already being dehumanised with our iPads, where the reason we are having a conversation with others is a fact about our being part of a group, not about the information that can be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of the augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with tech in our world; the more literal cyborg approach is what we are working on now, though we are not yet at the point where the technology is commercialised enough to in principle allow a strong, literal cyborg approach. Ben Goertzel: We could progress with some form of brain vocalisation (picking up words directly from the brain), allowing us to think a Google query and have the results directly added to our minds – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest to help with the Quantified Self – brain interfaces to help people measure things about their bodies (though the hardware is not that good yet).
Ben Goertzel: BCIs are being used for video games – and can detect whether you are aroused and paying attention. Though the resolution is very coarse – it is hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Allan Snyder is using time-varying magnetic fields in helmets to shut down certain areas of the brain, which effectively makes people smarter in narrower domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalise. A brain that becomes too specific at one task does so at the cost of others – the process of generalisation.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, between the earliest days of the planet – with no life, only physical and chemical structures, including volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes (or perhaps starting even earlier, when there was only a dust cloud in this part of the solar system?) – and the present day.

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development
– evolution of new types of evolution
– learning new ways to learn
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence)
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?)
– new forms of learning supporting new forms of evolution
– new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate selection, mating and nurturing can be influenced by much learning)
– … and ways in which social and cultural evolution add to the mix

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.

7th Annual Conference of the Australasian Bayesian Network Modelling Society (ABNMS2015)

November 23 – 24, 2015: Pre-Conference Workshop
November 25 – 26, 2015: Conference

[Official Website Here]

Location: Monash University, Caulfield, Melbourne (Australia)
Promo vid | Contact: abnms2015@abnms.org

Keynote Speakers: The conference organisers are pleased to announce that Dr Bruce Marcot of the US Forest Service, Dan Ababei from Lighttwist Software, Netherlands, and Assoc Prof Jonathan Keith from Monash University will deliver the keynote addresses.

You will be able to register for the tutorials and the conference separately or together.

Bayesian Intelligence blog post about the conference

– Dr. Kevin B. Korb is a Director and co-founder of Bayesian Intelligence, and a reader at Monash University. He specializes in the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Email: kevin.korb (at) bayesian-intelligence.com

– Prof. Ann E. Nicholson is a Director and co-founder of Bayesian Intelligence and a professor at Monash University who specializes in Bayesian network modelling. She is an expert in dynamic Bayesian networks (BNs), planning under uncertainty, user modelling, Bayesian inference methods and knowledge engineering BNs. Email: ann (dot) nicholson (at) bayesian-intelligence (dot) com

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
– Science, Technology & the Future website: http://scifuture.org

Vernor Vinge on the Technological Singularity

What is the Singularity? Vernor Vinge speaks about technological change, offloading cognition from minds into the environment, and the potential of Strong Artificial Intelligence.

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” – Vernor Vinge, ‘The Coming Technological Singularity’, 1993

Vernor Vinge coined and popularised the term “Technological Singularity” in his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended,” such that no current models of reality are sufficient to predict beyond it.

Courtesy of the Imaginary Foundation

Vinge published his first short story, “Bookworm, Run!”, in the March 1966 issue of Analog Science Fiction, then edited by John W. Campbell. The story explores the theme of artificially augmented intelligence by connecting the brain directly to computerised data sources. He became a moderately prolific contributor to SF magazines in the 1960s and early 1970s. In 1969, he expanded two related stories, (“The Barbarian Princess”, Analog, 1966 and “Grimm’s Story”, Orbit 4, 1968) into his first novel, Grimm’s World. His second novel, The Witling, was published in 1975.

Vinge came to prominence in 1981 with his novella True Names, perhaps the first story to present a fully fleshed-out concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others.

 

Vernor Vinge

Image Courtesy – Long Now Foundation

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– Implementing science in a computer – Bayesian methods are the most promising normative standard for doing inductive inference
– The vehicle: causal Bayesian networks – probability distributions over random variables showing causal relationships
– Probabilifying relationships – tests whose evidence can raise the probability
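
Purely as an illustration of what a causal Bayesian network is – a factorised probability distribution over random variables arranged along causal links – here is a minimal sketch in Python. The network (rain, sprinkler, wet grass) and all of its probabilities are invented for this example and are not taken from the talk; inference is done by brute-force enumeration, which is only practical for tiny networks.

```python
# Minimal causal Bayesian network: Rain -> WetGrass <- Sprinkler.
# All numbers are made up for illustration; inference is by brute-force
# enumeration over the joint distribution, which is fine for tiny networks.

from itertools import product

P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
# P(WetGrass=True | Rain, Sprinkler) -- a conditional probability table.
P_WET_GIVEN = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass) from the causal factorisation."""
    p_wet_true = P_WET_GIVEN[(rain, sprinkler)]
    p_wet = p_wet_true if wet else 1.0 - p_wet_true
    return P_RAIN[rain] * P_SPRINKLER[sprinkler] * p_wet

def posterior_rain_given_wet():
    """P(Rain=True | WetGrass=True), summing out Sprinkler."""
    numer = sum(joint(True, s, True) for s in (True, False))
    denom = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return numer / denom

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) = {posterior_rain_given_wet():.3f}")
```

The point of the causal reading is that the factorisation follows the arrows: each variable gets a conditional probability table given its direct causes.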

05:23 does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification.) Sure, we can discuss the difference between the concepts – but what is the difference in implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

09:38 – Does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent; until it’s done it’s an open question. Kevin/John: logically possible, the question is whether we will do it. Greg Restall: Don’t know – can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required; the system will need to represent itself etc (Gödel != mysticism, automation != representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to a formal representation for automating science – which means that the formal representation can’t do everything – so what is the scope of a formal system that can automate science? What will the formal representation and automated science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – & Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it logic (if logic means deduction) is misleading perhaps – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in humans reasoning probabilistically – it introduced the concept of not seeing everything in black and white. People get statistical problems wrong often when they are asked to answer intuitively. Is the technology likely to have a broad impact?
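
A concrete instance of the kind of problem people often get wrong when asked to answer intuitively is base-rate neglect in diagnostic testing. The sketch below is not from the panel – the numbers are invented for illustration – it simply applies Bayes’ theorem.

```python
# Base-rate neglect illustration (all numbers invented).
# A test is 99% sensitive and 95% specific, but the condition is rare (1%).
# Intuition often says a positive result means ~99% chance of the condition;
# Bayes' theorem says otherwise.

prior = 0.01            # P(condition)
sensitivity = 0.99      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition) = 1 - specificity

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.167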

26:26 Human thinking, subjective statistical reasoning – and the mismatch between the public communicative act often sounding like Boolean logic – a mismatch between our internal representation and the tools we have for externally representing likelihoods

29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can’t be automated – there are contrastive explanations and conceptual space theory where you can characterise a question – causal explanation using causal Bayesian networks (and when proposing an explanation it must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science you can automate philosophy

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz – complex notions can be boiled down to simpler primitive notions, and these primitive notions are ground out computationally – they are making genuine discoveries
Weak reading: can some philosophy be automated? – yes
Strong reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years

35:41 If what we’re interested in is representing and automating the production of reasoning formally (not only evaluating it), then as long as the domain is such that we are making claims and we are interested in the inferential connections between the claims, a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding creationism, is it better to think of it as a poor hypothesis or as non-science? – Not an exclusive disjunction: it can start as a poor hypothesis and later become non-science or science – it depends on the stage at the time – science rules things out of contention, and at some point creationism had not yet been ruled out

38:16 (Rohan McLeod) Is economics a science, does it have the potential to be one, or is it intrinsically not possible for it to be a science – and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses: “h1 is good, h2 is bad” – economics may have reducible normative components but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements – “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – it is incredibly complex and hard to model; without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is it gets treated like a natural system (in physics or chemistry) which hides all the values which are getting smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: Yes, Stephen: No) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well-known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science & philosophy? – We need intuitions to drive things forward – intuition in the abduction area, to drive inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to process how to drive a car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – Where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved near or far. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly as do birds, but planes are not birds – they are different

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding etc, the organism will do as it damn well pleases – biology will defeat simple models. Also Hull’s rule – no matter what the law in biology is, there is an exception (including Hull’s law) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t simulate a cell. Kevin objects

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is inductive projectability – we’ve observed it in these cases, therefore we expect it in this case – and that fades out relatively rapidly as the degree of relatedness decreases

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious because there is no such thing as consciousness. Kevin is with Dennett – information-processing functioning is what consciousness supervenes upon.
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off or complains about getting bored, then we are onto something

On Artificial Intelligence – Tim Josling

Tim Josling discusses AI, the Singularity, the way the public might react, whether they would be prepared, John Searle’s Chinese Room thought experiment, and consciousness.

Filmed in the majestic Blue Mountains a couple of hours out of Sydney in Australia. Here are some photos I took while I was there.

Also see Tim’s talk at H+ @Melbourne 2012

Tim’s Bio

Tim Josling studied Law, Anthropology, Philosophy and Mathematics before switching to Computer Science at the dawn of the computer era. He worked on implementing some of the first transactional systems in Australia, later worked on the first ATM networks, was the chief architect for one of the first Internet banking applications in Australia, and designed an early message switching (“middleware”) application in the USA. During his career he specialised in making large-scale applications reliable and fast, saving several major projects from being cancelled due to poor performance and excessive running costs. This led to an interest in the progress of computer hardware and in Moore’s Law, which states that the power of computers grows roughly 10-fold every 5 years. In his spare time he contributed to various open source projects such as the GNU Compiler Collection. After attending the first Singularity Summit in Australia, he decided to retire so he could devote himself full-time to researching Artificial Intelligence, the Technological Singularity and Transhumanism. He is currently working on applying AI techniques to financial and investment applications.
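
As a quick arithmetic check on the “10-fold every 5 years” figure quoted in the bio (assuming the commonly cited 18-month doubling period, which is one reading of Moore’s Law):

```python
# "10-fold every 5 years" is the compound form of the often-quoted
# "doubling roughly every 18 months" version of Moore's law.
doublings_in_5_years = 60 / 18          # months in 5 years / months per doubling
print(2 ** doublings_in_5_years)        # ~10.1
```

So the two phrasings describe essentially the same growth rate.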
Talk: The Surprising Rate of Progress in Artificial Intelligence Research

Subscribe to the SciFuture Youtube Channel:

Science, Technology & the Future

Understanding the New Statistics

Geoff discusses statistics, confidence intervals, Bayesian approaches, meta-analysis, and problems with the use of ‘P’ values in significance testing.

Discussion points:
– Describe your background and involvement in statistics.
– How have orthodox statistics helped psychology (& science)? How has it harmed the science?
– What methods, models and tools do you commonly use in data analysis and why do you choose them?
– What is the dance of the p values? How do you cope with dancing p’s? (a small simulation sketch follows this list)
– What is meta-analysis & how is it done? How have meta-analysts coped with the bias in publishing data and results? What has the profession done about it?
– Confidence intervals help compared to p’s, by providing info about variation. Do they help enough? Why not credible intervals? Do you see a role for Bayesian statistics in day-to-day science?
– Where is statistical inference heading? Is there a next big thing and, if so, what is it?
– Does every student need to learn computer programming (“coding”) nowadays?
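
For readers unfamiliar with the “dance of the p values” mentioned above, the sketch below is a rough illustration rather than Cumming’s own demonstration (his ESCI software does this interactively): it repeats the same two-group experiment many times with a fixed true effect and shows how much the p value jumps between replications. The sample size, effect size, and the use of a z-test (normal approximation) instead of a t-test are all arbitrary simplifications.

```python
# "Dance of the p values": repeat the same two-group experiment many times
# and watch the p value jump around even though the true effect is fixed.
# Standard library only; a z-test (normal approximation) stands in for a
# t-test to keep the sketch dependency-free. All parameters are arbitrary.

import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.5   # true mean difference, in SD units
N = 32              # per-group sample size
REPLICATIONS = 20

def one_experiment():
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / N + statistics.variance(treatment) / N)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p value
    ci = (diff - 1.96 * se, diff + 1.96 * se)                  # ~95% CI for the difference
    return p, ci

for i in range(REPLICATIONS):
    p, (lo, hi) = one_experiment()
    print(f"run {i+1:2d}: p = {p:.3f}   95% CI for difference = [{lo:+.2f}, {hi:+.2f}]")
```

The confidence intervals bounce around too, but they keep the uncertainty visible in a way a lone p value does not – which is roughly the estimation-over-significance-testing argument made in the interview and the book.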

Interviewed by Kevin Korb and Adam Ford at Monash University Clayton.

Geoff’s YouTube Channel can be found here.
About the book:
Cumming, G. (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge

–    Explains estimation, with many examples.
–    Designed for any discipline that uses statistical significance testing.
–    For advanced undergraduate and graduate students, and researchers.
–    Comes with free ESCI software.
–    May be the first evidence-based statistics textbook.
–    Assumes only prior completion of any intro statistics course.
–    See the dance of the confidence intervals, and many other intriguing things.

The main message of the book is summarised in two short magazine articles, in The Conversation, and InPsych.
Here is an interview on ABC Radio.

Buy ‘Understanding the New Statistics’ from Amazon

This is the first book to introduce the new statistics – effect sizes, confidence intervals, and meta-analysis – in an accessible way. It is chock full of practical examples and tips on how to analyze and report research results using these techniques. The book is invaluable to readers interested in meeting the new APA Publication Manual guidelines by adopting the new statistics – which are more informative than null hypothesis significance testing, and are becoming widely used in many disciplines.

Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel and is accessible at www.thenewstatistics.com. The book’s exercises use ESCI’s simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, and detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice.

The book’s pedagogical program, built on cognitive science principles, reinforces learning:

  • Boxes provide “evidence-based” advice on the most effective statistical techniques.
  • Numerous examples reinforce learning, and show that many disciplines are using the new statistics.
  • Graphs are tied in with ESCI to make important concepts vividly clear and memorable.
  • Opening overviews and end of chapter take-home messages summarize key points.
  • Exercises encourage exploration, deep understanding, and practical applications.

This highly accessible book is intended as the core text for any course that emphasizes the new statistics, or as a supplementary text for graduate and/or advanced undergraduate courses in statistics and research methods in departments of psychology, education, human development, nursing, and natural, social, and life sciences. Researchers and practitioners interested in understanding the new statistics, and future published research, will also appreciate this book. A basic familiarity with introductory statistics is assumed.

Many thanks for watching!
Support this website via Patreon
Please Subscribe to the YouTube Channel
Science, Technology & the Future