Vernor Vinge on the Turing Test, Artificial Intelligence

Preface

On the coat-tails of the blockbuster film “The Imitation Game”, I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the game Turing proposed — now known as the Turing Test — which may someday show that machines can be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas,* and for his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen.  What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_, in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not, the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (author of “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.

 

AF: Have Alan Turing and his work influenced your writing? If so, how?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science fiction I was reading! Of course, those folks were often influenced by what was going on in the science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era”?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first novella Bookworm Run! was themed around brute-forcing simpler-than-human intelligence to super-intelligence (in it a chimpanzee’s intelligence is amplified).  You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute-forcing simple cognitive models? If so, do you think super-intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? – see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.

 

AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind) is:
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences; such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
From this he concludes that humans are not reducible to Turing machines.  Do you agree with Roger’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.
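
For readers who want the bones of the argument, here is a minimal formal sketch; this is a reconstruction of the standard Penrose–Lucas reasoning and the usual objection to it, not Vinge’s words:

Let $F$ be a consistent, recursively axiomatized formal system (equivalently, a Turing machine) whose arithmetic theorems are all true.
Gödel I: there is a sentence $G_F$, asserting its own unprovability in $F$, such that $F \nvdash G_F$ and yet $G_F$ is true.
Penrose's premise: a mathematician presented with $F$ can see that $G_F$ is true.
Conclusion drawn: for every candidate machine $F$, human insight exceeds $F$, so the mind is not equivalent to any Turing machine.
The usual objection: "seeing" that $G_F$ is true requires knowing that $F$ is consistent, which (by Gödel II) $F$ cannot prove about itself, and which no human has verified for any $F$ rich enough to model a brain; this is one way of cashing out Vinge's point that the argument compares a precise mathematical object with an imprecisely specified "human mind".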

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time, seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test have conditions which seem quite weak.  The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will acquire various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.

 

AF: If you had a TARDIS and you could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI?  What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared (this includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to.  To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence: Paths, Dangers, Strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo Award-winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), and The Cookie Monster (2004); The Peace War (1984) was a Hugo finalist.

Also see video interview with Vernor Vinge on the Technological Singularity.

Exciting progress in Artificial Intelligence – Joscha Bach

Joscha Bach discusses progress made in AI so far, what’s missing in AI, and the conceptual progress needed to achieve the grand goals of AI.
Discussion points:
0:07 What is intelligence? Intelligence as the ability to be effective over a wide range of environments
0:37 Intelligence vs smartness – interesting models vs intelligent behavior
1:08 Models vs behaviors – e.g. DeepMind – solving goals over a wide range of environments
1:44 Starting from a blank slate – how does an AI see an Atari Game compared to a human? Pac Man analogy
3:31 Getting the narrative right as well as the details
3:54 Media fear mongering about AI
4:43 Progress in AI – how revolutionary are the ideas behind the AI that led to commercial success? There is a need for more conceptual progress in AI
5:04 Mental representations require probabilistic algorithms – to make further progress we probably need different means of functional approximation
5:33 Many of the new theories in AI are currently not deployed – we can assume a tremendous shift in everyday use of technology in the future because of this
6:07 It’s an exciting time to be an AI researcher

 

Joscha Bach, Ph.D. is an AI researcher who has worked and published on cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon.

 

Ethical Progress, AI & the Ultimate Utility Function – Joscha Bach

Joscha Bach on ethical progress and AI – it’s fascinating to think ‘What’s the ultimate utility function?’ – should we seek the answer in our evolved motivations?

Discussion points:
0:07 Future directions in ethical progress
1:13 Pain and suffering – concern for things we cannot regulate or change
1:50 Reward signals – we should only get them for things we can regulate
2:42 As soon as minds become mutable ethics dramatically changes – an artificial mind may be like a Zen master on steroids
2:53 The ultimate utility function – how can we maximize the neg-entropy in this universe?
3:29 Our evolved motives don’t align well to this ultimate utility function
4:10 Systems which only maximize what they can consume – humans are like yeast

 


 

 

The Grand Challenge of Developing Friendly Artificial Intelligence – Joscha Bach

Joscha Bach discusses problems with achieving AI alignment, the current discourse around AI, and inefficiencies of human cognition & communication.

Discussion points:
0:08 The AI alignment problem
0:42 Asimov’s Laws: Problems with giving AI (rules) to follow – it’s a form of slavery
1:12 The current discourse around AI
2:52 Ethics – where do they come from?
3:27 Human constraints don’t apply to AI
4:12 Human communication problems vs AI – communication costs between minds are much larger than within minds
4:57 AI can change its preferences


Cognitive Biases & In-Group Convergences – Joscha Bach

Joscha Bach discusses biases in group think.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group


AI, Consciousness, Science, Art & Understanding – Joscha Bach

Here Joscha Bach discusses consciousness, its relationship to qualia, and what an AI or a utility maximizer would do with it.

What is consciousness? “I think under certain circumstances being conscious is an important part of a mind; it’s a model of a model of a model, basically. What it means is our mind (our neocortex) produces this dream that we take to be the world, based on the sensory data – so it’s basically a hallucination that predicts what next hits your retina – that’s the world. Out there, we don’t know what this is… The universe is some kind of weird pattern generator with some quantum properties. And this pattern generator throws patterns at us, and we try to find regularity in them – and the hidden layers of this neural network amount to latent variables that are colors, people, sounds, ideas and so on… And this is the world that we subjectively inhabit – that’s the world that we find meaningful.”

… “I find theories [about consciousness] that make you feel good very suspicious. If there is something that is like my preferred outcome for emotional reasons, I should be realising that I have a confirmation bias towards this – and that truth is a very brutal vector.”

OUTLINE:
0:07 Consciousness and its importance
0:47 Phenomenal content
1:43 Consciousness and attention
2:30 When AI becomes conscious
2:57 Mary’s Room – the Knowledge Argument, art, science & understanding
4:07 What is understanding? What is truth?
4:49 What interests an artist? Art as a communicative exercise
5:48 Thomas Nagel: What is it like to be a bat?
6:19 Feel good theories
7:01 Raw feels or no? Why did nature endow us with raw feels?
8:29 What are qualia, and are they important?
9:49 Insight addiction & the aesthetics of information
10:52 Would a utility maximizer care about qualia?


Professor Peter Doherty – COVID19 Pandemic: Research & Action

Fascinating interview with Nobel Laureate Professor Peter Doherty on the COVID-19 pandemic: the nature of COVID-19, where it came from, its similarities to influenza and other coronaviruses (e.g. SARS, MERS), how infectivity works, what we as citizens can do to stay safe and help minimise the burden on our health systems, achieving rapid responses to pandemics, a strategic infection strategy (variolation) in lieu of an actual vaccination, rejuvenating the thymus to help boost our immunity as we age, computer modelling of disease, and what we can hope to have learned from this ordeal after the pandemic is over.


Peter’s book ‘Pandemics: What Everyone Needs to Know’ can be found at Dymocks and Amazon.

 

Biography

Peter Charles Doherty, AC FRS FMedSci is an Australian veterinary surgeon and researcher in the field of medicine. He received the Albert Lasker Award for Basic Medical Research in 1995, the Nobel Prize in Physiology or Medicine jointly with Rolf M. Zinkernagel in 1996 and was named Australian of the Year in 1997. In the Australia Day Honours of 1997, he was named a Companion of the Order of Australia for his work with Zinkernagel. He is also a National Trust Australian Living Treasure. In 2009 as part of the Q150 celebrations, Doherty’s immune system research was announced as one of the Q150 Icons of Queensland for its role as an iconic “innovation and invention”.

https://en.wikipedia.org/wiki/Peter_C._Doherty

https://www.doherty.edu.au
https://www.nobelprize.org/prizes/medicine/1996/doherty/biographical/

#COVID_19 #Coronavirus #Pandemics #COVID19

Gero, Singapore AI startup bags $2.2m to create a drug that helps extend human life

Congrats to Gero for the $2.2m of funding to create a drug that helps extend human life!

I did two interviews with Gero in 2019 at Undoing Aging – here, with Peter Fedichev on Quantifying Aging in Large Scale Human Studies:

And here with Ksenia Tsvetkova on Data Driven Longevity:

Doris Yu at Tech In Asia said:

The company observed that as population growth slows down, the average lifespan increases. For example, there will only be 250 million people older than 65 by the end of the decade in China. Countries like Singapore, meanwhile, are not able to attract enough migrants to help offset the aging population.

Gero then wants to provide a medical solution to help extend healthspan as well as improve the overall well-being and productivity of its future customers.

It’s trying to do so by collecting medical and genetic data via a repository of biological samples and creating a database of blood samples collected throughout the last 15 years of patients’ lives. Its proprietary AI platform was able to determine a type of protein that could help with rejuvenation if blocked or removed.

What problem is it solving? “Aging is the most important single risk factor behind the incidence of chronic diseases and death. […] We are ready to slow down – if not reverse – aging with experimental therapies,” Peter Fedichev, co-founder and CEO of Gero, told Tech in Asia.

Explorebit.io wrote:

Gero, a Singapore-based company that develops new drugs for ageing and other complicated disorders using its proprietary developed artificial intelligence (AI) platform, secured $2.2m in Series A funding.

The round, which brought total capital raised since founding to over $7.5m, was led by Bulba Ventures with participation from previous investors and serial entrepreneurs in the fields of pharmaceuticals, IT, and AI. The co-founder of Bulba Ventures Yury Melnichek joined Gero’s Board of Directors. The company will use the funds to further develop its platform.

Led by founder Peter Fedichev, Gero provides an AI-based platform for analyzing clinical and genetic data to identify treatments for some of the most complicated diseases, such as chronic aging-related diseases, mental disorders, and others. The company’s experts used large datasets of medical and genetic information from hundreds of thousands of people acquired via biobanks and created a proprietary database of blood samples collected throughout the last 15 years of the patients’ lives.

Using this data, the platform identified a protein circulating in people’s blood whose removal or blockage should lead to rejuvenation. Subsequent experiments at the National University of Singapore involved aged animals and demonstrated mortality delay (life extension) and functional improvements after a single experimental treatment. In the future, this new drug could enable patients to recover after a stroke and could help cancer patients in their fight against accelerated ageing resulting from chemotherapy.

The platform is currently also being utilized to develop drugs in other areas: for example, the group’s efforts to find potential therapies for COVID-19, including those that could reduce mortality from complications related to ageing, have already attracted a great deal of attention from large pharmaceutical companies and leading global media organizations.

Posthumanism – Pramod Nayar

Interview with Pramod K. Nayar on #posthumanism ‘as both a material condition and a developing philosophical-ethical project in the age of cloning, gene engineering, organ transplants and implants’. The book ‘Posthumanism’ by Pramod Nayar: https://amzn.to/2OQEA8z Rise of the posthumanities article: https://bit.ly/32Q67Pm
This time, I decided to itemize the interview so you can find sections via the time-signature links:
0:00 Intro / What got Pramod interested in posthuman studies?
04:16 Defining the terms – what is posthumanism? Cultural framing of natural vs unnatural. Posthumanism is not just bodily or mental enhancement, but involves changing the relationship between humans, non-human lifeforms, technology and non-living matter. Displacement of anthropocentrism. 
08:01 Anthropocentric biases inherited from enlightenment humanist thinking and human exceptionalism. The formation of the transhumanist declaration, with part of it focusing on the human and point 7 focusing on the well-being of all sentience. The important question of empathy – not limiting it to the human species. The issue of empathy being a good launching pad for further conversations between the transhumanists and the posthumanists. https://humanityplus.org/philosophy/t… 
11:10 Difficulties in getting everyone to agree on cultural values. Is a utopian ideal posthumanist/transhumanist society possible? 
13:25 Collective societies, hive minds, borganisms. Distributed cognition, the extended mind hypothesis, cognitive assemblages, traditions of knowledge sharing. 
16:58 Do the humanities need some form of reconfiguration to shift them towards something beyond the human? Rejecting some of the value systems that enlightenment humanism claimed to be universal. Julian Savulescu’s work on moral enhancement 
20:58 Colonialism – what is it? 
21:57 Aspects of enlightenment humanism that the critical posthumanists don’t agree with. But some believe the posthumanists to be enlightenment-haters who reject rationality – is this accurate? 
24:33 Trying to achieve agreement on shared human values – is vulnerability rather than dignity a usable concept that different groups can agree with? 
26:37 The idea of the monster – people’s fear of what they don’t understand. Thinking past disgust responses to new wearable technologies and more radical bodily enhancements. 
29:45 The future of posthuman morphology and posthuman rights – how might emerging means of upgrading our bodies / minds interfere with rights or help us re-evaluate rights? 
33:42 Personhood beyond the human
35:11 Should we uplift non-human animals? Animals as moral patients becoming moral actors through uplifting? Also once Superintelligent AI is developed, should it uplift us? The question of agency and aspiration – what are appropriate aspirations for different life forms? Species enhancement and Ian Hacking’s idea of ‘Making up people’ – classification and how people come to inhabit the identities that exist at various points in history, or in different environments. https://www.lrb.co.uk/the-paper/v28/n… 
38:10 Measuring happiness – David Pearce’s idea of eliminating suffering and increasing happiness through advanced technology. What does it mean to have welfare or to flourish? Should we institutionalise wellbeing, a gross domestic happiness, world happiness index? 
40:27 Anders Sandberg asks: Transhumanism and posthumanism often do not get along – transhumanism commonly wears its enlightenment roots on its sleeve, and posthumanism often spends more time criticising the current situation than suggesting a way out of it. Yet there is no fundamental reason both perspectives could not simultaneously get what they want: a post-human posthumanist concept of humanity and its post-natural environment seems entirely possible. What is Nayar’s perspective on this win-win vision? 
44:14 The postmodern play of endless difference and relativism – what is the good and bad of postmodernism on posthumanist thinking? 
47:16 What does postmodernism have to offer both posthumanism and transhumanism? 
49:17 Thomas Kuhn’s idea of paradigm changes in science happening funeral by funeral. 
58:58 – How has the idea of the singularity influenced transhumanist and posthumanist thinking? Shifts in perspective to help us ask the right questions in science, engineering and ethics in order to achieve a better future society. 
1:01:55 – What AI is good and bad at today. Correlational thinking vs causative thinking. Filling the gaps as to what’s required to achieve ‘machine understanding’. 
1:03:26 – Influential literature on the idea of the posthuman – especially that which can help us think about difference and ‘the other’ (or the non-human) 

How science fails

There is a really interesting Aeon article on what bad science is, and how it fails.

What is Bad Science?
According to Imre Lakatos, science degenerates unless it is both theoretically and experimentally progressive. Can Lakatos’s ‘scientific programme’ approach, which incorporates merits of both Kuhnian and Popperian ideas, help solve this problem?

Is our current research tradition adequate and effective enough to solve seemingly intractable scientific problems in a timely manner (e.g. in foundational theoretical physics or climate science)?
Ideas are cheap, but backing them up with sound hypotheses (main and auxiliary) that predict novel facts, and with experimental evidence aimed at confirming those facts, _is expensive_ given time/resource constraints; this means, among other things, that ideal experimental progressiveness is sometimes not achievable.

A scientific programme is considered ‘degenerating’ if:
1) it is theoretically degenerating, because it doesn’t predict novel facts (it just accommodates existing facts) and makes no new forecasts;
OR
2) it is experimentally degenerating, because none of the predicted novel facts can be tested (e.g. string theory).

Lakatos’s ideas (that good science is both theoretically and experimentally progressive) may serve as groundwork for further maturing what it means to ‘do science’ when an existing dominant programme is no longer able to respond to accumulating anomalies – which was the reason Kuhn wrote about changing scientific paradigms – but unlike Kuhn, Lakatos believed that a ‘gestalt-switch’ or scientific revolution should be driven by rationality rather than mob psychology.
Though a scientific programme which looks like it is degenerating may be just around the corner from a breakthrough…

For anyone seeking an unambiguously definitive demarcation criterion, this is a death-knell. On the one hand, scientists doggedly pursuing a degenerating research programme are guilty of an irrational commitment to bad science. But, on the other hand, these same scientists can legitimately argue that they’re behaving quite rationally, as their research programme ‘might still be true’, and salvation might lie just around the next corner (which, in the string theory programme, is typically represented by the particle collider that has yet to be built). Lakatos’s methodology doesn’t explicitly negate this argument, and there is likely no rationale that can.

Lakatos argued that it is up to individual scientists (or their institutions) to exercise some intellectual honesty, to own up to their own degenerating programmes’ shortcomings (or, at least, not ‘deny its poor public record’) and accept that they can’t rationally continue to flog a horse that appears, to all intents and purposes, to be quite dead. He accepted that: ‘It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk.’ He was also pretty clear on the consequences for those indulging in such self-deception: ‘Editors of scientific journals should refuse to publish their papers … Research foundations, too, should refuse money.’

This article is totally worth a read…

https://aeon.co/essays/imre-lakatos-and-the-philosophy-of-bad-science

The Problem of Feral Cats

Feral cats kill about 1 million native animals per day in ecosystems which didn’t evolve to cope with cats.  How should we deal with the problem of feral cats? I hear a lot of ‘kill ’em all’ [1]. When in HK, I noticed a lot of cats with one ear slightly smaller… I then found out that there were vans of vets capturing and de-sexing cats, marking them by taking a small slice of one ear, then releasing them. I thought this was a compassionate approach, though it may have cost more than just killing the cats.
This issue raises some interesting fundamental questions that humans often seem all too ready to answer with our amygdalas – it’s hard not to, it’s in our nature.  We do realize that we humans have had the largest impact on the ecology, and that it’s our own fault feral cats are here.  Despite it being humanity’s fault, the feral cat problem still remains.  As long as there is a population of human pet owners who won’t be 100% responsible for their cats, the feral cat problem will always exist.  A foolproof morality pill for humans and their pets seems quite far off – so in the meantime, we can’t depend on changing cat and human behaviour.

To date, feral cat eradication has only been successful on small islands – not on mainlands.  Surprisingly, it was found by accident that low-level culling of feral cats may increase their numbers, based on observation in the forests of southern Tasmania – “Increases in minimum numbers of cats known to be alive ranged from 75% to 211% during the culling period, compared with pre- and post-cull estimates, and probably occurred due to influxes of new individuals after dominant resident cats were removed.”

A study by CSIRO, which advocates researching and eventually using gene drives, says:

So far, traditional controls like baiting have not been effective on cats. In fact, the only way land managers have been able to stop cats from getting at our native animals is to construct cat-proof fencing around reserve areas, like those managed by Australian Wildlife Conservancy, then removing all the cats inside and allowing native mammals to flourish. This isn’t considered sustainable in the long term and, outside the fences, this perfect storm of predatory behaviour has continued to darken our biodiversity landscape.

The benefit of gene drives is that they can reduce and eventually even eradicate feral cat populations without killing the cats, but by essentially making it so that all feral cat offspring end up male (a toy simulation follows the quote below).

…there is hope on the horizon—gene drive technology. Essentially, gene drives are systems that can bias genetic inheritance via sexual reproduction and allow a particular genetic trait to be passed on from a parent organism to all offspring, and therefore the ability of that trait to disperse through a population is greatly enhanced… Using this type of genetic modification (GM) technology, it becomes theoretically possible to introduce cats into the feral populations to produce only male offspring. Over time, the population would die out due to lack of breeding partners.
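
To make the quoted mechanism concrete, here is a minimal toy simulation in Python of a male-biasing gene drive in a closed population. All numbers (litter size, starting counts, the 10% initial carrier seeding) are illustrative assumptions, not parameters from the CSIRO work:

import random

# Toy model of a male-biasing gene drive in a closed cat population.
# Assumptions: discrete generations, random mating, fixed litter size,
# and a drive that makes every offspring of a carrier male a
# drive-carrying male. For illustration only; not field data.

LITTER_SIZE = 4   # assumed offspring per breeding female per generation
GENERATIONS = 30  # assumed time horizon

def simulate(females=500, wild_males=450, drive_males=50):
    for gen in range(GENERATIONS):
        males = wild_males + drive_males
        if females == 0 or males == 0:
            print(f"gen {gen:2d}: no breeding pairs left; population collapses")
            return
        new_f = new_wm = new_dm = 0
        for _ in range(females):
            # Each female mates with one randomly chosen male.
            sire_is_carrier = random.random() < drive_males / males
            for _ in range(LITTER_SIZE):
                if sire_is_carrier:
                    new_dm += 1   # drive: all offspring are carrier males
                elif random.random() < 0.5:
                    new_wm += 1   # normal mating: 50/50 sex ratio
                else:
                    new_f += 1
        females, wild_males, drive_males = new_f, new_wm, new_dm
        print(f"gen {gen:2d}: females={females:5d}  wild males={wild_males:5d}  drive males={drive_males:5d}")

if __name__ == "__main__":
    random.seed(1)
    simulate()

Under these assumptions the carrier fraction among males rises each generation (roughly f → 2f/(1+f)), so the females, and then the whole population, disappear within roughly a dozen generations even from a 10% initial seeding – without any animal being killed.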

Research into gene drives and broader genetics can help solve a lot of other related problems.  Firstly, I don’t assume that future tech will be able to solve all our problems; though if we sequenced as many species as possible and kept highly accurate and articulate records of ecosystems, this may help to rejuvenate or even revive species and their habitats at some time in the future. Genetics research (especially gene drives and CRISPR) has proven to be very powerful – so from the point of view of wildlife/ecosystem preservation, a catalog-and-revive strategy is surely worthy of serious consideration. One might see it as restoration ecology + time travel.

There are a myriad of considerations, but what are the fundamental, ultimate goals of mitigating the negative impacts of feral cats? Two goals may conflict: species preservation and overall suffering reduction. Should we see single goals as totalizing narratives? In practice perhaps not, but they are great fodder for thought experiments:
1) Species preservation: If this is the ultimate goal, then, acknowledging that the most upstream cause of feral cats is humans, we could impose staggeringly huge fines on people for not being responsible pet owners – and use that to fund studies and programs for ecosystem preservation. Given current technology we can’t resurrect long-gone species, though we can try to more deeply catalog species genomes and ecosystem configurations, with the hope that one day, once we solve human irrationality, we may be in a position to engage in efficient, comprehensive re-wilding programs – incidentally, we may wish to curb the population of pet lovers (for the record, that’s a joke :))
2) Suffering reduction: If this is the ultimate goal, then that really changes things up – there is a ridiculous amount of suffering in the wild, as both David Pearce and Richard Dawkins show. Should we eradicate nature? I’ll stop there.

The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.
– Richard Dawkins, River Out of Eden: A Darwinian View of Life

Interview with David Pearce on ‘Wild animal suffering – Ethics of Wildlife Management and Conservation Biology’

David Pearce advocates for a benign compassionate stewardship of nature, alleviating suffering in the near and long term futures using high technology (assuming that ultimately the whole world will be computationally accessible to the micromanagement needed for benign hyper-stewardship of nature).

https://www.spca.org.hk/en/animal-birth-control/cat-colony-care-programme

[1] A discussion in a FB group ‘Australian Freethinkers’ – the OP was “What do you think about the feral cats in Australia?

I hear farmers shoot them. They are huge.

They can’t be doing anything good for small rare marsupials.

Should we be aiming to kill them all?”