Vernor Vinge on the Turing Test, Artificial Intelligence

Preface

On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea behind the Turing Test: that machines may someday be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas* and for his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_, in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not, the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (author of “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI differ from your own and those of other modern-day theorists? If so, in what ways?

VV: My guess is there is not much difference.

 

AF: Have Alan Turing and his work influenced your writing? If so, how?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science-fiction I was reading! Of course, those folks were often influenced by what was going on in the science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era“?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first novella Bookworm Run! was themed around brute forcing simpler-than-human-intelligence to super-intelligence (in it a chimpanzee’s intelligence is amplified).  You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute forcing simple cognitive models? If so do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like, bridge between reductionism and the inner feelings most people have about their own self-awareness.  Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? – see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.
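These setup variables amount to a parameterizable protocol. Here is a minimal sketch in Python (an editorial illustration, not from the interview; the class name, thresholds, and profile strings are all hypothetical):

```python
from dataclasses import dataclass, field

# Hypothetical illustration: the setup variables Vinge lists, captured
# as a single test configuration object.
@dataclass
class TuringTestSetup:
    examiner_profile: str                 # (a) e.g. "child", "adult", "expert"
    duration_minutes: int                 # (b) how long the test runs
    num_examiners: int                    # (c) independent judges participating
    domain_restrictions: list = field(default_factory=list)  # (d) allowed topics; empty = unrestricted

    def is_demanding(self) -> bool:
        # Rough proxy: long, broad, multi-judge setups are harder to game.
        return (self.duration_minutes >= 60
                and self.num_examiners >= 3
                and not self.domain_restrictions)

# Penrose's "pass grade" above occupies the extreme corner of this space:
# unrestricted domain, years-long duration, a skeptical expert examiner.
print(TuringTestSetup("expert adult", 120, 5).is_demanding())  # True
```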

 

AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind):
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
From this he concludes that humans are not reducible to Turing machines. Do you agree with Penrose’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.
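For the formal core of the argument (an editorial gloss in standard notation): Gödel’s first incompleteness theorem says that for any consistent, effectively axiomatized formal system F strong enough for arithmetic, one can construct a sentence G_F such that

```latex
F \vdash \bigl( G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr),
\qquad \text{and if } F \text{ is consistent, then } F \nvdash G_F .
```

Penrose’s step from this to “humans are not Turing machines” requires that a human mathematician can see that G_F is true, which in turn presumes knowing that F is consistent – the point at which many logicians part company with him.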

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of it set the bar quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will give rise to various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.
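To make “tracking” concrete, here is a minimal editorial sketch (the data series and names are invented for illustration) of the kind of trend computation such monitoring implies:

```python
import statistics

# Hypothetical yearly values for one of Vinge's indicators, e.g. the share
# of "classic human" (unaugmented, untelamed) employment. Made-up data.
classic_human_employment_share = [0.93, 0.91, 0.88, 0.83, 0.76]

def trend_slope(series):
    """Least-squares slope per time step: negative = indicator declining."""
    n = len(series)
    xs = range(n)
    mean_x = statistics.mean(xs)
    mean_y = statistics.mean(series)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

print(trend_slope(classic_human_employment_share))  # -0.042 per year

# The same slope check could run over any of the indicator series above;
# a sustained steepening would be the signal of interest.
```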

 

AF: If you had a TARDIS and could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI? What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared (this includes opportunities for risk mitigation)?

VV: Really, risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech. For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to. To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning would suddenly be beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence: Paths, Dangers, Strategies’?

VV: I think it’s an excellent discussion, especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo Award-winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), and The Cookie Monster (2004).

Also see video interview with Vernor Vinge on the Technological Singularity.

Joscha Bach – GPT-3: Is AI Deepfaking Understanding?

Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more!


Discussion points:
02:40 What’s missing in AI atm? Unified coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand – what’s missing?
08:35 Symbol grounding – does GPT-3 have it?
09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
11:13 GPT-3 temperature parameter. Strange output?
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can’t write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data – video, audio, text etc
26:00 GPT-3 a universal chat-bot – conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience – it can’t plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
47:24 GPT3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion input parameters – Amazon may be doing something similar – future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world – no reason why GPT-3 can’t be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation – Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can’t describe a consistent reality without contradictions
1:06:04 Stevan Harnad’s understanding of computation
1:08:32 Causation / answering ‘why’ questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain – would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit – spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models – parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features – predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 ‘Category’ is a useful concept – gradients are often hard to compute – so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
1:44:10 Are g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If the response is an intelligible & testable implementation plan, then it passes?
1:49:18 The term ‘general intelligence’ inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color – natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting currently untestable theories/ideas (that may be testable by science once we develop the precise enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction

Exciting progress in Artificial Intelligence – Joscha Bach

Joscha Bach discusses progress made in AI so far, what’s missing in AI, and the conceptual progress needed to achieve the grand goals of AI.
Discussion points:
0:07 What is intelligence? Intelligence as the ability to be effective over a wide range of environments
0:37 Intelligence vs smartness – interesting models vs intelligent behavior
1:08 Models vs behaviors – i.e. Deepmind – solving goals over a wide range of environments
1:44 Starting from a blank slate – how does an AI see an Atari Game compared to a human? Pac Man analogy
3:31 Getting the narrative right as well as the details
3:54 Media fear mongering about AI
4:43 Progress in AI – how revolutionary are the ideas behind the AI that led to commercial success? There is a need for more conceptual progress in AI
5:04 Mental representations require probabilistic algorithms – to make further progress we probably need different means of functional approximation
5:33 Many of the new theories in AI are currently not deployed – we can assume a tremendous shift in every day use of technology in the future because of this
6:07 It’s an exciting time to be an AI researcher

 

Joscha Bach, Ph.D. is an AI researcher who has worked and published on cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon.

 

Ethical Progress, AI & the Ultimate Utility Function – Joscha Bach

Joscha Bach on ethical progress and AI – it’s fascinating to ask ‘What’s the ultimate utility function?’ Should we seek the answer in our evolved motivations?

Discussion points:
0:07 Future directions in ethical progress
1:13 Pain and suffering – concern for things we cannot regulate or change
1:50 Reward signals – we should only get them for things we can regulate
2:42 As soon as minds become mutable, ethics changes dramatically – an artificial mind may be like a Zen master on steroids
2:53 The ultimate utility function – how can we maximize the neg-entropy in this universe?
3:29 Our evolved motives don’t align well to this ultimate utility function
4:10 Systems which only maximize what they can consume – humans are like yeast

 


 

 

The Grand Challenge of Developing Friendly Artificial Intelligence – Joscha Bach

Joscha Bach discusses problems with achieving AI alignment, the current discourse around AI, and inefficiencies of human cognition & communication.

Discussion points:
0:08 The AI alignment problem
0:42 Asimov’s Laws: problems with giving AI rules to follow – it’s a form of slavery
1:12 The current discourse around AI
2:52 Ethics – where do they come from?
3:27 Human constraints don’t apply to AI
4:12 Human communication problems vs AI – communication costs between minds is much larger than within minds
4:57 AI can change its preferences


Cognitive Biases & In-Group Convergences – Joscha Bach

Joscha Bach discusses biases in group think.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group


AI, Consciousness, Science, Art & Understanding – Joscha Bach

Here Joscha Bach discusses consciousness, its relationship to qualia, and what an AI or a utility maximizer would do with it.

What is consciousness? “I think under certain circumstances being conscious is an important part of a mind; it’s a model of a model of a model basically. What it means is our mind (our neocortex) produces this dream that we take to be the world based on the sensory data – so it’s basically a hallucination that predicts what next hits your retina – that’s the world. Out there, we don’t know what this is… The universe is some kind of weird pattern generator with some quantum properties. And this pattern generator throws patterns at us, and we try to find regularity in them – and the hidden layers of this neural network amount to latent variables that are colors, people, sounds, ideas and so on… And this is the world that we subjectively inhabit – that’s the world that we find meaningful.”

… “I find theories [about consciousness] that make you feel good very suspicious. If there is something that is like my preferred outcome for emotional reasons, I should be realising that I have a confirmation bias towards this – and that truth is a very brutal vector.”

OUTLINE:
0:07 Consciousness and its importance
0:47 Phenomenal content
1:43 Consciousness and attention
2:30 When AI becomes conscious
2:57 Mary’s Room – the Knowledge Argument, art, science & understanding
4:07 What is understanding? What is truth?
4:49 What interests an artist? Art as a communicative exercise
5:48 Thomas Nagel: What is it like to be a bat?
6:19 Feel good theories
7:01 Raw feels or no? Why did nature endow us with raw feels?
8:29 What are qualia, and are they important?
9:49 Insight addiction & the aesthetics of information
10:52 Would a utility maximizer care about qualia?


Professor Peter Doherty – COVID19 Pandemic: Research & Action

Fascinating interview with Nobel Laureate Professor Peter Doherty on the COVID-19 pandemic: the nature of COVID-19, where it came from, its similarities to influenza and other coronaviruses (e.g. SARS, MERS), how infectivity works, what we as citizens can do to stay safe and help minimise the burden on our health systems, achieving rapid responses to pandemics, a strategic infection strategy (variolation) in lieu of an actual vaccination, rejuvenating the thymus to help boost our immunity as we age, computer modelling of disease, and what we can hope to have learned from this ordeal after the pandemic is over.


Peter’s book ‘Pandemics: What everyone needs to know’ can be found at Dymocks and Amazon.

 

Biography

Peter Charles Doherty, AC FRS FMedSci is an Australian veterinary surgeon and researcher in the field of medicine. He received the Albert Lasker Award for Basic Medical Research in 1995, the Nobel Prize in Physiology or Medicine jointly with Rolf M. Zinkernagel in 1996 and was named Australian of the Year in 1997. In the Australia Day Honours of 1997, he was named a Companion of the Order of Australia for his work with Zinkernagel. He is also a National Trust Australian Living Treasure. In 2009 as part of the Q150 celebrations, Doherty’s immune system research was announced as one of the Q150 Icons of Queensland for its role as an iconic “innovation and invention”.

https://en.wikipedia.org/wiki/Peter_C._Doherty

https://www.doherty.edu.au
https://www.nobelprize.org/prizes/medicine/1996/doherty/biographical/

#COVID_19 #Coronavirus #Pandemics #COVID19

Gero, Singapore AI startup bags $2.2m to create a drug that helps extend human life

Congrats to Gero for the $2.2m of funding to create a drug that helps extend human life!

I did two interviews with Gero in 2019 at Undoing Aging – here, with Peter Fedichev on Quantifying Aging in Large Scale Human Studies:

And here with Ksenia Tsvetkova on Data Driven Longevity:

Doris Yu at Tech In Asia said:

The company observed that as population growth slows down, the average lifespan increases. For example, there will only be 250 million people older than 65 by the end of the decade in China. Countries like Singapore, meanwhile, are not able to attract enough migrants to help offset the aging population.

Gero then wants to provide a medical solution to help extend healthspan as well as improve the overall well-being and productivity of its future customers.

It’s trying to do so by collecting medical and genetic data via a repository of biological samples and creating a database of blood samples collected throughout the last 15 years of patients’ lives. Its proprietary AI platform was able to determine a type of protein that could help with rejuvenation if blocked or removed.

What problem is it solving? “Aging is the most important single risk factor behind the incidence of chronic diseases and death. […] We are ready to slow down – if not reverse – aging with experimental therapies,” Peter Fedichev, co-founder and CEO of Gero, told Tech in Asia.

Explorebit.io wrote:

Gero, a Singapore-based company that develops new drugs for ageing and other complicated disorders using its proprietary developed artificial intelligence (AI) platform, secured $2.2m in Series A funding.

The round, which brought total capital raised since founding to over $7.5m, was led by Bulba Ventures with participation from previous investors and serial entrepreneurs in the fields of pharmaceuticals, IT, and AI. The co-founder of Bulba Ventures Yury Melnichek joined Gero’s Board of Directors. The company will use the funds to further develop its platform.

Led by founder Peter Fedichev, Gero provides an AI-based platform for analyzing clinical and genetic data to identify treatments for some of the most complicated diseases, such as chronic aging-related diseases, mental disorders, and others. The company’s experts used large datasets of medical and genetic information from hundreds of thousands of people acquired via biobanks and created a proprietary database of blood samples collected throughout the last 15 years of the patients’ lives.

Using this data, the platform determined the protein that circulates in people’s blood whose removal or blockage should lead to rejuvenation. Subsequent experiments at National University of Singapore involved aged animals and demonstrated mortality delay (life-extension) and functional improvements after a single experimental treatment. In the future, this new drug could enable patients to recover after a stroke and could help cancer patients in their fight against accelerated ageing resulting from chemotherapy.

The platform is currently also being utilized to develop drugs in other areas: for example, the group’s efforts to find potential therapies for COVID-19, including those that could reduce mortality from complications related to ageing, have already attracted a great deal of attention from large pharmaceutical companies and leading global media organizations.

Posthumanism – Pramod Nayar

Interview with Pramod K. Nayar on #posthumanism ‘as both a material condition and a developing philosophical-ethical project in the age of cloning, gene engineering, organ transplants and implants’. The book ‘Posthumanism’ by Pramod Nayar: https://amzn.to/2OQEA8z Rise of the posthumanities article: https://bit.ly/32Q67Pm
This time, I decided to try itemizing the interview so you can find sections via the timestamp links:
0:00 Intro / What got Pramod interested in posthuman studies?
04:16 Defining the terms – what is posthumanism? Cultural framing of natural vs unnatural. Posthumanism is not just bodily or mental enhancement, but involves changing the relationship between humans, non-human lifeforms, technology and non-living matter. Displacement of anthropocentrism. 
08:01 Anthropocentric biases inherited from enlightenment humanist thinking and human exceptionalism. The formation of the transhumanist declaration, with point 7 of the declaration focusing on the well-being of all sentience. The important question of empathy – not limiting it to the human species. The issue of empathy being a good launching pad for further conversations between the transhumanists and the posthumanists. https://humanityplus.org/philosophy/t…
11:10 Difficulties in getting everyone to agree on cultural values. Is a utopian ideal posthumanist/transhumanist society possible? 
13:25 Collective societies, hive minds, borganisms. Distributed cognition, the extended mind hypothesis, cognitive assemblages, traditions of knowledge sharing. 
16:58 Do the humanities need some form of reconfiguration to shift them towards something beyond the human? Rejecting some of the value systems that enlightenment humanism claimed to be universal. Julian Savulescu’s work on moral enhancement
20:58 Colonialism – what is it? 
21:57 Aspects of enlightenment humanism that the critical posthumanists don’t agree with. But some believe the posthumanists to be enlightenment haters who reject rationality – is this accurate?
24:33 Trying to achieve agreement on shared human values – is vulnerability rather than dignity a usable concept that different groups can agree with? 
26:37 The idea of the monster – people’s fear of what they don’t understand. Thinking past disgust responses to new wearable technologies and more radical bodily enhancements. 
29:45 The future of posthuman morphology and posthuman rights – how might emerging means of upgrading our bodies / minds interfere with rights or help us re-evaluate rights? 
33:42 Personhood beyond the human
35:11 Should we uplift non-human animals? Animals as moral patients becoming moral actors through uplifting? Also once Superintelligent AI is developed, should it uplift us? The question of agency and aspiration – what are appropriate aspirations for different life forms? Species enhancement and Ian Hacking’s idea of ‘Making up people’ – classification and how people come to inhabit the identities that exist at various points in history, or in different environments. https://www.lrb.co.uk/the-paper/v28/n… 
38:10 Measuring happiness – David Pearce’s idea of eliminating suffering and increasing happiness through advanced technology. What does it mean to have welfare or to flourish? Should we institutionalise wellbeing, a gross domestic happiness, world happiness index? 
40:27 Anders Sandberg asks: Transhumanism and posthumanism often do not get along – transhumanism commonly wears its enlightenment roots on its sleeve, and posthumanism often spends more time criticising the current situation than suggesting a way out of it. Yet there is no fundamental reason both perspectives could not simultaneously get what they want: a post-human posthumanist concept of humanity and its post-natural environment seems entirely possible. What is Nayar’s perspective on this win-win vision?
44:14 The postmodern play of endless difference and relativism – what are the good and bad effects of postmodernism on posthumanist thinking?
47:16 What does postmodernism have to offer both posthumanism and transhumanism? 
49:17 Thomas Kuhn’s idea of paradigm changes in science happening funeral by funeral. 
58:58 – How has the idea of the singularity influenced transhumanist and posthumanist thinking? Shifts in perspective to help us ask the right questions in science, engineering and ethics in order to achieve a better future society.
1:01:55 – What AI is good and bad at today. Correlational thinking vs causative thinking. Filling the gaps as to what’s required to achieve ‘machine understanding’. 
1:03:26 – Influential literature on the idea of the posthuman – especially that which can help us think about difference and ‘the other’ (or the non-human) 

How science fails

There is a really interesting Aeon article on what bad science is, and how it fails.

What is Bad Science?
According to Imre Lakatos, science degenerates unless it is both theoretically and experimentally progressive. Can Lakatos’s ‘scientific programme’ approach, which incorporates merits of both Kuhnian and Popperian ideas, help solve this problem?

Is our current research tradition adequate and effective enough to solve seemingly intractable scientific problems in a timely manner (e.g. in foundational theoretical physics or climate science)?
Ideas are cheap, but backing them up with sound hypotheses (main and auxiliary) that predict novel facts, and with experimental evidence aimed at confirming those facts, _is expensive_. Given time and resource constraints, this means, among other things, that ideal experimental progressiveness is sometimes not achievable.

A scientific programme is considered ‘degenerating’ (see the sketch after this list) if:
1) it’s theoretically degenerating because it doesn’t predict novel facts (it just accommodates existing facts); no new forecasts
OR
2) it’s experimentally degenerating because none of the predicted novel facts can be tested (e.g. string theory)
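
A toy formalization of these two criteria (my own illustration, not from the Aeon article; the class and names are invented):

```python
from dataclasses import dataclass

@dataclass
class ResearchProgramme:
    name: str
    predicts_novel_facts: bool        # theoretically progressive?
    novel_predictions_testable: bool  # experimentally progressive?

    def status(self) -> str:
        if not self.predicts_novel_facts:
            return "theoretically degenerating (only accommodates known facts)"
        if not self.novel_predictions_testable:
            return "experimentally degenerating (novel predictions untestable)"
        return "progressive"

# The article's running example: string theory predicts novel structure,
# but (so far) none of it can be tested.
print(ResearchProgramme("string theory", True, False).status())
```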

Lakatos’s ideas (that good science is both theoretically and experimentally progressive) may serve as groundwork for further maturing what it means to ‘do science’ where an existing dominant programme is no longer able to respond to accumulating anomalies – which was the reason why Kuhn wrote about changing scientific paradigms – but unlike Kuhn, Lakatos believed that a ‘gestalt-switch’ or scientific revolution should be driven by rationality rather than mob psychology.
Though a scientific programme which looks like it is degenerating may be just around the corner from a breakthrough…

For anyone seeking an unambiguously definitive demarcation criterion, this is a death-knell. On the one hand, scientists doggedly pursuing a degenerating research programme are guilty of an irrational commitment to bad science. But, on the other hand, these same scientists can legitimately argue that they’re behaving quite rationally, as their research programme ‘might still be true’, and salvation might lie just around the next corner (which, in the string theory programme, is typically represented by the particle collider that has yet to be built). Lakatos’s methodology doesn’t explicitly negate this argument, and there is likely no rationale that can.

Lakatos argued that it is up to individual scientists (or their institutions) to exercise some intellectual honesty, to own up to their own degenerating programmes’ shortcomings (or, at least, not ‘deny its poor public record’) and accept that they can’t rationally continue to flog a horse that appears, to all intents and purposes, to be quite dead. He accepted that: ‘It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk.’ He was also pretty clear on the consequences for those indulging in such self-deception: ‘Editors of scientific journals should refuse to publish their papers … Research foundations, too, should refuse money.’

This article is totally worth a read…

https://aeon.co/essays/imre-lakatos-and-the-philosophy-of-bad-science