Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds
I got very interested in how you could rehabilitate some of these ideas from good old-fashioned symbolic AI in a contemporary neural network context.
That's the kind of thing that I've been interested in. How can we build artificial intelligence, and hopefully move towards increasingly sophisticated AI, by combining elements of classical symbolic AI and neural network deep learning? That's one technical question I'm very interested in. The technical questions have exercised me since I was a teenager, although back then I didn't have the means to address them properly. Then there are philosophical questions that have exercised me since I was a child. The philosophical questions are all philosophy of mind questions. I'm particularly interested in consciousness.
I have to preface what I'm about to say with something very important, which is that I don't think anybody is about to create human-level artificial intelligence, or anything to which the question of consciousness is applicable, yet. We're a long way from being able to do that. Of course, philosophy speculates about what's possible in principle, and I'm deeply interested in the question of whether we could ever build AI that had consciousness. If we could, what would it be like? I'm very interested in some of these deep questions, like Chalmers' so-called hard problem of consciousness: "How is it that it's possible for something that has experience to arise out of pure matter at all?"
How does that question arise and play out in the context of artificial intelligence?
I have a very Wittgensteinian outlook on this. Is it conscious or not? Intuitively, we feel that there must be a yes or no answer to that; it's not just something that we decide. Similarly, suppose you're looking at a painting and you ask, "Is it beautiful or not?"
It's all relative, depending on your culture and where you're coming from. Consciousness, though, the question of whether it's like something to be something, doesn't seem to be in that kind of space. It seems to be the kind of thing about which there must be a fact of the matter. Either a creature is capable of suffering or it's not.
A Wittgensteinian perspective challenges that and tries to make us rethink the very idea of consciousness, rethink it in terms of the way we use consciousness language, and undermines the idea that there has to be a fact of the matter. Rod Brooks launched his critique of the then-current methodology in artificial intelligence back in the late '80s and early '90s, and many of the points that he made then are now very much mainstream and orthodox.
For example, the idea that we need to deal with whole agents interacting with complex environments if we ever want to build sophisticated AI, that's become more or less an orthodoxy. There were other aspects of his critique that are much less accepted.
The more radical end of it was that he rejected the idea of representations altogether. The two schools of AI at the time were the symbolic approach, which obviously had representation at its heart, and the neural network approach, which also treated representations as an important part of its thinking, though a very different kind of representing: distributed representation. Both of those schools of thought considered representation important.
Today, with neural networks being so important, people in the neural network community talk about representations all the time and don't feel embarrassed by that. That aspect of Brooks' critique is no longer very potent today. He himself has probably backtracked from it a bit.
I had the privilege of having breakfast with Daniel Kahneman at a conference. We were the first two to turn up to breakfast, so we sat down together and I had a chance to chat with him about his work, where he talks about system one and system two. He doesn't talk about consciousness there. He perhaps still prefers to avoid this philosophically difficult concept. There's a lot of wisdom in avoiding that term and the concept. I had this conversation with him, comparing his ideas with Bernie Baars' ideas.
I have this longstanding interest in consciousness. I got particularly interested in Bernie Baars' ideas about global workspace theory. One of the leading contenders for the basis of a scientific theory of consciousness is global workspace theory.
In global workspace theory, you have this clear division between conscious processing and unconscious processing. The idea is that conscious processing is mediated by this global workspace, which activates the whole brain, if we're talking in a neural context. The idea is that when you consciously perceive something, then the influence of that stimulus that you're perceiving pervades the whole brain via the global workspace, via some kind of broadcast mechanism.
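The broadcast idea just described can be sketched as a toy program. This is purely illustrative: the class names, salience function, and single-winner competition are my own simplifications, not part of Baars' formal theory.

```python
# Toy sketch of a global workspace broadcast cycle (illustrative only;
# all names and mechanisms here are hypothetical simplifications).

class Specialist:
    """An unconscious, localized processor competing for the workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts this module has seen

    def salience(self, stimulus):
        # How strongly this specialist responds to the current stimulus.
        return stimulus.get(self.name, 0.0)

    def receive(self, content):
        # Unconscious modules still get the globally broadcast content.
        self.received.append(content)

def global_workspace_step(specialists, stimulus):
    """One cycle: the most strongly activated specialist wins the
    competition, and its content is broadcast to every module."""
    winner = max(specialists, key=lambda s: s.salience(stimulus))
    content = (winner.name, stimulus[winner.name])
    for s in specialists:           # the broadcast: brain-wide influence
        s.receive(content)
    return content

modules = [Specialist("vision"), Specialist("audition"), Specialist("touch")]
broadcast = global_workspace_step(modules, {"vision": 0.9, "audition": 0.4})
# Every module, not just the visual one, now has access to the winning
# content, which is the "global" part of the global workspace.
```

The contrast with unconscious processing is visible in the structure: a stimulus that never wins the competition only affects its own specialist, whereas the winner's content reaches every module.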
That's the essence of it. Whereas, if there's unconscious processing of a stimulus by the brain, then it's just localized processing. Then you can draw up a little table of the properties of conscious processing versus unconscious processing: conscious processing is slow and flexible, whereas unconscious processing is fast but stereotyped, and so on. This little collection of properties matches very closely with system one and system two in Daniel Kahneman's work, which I mentioned to him. I hope that I'm not misrepresenting him there. I'm a big fan of Dan Dennett's thinking.
Probably of all the philosophers around today who are working on consciousness, his views are closest to mine, I'd say. He's also very influenced by Wittgenstein.
He's not a fan of the hard problem-easy problem distinction, and neither am I. It very much comes back to Wittgenstein and the idea that this hard problem-easy problem distinction is an artifact of our language; it's a manufactured philosophical problem that isn't real if you think about the things that are in front of us, which are human beings behaving in complex ways. It's only when we sit down and start to use language in a peculiar way that we start to think that there's some kind of issue here, and there's some kind of metaphysical division between inner and outer, between subjective and objective, and hard problem and easy problem.
It's very difficult territory, so a few trite sentences like that don't help very much. When we're building artificial intelligence, it's natural to ask what intelligence is. If we want to think about human-level artificial intelligence, what do we mean by intelligence?
There's this phrase "artificial general intelligence," AGI, that's current now and wasn't current twenty years ago. That little word "general" is the critical thing. Generality is key to real intelligence. I'm happy to venture a definition of intelligence: intelligence is the ability to solve problems and attain goals in a wide variety of environments. The key there is the variety of environments.
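This definition can be made mathematically precise. One well-known formalization, which on my reading is Legg and Hutter's universal intelligence measure, weights an agent's expected performance across all computable environments (the notation below is my gloss and may differ from the original paper):

```latex
% Legg–Hutter universal intelligence measure (my gloss):
%   E           : the set of computable reward-bearing environments
%   K(\mu)      : the Kolmogorov complexity of environment \mu
%   V^{\pi}_{\mu} : expected cumulative reward of agent \pi in \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Simpler environments (low $K(\mu)$) carry more weight, but the sum ranges over all of them, which is exactly the "wide variety of environments" in the definition above.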
If you have an agent that is able to deal with a completely novel, unseen type of environment and adapt itself to deal with it, then that is a sign of intelligence. You can almost quantify this mathematically. Indeed, this is very much Shane Legg's definition of intelligence. What I mean by an agent is a computer program or a robot. The agent is the bit in the middle that's deciding how to act on the basis of what it perceives. I was very interested in how you could use logic to formalize aspects of common sense: how to represent actions, events, space, and so on, using logical formalisms.
I worked in that area for many years as a postdoc before I got a faculty position at Imperial College again. That's when I started to move sideways a little bit.