I’ve just finished reading James Lovelock’s book, Novacene. He wrote it about two years ago, and he’s only recently reached his 100th birthday. So as a thinker and author, I don’t think he’s doing too badly really.
The book is about what he sees as the coming age of hyper-intelligent machines. These will be created by us mere humans when we have finally cracked the mysteries of AI, or artificial intelligence to you and me.
He firmly believes that AI will take us, the world and the Cosmos into a new age: from the Anthropocene, which he considers began with the invention of the steam engine, to what he has christened the Novacene. He also considers that, once the first intelligent machine is activated, its first act will be to reproduce itself, and as it does so, the speed and intelligence of each replication will increase exponentially, until it is so far in advance of its creators that it considers them to be no more intelligent than, say, plants.
Now, all the time I was reading this, something was nagging at the back of my mind. I was reminded of Marvin the android in Douglas Adams’s book, The Hitchhiker’s Guide to the Galaxy. Marvin had a brain the size of a planet but was always deeply depressed. So I began to think about artificial intelligence a bit differently. All the information I have come across about AI seems a bit one-dimensional, in that the focus is on thought and thinking in isolation. It all seems to be about rationality and logic. The goal seems to be a machine that can think for itself, independent of human intervention.
I’ve always thought that logic is only part of intelligence, and I wonder whether alongside AI we also need to be thinking about EI, and by that I mean Emotional Intelligence. It’s not something I’ve ever come across in anything written about autonomous thinking machines.
Human emotions are part of the experiential learning curve of life. As we grow up we need to not only feel but also to express things like joy, anger, grief and fear in appropriate ways. We also need to feel the opposites of these feelings so that we may learn the difference. So for example, how will we know we’re happy if we don’t also feel sad?
I may be waxing philosophical here, but is pure logical thought enough on its own for any sentient creature? Is it possible to create EI alongside AI before this hyper-intelligent machine is switched on? Or might it be that EI can only be developed experientially, through the living of a life, and not simply brought into existence at the flick of a switch?