I happen to love science fiction, and lately the stuff that really grabs me is anything to do with the classic question, “What does it mean to be human?” I have been enjoying The Walking Dead, Humans, Ex Machina, Black Mirror, Orphan Black, and other recent additions to the genre (for those of you who think The Walking Dead is just a zombie show, I beg to differ). These explorations of what makes humanity distinct from animals, robots, aliens, and monsters aren’t new, but they seem increasingly relevant.

I tend to be a bit of a Romantic about human nature, believing that there are some elements that will never be replaceable by strong AI. I’m actually working on some short science fiction along those lines, but that is literally another story. On the other hand, I also believe that technology, while itself morally neutral, can be used to accelerate humanity toward both good and bad ends. (I am leaving the terms “good” and “bad” generic here so you can come to your own conclusions before we circle back to them.)
So that brings me to the question at hand: As humans innovate, are we correctly anticipating how humans will leverage new technology in good and bad ways?
Here are some assumptions I have about human nature, based primarily on my own limited experience. Take them with several grains of salt:
- power brings out the best and worst of human nature, and in anonymous environments, more of the worst
- people generally desire community with other people who understand and love them
- people generally desire to be a part of a story which is larger than their own
Those assumptions raise some questions:

- Will people evolve or progress beyond these traits?
- Would strong AI desire the same things as us or somehow escape typical human desires?
- If “progress” carries us toward something that humans would classify as “not good,” is it still progress?
- What is “the greater good”?