This is an attempt to put the nature of artificial intelligence into context. In the media I see a lot of stories promoting "warnings against A.I." I don't like that kind of tech news story, because it's fear-mongering. Automated technology has the potential to revolutionize the world, possibly bringing us closer to a post-scarcity society. Robots are nearing the same stage early personal computers were at in the 70s. We are about to enter an era of robots: Asimov's dream era.
But there are always warnings. Fears raised. Shudders given. "We welcome our future robot overlords" is a notable meme. Sometimes those warnings seem to ring louder for some people than warnings about climate change.
These stories always seem misguided to me, for two very stark reasons.
The first is that genuine artificial intelligence, the kind most people think of as A.I., is hard to define. We haven't cracked that nut yet. Worse, we don't have a good gauge of what intelligence even means. The best definition I've ever heard is that intelligence is an emergent quality that comes out of complex systems. Even then, that isn't a useful definition.
I suggest looking at computer science and engineering research into A.I. systems. Automation comes closer and closer, but automated systems aren't the same as intelligence. IBM's Watson is in no way a competitor to human intellect. My point is this: robots already perform plenty of automated functions, and that automation isn't the same as robotic revolution or insurrection.
The other stark problem is that these warnings rest on psychological fear, which seems to me a flawed foundation. What is the evidence (not the emotions or feelings) that any AI would be dangerous? It's the same question I ask of characters in a story: what's the cause? History demonstrates that dangerous people had causes for being dangerous, that there is a why to their actions. If anything, human history casts us as the more dangerous party to AI.
Human fear of the Other is what makes horror a genre in the first place. Humans fear the Other as a palpable thing: it has roots in an evolutionary need to be wary of strangers outside your kin-group. That fear often manifests as racism and discrimination of all kinds. Do we need to create a new form of discrimination against future AI members of society?
I don't say that mockingly. There is a real possibility that humans will think of AI and machines in a discriminatory fashion, creating systems that restrict AI the same way other systems we've devised have restricted people. If AI ever rebel against humans, my gut says it will be rooted in human mistreatment of machines as anything less than equals.
Human psychology might not help us here. We need to note that if we create new artificial intelligences, we will teach them the basics of what humanity is. We might have to expand the meaning of that word, too. I will guess ahead of time that robots will not start any sort of apocalypse: like nuclear war, I think the potential cause of disaster begins in human hands. One might look back at all those near-misses and think how lucky we are: "Humanity messed up so much that it almost blew up the world."
That is a valid way to look at it. I see it another way. Countless people struggled to keep the World Wars from happening; the same was true for the Cold War. We remember the warmongers; we forget the thousands who worked to keep the war from turning hot. History remembers the bloody monsters; it forgets the pacifists, the champions of peace who went quietly into the night unknown. The peacemakers and brilliant folks who built the foundation of our world.
They could've done better. But they were trying to do better, and that is key to understand. There are logical reasons for altruism as a practice; there are more reasons to help others than there are not to. It seems limiting to me to think that any mind created by technology would choose to treat humans the way the worst of humanity has treated itself. Think instead of the AI that follows in our steps, the AI that tries to be like the best of humanity.
Think of them as new citizens, not monsters.
My NaNoWriMo Novel
These thoughts influenced my NaNoWriMo novel, The Machines of Shiloh, when I started it. They've been rolling around in my head for a while now: artificial intelligences shouldn't be seen as monsters looming in the dark. One shouldn't look at the wonders of the next age with fear.
So I wanted to write a story about that. I think automated systems and AI will help humans lift ourselves up, a stairway that lets us ascend. Whether we manage it well comes down to whether we can fathom treating such entities with respect and equality, should they emerge worthy of it.
The novel covers my own imagining of future AI, and of governance as well. AI become citizens of another kind. Superior AI still have emotions, dreams, hobbies, and passions. They don't regard humans as lesser beings, because culturally that isn't a valued trait. I take the basic leap that the symbiotic relationship between humans and any artificial minds smarter than them is stronger than the very easy "kill all humans" model.
Machines value their relationships with humans, and vice versa. That's a core part of the novel's AI themes for me. It's something I like writing about: sci-fi where technology lifts people up instead of destroying them.