Writing about robots in a world where robots can write
How do I write about the future when the future is here?
It feels a little strange writing this essay. Partly because I am used to seeing the future as something that is yet to come, and partly because I am wrestling with a somewhat essential part of who I am as a writer of speculative fiction.
I am writing about writing about robots in a world where robots can write. Specifically, I am writing about how I write about robots and whether something needs to change in the way I do it.
I was in my twenties when I took a train to Pune to appear for an interview at a journalism school there. On the day of the interview, we (my four friends and I) didn't have to wait long. Most interviews took between five and ten minutes. After mine was over, my friends asked me what had taken me so long. Apparently, I had been in there for the better part of thirty minutes.
We had gotten through the formalities (qualifications, motivations for seeking a career in media, my non-existent experience in the field) pretty quickly, but just as my five minutes were up, the interviewer happened to ask me what my hobbies were. When I answered that I liked reading science fiction, he asked me for details.
I spent the rest of my interview talking about robots. The gentleman who sat before me, it turned out, was also fairly nerdy. We talked about individuality, ego, and social structures. We discussed ethics, humanity, and my views on anthropomorphic symbolism. I passed my interview, not because of qualifications or experience, but because of a hobby that had exposed me to more ways of seeing than our schools often afford.
(I didn't join this school, but that's a different story.)
I am telling you all this because I remember what my views on robots were then, and because I am under pressure right now to change those views.
I remember saying that in science fiction, robots are often a metaphor for the human condition. When we write about robots, we are usually writing about ourselves. Writers often use robots as stand-ins for human beings in different situations - be it a socially marginalised situation such as the one depicted in the game Detroit: Become Human, or an existential one like the one in the animated movie Wall-E. We see ourselves as the robot and the robot as us. We use the robot as our eventual comeuppance (like in Terminator) or as a reflection of how we treat each other (like in Westworld). We have used robots in our fictions to punish ourselves as well as to reward us, in much the same way as we use gods.
But I can't use you as a metaphor for me. You are you, and you deserve to be treated as something more than a symbol, to be treated as the complex being that you are. Besides, you don't even work very well as a metaphor for me, because there are dimensions to you that I strip away when I use you as one. To force-fit you into my invented role of metaphor, I will inevitably render the bulk of your being invisible.
I once used the caste system as a metaphor for the Indian education system with its Science > Commerce > Arts hierarchy. It was pointed out to me that turning a very real social ill into a metaphor for another social ill devalues and oversimplifies it, rendering it secondary, as if it were not a thing in and of itself and did not deserve to stand on its own.
Back when the robot or the intelligent machine was little more than a figment of our imagination and had no social or existential footprint, it was easy to make it into whatever we wanted. Now, however, we live in a world where that future has already arrived. Robots are no longer fictional, and perhaps we can no longer afford to treat them as metaphor-fodder.
Robots now cast very real shadows. Some of these shadows are darker than others. The robot is now an actual taker of jobs, a real destroyer, an uncanny replacer of the human likeness. The nightmares and utopias of science fiction aren't limited to the page and the screen anymore.
The question before me, therefore, is about responsibility. If I portray the robot as a victim in a story, am I having a real-world impact on how society perceives the suffering of human beings who may have lost their jobs to AI? If I make intelligent machines into villains in my stories, am I not adding to the already existing 'machines will rule us' paranoia? By writing a story where a robot wishes to be free from humanity, am I showing my commitment to freedom as a human value, or am I advocating that a dangerous force be allowed to roam uncontrolled among humans? If I do the opposite and say machines should be kept under strict control, am I making sure that I will be remembered as a supporter of slavery centuries from now?
I don't know the answers to any of these questions. But while once these questions used to be little more than mental masturbation that could impress an interviewer, they now have practical significance. If the robot's day in the sun as a metaphor is done, where does it go from here? And where do I, as someone who imagines futures where machines live among us, go from here?
The future is and has always been a mysterious place. Its appeal lies in the fact that we don't know what it contains. We obsess over it not only because it is unknown, but also because it is inevitable, like a dark forest we cannot go around.
I feel like I am in this forest now, and I can't see past the deep shadows its tall trees are casting. But of course this is not the end. The future, invisible though it may be, is always there. I am probably just going to have to squint now. Everything I have come to think about robots and intelligent machines is the product of a time when thinking about them was relatively risk-free.
It is harder now, and perhaps that is a good thing.