Characters don’t live in an emotional vacuum; they need an emotional journey the reader can relate to, even if the reader is only inferring it. By this I mean that a character doesn’t have to explicitly state their emotional state — they may even be a character who claims not to have one, or whom we infer doesn’t. An example of this would be in sci-fi stories, where we are often presented with robotic, android or otherwise computerised characters who we might assume would not have an emotional state. In The Hitchhiker’s Guide to the Galaxy there’s Marvin, the depressive and sarcastic robot; Star Wars has the friendship of C-3PO and R2-D2; Star Trek has Data and Seven of Nine; then there’s Do Androids Dream of Electric Sheep? and Blade Runner, or even the Tin Man (granted, not technically a robot) searching for his heart in The Wizard of Oz. The list could go on and on.
The general theory is that emotion is something human. Common plot themes include artificial characters aspiring to be human and to feel emotion, the question of whether being able to feel makes an artificial intelligence human, or, in the case of cyborg characters like Seven of Nine, the rediscovery of lost humanity.
If we consider a character like Data from Star Trek: The Next Generation, he is a completely artificial life form who aspires to be more human and to feel emotion. However, despite the fact that he constantly states that he has no emotions, it’s easy for the audience to infer emotion in his relationships with the people around him and with his cat, Spot. As the story progresses he increasingly demonstrates friendship, loyalty, family affection, arguably even love, to name a few. But we might wonder whether he truly feels these emotions or whether we, as the audience, are inferring them. Before he gets his ‘emotion chip’ he never says ‘I love my cat’, yet the audience might suppose from his actions that he does. He refers to Geordi as his friend on a regular basis, but he never states a particular emotion, and yet the idea that they are friends suggests an emotional connection.
At the opposite end of the scale is Do Androids Dream of Electric Sheep?, where Detective Deckard is presented to us as a fully-functioning human — or as fully-functioning as any fictional detective tends to be — but he may not be (if you want to know, you’ll have to read the book). The premise of the novel is that when androids masquerade as humans they betray themselves by an inability to answer emotional questions; however, there may be a type of android that can. This allows the reader to form interpretations: does this mean they are therefore human, if emotion is the only separation between one state and the other? Or does their inherent artificiality mean they don’t qualify for human rights? If it is possible for androids to feel and to believe themselves to be human, then how does Deckard know he is human? Is his dream of owning a real sheep a human dream or an artificial one?
My point is that whether the characters believe they can or can’t feel emotion, there must be something there for a reader or audience to build on — the interpretative space we keep returning to. Portraying emotion is not necessarily saying ‘I feel’, but showing through action to allow the reader to interpret.