For the next installment of the informal TechCrunch book club, we are reading the fourth story in Ted Chiang's Exhalation. The goal of this book club is to expand our minds to new worlds, ideas, and vistas, and The Lifecycle of Software Objects doesn't disappoint. Centered in a future world where virtual worlds and generalized AI have become commonplace, it's a fantastic example of speculative fiction that forces us to confront all kinds of fundamental questions.
If you've missed the earlier parts in this book club series, be sure to check out:
Some questions for the fifth story in the collection, Dacey's Patent Automatic Nanny, are included below.
And as always, some more notes:

  • Want to join the conversation? Feel free to email me your thoughts at danny+bookclub@techcrunch.com or join some of the discussions on Reddit or Twitter.
  • Follow these informal book club articles here: https://techcrunch.com/book-review/. That page also has a built-in RSS feed for posts exclusively in the Book Review category, which is very low volume.
  • Feel free to add your comments in our TechCrunch comments section below this post.

Thinking about The Lifecycle of Software Objects
This is a far more sprawling story than the earlier entries in Exhalation, with a more linear plot than the fractal koans we experienced before. That wider canvas offers us an enormous buffet of topics to discuss, from empathy, the meaning of humanity, and the values we vouch for to artificial entities, the economics of the digital future, and onwards to the futures of romance, sex, children, and death. I have pages of notes from this story, but we can't cover it all, so I want to zoom in on just two threads that I found particularly deep and rewarding.
One core objective of this story is to really interrogate the meaning of a person. Chiang sets up our main character Ana, a zookeeper in a past life, as the mother of a digital entity (a "digient"). That career history gives us a nice framing: it allows us via Ana to compare humans to animals, and therefore to contextualize the personhood debate around the digients throughout the story.
On one hand, humans uniquely value themselves as a species, and even the most dedicated digient owner eventually moves on. As one particularly illuminating passage discusses when a digient’s owner announces that his wife is pregnant:
"Obviously you're going to have your hands full," says Ana, "but what do you think about adopting Lolly? It would be fascinating to see Lolly's reaction to a pregnancy."
"No," says Robyn, shaking her head. "I'm past digients now."
"You're past them?"
"I'm ready for the real thing, you know what I mean?"
Carefully, Ana says, "I'm not sure that I do."
"Cats, dogs, digients, they're all just substitutes for what we're supposed to be caring for."
This owner has made a clear distinction: there is only one form of entity worth caring for, only one thing that a human can consider a person, and that is another human.
Indeed, throughout this short story, Chiang constantly notes how the tastes, values, norms, rules, and laws of human society are designed almost exclusively with humans in mind. Yet, the story never takes a definitive stance, and even Ana is not at all convinced of any one point of view, even right up to the end of the story. However, the narrative does offer us one model to think through that I thought was valuable, and that's around experience.
What separates humans from other animals is that we base decisions on our own prior experiences. We collect these experiences and use them to guide our actions toward outcomes that we have learned, again through experience, to desire. We might want to make money (because experience tells us that money is good), and so we decide to go to college to get the right kind of learning in order to compete effectively in the job market. Essential to that whole decision is lived experience.
Chiang makes a very clear point here when it comes to a company called Exponential, which is interested in finding superhuman AI that comes without the work that Ana and the other owners of digients have put in to raise their entities. Ana eventually realizes that they can never find what they are looking for:
They want something that responds like a person, but isn't owed the same obligations as a person, and that's something that she can't give them.
No one can give it to them, because it's an impossibility. The years she spent raising Jax didn't just make him fun to talk to, didn't just provide him with hobbies and a sense of humor. They were what gave him all the attributes Exponential is looking for: fluency at navigating the real world, creativity at solving new problems, judgment you could entrust with an important decision. Every quality that made a person more valuable than a database was a product of experience.
She wants to tell them that Blue Gamma was more right than it knew: experience isn't merely the best teacher; it's the only teacher. Experience is algorithmically incompressible.
Indeed, as the owners start to think about when they might grant their digients the independence to make their own decisions, experience becomes the key watchword. A digient's ability to decide in the context of past experiences is what defines its personhood.
And so when we think about generalized artificial intelligence and the hope of creating a sentient artificial life, I think this litmus test starts to get at the real challenge of what this technology can even be. Can we train an AI purely through algorithms, or will we have to guide these AIs with their open but empty minds every step of the way? Chiang discusses this a bit earlier in the story:
They're blind to a simple truth: complex minds can't develop on their own. If they could, feral children would be like any others. And minds don't grow the way weeds do, flourishing under indifferent attention; otherwise all children in orphanages would thrive. For a mind to even approach its full potential, it needs cultivation by other minds.
Indeed, Ana and the other main character Derek are forced to keep pushing their digients along, assigning them homework and guiding them to new activities to continue propelling them to get the kind of experience they need to succeed in the world. Why should we assume a generalized AI would be any less lazy than a child today? Why would we expect that it can teach itself when humans can't teach themselves?
Speaking of children, I want to head over to the other thread in this story I found particularly trenchant. Clearly, there is a parallel to real-life human childrearing intrinsic to the whole story. That parallel is plain, and while interesting, most of the conclusions it invites are equally plain.
What's more interesting is what affection and bonding signify in a world where entities don't have to be real. Ana was a zookeeper with deep affection for the animals under her care ("Her eyes still tear up when she thinks about the last time she saw her apes, wishing that she could explain to them why they wouldn't see her again, hoping that they could adapt to their new homes.") She vigorously defends her relationship with those animals, as she does with the digients throughout the story.
But why are some entities loved more than others if they are all just code running in the cloud? The main digients featured in the book were literally designed to be attractive to humans. As Blue Gamma scans through the thousands of algorithmically generated digients, it carefully selects the ones that will attract owners. "It's partly been a search for intelligence, but just as much it's been a search for temperament, the personality that won't frustrate customers."
The reason, of course, is obvious: these creatures need attention to thrive, but they won't get it if they are not adorable and desirable. Derek spends his time animating the avatars of the digients to make them more attractive, generating spontaneous and serendipitous facial expressions to create a bond between the digients and their human owners.
Yet, the story pushes so much harder on this theme in layers that connect with each other. Derek is attracted to Ana throughout the story, even as Ana stays focused on developing her own digient and keeping her relationship with her boyfriend Kyle going. Derek eventually realizes that his own obsession with Ana has become untenable, which is a subtle parallel to Ana's own obsession with her digients:
He no longer has a wife who might complain about this, and Ana's boyfriend, Kyle, doesn't seem to mind, so he can call her up without recrimination. It's a painful sort of pleasure to spend this much time with her; it might be healthier for him if they interacted less, but he doesn't want to stop.
Indeed, the book's strongest thesis may be that this sort of love just isn't reproducible. Ana wants to join a company called Polytope in order to raise funding to port her digient to a new digital platform. As part of the employer agreement, she is expected to wear a smart transdermal called InstantRapport that uses chemical alterations in the brain to rewire a human's reward centers to love a specific individual automatically. Ana's love for her digient pushes her to consider rewiring her own brain to get the resources she needs.
And yet, the digients eventually develop similar thought processes. Marco and Polo, two digients owned by Derek, eventually agree to be copied as sex toys, in order to provide funding for the port. Their clones will have their reward maps rewired to make them love the customer that purchases them.
The story gives us a haunting reminder that we are ultimately a bunch of neurons that respond to stimuli. Some of those stimuli are under our control, but many are not, programmed instead by our experiences without our conscious intervention. And there we see how these two threads entwine: it is only through experience that we can create affection, and it is precisely affection, and therefore experience, that creates a person in the first place.
Some questions for Dacey's Patent Automatic Nanny

  • Can machines play a meaningful role in childrearing?
  • Did the scientific method work in this instance?
  • Connecting this story to The Lifecycle of Software Objects, what is Chiang trying to say about childrearing? Are there similarities or differences between these two stories' conceptions of children and parents?
  • Should we be concerned if a child only wants to talk to a machine? Do we care what entities a human feels comfortable socializing with?