Even if you already know what is going to happen, your pulse will race and your palms sweat when you read a thrilling novel.
But why do we experience such an intense emotion when reading a book?
That’s the question a team of English graduate students and Mark Algee-Hewitt, an assistant professor at Stanford University, are trying to answer.
“The big goal of the research is to try and explain why we feel suspense when confronted with certain aesthetic objects, even if we know the outcome of them,” Algee-Hewitt says.
In fact, why readers continue to experience suspense even when they know what happens in the plot has been a central question for this type of literary study, he adds.
Although the project is still ongoing, the group’s central finding so far is that suspense is characterized by the presence of words that convey how things appear to be rather than how they really are, such as “seemed,” “perceived,” or “observed.”
These words generate an “epistemological uncertainty,” Algee-Hewitt says.
“Suspense texts appear to be able to create a virtual space in which the reader can experience uncertainty without necessarily having this kind of ontological uncertainty about the text, or forgetting the ontological certainty of the text that he or she already knows,” he says.
In other words, even if you already know what is going to happen next, the text’s description of how things “seem” still triggers a feeling of uncertainty and suspense.
Studying suspense with digital humanities methodologies posed a problem from the beginning because suspense is so dependent on the reader’s emotional experience, Algee-Hewitt says.
To incorporate readers’ responses into their data, the group tracked their own reading experiences in several ways, such as rating each paragraph on a scale of one to ten for the level of suspense they felt while reading.
“It’s an exciting way not to jettison concepts like taste or suspense or attachment in the name of objectivity, but instead to take them as an object of study that you can track and quantify,” says Hannah Walser, an English doctoral candidate.
We agree on suspense
When they compared their individual ratings of suspense in a set of short stories, it turned out that the group did agree on the points at which suspense increased or decreased, though they varied in their ratings of the degree of suspense felt.
“We discovered that we agree in general on what is suspenseful and what isn’t, at least in terms of the ups and downs of the narrative,” says Andrew Shephard, an English doctoral candidate.
They then turned to digital methods to assess what words and topics were most commonly associated with the moments of high suspense they had identified. They found that suspenseful passages were characterized by words relating to the imagination (e.g., “thought”), the senses (“saw”), and movement (“struggled”) and topics such as “assault,” “guns,” “crime,” and “dramatic weather.”
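This word-association step can be sketched as a simple count of marker words per passage. The word list below is illustrative only, loosely drawn from the categories the team describes (appearance words, imagination, the senses, movement); it is not their actual lexicon:

```python
# Sketch: score a passage by the density of suspense-associated marker
# words. SUSPENSE_MARKERS is a hypothetical stand-in for the features
# the researchers identified, not their real word list.
SUSPENSE_MARKERS = {
    "seemed", "perceived", "observed",   # appearance vs. reality
    "thought",                           # imagination
    "saw",                               # the senses
    "struggled",                         # movement
}

def suspense_score(passage: str) -> float:
    """Fraction of tokens in the passage that are marker words."""
    tokens = [w.strip(".,;!?\"'").lower() for w in passage.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for w in tokens if w in SUSPENSE_MARKERS)
    return hits / len(tokens)

calm = "The morning was pleasant and the garden bloomed quietly."
tense = "She thought she saw a shadow; it seemed to move, and he struggled."
print(suspense_score(tense) > suspense_score(calm))  # marker-dense passage scores higher
```

A real analysis would work with topic models and much larger vocabularies, but the principle is the same: passages rated as highly suspenseful contain these marker words at measurably higher rates.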
“We wound up figuring out a set of features that do a very good job of predicting suspense,” Algee-Hewitt says. They decided the next step would be to develop a virtual reader using a neural network that could identify suspense based on these features.
‘This was not supposed to work’
A neural network is a computer program that learns from labeled examples how to categorize objects and can then identify new objects on its own. In this case, the team trained the network to recognize suspenseful passages using the sets of words and topics associated with suspense, together with their data on which moments readers had rated as highly suspenseful.
The neural network achieved 81 percent accuracy in identifying passages it had never seen before as either suspenseful or non-suspenseful.
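As a minimal sketch of the idea (not the team’s actual model), a single-neuron network trained by gradient descent on hand-built feature vectors shows how a classifier can learn to label passages as suspenseful from examples. All data here is invented for illustration:

```python
import math

# One-neuron "network" (a logistic unit) trained on toy feature vectors.
# Each vector might hold counts of imagination/sense/movement words in a
# passage; labels mark whether readers rated that passage suspenseful.
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Gradient descent on the logistic loss; returns weights and bias."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                      # gradient of the logistic loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x) -> int:
    """1 = classified as suspenseful, 0 = not."""
    return int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5)

# Invented training set: [imagination-word, sense-word, movement-word counts]
passages = [[3, 2, 4], [4, 3, 3], [0, 1, 0], [1, 0, 1]]
ratings  = [1, 1, 0, 0]                      # 1 = rated suspenseful
w, b = train(passages, ratings)
print(predict(w, b, [3, 3, 4]))  # marker-heavy passage is classified suspenseful
```

The researchers’ network was of course larger and trained on real reader ratings, but the mechanism is the same: adjust weights until the model’s predictions match the human labels, then apply it to passages it has never seen.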
“I was shocked. This was not supposed to work,” says Algee-Hewitt. “The very fact that something that is so subjective and affective can be somewhat accurately predicted based on formal features of a text is one of the more surprising things we’ve come across.”
Although creating a program that can detect suspense was not an initial goal of the project, it did help the scholars understand what truly creates suspense. By analyzing what words and topics the neural network relied on most when identifying suspenseful passages, they arrived at their conclusion about words that create what Algee-Hewitt calls epistemological uncertainty.
“This is one of the places where digital humanities research for me is most exciting, because very frequently, either the results will not be what you expect or you wind up finding something you weren’t looking for to begin with,” he says.
Source: Stanford University