Stargate SG1: Sentiment is not Enough for the Techno Bugs

One of the really great things as your kids get older is that you can share with them stuff you thought was cool when you were young, had ideals and spare time. One of those things my siblings and I spent a lot of that spare time doing was watching Stargate SG1. Now I’ve got kids and they love the show too.

When I sat down to watch the series the first time I was a history student in my first year of university, so Daniel’s fascination with languages and cultures was my interest too. Ironically, at the time I was also avoiding taking my first compulsory econometrics course because I was going to hate it SO MUCH.

Approximately one million years, a Ph.D. in econometrics and possibly an alternate reality later, I’m a completely different person. With Julia Silge’s fabulous Austen analyses fresh in my mind (for a start, see here and then keep exploring) I rewatched the series. I wondered: how might sentiment work for transcripts, rather than print-only media like a novel?

In my view, this is something like an instrumental variables problem. A transcript of a TV show is only part of the medium’s signal: imagery and sound round out the full product. So a sentiment analysis on a transcript is only an analysis of part of the presented work. But because dialogue is such an intrinsic and important part of the medium, might it give a good representation?

What is sentiment analysis?

If you’re not a data scientist, or you’re new to natural language processing, you may not know what sentiment analysis is. Basically, sentiment analysis compares a list of words (like you may find in a transcript, a speech or a novel) to a dictionary that measures the emotions the words convey. In its simplest form, we talk about positive and negative sentiment.
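The original analysis was done in R with tidytext, but the core idea fits in a few lines of any language. Here is a toy sketch in Python: the lexicon below is a made-up handful of words for illustration, where real lexicons like Bing or AFINN contain thousands of scored entries.

```python
import re

# Toy lexicon for illustration only: real lexicons (Bing, AFINN, NRC)
# score thousands of words.
LEXICON = {"admiration": 1, "honourable": 1, "brave": 1,
           "wrong": -1, "false": -1, "fragile": -1}

def sentiment_score(text):
    """Sum the lexicon scores of each word; word order is ignored entirely."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(w, 0) for w in words)

print(sentiment_score("It is both honourable and brave."))   # positive
print(sentiment_score("One wrong move, one false step."))    # negative
```

Note that the scorer never looks at how words combine, which is exactly why sarcasm slips straight past it.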

Here’s an example of a piece of text with a positive sentiment:

“I would like to take this opportunity to express my admiration for your cause. It is both honourable and brave.” – Teal’c to Garshaw, Season Two: The Tok’ra Part II.

Now here’s an example of a piece of dialogue with a negative sentiment:

“I mean, one wrong move, one false step, and a whole fragile world gets wiped out?” – Daniel, Season Two: One False Step

This is an example of a fairly neutral piece of text:

“Gentlemen, these planets designated P3A-575 and P3A-577 have been submitted by Captain Carter’s team as possible destinations for your next mission.”- General Hammond, Season Two: The Enemy Within.

It’s important to understand that sentiment analysis in its simplest form doesn’t really worry about how the words are put together. Picking up sarcasm, for example, isn’t really possible by just deciding which words are negative and which are positive.

Sentiment analyses like this can’t measure the value of a text: they are abbreviations of a text. In the same way we use statistics like a mean or a standard deviation to describe a dataset, a sentiment analysis can be used to succinctly describe a text.

If you’d like to find out more about how sentiment analysis works, check out Julia Silge’s blog post here which provided a lot of the detailed code structure and inspiration for this analysis.

What does sentiment analysis show for SG1?

I analysed the show on both a by-episode and by-series basis. With over 200 episodes and 10 series, the show covered a lot of ground with its four main characters. I found a couple of things that were interesting.
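The by-episode charts are essentially a sentiment score computed over successive chunks of dialogue, giving each episode an "arc". A rough sketch of that idea in Python (the actual analysis used R and tidytext; the lexicon and chunk size here are illustrative only):

```python
# Score a transcript in fixed-size chunks of words to trace a sentiment "arc".
# Toy lexicon and tiny chunk size, purely for illustration.
LEXICON = {"great": 1, "love": 1, "save": 1, "kill": -1, "wiped": -1, "enemy": -1}

def sentiment_arc(words, chunk_size=3):
    """Return one net sentiment score per chunk of the transcript."""
    scores = []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        scores.append(sum(LEXICON.get(w, 0) for w in chunk))
    return scores

words = "we love this great plan but the enemy will kill us".split()
print(sentiment_arc(words))  # one number per three-word chunk
```

Plotting those per-chunk scores against chunk position gives charts like the ones below.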

The sentiment arc for most shows is fairly consistent.

Most shows open with a highly variable sentiment as the dilemma is explained, sarcastic, wry humour is applied and our intrepid heroes set out on whatever journey/quest/mission they’re tasked with. Season One’s Within the Serpent’s Grasp is a pretty good example. Daniel finds himself in an alternate reality where everything is, to put it mildly, stuffed.

Within the Serpent’s Grasp, Season 1. Daniel crosses over to an alternate reality where earth is invaded by evil parasitic aliens with an OTT dress sense.

According to these charts, however, about three quarters of the way through it all gets a bit “meh”.

Below is the sentiment chart for Season Two’s In the Line of Duty, where Sam Carter has an alien parasite in control of her body. If that’s not enough for any astrophysicist to deal with, an alien assassin is also trying to kill Sam.

If we take the sentiment chart below literally, nobody really cares very much at all about the impending murder of a major character. Except, that’s clearly not what’s happening in the show: it’s building to the climax.

 

In the Line of Duty, Season 2. Sam Carter gets a parasite in her head and if that’s not enough, another alien is trying to kill her.

So why doesn’t sentiment analysis pick up on these moments of high drama?

I think the answer here is that this is a scifi/adventure show: tension and action aren’t usually achieved through dialogue. They’re achieved by blowing stuff up in exciting and interesting ways.

The Season Three cliffhanger introduced “the replicators” for precisely this purpose. SG1 always billed itself as a family-friendly show. Except for an egregious full frontal nude scene in the pilot, everyone kept their clothes on. Things got blown up and people got thrown around the place by Really Bad Guys, but limbs and heads stayed on and the violence wasn’t that bad. SG1 was out to make the galaxy a better place with family values, a drive for freedom and a liberal use of sarcasm.

But scifi/adventure shows thrive on two things: blowing stuff up and really big guns. So the writers introduced the replicators, a “race” of self-generating techno lego that scuttled around in bug form for the most part.

In response to this new galactic terror, SG1 pulled out the shotguns and the grenades and had a delightful several seasons blasting them, blood-free. The show mostly maintained its PG rating.

Below is the sentiment chart for the replicators’ introductory episode, Nemesis. The bugs are heading to earth to consume all technology in their path. The Asgard, a race of super-advanced Roswell Greys, have got nothing, and SG1 has to be called in to save the day. With pump action shotguns, obviously.

The replicator bugs don’t speak. The sound of them crawling around inside a space ship and dropping down on people is pretty damn creepy: but not something to be picked up by using a transcript as an instrument for the full product.

Nemesis, Season 3 cliffhanger: the rise of the techno bugs.

Season Four’s opener, Small Victories solved the initial techno bug crisis, but not before a good half hour of two of our characters flailing around inside a Russian submarine with said bugs. Again, the sentiment analysis found it all a little “whatever” towards the end.

Small Victories, Season 4 series opener. Techno bugs are temporarily defeated.

Is sentiment analysis useless for TV transcripts then?

Actually, no. It’s just that in those parts of the show where dialogue is only of secondary importance, the other elements of the work obscure the usefulness of the transcript as an instrument. In order for the transcript to be a useful instrument, we need to do what we’d ideally do in many instrumental variables cases: look at a bigger sample size.

Let’s take a look at the sentiment chart for the entire sixth season. This is the one where Daniel Jackson is dead, but is leading a surprisingly active and fulfilling life for a dead man. We can see the overall structure of the story arc for the season below. The season starts with something of a bang as new nerd Jonas is introduced just in time for old nerd Daniel to receive a life-ending dose of explosive radiation. The tension goes up and down throughout. It’s most negative at about the middle of the season, where there’s usually a double-episode cliffhanger, then smooths out towards the end of the season until tension rises again with the final cliffhanger.

 

Series Six: Daniel is dead and new guy Jonas has to pick up the vacant nerd space.

Season Eight, in which the anti-establishment Jack O’Neill has become the establishment, follows a broadly similar pattern. (Jack copes with the greatness thrust upon him as a newly-starred general by being more himself than ever before.)

Note the end-of-series low levels of sentiment. This is caused by a couple of things: as with the episodes, moments of high emotion get big scores and this obscures the rest of the distribution. I considered normalising it all between 0 and 1. This would be a good move for comparing between episodes and seasons, but didn’t seem necessary in this case.
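Normalising each episode or season to the same scale would be min-max rescaling, which maps the lowest score to 0 and the highest to 1. A one-line sketch in Python (illustrative only, not the post's actual R code):

```python
def rescale(scores):
    """Min-max rescale a list of sentiment scores to the [0, 1] interval."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                       # a perfectly flat episode rescales to all zeros
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(rescale([-4, 0, 2, 6]))  # → [0.0, 0.4, 0.6, 1.0]
```

The trade-off is exactly the one noted above: rescaling makes episodes comparable with each other, but it also hides how much more extreme one episode's peaks are than another's.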

The other issue going on here is the narrative structure of the overall arc. In these cases, I think the season is slowing down a little in preparation for the big finale.

Both of these issues were apparent in the by-episode charts as well.

Your turn now

For the fun of it, I built a Shiny app which will allow you to explore the sentiment of each episode and series on your own. I’ve also added simple word clouds for each episode and series. It’s an interesting look at the relative importance each character has and how that changed over the length of the show.

The early series were intensely focussed on Jack, but as the show progressed the other characters got more and more nuanced development.

Richard Dean Anderson made the occasional guest appearance after Season Eight, but was no longer a regular on the show after Season Nine started. The introduction of new characters Vala Mal Doran and Cameron Mitchell took the show in an entirely new direction. The word clouds show those changes.

You can play around with the app below, or find a full screen version here. Bear in mind it’s a little slow to load at first: the corpus of SG1 transcripts comes with the first load, and that’s a lot of episodes. Give it a few minutes, it’ll happen!

 

The details.

The IMSDB transcript database provided the transcripts for this analysis, but not for all episodes: the database had only 182 of the more than 200 episodes that were filmed on file. I have no transcripts for any episode in Season 10 and only half of Season 9! If anyone knows where to find more, or if the spinoff Stargate Atlantis transcripts are online somewhere, I’d love to know.

A large portion of this analysis and build used Julia Silge and David Robinson’s tidytext package in R. They also have a book coming out shortly, which I have on preorder. If you’re considering learning about Natural Language Processing, this is the book to have, in my opinion.

You can find the code I wrote for the project on Github here.

 

Australia Votes: Only Six Days to Go

It’s been painful, frankly pretty lame on the policy front and we’re over it. We all go to the national triennial BBQ election next week. While we’re standing in line clutching our sausage sandwiches and/or delightful local baked goods, it’d be nice to have an idea of what the people we’re voting for have had to say.

So another word cloud it is, because neither side has dared offer a policy that might stray from the narrative that “we’re all good blokes, really”.

This time, I requested up to 20 tweets from Turnbull and Shorten to see what’s been going on in the last couple of weeks. I got 18 back from both. Shorten (in red, below) has been talking about voting (surprise!), been screaming about medicare and apparently has an intense interest in trades with mentions of “brick” and “nails”. I hope that’s real tradies he’s talking about. Standard pollie speak “government”, “people”, “liberals”, “Turnbull” made it into the word cloud. Marriage equality also figured in the discussion.

Screen Shot 2016-06-25 at 10.18.46 PM

Turnbull (below, blue) was making a point about his relationship with the Australian muslim community, mentioning the Kirribilli house iftar and multifaith Australia. Standard coalition topics such as “investment”, “stable leaders”, “plan”, “economic”, “jobs” were all present. The AMP issue I touched on briefly last time. He appears to be trying to avoid the subject of marriage equality as much as possible.

Screen Shot 2016-06-25 at 10.19.02 PM

So there we have it: jobs and growth, the promise of stability, an Iftar in Kirribilli, marriage equality and a fascination with how we define a real or a fake tradie. If we all keep smiling fixedly, maybe we can forget about Brexit.

Q&A vs the Leaders’ Debate: is everyone singing from the same song sheet?

The election campaign is in full swing here in Australia, and earlier this week the leaders of the two main parties, Malcolm Turnbull and Bill Shorten, faced off in a heavily scripted debate in which few questions were answered and the talking points were well practised. After an encounter described as “diabolical” and “boring”, fewer Australians tuned in than in recent years. Possibly this was because they expected to hear what they had already heard before.

Since the song sheet was well rehearsed, this seemed like the perfect opportunity for another auspol word cloud. The transcript of the debate was made available on Malcolm Turnbull’s website and it was an easy enough matter of poking around and seeing what could be found. Chris Uhlmann, who moderated, was added to the stop words list as he was a prominent feature in earlier versions of the cloud.

debate word cloud

The song sheet was mild: the future tense “will” was in the middle with Shorten, labor, plan, people and Turnbull. Also featured were tax, economic, growth, change and other economic nouns like billion, (per)cent, economy, budget, superannuation. There was mention of climate, (people) smugglers, fair and action, but these were relatively isolated as topics.

In summary, this word cloud is not that different to that generated from the carefully strategised twitter feeds of Turnbull and Shorten I looked at last week.

The ABC’s program Q and A could be a better opportunity for politicians to depart from the song sheet and offer less scripted insight: why not see what the word cloud throws up?

This week’s program aired the day after the leader’s debate and featured Steve Ciobo (Liberal: minister for trade), Terri Butler (Labor: shadow parliamentary secretary for child safety and prevention of family violence), Richard di Natale (Greens, leader, his twitter word cloud is here), Nick Xenophon (independent senator) and Jacqui Lambie (independent senator).  Tony Jones hosted the program and suffered the same fate as Chris Uhlmann.

QandA word cloud

The word cloud picked up on the discursive format of the show: names of panellists feature prominently. Interestingly, Richard di Natale appears in the centre. Also prominent are election related words such as Australia, government, country, question, debate.

Looking at other topics thrown up by the word cloud, there is a broad range: penalty rates, coal, senate, economy, businesses, greens, policy, money, Queensland, medicare, politician, commission.

Two different formats, two different panels and two different sets of topics. Personally, I prefer it when the song sheet has a few more pages.

Social Networks: The Aeneid Again

Applying social network analysis techniques to the Aeneid provides an opportunity to visualise literary concepts that Virgil envisaged for the text. It occurred to me that this was a great idea when I saw this social network analysis of Game of Thrones. If there is a group of literary figures more bloodthirsty, charming and messed around by cruel fate than the denizens of Westeros, it would be those in the golden age of Roman literature.

Aeneid social network

This is a representation of the network of characters in the Aeneid. Aeneas and Turnus, both prominent figures in the wordcloud I created for the Aeneid are also prominent in the network. Connected to Aeneas is his wife Lavinia, his father Anchises, the king of the Latins (Latinus) and Pallas, the young man placed into Aeneas’ care.

Turnus is connected to Aeneas directly along with his sister Juturna, Evander (father of Pallas. Cliff notes version: the babysitting did not go well) and Allecto, a divine figure of rage.

Between Aeneas and Turnus is the “Trojan contingent”. Virgil deliberately created parallels between the stories surrounding the fall of Troy and Aeneas’ story. Achilles, the tragic hero, is connected to Turnus directly, while Aeneas is connected to Priam (king of Troy) and Hector (the great defender of Troy). Andromache is Hector’s widow whom Aeneas meets early in the epic.

Also of note is the divine grouping: major players in directing the action of the epic. Jupiter, king of the gods and Apollo the sun god are directly connected to our hero. Venus, Neptune, Minerva and Cupid are all present. In a slightly different grouping, Juno, Queen of the Gods and Aeneas’ enemy is connected to Dido, Aeneas’ lover. Suffice it to say, the relationship was not a “happily ever after”.

I used this list of the characters in the Aeneid as a starting point and later removed all characters who were peripheral to the social network. If you’re interested in trying this yourself, I posted the program I used here. Once again, the text used is the translation by J.W. Mackail and you can download it from Project Gutenberg here.
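The network itself comes from character co-occurrence: two names that turn up in the same passage get an edge, and the edge weight counts how often that happens. The linked R script does this with a term-document matrix and igraph; here is a toy sketch of the co-occurrence step in Python, with an illustrative handful of character names:

```python
from collections import Counter
from itertools import combinations

# Illustrative subset of the character list; the real analysis used a much
# longer list and dropped peripheral names afterwards.
CHARACTERS = {"aeneas", "turnus", "dido", "juno"}

def cooccurrence_edges(passages):
    """Count how often each pair of characters appears in the same passage."""
    edges = Counter()
    for passage in passages:
        present = sorted(CHARACTERS & set(passage.lower().split()))
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

passages = ["Aeneas fought Turnus",
            "Juno sent Dido against Aeneas",
            "Aeneas and Turnus again"]
print(cooccurrence_edges(passages))
```

Feeding those weighted pairs into a graph library and clustering the result is what produces the groupings (Trojan contingent, divine grouping) discussed above.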

There were a number of resources I found useful for this project:

  • This tutorial from R DataMining provided a substantive amount of the code required for the social network analysis
  • While this tutorial from the same place was very helpful for creating a text document matrix. I’ve used it previously a number of times.
  • This article from R Bloggers on using igraph was also very useful
  • There were a number of other useful links and I’ve documented those in the R script.

Whilst text mining is typically applied to modern issues, the opportunity to visualise an ancient text is an interesting one. I was interested in how the technique grouped the characters together. These groupings were by and large consistent not only with the surface interpretation of the text, but also deeper levels of political and moral meaning within the epic.

More word clouds: Auspol

Whilst I love text mining a classic work of western literature, this time I decided to stay in the present century with the twitter feeds of the leaders of the three major parties heading into a downunder federal election.

Turnbull word cloud

Malcolm Turnbull is the prime minister and leads the Liberal party; he’s in blue. Bill Shorten is the opposition leader and head of the Labor party; he’s in red. Richard di Natale is the leader of the Greens party; he’s in green because I had no choice there.

The outcomes are pretty interesting: apparently AMP was A.Big.Deal. lately. “Jobs” and “growth” were the other words playing on repeat for the two major parties.
Shorten Word Cloud

Each one has a distinct pattern, however. Turnbull is talking about AMP, plans, jobs, future, growth. Shorten is talking about labor, AMP, medicare, budget, schools and jobs. Di Natale is talking about the Greens, AMP (again), electricity, and auspol itself. Unlike the others, Di Natale was also particularly interested in farmers, warming and science.

Word cloud Di Natale

The programming and associated sources are pretty much the same as for the Aeneid word cloud, except I used the excellent twitteR package you can find out about here. This tutorial on R Data mining was the basis of the project. The size of the corpora (that would be the plural of corpus, if you speak Latin) presented a problem and these resources here and here were particularly helpful.

For reference, I pulled the tweets from the leaders’ timelines on the evening of the 23/05/16. The same code gave me 83 tweets from Bill Shorten, 59 from Malcolm Turnbull and 33 from Richard Di Natale: all leaders are furious tweeters, so if anyone has any thoughts on why twitteR responded like that, I’d be grateful to hear.

The minimum frequency for entering the word clouds was 3 per word for Shorten, who had a greater number of tweets picked up, but only 2 for Di Natale and Turnbull, due to the smaller number of available words.
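A minimum frequency threshold just drops rare words before the cloud is drawn. Sketched in Python (the actual clouds were built with R's wordcloud package; the tweet words here are made up):

```python
from collections import Counter

def cloud_words(words, min_freq):
    """Keep only the words that appear at least min_freq times."""
    counts = Counter(words)
    return {w: n for w, n in counts.items() if n >= min_freq}

tweets = "jobs growth jobs plan growth jobs medicare".split()
print(cloud_words(tweets, min_freq=2))  # jobs: 3, growth: 2
```

Lowering the threshold for the smaller corpora is what kept the Di Natale and Turnbull clouds from coming out nearly empty.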

I’ll try this again later in the campaign and see what turns up.

Text Mining: Word Clouds

I’ve been exploring text mining in greater depth lately. As an experiment, I decided to create a word cloud based on Virgil’s Aeneid, one of the great works of Roman literature. Mostly because it can’t all be business cases and twitter analyses. The translation I used was by J.W. Mackail and you can download it here.

word cloud

Aeneas (the protagonist) and Turnus (the main antagonist) feature prominently. “Father” also makes a prominent appearance, as part of the epic is about Aeneas’ relationship with his elderly father. However, neither of Aeneas’ wives nor his lover, Dido, appears in the word cloud. “Death”, “gods”, “blood”, “sword”, “arms” and “battle” all feature. That sums the epic up: it’s a rollicking adventure about the fall of Troy, the founding of Rome and a trip to the underworld as well.

The choice to downplay the role of romantic love in the story had particular political implications for the epic as a piece of propaganda. You can read more about it here and here. I found it interesting that the word cloud echoed this.

What I learnt from this experiment was that stop words matter. The cloud was put together from an early 20th century translation of a 2000 year old text using 21st century methods and stop words. Due to the archaic English used in the translation, I added a few stop words of my own: things like thee, thou, thine. This resulted in a much more informative cloud.
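Adding archaic stop words is just extending the standard list before filtering. A small Python sketch of the idea (the post used R's stop word lists; both word lists below are tiny illustrative stand-ins):

```python
# Tiny illustrative lists: real stop word lists run to hundreds of entries.
STANDARD_STOPWORDS = {"the", "and", "of", "to", "a"}
ARCHAIC_STOPWORDS = {"thee", "thou", "thine", "thy", "hath"}  # added for Mackail's English
STOPWORDS = STANDARD_STOPWORDS | ARCHAIC_STOPWORDS

def content_words(text):
    """Drop stop words so the cloud shows content words, not filler."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

print(content_words("Thou art the father of thy people and thine is the sword"))
```

Without the archaic additions, "thee" and "thou" would dominate the cloud exactly as described above.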

I did create a word cloud using the Latin text, but without a set of Latin stop words easily available, it yields a cloud helpfully describing the text with prominent features like “but”, “so”, “and”, “until”.

The moral of the story is: the stop words we use matter. Choosing the right set describes the text accurately.

If you’re interested in creating your own clouds, I found these resources particularly helpful:

  • Julia Silge’s analysis of Jane Austen inspired me to think about data mining in relation to Roman texts, you can see it here, it’s great!
  • The Gutenbergr package for accessing texts by Ropenscilabs available on GitHub.
  • This tutorial on data mining from RDatamining.
  • Preparing literary data for text mining by Jeff Rydberg-Cox.
  • A great word cloud tutorial you can view here on STHDA.

There were a number of other tutorials and fixes that were helpful, I noted these in the Rscript. The script is up on github: if you want to try it yourself, you can find it here.