Correlation vs Causation

Correlation vs causation: I find this is an issue that is technically quite simple but widely misunderstood. Statistical significance does not imply causation. Correlation implies there may be a direct or indirect relationship, but it does not imply causation. In fact, very few things imply causation. My simple version of the differences is below.

If you want to know why this is far more than a stoush to be had in an academic tea room, check out Tyler Vigen's collection. If the age of Miss America can be significantly and strongly correlated with murders by steam, hot vapours and objects, then in any practical analysis there are many opportunities for other, less obvious spurious correlations. In a big data context, knowing the difference could be worth millions of dollars.

Occasionally, people opine that causation vs correlation doesn't matter (especially in a big data, and sometimes a machine learning, context). I'd argue this is completely the wrong view to take: having all the statistical power you could want doesn't mean these issues can be ignored just because a randomised controlled trial is often impractical. It just means deciding when, how and why you're going to set them aside, in full knowledge of what you're doing. Spurious correlations are common, hard to detect and difficult to deal with. It's a bear hunt worth setting out on.
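To see how easily this happens, here's a minimal sketch in R. It is purely illustrative (nothing from Vigen's collection): two series that are unrelated by construction, but both wander over time, will routinely produce a large and "statistically significant" correlation.

```r
# Minimal sketch: two independent random walks routinely look "related".
# Purely illustrative; the two series are unrelated by construction.
set.seed(2016)

n <- 100
x <- cumsum(rnorm(n))   # independent random walk number one
y <- cumsum(rnorm(n))   # independent random walk number two

cor(x, y)       # often large in magnitude
cor.test(x, y)  # and often "statistically significant"
```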

Causation vs correlation

Yes, you can: learn data science

Douglas Adams had it right in Dirk Gently’s Holistic Detective Agency. Discussing the mathematical complexity of the natural world, he writes:

… the mind is capable of understanding these matters in all their complexity and in all their simplicity. A ball flying through the air is responding to the force and direction with which it was thrown, the action of gravity, the friction of the air which it must expend its energy on overcoming, the turbulence of the air around its surface, and the rate and direction of the ball’s spin. And yet, someone who might have difficulty consciously trying to work out what 3 x 4 x 5 comes to would have no trouble in doing differential calculus and a whole host of related calculations so astoundingly fast that they can actually catch a flying ball.

If you can catch a ball, you are performing complex calculus instinctively. All we are doing in formal mathematics and data science is putting symbols and a syntax around the same processes you use to catch that ball.

Maybe you've spent a lot of your life believing you "can't" or are "not good at" mathematics, statistics or whatever bugbear of the computational arts is getting to you. These are beliefs we begin to internalise at a very early age and often carry through our lives.

The good news is yes you can. If you can catch that ball (occasionally at least!) then there is a way for you to learn data science and all the things that go with it. It’s just a matter of finding the one that works for you.

Yes you can.

Tracking Democracy Sausage

It's a fine tradition here in Australia: every few years, communities manfully attempt to make up funding gaps by selling (and eating) #democracysausage to the captive audience of compulsory voters.

For fun, I decided to see if we could track interest in the hashtag on Twitter over time. I've exported the frequencies to Excel for this graph-making exercise, because I'll be teaching a stats class entirely in Excel in a few weeks and this will make for some fun discussion.

Democracy sausage line graph

As we can see, as of last night (2 more sleeps until #democracysausage day), interest on twitter was increasing. I’ll bring you a democracy sausage update tomorrow.

Technical notes: the API I'm using would only pull a maximum of 350 tweets featuring the hashtag on any given day, so I suspect we may be missing some interest in sausages. I'll look into other ways of doing this.

One very useful resource formed the bulk of the programming required: this blog post on R-bloggers takes you through the basics of doing the same for any hashtag you may be interested in exploring.
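For the curious, here's a rough sketch of the kind of code involved, assuming the twitteR package and a registered Twitter app. The credentials, dates and file name below are placeholders rather than what I actually ran.

```r
# Rough sketch only: assumes the twitteR package and your own API credentials.
# The credentials, dates and file name are placeholders.
library(twitteR)

setup_twitter_oauth("consumer_key", "consumer_secret",
                    "access_token", "access_secret")

days <- seq(as.Date("2016-06-26"), as.Date("2016-07-01"), by = "day")

# Pull up to 350 tweets per day (the cap mentioned above) and count them
counts <- sapply(seq_along(days), function(i) {
  tweets <- searchTwitter("#democracysausage", n = 350,
                          since = as.character(days[i]),
                          until = as.character(days[i] + 1))
  length(tweets)
})

# Export the daily frequencies for the Excel graphing exercise
write.csv(data.frame(date = days, tweets = counts),
          "democracy_sausage_counts.csv", row.names = FALSE)
```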

Happy democracy sausage day!

Tutorials and Guides: A curated list

This post is a curated list of my favourite tutorials and guides because “that one where Hadley Wickham was talking about cupcakes” isn’t the most effective search term. You can find my list of cheat sheets here. There are a lot of great resources on data science (I’ve included my top picks), so I don’t intend to reinvent the wheel here. This is just a list of my favourites all laid out so I can find them again or point other people in their direction when it comes up in conversation. I’ve also added a number of the “how to” type posts I’ve written on this blog as I often answer an enquiry in that format.

Data Science

Tutorials and videos: General

Puppets teach data science too

  • Render a 3D object in R. I have no idea where I would ever use this information in my practice, but it’s presented BY A PUPPET. Great fun.
  • DIY your data science. Another offering from the puppet circle on the data science Venn diagram.

Econometrics

Statistics

Work Flow

  • Guide to modern statistical workflow. Really great organisation of background material.
  • Tidy data, tidy models. Honestly, if there was one thing I wish had been around 10 years ago, this is it. The amount of time and accuracy to be saved using this method is phenomenal.
  • Extracting data from the web. You found the data, now what to do? Look here.

Linear Algebra

Asymptotics

Bayes

Machine learning

Data visualisation

Natural Language Processing

I’ll continue to update this list as I find things I think are useful or interesting.

Edit: actually, “that one where Hadley Wickham was talking about cupcakes” is surprisingly accurate as a search term.

Cheat Sheets: The New Programmer’s Friend

Cheat sheets are brilliant, whether you're learning to program for the first time or picking up a new language. Most data scientists are probably programming regularly in multiple languages at any given time: cheat sheets are a handy reference guide that saves you from googling "that thing, you know, I did it in Python yesterday, but how does it go in Stata?"

This post is an ongoing curation of cheat sheets in the languages I use. In other words, it’s a cheat sheet for cheat sheets. Because a blog post is more efficient than googling “that cheatsheet, with the orange bit and the boxes.” You can find my list of the tutorials and how-to guides I enjoyed here.

R cheat sheets + tutorials

Python cheat sheets

Stata cheat sheets

  • There is a whole list of them here, organised by category.
  • Stata cheat sheet. I could have used this five years ago. It's also very useful when it's been a while since you last played in the Stata sandpit.
  • This isn't a cheat sheet, but it's an exhaustive list of commands that makes it easy to find what you want to do, as long as you already have a good idea of what that is.

SPSS cheat sheets

  • “For Dummies” has one for SPSS too.
  • This isn't so much a cheat sheet as a very basic click-by-click guide to trying out SPSS for the first time. If you're new to this, it's a good start. Since SPSS is often the gateway program for many people, it's a useful resource.

General cheat sheets + discussions

  • Comparisons between R, Stata, SPSS, SAS.
  • This post from KD Nuggets has lots of cheat sheets for R, Python, SQL and a bunch of others.

I’ll add to this list as I find things.

Law of Large Numbers vs the Central Limit Theorem: in GIFs

I’ve spoken about these two fundamentals of asymptotics previously here and here. But sometimes, you need a .gif to really drive the point home. I feel this is one of those times.

Firstly, I simulated a population of 100,000 observations from the random uniform distribution. This population looks nothing like a normal distribution, as you can see below.

histogram of uniform distribution

Next, I took 500 samples from the population at each of several sample sizes: n = 5, 10, 20, 50, 100 and 500. For each sample I calculated the sample mean (x-bar) and the z-score, and I plotted their kernel densities using ggplot in R.
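If you'd like to see the mechanics without digging through the full script, here's a cut-down sketch of the simulation. The population, sample sizes and number of samples match the description above; the plotting call is just one way of drawing the densities.

```r
# Cut-down sketch of the simulation described above.
library(ggplot2)

set.seed(1234)
population <- runif(100000)   # 100,000 observations from the uniform distribution
mu    <- mean(population)
sigma <- sd(population)

sample_sizes <- c(5, 10, 20, 50, 100, 500)
reps <- 500                   # 500 samples at each sample size

results <- do.call(rbind, lapply(sample_sizes, function(n) {
  xbar <- replicate(reps, mean(sample(population, n)))
  data.frame(n    = n,
             xbar = xbar,
             z    = sqrt(n) * (xbar - mu) / sigma)  # studentised sample mean
}))

# Kernel densities of the z-score by sample size: normal-looking, centred on zero
ggplot(results, aes(x = z, colour = factor(n))) +
  geom_density() +
  labs(colour = "n")
```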

Here’s a .gif of what happens to the z score as the sample size increases: we can see that the distribution is pretty normal looking, even when the sample size is quite low. Notice that the distribution is centred on zero.

z score gif

Here’s a .gif of what happens to the sample mean as n increases: we can see that the distribution collapses on the population mean (in this case µ=0.5).

sample mean gif

For scale, here is a .gif of both densities sitting on the same set of axes as n gets large: the behaviour is quite different.

Sample mean vs z score

If you want to try this yourself, the script is here. Feel free to play around with different distributions and sample sizes and see what turns up.

Three things every new data scientist should know

Anyone who has spent any time in the online data science community knows that this kind of post is a genre all on its own. “N things you should know/do/be/learn/never do” is something that pops up in my twitter feed several times a day. These posts range from useful ways to improve your own practice to clickbait listing reams of accomplishments that make Miss Bingley’s “accomplished young ladies” speech in Pride and Prejudice appear positively unambitious.

Miss Bingley's pronouncement could easily be applied to data scientists everywhere:

“Oh! certainly,” cried his faithful assistant, “no [woman] can be really esteemed accomplished who does not greatly surpass what is usually met with. A woman must have a thorough knowledge of music, singing, drawing, dancing, and the modern languages, to deserve the word; and besides all this, she must possess a certain something in her air and manner of walking, the tone of her voice, her address and expressions, or the word will be but half-deserved.”

Swap out the references to women with “data scientist”, throw in a different skill set and there we have it:

“Oh! certainly,” cried his faithful assistant, “no data scientist can be really esteemed accomplished who does not greatly surpass what is usually met with. A data scientist must have a thorough knowledge of programming in every conceivable language that was, is or shall be, linear algebra, business acumen, obscure models only ever applied in obscure places, and whatever is “hot” this year, to deserve the title; and besides all this, she must possess a certain something in her air and manner of tweeting, the tone of her blogging, her linkedin profile and be a snappy dresser, or the title will be but half-deserved.”

Put like that, you’d be forgiven for not allowing the Miss Bingleys of the world to define you.

If I had a list of things to say to new data scientists, they wouldn’t have much to do with data science at all:

  1. You define yourself and your own practice. Not twitter, not an online community, not blogs from people who may or may not know your work. Data science is an incredibly broad array of people, ideas and tools. Maybe you’re in the middle of it, maybe you’re on the edge. That’s OK, it’s all valuable.
  2. You're more than a bot. This is an industry that is automating more every day. You add value to your organisation in ways that a bot never can. What is the value you add? Cultivate and grow it.
  3. The online community is a wonderful place full of people who want to help you grow your practice and potential. Dive in and explore: but remember that the advice and pronouncements are just that. They don’t always apply to you all the time. Take what’s useful today and put the rest aside until it’s useful later.

It’s a short list!

The Central Limit Theorem: Misunderstood

Asymptotics are the building blocks of many models. They're basically Lego: sturdy, functional and capable of allowing the user to exercise great creativity. They also hurt like hell when you don't know where they are and you step on them accidentally. I'm pushing it on that last point, I'll admit. But I have gotten very sweary over recalcitrant limiting distributions in the past (though I may be in a small group there).

One of the fundamentals of the asymptotic toolkit is the Central Limit Theorem, or CLT for short. If you didn’t study eight semesters of econometrics or statistics, then it’s something you (might have) sat through a single lecture on and walked away with the hot take “more data is better”.

The CLT is actually a collection of theorems, but the basic entry-level version is the Lindeberg-Lévy CLT. It states that for any sample of n random, independent observations drawn from any distribution with finite mean (μ) and standard deviation (σ), if we calculate the sample mean x-bar then,

√n (x̄ − μ) / σ converges in distribution to N(0, 1) as n → ∞.

In my time both in industry and in teaching, I’ve come across a number of interpretations of this result: many of them very wrong from very smart people. I’ve found it useful to clarify what this result does and does not mean, as well as when it matters.

Not all distributions become normal as n gets large. In fact, most things don't "tend to normality" as n gets large. Often, they just get really big or really small. Some distributions are asymptotically equivalent to the normal, but most "things", estimators and distributions alike, are not.

The sample mean by itself does not become normal as n gets large. What would happen if you added up a huge series of numbers? You’d get a big number. What would happen if you divided your big number by your huge number? Go on, whack some experimental numbers into your calculator!

Whatever you put into your calculator, it's not a "normal distribution" you get when you're done. The sample mean alone does not tend to a normal distribution as n gets large.

The studentised sample mean has a distribution which is normal in the limit. There are some adjustments we need to make before the sample mean has a stable limiting distribution: the result is the quantity often known as the z-score, and it's this quantity that tends to normality as n gets large.

How large does n need to be? This theorem works for any distribution with a finite mean and standard deviation, i.e. as long as x comes from a distribution with these features. Generally, statistics texts quote the figure of n = 30 as a "rule of thumb". This works reasonably well for simple estimators and models, like the sample mean, in a lot of situations.

This isn’t to say, however, that if you have “big data” your problems are gone. You just got a whole different set, I’m sorry. That’s a different post, though.

So that’s a brief run down on the simplest of central limit theorems: it’s not a complex or difficult concept, but it is a subtle one. It’s the building block upon which models such as regression, logistic regression and their known properties have been based.

The infographic below presents the same information, but for some reason my students find that format easier to digest. When it comes to asymptotic theory, I am disinclined to argue with them: I just try to communicate in whatever way works. On that note, if this post was too complex or boring, here is the CLT presented with bunnies and dragons.** What's not to love?

CLT infographic

** I can't help myself: the reason the distribution of average bunny weights gets narrower as the sample size gets larger is that the sample mean is tending towards the true population mean (the Law of Large Numbers at work, rather than the CLT). For a discussion of this behaviour vs the CLT, see here.

It's my only criticism of what was otherwise a delightful video. Said video being in every way superior to my own version, done late one night for a class with my dog assisting and my kid's drawing book. No bunnies or dragons, but it's here.

Modelling Early Grade Education in Papua New Guinea

For several years, I worked for the World Bank analysing early grade education outcomes in a number of different Pacific countries, including Laos, Tonga and Papua New Guinea. Recently, our earlier work in Papua New Guinea was published for the first time.

One of the more challenging things I did was model a difficult set of survey outcomes: reading amongst young children. You can see the reports here. Two of the most interesting relationships we observed were the importance of language for young children learning to read (Papua New Guinea has over 850 languages, so this matters) and the role that both household and school environments play in literacy development.

At some point I will write a post about the choice between standard ordinary least squares regressions used in the field and the tobit models I (generally) prefer for this data. Understanding the theoretical difference between censored, truncated and continuous data isn’t the most difficult thing in the world, but understanding the practical difference between them can have a big impact on modelling outcomes.
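As a taste of what that post will cover, here's a minimal sketch on simulated data, assuming the AER package's tobit() wrapper. The variables and numbers are invented purely for illustration; they are not drawn from the Papua New Guinea surveys.

```r
# Minimal sketch: OLS vs tobit when the outcome is censored at zero.
# Simulated data for illustration only; not the PNG survey data.
library(AER)   # provides tobit(), a wrapper around survreg()

set.seed(42)
n <- 1000
home_reading <- rbinom(n, 1, 0.5)                    # hypothetical household variable
latent_score <- 20 + 15 * home_reading + rnorm(n, sd = 20)

# Observed scores can't go below zero: children who can't yet read score 0
observed_score <- pmax(latent_score, 0)

ols_fit   <- lm(observed_score ~ home_reading)
tobit_fit <- tobit(observed_score ~ home_reading, left = 0)

# OLS tends to attenuate the effect when the outcome is censored;
# the tobit model accounts for the censoring explicitly
coef(ols_fit)
coef(tobit_fit)
```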

Text Mining: Word Clouds

I've been exploring text mining in greater depth lately. As an experiment, I decided to create a word cloud based on Virgil's Aeneid, one of the great works of Roman literature. Mostly because it can't all be business cases and Twitter analyses. The translation I used was by J.W. Mackail and you can download it here.

word cloud

Aeneas (the protagonist) and Turnus (the main antagonist) feature prominently. "Father" also makes a prominent appearance, as part of the epic is about Aeneas' relationship with his elderly father. However, neither of Aeneas' wives nor his lover, Dido, appears in the word cloud. "Death", "gods", "blood", "sword", "arms" and "battle" all feature. That sums the epic up: it's a rollicking adventure about the fall of Troy, the founding of Rome and a trip to the underworld as well.

The choice to downplay the role of romantic love in the story had particular political implications for the epic as a piece of propaganda. You can read more about it here and here. I found it interesting that the word cloud echoed this.

What I learnt from this experiment was that stop words matter. The cloud was put together from an early 20th-century translation of a 2,000-year-old text using 21st-century methods and stop words. Due to the archaic English used in the translation, I added a few stop words of my own, things like thee, thou and thine. This resulted in a much more informative cloud.
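If you want to reproduce the general approach, here's a minimal sketch using the tm and wordcloud packages. The file name is a placeholder for wherever you saved the translation, and the extra stop words are just the ones mentioned above.

```r
# Minimal sketch of the word cloud workflow using the tm and wordcloud packages.
# The file name is a placeholder.
library(tm)
library(wordcloud)

text <- readLines("aeneid_mackail.txt", warn = FALSE)

corpus <- VCorpus(VectorSource(text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)

# Standard English stop words, plus a few archaic ones for this translation
corpus <- tm_map(corpus, removeWords,
                 c(stopwords("english"), "thee", "thou", "thine"))

tdm   <- TermDocumentMatrix(corpus)
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

wordcloud(names(freqs), freqs, max.words = 100, random.order = FALSE)
```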

I did create a word cloud using the Latin text, but without a set of Latin stop words easily available it yields a cloud helpfully describing the text with prominent features like “but”, “so”, “and”, “until”.

The moral of the story: the stop words we use matter, and choosing the right set is what lets the cloud describe the text accurately.

If you’re interested in creating your own clouds, I found these resources particularly helpful:

  • Julia Silge's analysis of Jane Austen inspired me to think about data mining in relation to Roman texts. You can see it here; it's great!
  • The gutenbergr package by rOpenSciLabs for accessing texts, available on GitHub.
  • This tutorial on data mining from RDatamining.
  • Preparing literary data for text mining by Jeff Rydberg-Cox.
  • A great word cloud tutorial you can view here on STHDA.

There were a number of other tutorials and fixes that were helpful; I noted these in the R script. The script is up on GitHub: if you want to try it yourself, you can find it here.