Size Matters

It would be too easy for someone like me to declare that A/B testing is simple. If you’re doing website testing you have all the power in the world. Sample sizes literally not dreamt of when I was an undergraduate. A large sample is the same thing as power, right? And of course, power is all that matters.

This is completely wrong. While statistical A/B testing uses (parts of) the same toolkit I was using in RCTs for Papua New Guinea and Tonga, it isn’t looking at the same problem and it isn’t looking at the same effect sizes. In this context, A/B testing is used for incremental changes. By contrast, in my previous professional life we were looking for the biggest possible bang we could generate for the very limited development dollar.

As always, context matters. Power gets more expensive as you approach the asymptote if effect size is also shrinking. How expensive? How big does that sample have to be? This is one of the things I’m looking at in this experiment.

However, size isn’t something I see discussed often. The size of a test is its actual Type I error rate: the probability of rejecting the null hypothesis when it’s actually true. Size is about control: together with power, it describes the tradeoff you’ve made between Type I and Type II errors. We fix our nominal size a priori and then we move on and forget about it. But is our fixed, nominal size the same as the actual size of our test?

We know, for example, that Fisher’s exact test is often undersized in practice, meaning it is too conservative: it rejects a true null hypothesis less often than its nominal level suggests. On the flip side, an oversized (profligate) test rejects the null hypothesis when it’s true far too often.

In this post, I’m going to look at a very basic test with some very robust assumptions and see what happens to size and power as we vary those assumptions and sample sizes. The purpose here is the old decision vs default adage: know what’s happening and own the decision you make.


While power matters in A/B testing, so does the size of the test. We spend a lot of time worrying about power in this field but not enough (in my view) ensuring our expectations of size are appropriate. Simple departures from the unicorns-and-fairy-dust normal distribution can cause problems for size and power.

Testing tests

The test I’m looking at is the plainest of plain vanilla statistical tests. The null hypothesis is that the mean of a generating distribution is zero against an alternate. The statistic is the classic z-statistic and the assumptions underlying the test can go one of two ways:

  1. In the finite sample, if the underlying distribution is normal, then the z-statistic is normal, no matter the sample size.
  2. Asymptotically, as long as the underlying distribution meets some conditions, like finite variance (this is where fat tails bite: more on that later) and independence, the z-statistic will have a normal limiting distribution as the sample size gets large.

I’m going to test this simple test in one of four scenarios over a variety of sample sizes:

  1. Normally generated errors. Everything is super, nothing to see here. This is the statistician’s promised land flowing with free beer and coffee.
  2. t(4) errors. This is a fat-tailed distribution, still centred on zero and symmetric. Moments below the fourth are finite (including the variance the test relies on), but the fourth and above are not. Are fat tails alone an issue?
  3. Centred and standardised chi squared (2) errors: fat tailed and asymmetric, the errors generated from this distribution have mean zero and a standard deviation of unity. Does symmetry matter that much?
  4. Cauchy errors. This is the armageddon scenario: the central limit theorem doesn’t apply here at all. There is no theoretical underpinning for getting this test to work under this scenario: no moments of order one or higher are finite (there are some fractional ones, though). Can a big sample size get you over this in practice?

In the experiments, we’ll look at sample sizes between 10 and 100 000. Note that the charts below are on a log10 scale.

The null hypothesis is mu = 0; the rejection rate in this scenario gives us the size of our test. We can measure power under a variety of alternatives, and here I’ve looked at mu = 0.01, 0.1, 1 and 10. I also looked at micro effect sizes like 0.0001, but it was a sad story. [1]

The test has been set with significance level 0.05 and each experiment was performed with 5000 replications. Want to try it yourself/pull it apart? Code is here.
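The linked code is in R; as a minimal sketch of what each experiment is doing (in Python, with my own function names rather than the linked code’s), the core loop looks something like this:

```python
import numpy as np

def rejection_rate(draw, n, mu=0.0, reps=2000, crit=1.96, seed=0):
    """Share of replications where the two-sided z-test rejects H0: mean = 0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = mu + draw(rng, n)                        # errors from the chosen distribution
        z = x.mean() / (x.std(ddof=1) / np.sqrt(n))  # classic z-statistic
        rejections += abs(z) > crit
    return rejections / reps

normal_errors = lambda rng, n: rng.standard_normal(n)

# Under the null (mu = 0) the rejection rate estimates the size of the test;
# under an alternative (mu != 0) it estimates power.
size = rejection_rate(normal_errors, n=100)
power = rejection_rate(normal_errors, n=100, mu=1.0)
```

Swapping `normal_errors` for draws from `rng.standard_t(4, n)`, a centred and standardised `rng.chisquare(2, n)`, or `rng.standard_cauchy(n)` gives the other three scenarios.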

Normal errors: everything is super

power and size comparisons

With really small samples, the test is oversized. n = 10 is a real Hail Mary at the best of times, no matter what your assumptions are. But from about n = 30 upwards, it’s consistently close to the nominal 0.05.

It comes as a surprise to no one to hear that power is very much dependent on distance from the null hypothesis. For alternates where mu =10 or mu =1, power is near unity for smallish sample sizes. In other words, we reject the null hypothesis close to 100% of the time when it is false. It’s easy for the test to distinguish these alternatives because they’re so different to the null.

But what about smaller effect sizes? If mu = 0.1, then you need a sample of at least a thousand to get close to power of unity. If mu = 0.01, then a sample of 100 000 might be required.

Smaller still? The chart below shows another run of the experiment with effect sizes of mu = 0.001 and 0.0001: minuscule. The test has almost no power even at sample sizes of 100 000. If you need to detect effects this small, you need samples in the millions (at least).

chart with power and size

Fat shaming distributions

Leptokurtotic (fat-tailed) distributions cop a lot of stick for messing up tests and models, and that’s fairly well deserved. However, the degree of fatness is an issue. The t(4) distribution still generates a statistic that works asymptotically in this context: the variance the test relies on is finite, but with tails this fat there’s no wriggle room at all.

Size is more variable than in the normal scenario: it’s more profligate at small sample sizes and tends to be more conservative at larger ones, though it stays reasonably close to 0.05 as the sample grows.

another size/power comparison

Power is costly for the smaller effect sizes. For the same sample size (say, n = 1000) with mu = 0.1, there is substantially less power than in the normal case. Similar behaviour is evident for mu = 0.01. Tiny effect sizes are similarly punished (see below).

t distribution small effect sizes

Fat, Skewed and Nearly Dead

The chi-squared(2) distribution puts this simple test through its paces. For anything under n = 1000, the size of the test is not in the same ballpark as its nominal level, rendering the test (in my view) a liability. By the time n = 100 000, the size of the test is reasonable.

Power, while showing similar outcomes, is not a saving grace here: in my view, with so little control over size there’s no baseline against which you can interpret your power.

chi squared example

Here be dragons

I included a scenario with the Cauchy distribution, despite the fact it’s grossly unfair to this simple little test. The Cauchy distribution ensures that the Central Limit Theorem does not apply here: the test will not work in theory (or, indeed, in practice).

I thought it was a useful exercise, however, to show what that looks like. Too often, we assume that as n gets big the CLT will work its magic, and that’s just not true. To wit: one hot mess.


Neither size nor power improves as the sample size increases: that’s because the CLT isn’t operational in this scenario. The test is undersized and underpowered for all but the largest of effect sizes (and really, at that effect size you could tell the difference from a chart anyway).

A/B reality is rarely this simple

A/B testing reality is rarely as simple as the test I’ve illustrated above. More typically, we’re testing groups of means or proportions, interaction effects, possibly dynamic relationships, and so on.

My purpose here is to show that even a simple, robust test with minimal assumptions can be thoroughly useless if those assumptions are not met. More complex tests and testing regimes that build on these simple results may be impacted more severely and more completely.

Power is not the only concern: size matters.

End notes

[1] Yes, I need to get a grip on latex in WordPress sooner or later, but it’s less interesting than the actual experimenting.

Violence Against Women: Standing Up with Data

Today, I spent the day at a workshop tasked with exploring the ways we can use data to contribute to ending violence against women. I was invited along by The Minerva Collective, who have been working on the project for some time.

Like all good workshops there were approximately 1001 good ideas. Facilitation was great: the future plan got narrowed down to a manageable handful.

One thing I particularly liked was that while the usual NGO and charitable contributors were present (and essential) the team from Minerva had managed to bring in a number of industry contributors from telecommunications and finance who were able to make substantial contributions. This is quite a different approach to what I’ve seen before and I’m interested to see how we can work together.

I’m looking forward to the next stage, there’s a huge capacity to make a difference. While there are no simple answers or magic bullets, data science could definitely do some good here.

Hannan-Quinn Information Criterion

This is a short post for all of you out there who use information criteria to inform model decision making. The usual suspects are the Akaike and the Schwarz (Bayesian) criteria.

Especially if you’re working with big data, try adding the Hannan-Quinn (1979) criterion into the mix. It’s not often used in practice for some reason; possibly this is a leftover from our small-sample days. Its penalty term grows more slowly than the BIC’s, at log(log(n)) rather than log(n), but faster than the AIC’s constant. As a result, it’s often more conservative than the AIC in the number of parameters or size of the model it suggests, e.g. it can be a good foil against overfitting.

It’s not the whole answer and for your purposes may offer no different insight. But it adds nothing to your run time, is fabulously practical and one of my all time favourites that no one has ever heard of.
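For concreteness, here’s a quick sketch (in Python, with my own function names) of how the Hannan-Quinn penalty sits relative to the usual suspects:

```python
import numpy as np

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k              # constant penalty per parameter

def bic(loglik, k, n):
    return -2.0 * loglik + k * np.log(n)        # log(n) penalty per parameter

def hannan_quinn(loglik, k, n):
    # Penalty grows at log(log(n)): faster than the AIC's constant,
    # slower than the BIC's log(n).
    return -2.0 * loglik + 2.0 * k * np.log(np.log(n))

# Per-parameter penalties at n = 100 000: AIC 2.0, HQ about 4.9, BIC about 11.5.
```

In each case, smaller is better, so a heavier penalty means a more conservative criterion.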

Things I wish I’d noticed in grad school

Back in the day, I tended to get a little hyper-focussed on things. I’m sure someone, sometime, somewhere pointed this stuff out to me. But at the time it went over my head and I learned these things the hard way. Maybe my list of things I wish I’d noticed helps someone else.

  • Your professional contacts matter and it’s OK to ask for help. You’re not researching in a vacuum, the people around you want to help.
  • You need to look outside your department and university. There’s a bigger, wider world out there and while what’s going on inside your little world seems like it’s important: you need to be aware of what’s outside too.
  • Being methodologically/theoretically robust matters, yes. But learning when to let it go is going to be harder than learning the theory/methodology. No easy answers here, all you can do is make your decision and own it.
  • It doesn’t matter how much you read, you’re not going to be an expert across your whole field. Just be aware of the field and be an expert in what you’re doing right now. That’s OK.
  • Get a life. Really.

Stargate SG1: Sentiment is not Enough for the Techno Bugs

One of the really great things as your kids get older is that you can share with them stuff you thought was cool when you were young, had ideals and spare time. One of those things my siblings and I spent a lot of that spare time doing was watching Stargate SG1. Now I’ve got kids and they love the show too.

When I sat down to watch the series the first time I was a history student in my first year of university, so Daniel’s fascination with languages and cultures was my interest too. Ironically, at the time I was also avoiding taking my first compulsory econometrics course because I was going to hate it SO MUCH.

Approximately one million years, a Ph.D. in econometrics and possibly an alternate reality later, I’m a completely different person. With Julia Silge’s fabulous Austen analyses fresh in my mind (for a start, see here and then keep exploring) I rewatched the series. I wondered: how might sentiment work for transcripts, rather than print-only media like a novel?

In my view, this is something like an instrumental variables problem. A transcript of a TV show is only part of the medium’s signal: imagery and sound round out the full product. So a sentiment analysis on a transcript is only an analysis of part of the presented work. But because dialogue is such an intrinsic and important part of the medium, might it give a good representation?

What is sentiment analysis?

If you’re not a data scientist, or you’re new to natural language processing, you may not know what sentiment analysis is. Basically, sentiment analysis compares a list of words (like you may find in a transcript, a speech or a novel) to a dictionary that measures the emotions the words convey. In its most simple form, we talk about positive and negative sentiment.

Here’s an example of a piece of text with a positive sentiment:

“I would like to take this opportunity to express my admiration for your cause. It is both honourable and brave.” – Teal’c to Garshaw, Season Two: The Tokra Part II.

Now here’s an example of a piece of dialogue with a negative sentiment:

“I mean, one wrong move, one false step, and a whole fragile world gets wiped out?” – Daniel, Season Two: One False Step

This is an example of a fairly neutral piece of text:

“Gentlemen, these planets designated P3A-575 and P3A-577 have been submitted by Captain Carter’s team as possible destinations for your next mission.”- General Hammond, Season Two: The Enemy Within.

It’s important to understand that sentiment analysis in its simplest form doesn’t really worry about how the words are put together. Picking up sarcasm, for example, isn’t really possible just by deciding which words are negative and which are positive.

Sentiment analyses like this can’t measure the value of a text: they are abbreviations of a text. In the same way we use statistics like a mean or a standard deviation to describe a dataset, a sentiment analysis can be used to succinctly describe a text.
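In its simplest form, the word-counting idea fits in a few lines. Here’s a toy sketch in Python (the real analysis in this post uses R’s tidytext, with a proper curated lexicon rather than this hand-rolled one):

```python
# A toy lexicon: real analyses use a curated dictionary such as Bing or AFINN.
LEXICON = {
    "admiration": 1, "honourable": 1, "brave": 1,
    "wrong": -1, "false": -1, "fragile": -1,
}

def sentiment_score(text):
    """Sum word-level sentiment; word order (and so sarcasm) is ignored."""
    cleaned = "".join(c if c.isalpha() else " " for c in text.lower())
    return sum(LEXICON.get(word, 0) for word in cleaned.split())

positive = sentiment_score("It is both honourable and brave.")   # scores +2
negative = sentiment_score("One wrong move, one false step.")    # scores -2
```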

If you’d like to find out more about how sentiment analysis works, check out Julia Silge’s blog post here which provided a lot of the detailed code structure and inspiration for this analysis.

What does sentiment analysis show for SG1?

I analysed the show on both a by-episode and by-series basis. With over 200 episodes and 10 series, the show covered a lot of ground with its four main characters. I found a couple of things that were interesting.

The sentiment arc for most shows is fairly consistent.

Most shows open with a highly variable sentiment as the dilemma is explained, sarcastic, wry humour is applied and our intrepid heroes set out on whatever journey/quest/mission they’re tasked with. Season One’s Within the Serpent’s Grasp is a pretty good example. Daniel finds himself in an alternate reality where everything is, to put it mildly, stuffed.

Within the Serpent’s Grasp, Season 1. Daniel crosses over to an alternate reality where earth is invaded by evil parasitic aliens with an OTT dress sense.

According to these charts however, about three quarters of the way through it all gets a bit “meh”.

Below is the sentiment chart for Season Two’s In the Line of Duty, where Sam Carter has an alien parasite in control of her body. If that’s not enough for any astrophysicist to deal with, an alien assassin is also trying to kill Sam.

If we take the sentiment chart below literally, nobody really cares very much at all about the impending murder of a major character. Except, that’s clearly not what’s happening in the show: it’s building to the climax.


In the Line of Duty, Season 2. Sam Carter gets a parasite in her head and if that’s not enough, another alien is trying to kill her.

So why doesn’t sentiment analysis pick up on these moments of high drama?

I think the answer here is that this is a scifi/adventure show: tension and action aren’t usually achieved through dialogue. They’re achieved by blowing stuff up in exciting and interesting ways, usually.

The Season Three cliffhanger introduced “the replicators” for precisely this purpose. SG1 always billed itself as a family-friendly show. Except for an egregious full frontal nude scene in the pilot, everyone kept their clothes on. Things got blown up and people got thrown around the place by Really Bad Guys, but limbs and heads stayed on and the violence wasn’t that bad. SG1 was out to make the galaxy a better place with family values, a drive for freedom and a liberal use of sarcasm.

But scifi/adventure shows thrive on two things: blowing stuff up and really big guns. So the writers introduced the replicators, a “race” of self-generating techno lego that scuttled around in bug form for the most part.

In response to this new galactic terror, SG1 pulled out the shot guns, the grenades and had a delightful several seasons blasting them blood-free. The show mostly maintained its PG rating.

The chart below shows the sentiment chart for the replicators’ introductory episode, Nemesis. The bugs are heading to earth to consume all technology in their path. The Asgard, a race of super-advanced Roswell Greys have got nothing and SG1 has to be called in to save the day. With pump action shotguns, obviously.

The replicator bugs don’t speak. The sound of them crawling around inside a space ship and dropping down on people is pretty damn creepy: but not something to be picked up by using a transcript as an instrument for the full product.

Nemesis, Season 3 cliffhanger: the rise of the techno bugs.

Season Four’s opener, Small Victories solved the initial techno bug crisis, but not before a good half hour of two of our characters flailing around inside a Russian submarine with said bugs. Again, the sentiment analysis found it all a little “whatever” towards the end.

Small Victories, Season 4 series opener. Techno bugs are temporarily defeated.

Is sentiment analysis useless for TV transcripts then?

Actually, no. It’s just that in the parts of the show where dialogue is secondary, the other elements of the work obscure the usefulness of the transcript as an instrument. In order for the transcript to be a useful instrument, we need to do what we’d ideally do in many instrumental variables cases: look at a bigger sample size.

Let’s take a look at the sentiment chart for the entire sixth season. This is the one where Daniel Jackson is dead, but is leading a surprisingly active and fulfilling life for a dead man. We can see the overall structure of the story arc for the season below. The season starts with something of a bang as new nerd Jonas is introduced just in time for old nerd Daniel to receive a life-ending dose of explosive radiation. The tension goes up and down throughout. It’s most negative at about the middle of the season where there’s usually a double-episode cliffhanger and smooths out towards the end of the series until tension increases with the final cliffhanger.


Series Six: Daniel is dead and new guy Jonas has to pick up the vacant nerd space.

Season Eight, in which the anti-establishment Jack O’Neill has become the establishment follows a broadly similar pattern. (Jack copes with the greatness thrust upon him as a newly-starred general by being more himself than ever before.)

Note the end-of-series low levels of sentiment. This is caused by a couple of things: as with the episodes, moments of high emotion get big scores and this obscures the rest of the distribution. I considered normalising it all between 0 and 1. This would be a good move for comparing between episodes and seasons, but didn’t seem necessary in this case.

The other issue going on here is the narrative structure of the overall arc. In these cases, I think the season is slowing down a little in preparation for the big finale.

Both of these issues were also apparent in the by-episode charts as well.

Your turn now

For the fun of it, I built a Shiny app which will allow you to explore the sentiment of each episode and series on your own. I’ve also added simple word clouds for each episode and series. It’s an interesting look at the relative importance each character has and how that changed over the length of the show.

The early series were intensely focussed on Jack, but as the show progressed the other characters got more and more nuanced development.

Richard Dean Anderson made the occasional guest appearance after Season Eight, but was no longer a regular role on the show after Season Nine started. The introduction of new characters Vala Mal Doran and Cameron Mitchell took the show into an entirely new direction. The word clouds show those changes.

You can play around with the app below, or find a full screen version here. Bear in mind it’s a little slow to load at first: the corpus of SG1 transcripts comes with the first load, and that’s a lot of episodes. Give it a few minutes, it’ll happen!


The details.

The IMSDB transcript database provided the transcripts for this analysis, but not for all episodes: the database had only 182 of the more than 200 episodes that were filmed on file. I have no transcripts for any episode in Season 10 and only half of Season 9! If anyone knows where to find more, or if the spinoff Stargate Atlantis transcripts are online somewhere, I’d love to know.

A large portion of this analysis and build used Julia Silge and David Robinson’s Tidy Text package in R. They also have a book coming out shortly, which I have on preorder. If you’re considering learning about Natural Language Processing, this is the book to have, in my opinion.

You can find the code I wrote for the project on Github here.


Expertise vs Awareness for the Data Scientist

We’ve all seen them: articles with headlines like “17 things you MUST know to be a data scientist” and “Great data scientists know these 198 algorithms no one else does.” While the content can be a useful read, the titles are clickbait and imposter syndrome is a common outcome.

You can’t be an expert in every skill on the crazy data science Venn Diagram. It’s not physically possible and if you try you’ll spend all your time attempting to become a “real” data scientist with no time left to be one. In any case, most of those diagrams actually describe an entire industry or a large and diverse team: not the individual.

Data scientists need expertise, but you only need expertise in the areas you’re working with right now. For the rest, you need awareness.

Awareness of the broad church that is data science tells you when you need more knowledge, more skill or more information than you currently have. Awareness of areas outside your expertise means you don’t default to the familiar, you make your decisions based on a broad understanding of what’s possible.

Expertise still matters, but the exact area you’re expert in is less important. Expertise gives you the skills you need to go out and learn new things when and as you need them. Expertise in Python gives you the skills to pick up R or C++ next time you need them. Expertise in econometrics gives you the skills to pick up machine learning. Heck, expertise in languages (human ones, not computer ones) is also a useful skill set for data scientists, in my view.

You need expertise because that gives you the core skills to pick up new things. You need awareness because that will let you know when you need the new things and what they could be. They’re not the same thing: so keep doing what you do well and keep one eye on what other people do well.

Tiny Coders

I’ve mentioned it before, but I run the local code club out here in rural Australia. We are using the Code Club curriculum, designed for kids aged 9-12. Due to our particular circumstances with transport and distance, our code club needs to offer fun and learning for the age range 5-8 as well. Some of our littles are finding the materials too challenging to be fun, so as of this week we are running two streams:

  • The “Senior Dev Team”: in time-honoured managerial tradition, I told them they could be senior devs with a badge, if they helped the littles. That’s right, more responsibility and nothing but a badge to show for it. The senior dev team is going to keep going with the regular code club projects and they are smashing them out. Seriously, all I need to do is get these kids a black t-shirt each and they’re regular programmers already.
  • The “red team”: these are our kids that are struggling with the projects we have been doing and not having fun because of it. We’ll be doing multistage projects with lots of optional end points for kids to stop and go play: these are really young kids sitting down to code after six hours of school, so for some of them 20 minutes is more than enough. For them, it’s enough that they learn that computers and code are fun and interesting. For the older/more capable kids in this group we’ll still be learning about loops and conditional statements and all the good stuff, but our projects will be pared back and more basic so they aren’t overwhelming.

Our first red team project is here: Flying Cat Instructions and on Github here.

Of course, none of this would be possible without an amazing team of dedicated parent and teacher volunteers, many of whom had very few computer skills before we started and NO coding skills. They’re as amazing as the kids.

Models, Estimators and Algorithms

I think the differences between a model, an estimation method and an algorithm are not always well understood. Identifying differences helps you understand what your choices are in any given situation. Once you know your choices you can make a decision rather than defaulting to the familiar.

An algorithm is a set of predefined steps. Making a cup of coffee can be defined as an algorithm, for example. Algorithms can be nested within each other to create complex and useful pieces of analysis. Gradient descent is an algorithm for finding the minimum of a function computationally. Newton-Raphson does the same thing with more expensive steps (which can make it slower at scale); stochastic gradient descent does it faster by using only part of the data at each step.

An estimation method is the manner in which your model is estimated (often with an algorithm). To take a simple linear regression model, there are a number of ways you can estimate it:

  • You can estimate using the ordinary least squares closed form solution (it’s just an algebraic identity). After that’s done, there’s a whole suite of econometric techniques to evaluate and improve your model.
  • You can estimate it using maximum likelihood: you calculate the negative log-likelihood and then use a computational algorithm like gradient descent to find its minimum. The econometric techniques are pretty similar to the closed-form case, though there are some differences.
  • You can estimate a regression model using machine learning techniques: divide your sample into training, test and validation sets; estimate by whichever algorithm you like best. Note that in this case, this is essentially a utilisation of maximum likelihood. However, machine learning has a slightly different value system to econometrics with a different set of cultural beliefs on what makes “a good model.” That means the evaluation techniques used are often different (but with plenty of crossover).

The model is the thing you’re estimating using your algorithms and your estimation methods. It’s the decisions you make when you decide if Y has a linear relationship with X, or which variables (features) to include and what functional form your model has.
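To make the distinction concrete, here’s a small Python sketch (toy data and names of my own, not from the post’s code) of one model, the simple linear regression, estimated two ways: the closed-form OLS identity, and the same least-squares objective handed to the gradient descent algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(1.0, 1.0, n)
y = x + rng.standard_normal(n)              # true intercept 0, true slope 1
X = np.column_stack([np.ones(n), x])

# Estimation method 1: the closed-form OLS identity, solving X'X b = X'y.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Estimation method 2: the same least-squares objective, minimised
# by the gradient descent algorithm instead of by algebra.
beta_gd = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = -2.0 / n * X.T @ (y - X @ beta_gd)   # gradient of mean squared error
    beta_gd -= lr * grad
```

Both routes land on (essentially) the same estimates: one model, two estimation methods, one of which uses an algorithm.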

Gradient Descent vs Stochastic Gradient Descent: Some Observations of Behaviour

Anyone who does any machine learning at all has heard of the gradient descent and stochastic gradient descent algorithms. What interests me about machine learning is understanding how some of these algorithms (and as a consequence the parameters they are estimating) behave in different scenarios.

The following are single observations from a random data generating process:

y(i) = x(i) + e(i)

where e is a random variable distributed in one of a variety of ways:

  • e is distributed N(0,1). This is a fairly standard experimental set up and should be considered “best case scenario” for most estimators.
  • e is distributed t(4). This is a data generation process that’s very fat tailed (leptokurtotic). It’s a fairly common feature in fields like finance. All the required moments for the ordinary least squares estimator to have all its usual properties exist, but only just. (If you don’t have a stats background, think of this as the edge of reasonable cases to test this estimator against.)
  • e is distributed as centred and standardised chi squared (2). In this case, the necessary moments exist and we have ensured e has a zero mean. But the noise isn’t just fat-tailed, it’s also skewed. Again, not a great scenario for most estimators, but a useful one to test against.

The independent variable x is intended to be deterministic, but in this case it’s generated as N(1,1). For the full details of what I did, the code is here. Put simply, I computed the ordinary least squares estimators for the intercept and coefficient of the process (the true intercept here is zero and the true coefficient is one), using stochastic gradient descent and gradient descent to find the minimum of the negative log-likelihood and generate the estimates.
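The linked code is in R; the essence of the stochastic gradient descent side, sketched in Python with made-up names, is one noisy gradient step per observation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(1.0, 1.0, n)
y = x + rng.standard_normal(n)          # true intercept 0, true coefficient 1

b0, b1 = rng.normal(size=2)             # random initialisation
lr = 0.01                               # fixed learning rate
for i in rng.permutation(n):            # one observation per update
    err = y[i] - (b0 + b1 * x[i])
    b0 += lr * err                      # stochastic gradient step for the intercept
    b1 += lr * err * x[i]               # stochastic gradient step for the coefficient
```

Gradient descent would instead use all n observations for every update; SGD trades that exactness for much cheaper steps.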

Limits: the following are just single examples of what the algorithms do in each case. It’s not a full test of algorithm performance: we would need to repeat each of these thousands of times and take overall performance measures to really know how the algorithms behave under these conditions. I just thought it would be interesting to see what happens. If I find something worth exploring in detail, I’ll develop this into a full simulation experiment when I get some more time.

Here’s what I found:

N is pretty large (10 000). Everything is super (for SGD)!

Realistically, unless dimensionality is huge, closed form ordinary least squares (OLS) estimation methods would do just fine here, I suspect. However, the real point here is to test differences between SGD and GD.

I used a random initialisation for the algorithm and you can see how quickly it got to the true parameter. In situations like this where the algorithm uses only a few iterations, if you are averaging SGD estimators then allowing for a burn in and letting it run beyond the given tolerance level may be very wise. In this case, averaging the parameters over the entire set of iterations would be a worse result than just taking the last estimator.

Under each scenario, the stochastic gradient descent algorithm performed brilliantly and much faster than the gradient descent equivalent which struggled. The different error distributions mattered very little here.

For reference, it took around 37 000 iterations for the gradient descent algorithm to reach its end point:

For the other experiments, I’ll just stick to a maximum of 10 000 iterations.

N is largish (N = 1000). Stochastic gradient descent doesn’t do too badly.

Now, to be fair, unless I was trying to estimate something with huge dimensionality (and this hasn’t been tested here anyway), I’d just use standard OLS estimating procedures for estimating this model in real life.

SGD takes marginally longer to get where it’s going, and the generating process made little material difference, although our chi-squared example was out slightly (again, this is just one example).

N is smallish (N = 100): SGD gets brittle, but it gets somewhere close. GD has no clue.

Again, in real life, this is a moment for simpler methods but the interesting point here is that SGD takes a lot longer and fails to reach the exact true value, especially for the constant term (again, one example in each case, not a thorough investigation).

Here, GD has no clue and it’s taking SGD thousands more iterations to reach a less palatable conclusion. In practical machine learning contexts, it’s highly unlikely this is a sensible estimation method at these sample sizes: just use the standard OLS techniques.

I’m not destructive in general, but sometimes I like to break things.

I failed to achieve SGD armageddon with either weird error distributions (that were within the realms of reasonable for the estimator) or smallish sample sizes. So I decided to give it the doomsday scenario: smallish N and an error distribution that does not meet the requirements the simple linear regression model needs to work. I tried a t(1) distribution for the error: this thing doesn’t even have a finite variance.

SGD and GD are utterly messed up (and I suspect the closed form solution may not be a whole lot better here either). Algorithmic armageddon, achieved. In practical terms, you can probably safely file this under “never going to happen”.
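On the closed-form suspicion, a quick simulation makes the point. This is a hypothetical setup of my own (sample size, number of replications and coefficients are illustrative assumptions): repeat the small-N regression with t(1) errors many times and look at the spread of the closed-form OLS slope estimates.

```python
# Sketch: how the closed-form OLS slope behaves across replications when the
# error is t(1) (no finite variance). All settings here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100
slopes = []
for _ in range(500):                 # repeat the small-N experiment
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([2.0, 0.5]) + rng.standard_t(df=1, size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(beta_hat[1])

slopes = np.array(slopes)
# The median replication may still sit near the true 0.5, but the spread
# across replications is wild: single heavy-tailed draws can throw an
# individual estimate almost anywhere.
print(np.median(slopes), slopes.min(), slopes.max())
```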

Machine Learning vs Econometric Modelling: Which One?

Renee from Becoming a Data Scientist asked Twitter which basic issues were hard to understand in data science. It generated a great thread with lots of interesting perspectives you can find here.

My opinion is that the most difficult-to-understand concept has nothing to do with the technical aspects of data science.

The choice of when to use machine learning, when to use econometric methods and when it matters is rarely discussed. The reason for that is that the answers are neither simple nor finite.

Firstly, the difference between econometrics/statistics and machine learning is mostly cultural. Many econometric models not commonly seen in machine learning (the tobit and the conditional logit are two that come to mind) could easily be estimated using machine learning techniques. Likewise, machine learning mainstays like clustering or decision trees could benefit from an econometric/statistical approach to model building and evaluation. The main differences between the two groups are different values about what makes a model “good” and slightly different (and very complementary) skill sets.

Secondly, I think the differences between a model, an estimation method and an algorithm are not always well understood. Identifying differences helps you understand what your choices are in any given situation. Once you know your choices you can make a decision rather than defaulting to the familiar. See here for details.


So how do I make decisions about algorithms, estimators and models?

Like data analysis (see here, here and here), I think of modelling as an interrogation over my data, my problem and my brief. If you’re new to modelling, here are some thoughts to get you started.

Why am I here? Who is it for?

It’s a strange one to start off with, but what’s your purpose for sitting down with this data? Where will your work end? Who is it for? All these questions matter.

If you are developing a model that customises a website for a user, then prediction may matter more than explanation. If you need to take your model up to the C-suite then explanation may be paramount.

What’s the life expectancy of your model? This is another question about purpose: are you trying to uncover complex and nuanced relationships that will be consistent in a dynamic and changing space? Or are you just trying to get the right document in front of the right user in a corpus that is static and finite?

Here’s the one question I ask myself for every model: what do I think the causal relationships are here?

What do I need it to do?

The key outcome you need from your model will probably have the most weight on your decisions.

For example, if you need to decide which content to place in front of a user with a certain search query, that may not be a problem you can efficiently solve with classic econometric techniques: the machine learning toolkit may be the best and only viable choice.

On the other hand, if you are trying to decide what the determinants of reading skill among young children in Papua New Guinea are, there may be a number of options on the table. Possibilities might include classic econometric techniques like the tobit model, estimated by maximum likelihood. But what about clustering techniques or decision trees? How do you decide between them?

Next question.

How long do I have?

In this case there are two ways of thinking about this problem: how long does my model have to estimate? How long do I have to develop it?


If you have a reasonable length of time, then considering the full suite of statistical solutions and an open-ended analysis will mean a tight, efficient and nuanced model in deployment. If you have until the end of the day, then simpler options may be the only sensible choice. That applies whether you consider yourself to be doing machine learning OR statistics.

Econometrics and machine learning have two different value sets about what makes a good model, but it’s important to remember that this isn’t a case where you have to pick a team and stick with it. Each of those value sets developed out of a response to solving different problems with a different skill set. There’s plenty of crossover and plenty to learn on each side.

If you have the time, then a thorough interrogation your data is never a bad idea. Statistics has a lot to offer there. Even if your final product is classic machine learning, statistics/econometrics will help you develop a better model.

This is also a situation where the decision to use techniques like lasso and ridge regression may come into play. If your development time is lacking, then lasso and/or ridge regularisation may be a reasonable response to very wide data (e.g. data with a lot of variables). However, don’t make the mistake of believing that defaulting to these techniques is always the best or most reasonable option. Utilising a general-to-specific methodology is something to consider if you have the time available. The two techniques were developed for two different situations, however: one size does not fit all.
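Of the two, ridge has a convenient closed form, which makes for a quick sketch; lasso does not (it needs an iterative solver), which is one of the situational differences mentioned above. The design, the penalty value and the coefficients below are illustrative assumptions, and in practice you would choose the penalty by something like cross-validation.

```python
# Sketch: ridge regularisation in closed form, (X'X + lam*I)^{-1} X'y,
# on hypothetical wide-ish data. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 40                           # more variables than is comfortable
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]        # only a few variables really matter
y = X @ beta_true + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimate; lam = 0 recovers plain OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, lam=0.0)
beta_ridge = ridge(X, y, lam=10.0)
# Shrinkage: the ridge coefficients are pulled toward zero relative to OLS.
print(np.sum(beta_ols**2), np.sum(beta_ridge**2))
```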

If you are on a tight deadline (and that does happen, regularly) then be strategic: don’t default to the familiar, make your decision about what is going to offer most value for your project.


Back to our website example: if your model has 15 microseconds to evaluate every time a new user hits the website, then run time becomes the critical consideration. In a big data context, machine learning models with highly efficient algorithms may be the best option.

If you have a few minutes (or more) then your options are much wider: you can consider whether classic models like multinomial or conditional logit may offer a better outcome for your particular needs than, say, machine learning models like decision trees. Marginal effects and elasticities can be used in both machine learning and econometric contexts. They may offer you two things: a strong way to explain what’s going on to end-users and a structured way to approach your problem.
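As a small illustration of the marginal-effects point, here is a sketch of an average marginal effect (AME) from a simple logit. The data and the plain gradient-ascent fit are my own illustrative assumptions; in practice you would use a statistics package’s logit routine rather than hand-rolling the optimiser.

```python
# Sketch: average marginal effect from a logit, fitted with plain gradient
# ascent on the log-likelihood. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.0])
p_true = 1 / (1 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p_true)

beta = np.zeros(2)
for _ in range(2_000):                  # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.1 * X.T @ (y - p) / n

# For a logit, dP/dx = p(1 - p) * beta_x; average it over the sample to get
# the AME: roughly, how much P(y = 1) moves per unit of x, on average.
p = 1 / (1 + np.exp(-(X @ beta)))
ame = np.mean(p * (1 - p)) * beta[1]
print(beta, ame)
```

A single summary number like this is exactly the sort of thing that travels well to end-users.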

It’s not the case that machine learning = fast, econometrics = slow. It’s very dependent on the models, the resultant likelihoods/optimisation requirements and so on. If you’ve read this far, you’ve also probably seen that the space between the two fields is narrow and blurring rapidly.

This is where domain knowledge, solid analysis and testing in the development stage will inform your choices regarding your model for deployment. Detailed econometric models may be too slow for deployment in some contexts, but experimenting with them at the development stage can inform a final, streamlined deployment model.

Is your model static (do you present one set of results, once) or dynamic (does it generate results multiple times over its lifecycle)? These are also decisions you need to consider.

What are the resources I have?

How much data do you have? Do you need all of it? Do you want all of it? How much computing power have you got under your belt? These questions will help you decide what methodologies you need to estimate your model.

I once did a contract where the highly classified data could not be removed from the company’s computers. That’s reasonable! What wasn’t reasonable was the fact that the computer they gave me couldn’t run email and R at the same time. It made the choices we had available rather limited, let me tell you. But that was the constraint we were working under: confidentiality mattered most.

It may not be possible to use the simple closed form ordinary least squares solution for regression if your data is big, wide and streaming continuously. You may not have the computing power you need to estimate highly structured and nuanced econometric models in the time available. In those cases, the models developed for these situations in machine learning are clearly a very superior choice (because they come up with answers).

On the other hand, assuming that machine learning is the solution to everything is limiting and naive: you may be missing an opportunity to generate robust insights if you don’t look beyond what’s common in machine learning.

How big is too big for classic econometrics? Like all these questions, it’s answered with it depends. My best advice here is: during your analysis stage, try it and see.

Now go forth and model stuff

This is just a brief, very general rundown of how I think about modelling and how I make my decisions between machine learning and econometrics. One thing I want to make abundantly clear, however, is that this is not a binary choice.

You’re not doing machine learning OR econometrics: you’re modelling.

That means being aware of your options and aware that the differences between them can be extremely subtle (or even non-existent at times). There are times when those differences won’t matter for your purpose, and others where they will.

What are you modelling and how are you doing it? It’d be great to get one non-spam comment this week.