Mastering Logistic Regression: Interpretation and Fitting
Introduction
At its core, a logistic regression is nothing more than fitting a special function to data, a function that takes values between 0 and 1. The most common use cases are when the response of interest is a dichotomous variable, meaning that the variable we’re modeling can take either of exactly 2 values. For example, a coin flip can be modeled as a dichotomous variable, where heads is 1 and tails is 0. Whether someone had breakfast or not today is also dichotomous. Sometimes, dichotomous variables are created from continuous variables. For example, high blood pressure can be modeled as dichotomous by creating a cutoff value for what is considered high blood pressure (systolic blood pressure above some threshold in mm Hg). The “regression” part of the name comes in when you want to associate the response with some explanatory variables. For example, if you want to know the relationship between the number of hours you study (explanatory) and passing/failing an exam (response), we can use a logistic regression to model the strength of that relationship.
Logistic regression has found use across many disciplines, including for classification in machine learning models, ecological models of constrained growth, and estimation in retrospective studies, so it’s useful to understand the method from first statistical principles in order to fully appreciate the utility and limitations of this method.
The Logistic Function
The magic of this method starts with the shape of the function itself, with a specific parameterization1 for regression purposes:

$$p(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$

Since this function is the core of the method, we’ll spend some time developing some intuition for how this function behaves.
There are a few things I'd like you to note about this function.
- The range of this function is bounded between $0$ and $1$, while $x$ can take on any value on the real line in the domain. This also means that $\beta_0$ and $\beta_1$ are free to take on any value.
- When we set $\beta_0$ and vary $\beta_1$, the $y$-intercept always stays fixed at $\frac{1}{1 + e^{-\beta_0}}$. In fact, $\beta_0$ by itself determines the $y$-intercept, which is why we call $\beta_0$ the intercept parameter.
- Similarly, fixing $\beta_1 = 0$ and varying $\beta_0$ gives a completely flat line that shifts up and down, implying that $x$ carries no useful information. Notice also that we get diminishing movement for values of $\beta_0$ further from 0: the intercept will shift more between $\beta_0 = 0$ and $\beta_0 = 1$ than from $\beta_0 = 1$ to $\beta_0 = 2$, etc.
- The function has a symmetry about the inflection point, meaning the rate that the function increases starting from the bottom is the same as the rate the function decreases starting from the top. This implies the inflection point always occurs at $p(x) = 1/2$. A step-by-step proof of this can be found on Socratic Q&A.
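To make these behaviors concrete, here is a minimal R sketch (the function and parameter values are my own, purely illustrative) that plots the logistic function for a few settings of $\beta_0$ and $\beta_1$:

```r
# Logistic function with the regression parameterization used here
logistic <- function(x, b0, b1) 1 / (1 + exp(-(b0 + b1 * x)))

x <- seq(-6, 6, length.out = 200)

# Fix b0 = 0 and vary b1: the y-intercept stays pinned at 1/2
plot(x, logistic(x, b0 = 0, b1 = 1), type = "l", ylim = c(0, 1),
     ylab = "p(x)", main = "b0 = 0, varying b1")
lines(x, logistic(x, b0 = 0, b1 = 3),   lty = 2)
lines(x, logistic(x, b0 = 0, b1 = 0.5), lty = 3)

# Fix b1 = 0 and vary b0: flat lines that shift up and down,
# with diminishing movement as b0 moves away from 0
plot(x, logistic(x, b0 = 0, b1 = 0), type = "l", ylim = c(0, 1),
     ylab = "p(x)", main = "b1 = 0, varying b0")
lines(x, logistic(x, b0 = 1, b1 = 0), lty = 2)
lines(x, logistic(x, b0 = 2, b1 = 0), lty = 3)
```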
Now with Probability!
Having a function that maps values between 0 and 1 sets the stage perfectly for introducing probability, which also takes values between 0 and 1! To make this into a concrete example, suppose I’m interested in modeling how many fruit snacks one can eat (explanatory variable, $x$) before getting a stomach ache (response variable, $Y$). It’s reasonable to think about the probability of such an event since I could probably have 100 fruit snacks one day and get a stomach ache, and do the same another day and not get a stomach ache. It is a little ambitious to think that there are no other variables that might affect this probability, and that the entire population will have the same tolerance for fruit snacks. For example, if kids can tolerate more fruit snacks than adults, we’d likely need to model more covariates, but we’ll just keep it simple for now.
Introducing the mathematical description, we’re modeling a single data point as a dichotomous (Bernoulli) random variable with parameter $p$. The value of $Y$ can either be $1$ for stomach ache, and $0$ for no stomach ache.2 $p$ is the probability that we observe a stomach ache. In this case, $p$ will depend on the number of fruit snacks that someone eats, so we should instead write $p(x)$, where $x$ is the number of fruit snacks. The probabilities are related to $x$ by the logistic function as defined above. If we’re making multiple observations of $Y_i$ given some values of $x_i$, we’ll need to introduce a subscript $i$ that indexes all the observations and write $p(x_i)$, since the probability is different for each observation. Finally, the coefficients for the logistic regression are $\beta_0$ and $\beta_1$, which (for now) we assume are given to us. We can include the dependence explicitly in $p(x_i; \beta_0, \beta_1)$, using a ”;” to separate our observations from our parameters. In symbols, this is all summarized by the statement

$$Y_i \mid x_i \sim \text{Bernoulli}\big(p(x_i; \beta_0, \beta_1)\big)$$
If you’re curious, I read this as, “Y sub i conditional on x sub i is distributed as a Bernoulli random variable with parameter p of x sub i given beta 0 and beta 1”. Often we’ll just remember that $p$ depends on our parameters and data, so we can shorten the notation to just $p(x_i)$. I’ll shorten this further to $p_i$ since we’re thinking of the $x_i$ as fixed values as well, sacrificing some precision for clarity.
This is our data model. We have now attached a probabilistic nature to the data events that we observe. Our data model provides a description of the probability of observing each event, known as a probability mass function (pmf). For the Bernoulli distribution, $Y_i \sim \text{Bernoulli}(p_i)$, the pmf is given by:

$$P(Y_i = y_i \mid p_i) = p_i^{\,y_i} (1 - p_i)^{\,1 - y_i}, \quad y_i \in \{0, 1\}$$
This is a notationally heavy way to say the probability of getting a stomach ache ($y_i = 1$) is $p_i$ and the probability of not getting one is 1 minus that. The precision of such a statement takes some getting used to, but trace the function through to make sure you can map the English back to the mathematical statement.
- $Y_i$ versus $y_i$ is a common point of confusion. Just think that $Y_i$ is before we observe the data, so we can talk about the probability of such a random variable. $y_i$ is the actual value, and has no probability associated with it; you either had the stomach ache or not. For interpretation, you can always think about “plugging in” the value of the lower case variable, and we can read $P(Y_i = 1)$ as “the probability of getting a stomach ache”.
- The “;” again separates data from parameters, and sometimes the parameters of the pmf are left off. We need to be given our parameter $p_i$ (which in turn is a function of the number of fruit snacks and $\beta_0, \beta_1$) in order to state the probability of observing an event $y_i$. (A small simulation of this data model is sketched below.)
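Here is a small simulation of this data model in R. The particular values of $\beta_0$ and $\beta_1$ are illustrative assumptions of mine, not anything fitted:

```r
set.seed(1)

# Assumed, purely illustrative parameter values
b0 <- -4    # intercept on the log odds scale
b1 <- 0.1   # change in log odds per fruit snack

n <- 100
x <- sample(0:100, n, replace = TRUE)   # fruit snacks eaten
p <- plogis(b0 + b1 * x)                # p(x_i; b0, b1) via the logistic function
y <- rbinom(n, size = 1, prob = p)      # Y_i | x_i ~ Bernoulli(p(x_i))

head(data.frame(x, p = round(p, 2), y))
```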
Interpretation of Coefficients
With $p(x)$ understood as the probability of an event, we can determine the effect of $\beta_0$ and $\beta_1$ on the probability of an event. We need to invert the logistic function to get:

$$\log\left(\frac{p(x)}{1 - p(x)}\right) = \beta_0 + \beta_1 x$$
The term appearing on the right hand side has a name — we call it the log odds. The term “odds” is the same as you would use colloquially, as in, if you had a 1 in 3 chance of winning a game, your odds are 1:2. If you plug $x = 0$ into the formula above, the log odds on the right hand side is $\beta_0$. Thus, a 1 unit increase in $\beta_0$ implies that there is a 1 unit increase of the log odds of the event happening. Similarly, a 1 unit increase in $x$ implies that there is a $\beta_1$ unit increase on the scale of log odds. If you’re a “show me the math” type person, suppose we have two observations: someone eating 55 fruit snacks, and someone else eating 56 fruit snacks (a one unit difference in $x$). If we take the difference of the log odds of these two scenarios,

$$\log\left(\frac{p(56)}{1 - p(56)}\right) - \log\left(\frac{p(55)}{1 - p(55)}\right) = (\beta_0 + 56\,\beta_1) - (\beta_0 + 55\,\beta_1) = \beta_1$$
We say that $\beta_1$ is the slope on the log odds scale, and $\beta_0$ is the intercept on the log odds scale (the log odds when $x = 0$). In our scenario with a 1 fruit snack difference (where the denominator of the slope is one), $\beta_1$ is directly interpreted as the difference of log odds.
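As a quick numerical sanity check of this claim, here is a sketch using made-up coefficients of my own (`plogis` and `qlogis` are R’s logistic function and its inverse, the log odds):

```r
b0 <- -4; b1 <- 0.1            # illustrative coefficients, not fitted values

p55 <- plogis(b0 + b1 * 55)    # probability at 55 fruit snacks
p56 <- plogis(b0 + b1 * 56)    # probability at 56 fruit snacks

qlogis(p56) - qlogis(p55)      # difference of log odds, equals b1 = 0.1
```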
To be frank, interpreting the coefficients on a log odds scale is quite futile — as shown above with the logistic regression function, the effect of a change on the log odds scale depends on where you are on the scale, with larger changes in probability when the log odds are close to 0. The most I interpret is: if the log odds are positive, the probability is greater than one half.
Even exponentiating both sides to get something on the “odds” scale is quite dubious. I suspect the reason odds are used so often in the gambling industry is that they prey on the fact that odds are fraught with misinterpretations. Seeing odds of 1:2, you’ll likely jump to thinking about a probability of 1/2, yet there’s quite a large difference between winning probabilities of 1/3 and 1/2. Inversely, when talking about “payouts”, for example on a craps table, you’ll commonly see that an Any 7 bet pays “5 for 1”, whose true meaning is a 4:1 payout: you get $4 for every $1 you put down. When it comes to taking your money, the casino will make it seem less likely that you’ll lose money (odds), but for paying you out, they’ll make it seem that you get more money (proportion)! Mixing these usages is intentionally confusing, and that’s not even discussing the “difference” of odds (the odds ratio $e^{\beta_1}$ after exponentiating), which is still difficult to interpret. Avoid odds if you can.3
My recommendation is to use the logistic function to translate things onto the probability scale; you’ll likely have a more intuitive understanding of the values (better yet, plot the entire probability function as a function of your variable). However, one of my professors pointed me toward a Car Talk riddle about potatoes that highlights some misconceptions about the probability scale as well. Be careful with interpretation; it’s very easy to get tripped up!
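For example, here is the kind of translation and plot I have in mind, again with illustrative coefficients of my own choosing:

```r
b0 <- -4; b1 <- 0.1   # illustrative coefficients

# A positive log odds corresponds to a probability above one half
plogis(0.8)           # ~0.69

# Better: plot the whole probability curve over the range of the variable
curve(plogis(b0 + b1 * x), from = 0, to = 100,
      xlab = "fruit snacks", ylab = "P(stomach ache)")
```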
Fitting the Model
Now that we know the general form and interpretation of the logistic function, we can take our data and find the best fit model. Our observations take the value of either 1 or 0, so the draggable points are constrained to those two values. A major question is how we know what is the “best fit”. There are various criteria for determining that, but most are centered around the idea of likelihood.
The Likelihood Principle
Likelihood is a frequently encountered principle for estimating parameters given some data. It is different from the concept of “probability”, even though we commonly use those words interchangeably. In fact, the fundamental idea of likelihood is that it’s kind of the “opposite” of probability. If we think about our data model as the statistical world, and our observations as the real world, the probability function is how we can move from our statistical model to the observations. In turn, the likelihood function uses what we observe in the real world to inform us of better parameters for our data model in the statistical world. We feed parameters into our probability function, and we feed data into our likelihood function.
Let’s skip thinking about the regression for now, and just think about observations from a data model with a single parameter $p$. The definition of the likelihood for a single observation $y$ from a Bernoulli distribution is:

$$\mathcal{L}(p\,;\, y) = P(Y = y \mid p)$$
For multiple independent observations, we multiply the probabilities,

$$\mathcal{L}(p\,;\, y_1, \ldots, y_n) = \prod_{i=1}^{n} P(Y_i = y_i \mid p)$$
What?! Likelihood is defined by the probability function?! So does this mean likelihood can be interpreted as the probability that $p$ is a particular value? Unfortunately no — and I get why this is confusing at first. In frequentist statistics, parameters are thought of as fixed values of the population, i.e. there is no randomness associated with parameters. Hence, it does not make sense to discuss the probability that $p$ takes a certain value.4 If you pause and think about what the definition says though, it will make sense, because there must be a correspondence between the data model and our observations. It’s a two way road, and it’s just a matter of perspective which direction you’re currently moving in.
Substituting in our pmf for the Bernoulli distribution above, we get

$$\mathcal{L}(p\,;\, y_1, \ldots, y_n) = \prod_{i=1}^{n} p^{\,y_i} (1 - p)^{\,1 - y_i}$$
Higher values of the likelihood mean that the corresponding value of $p$ is more likely to have generated the data that we fed in. Relative values of likelihood are also meaningful; in fact, this forms the basis of the likelihood ratio test. In order to find the most likely value of $p$ given our data, we would try maximizing the function. We call the value of $p$ that maximizes this function the maximum likelihood estimate (MLE) of $p$. In order to denote estimates of parameters, you’ll commonly see little hats on the parameter, $\hat{p}$. Since the objective is to maximize this function, we normally take the log of the likelihood function, which doesn’t change the location of the maximum since log is a monotonic function. The log likelihood allows us to avoid dealing with the exponential terms, and the expression is,

$$\ell(p\,;\, y_1, \ldots, y_n) = \sum_{i=1}^{n} \Big[ y_i \log p + (1 - y_i) \log(1 - p) \Big]$$
- I color coded the sliders, but `prop` is more a proxy for controlling the number of observed 1’s. The observations are actually calculated by the function $\text{round}(n \cdot \text{prop})$. The MLE will bounce around due to that round function since our observations need to be a whole number.
- The maximum of the function (the MLE) tends to follow the proportion of 1’s observed in the data, as expected. If we saw a baseball player hit 20 / 100 pitches, we’d instinctively say they have a 20% chance of hitting pitches.
- Notice changing $n$ does not change the location of the maximum (other than rounding), but it does change the curvature of the likelihood around the maximum. The curvature is intimately related to the standard error, so it makes sense that as we observe more data with the same proportion of 1’s, we’d be more certain about our estimate of $p$. (A static sketch of this likelihood follows the list.)
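Since the interactive plot isn’t reproduced here, the following is a static R sketch of the same idea: the Bernoulli log likelihood evaluated on a grid of $p$ values, for two sample sizes with the same proportion of 1’s (20%, a number I picked arbitrarily):

```r
# Bernoulli log likelihood for a candidate p given observed 0/1 data y
loglik <- function(p, y) sum(y * log(p) + (1 - y) * log(1 - p))

p_grid <- seq(0.01, 0.99, by = 0.01)

# Same proportion of 1's (20%), different sample sizes
y_small <- rep(c(1, 0), times = c(20, 80))     # 20 ones out of 100
y_large <- rep(c(1, 0), times = c(200, 800))   # 200 ones out of 1000

ll_small <- sapply(p_grid, loglik, y = y_small)
ll_large <- sapply(p_grid, loglik, y = y_large)

p_grid[which.max(ll_small)]   # maximum near 0.2 in both cases
p_grid[which.max(ll_large)]

# The larger sample gives a much more sharply curved log likelihood
plot(p_grid, ll_small - max(ll_small), type = "l",
     xlab = "p", ylab = "log likelihood (shifted)")
lines(p_grid, ll_large - max(ll_large), lty = 2)
```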
In this situation, it’s not difficult to calculate a closed form solution for the maximum likelihood estimate. We take the derivative of the log likelihood, set it equal to 0, and solve for $p$ to get:

$$\hat{p} = \frac{1}{n} \sum_{i=1}^{n} y_i$$

which is just the observed proportion of 1’s.
There’s a good reason the likelihood principle is so prominent — the likelihood function is very useful for guiding us toward good estimates of parameters for our model. However, finding estimates for $\beta_0$ and $\beta_1$ is not as straightforward when we try to associate data with our logistic function, and no closed form solution exists like in the simplified case above.
Likelihood for Regression
The likelihood function for logistic regression is instead a function of the parameters $\beta_0$ and $\beta_1$. Instead of just the number of 1’s and 0’s we observe in the data $y_i$, we must also take into consideration the values of $x_i$ that we observe. In a sense, the likelihood function transfers along to the next stage of modeling, since $p_i$ is itself a function of $\beta_0$ and $\beta_1$. The likelihood is given by substituting the logistic function for $p_i$ in the Bernoulli likelihood function. If we take the log and simplify the expression, we get

$$\ell(\beta_0, \beta_1\,;\, \mathbf{y}, \mathbf{x}) = \sum_{i=1}^{n} \Big[ y_i (\beta_0 + \beta_1 x_i) - \log\big(1 + e^{\beta_0 + \beta_1 x_i}\big) \Big]$$
Proof: For a single $y_i$, we have:

$$\ell_i = y_i \log p(x_i) + (1 - y_i) \log\big(1 - p(x_i)\big) = y_i \log\left(\frac{p(x_i)}{1 - p(x_i)}\right) + \log\big(1 - p(x_i)\big)$$

We found in the interpretation section that the log odds is the same as the linear predictor, $\beta_0 + \beta_1 x_i$, and substituting the logistic function for $1 - p(x_i) = \frac{1}{1 + e^{\beta_0 + \beta_1 x_i}}$,

$$\ell_i = y_i (\beta_0 + \beta_1 x_i) - \log\big(1 + e^{\beta_0 + \beta_1 x_i}\big)$$

For multiple independent observations, we just sum these per-observation log likelihoods, and we get the expression shown above.
It’s certainly not a friendly expression, and we are now dealing with two parameters instead of just a single $p$, so we are optimizing over a surface instead of just a curve, but we can still visualize the likelihood.
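Here is one way to sketch that visualization in R: evaluate the log likelihood above on a grid of $(\beta_0, \beta_1)$ values for some simulated data (my own illustrative parameter values) and check that the maximum sits near the `glm()` fit:

```r
set.seed(1)
n <- 200
x <- sample(0:100, n, replace = TRUE)
y <- rbinom(n, 1, plogis(-4 + 0.1 * x))   # simulated with b0 = -4, b1 = 0.1

# Log likelihood as a function of (b0, b1), matching the expression above
loglik <- function(b0, b1) sum(y * (b0 + b1 * x) - log(1 + exp(b0 + b1 * x)))

b0_grid <- seq(-8, 0, length.out = 100)
b1_grid <- seq(0, 0.2, length.out = 100)
ll <- outer(b0_grid, b1_grid, Vectorize(loglik))

# Contours of the likelihood surface, with the glm() estimate marked
contour(b0_grid, b1_grid, ll, nlevels = 30, xlab = "b0", ylab = "b1")
fit <- glm(y ~ x, family = binomial)
points(coef(fit)[1], coef(fit)[2], pch = 19)
```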
Hauck-Donner Effect
Above we saw that the fitted logistic function “disappears” when we completely separate the data. Can you rationalize why this happens as you play around with the likelihood plot and the logistic regression function? The maximum likelihood estimate no longer exists (or is not numerically stable) in these situations because the likelihood surface becomes very flat, and the coefficients escape off to infinity. We can make the magnitude of $\beta_1$ arbitrarily large, because the logistic function just becomes sharper, and there are also many values that $\beta_0$ can take on. This situation is known as complete separation, and is closely tied to the Hauck-Donner effect; it happens often in logistic regression, especially in higher dimensional spaces. It’s not the only way this can happen, but it’s important to keep in mind as you diagnose possible issues with your model. One sign is that the standard errors of your coefficients will be very large.
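Here is a tiny, artificial example of complete separation. The exact warning text and numbers depend on your R version, but the enormous coefficients and standard errors are the telltale sign:

```r
# A perfectly separated toy data set: y is 0 below x = 5 and 1 above
x <- 1:10
y <- as.numeric(x > 5)

fit <- glm(y ~ x, family = binomial)
# glm typically warns about fitted probabilities of 0 or 1;
# the coefficient estimates are huge and the standard errors even larger
summary(fit)$coefficients
```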
There are various ways of dealing with this issue, philosophical by nature:
- If you’re using logistic regression for classification, you can choose any of the (arbitrarily large) coefficient values and get the same classification. You can also consider using a simpler model, like a threshold on $x$.
- Change your model by including different or excluding variables. It’s possible that with this separation, you were asking an ill-posed question in the first place, and a different formulation should be considered.
- You can also adjust the likelihood function with a penalty for large values of the coefficients, commonly a ridge or lasso penalty. This is the approach of Python’s scikit-learn implementation.
- Changing to a Bayesian framework: if we add a prior distribution on the coefficients, we can get a well defined posterior distribution. `bayesglm` in the R package “arm” uses this approach (a minimal sketch follows this list).
- Other solutions are listed on this Stack Exchange post.
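As a hedged sketch of the Bayesian remedy (assuming the `arm` package is installed), fitting the separated toy data from the previous example with `bayesglm` gives finite estimates:

```r
# Weakly informative priors on the coefficients keep the estimates finite
library(arm)   # assumes the arm package is installed

x <- 1:10
y <- as.numeric(x > 5)   # the same perfectly separated toy data as above

fit_bayes <- bayesglm(y ~ x, family = binomial)
summary(fit_bayes)$coefficients   # finite estimates, much smaller standard errors
```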
Optimization Topics
Given that this is a relatively standard nonlinear (convex) optimization problem, you have a choice of many methods that will give you the same optimal answer. I won’t get too much into the nitty gritty, as chances are this will be mostly done already by a computing program. Probably the most common implementations are:
- Gradient Descent — A general purpose descent algorithm that is normally used for its simplicity. It’s not as efficient as the other methods.
- Newton-Raphson — A faster algorithm that uses the second derivative of the likelihood function to find the optimal solution. However, looking at the surface of the likelihood function, a quadratic approximation is not always reliable, especially as the likelihood function becomes flatter.
- Iteratively Reweighted Least Squares — Used for fitting generalized linear models because each iteration solves a weighted least squares problem, which also produces useful statistical information by default. This method is the same as “Fisher scoring”, which is similar to Newton-Raphson but instead uses the expected value of the second derivative of the likelihood. This is the method used in R’s `glm.fit` function (a bare-bones sketch follows this list).
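To make the IRLS idea concrete, here is a bare-bones implementation for a single predictor. This is only a sketch of the algorithm under simple assumptions (no weights, offsets, or careful convergence handling); R’s `glm.fit` does considerably more bookkeeping:

```r
# Minimal IRLS (Fisher scoring) for logistic regression with one predictor
irls_logistic <- function(x, y, tol = 1e-8, max_iter = 25) {
  X <- cbind(1, x)                      # design matrix with intercept
  beta <- rep(0, ncol(X))               # start at beta = 0
  for (iter in seq_len(max_iter)) {
    eta <- drop(X %*% beta)             # linear predictor
    p   <- plogis(eta)                  # fitted probabilities
    W   <- p * (1 - p)                  # Bernoulli variance = IRLS weights
    z   <- eta + (y - p) / W            # working response
    beta_new <- solve(t(X) %*% (W * X), t(X) %*% (W * z))
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }
    beta <- beta_new
  }
  drop(beta)
}

set.seed(1)
x <- sample(0:100, 200, replace = TRUE)
y <- rbinom(200, 1, plogis(-4 + 0.1 * x))

irls_logistic(x, y)
coef(glm(y ~ x, family = binomial))     # should agree closely
```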
If you come from a machine learning or engineering discipline, it’s common to think about minimizing loss functions instead of maximizing likelihood functions. There is a correspondence: in the case of logistic regression, maximizing the likelihood is equivalent to minimizing the cross-entropy between the fitted probabilities of our model and the observed data values. There is an additional factor of $1/n$ (the cross-entropy is usually an average) that does not change the result of the optimization.
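A quick numerical check of that correspondence, using simulated data of my own: the average cross-entropy of the fitted probabilities equals the negative log likelihood divided by $n$.

```r
set.seed(1)
x <- sample(0:100, 200, replace = TRUE)
y <- rbinom(200, 1, plogis(-4 + 0.1 * x))

fit  <- glm(y ~ x, family = binomial)
phat <- fitted(fit)

# Average cross-entropy between the data and the fitted probabilities
cross_entropy <- -mean(y * log(phat) + (1 - y) * log(1 - phat))

cross_entropy
-as.numeric(logLik(fit)) / length(y)   # identical up to floating point error
```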
Footnotes
1. Note there are other related parameterizations floating around, but the one I give is the most common in statistics because the parameters enter linearly through $\beta_0 + \beta_1 x$. This allows the function to fit into the framework of Generalized Linear Model theory. ↩
2. As is convention in statistics, a capital $Y$ means we don’t know the value yet (it is random, or not yet observed), while a lower case $y$ means that it’s known (fixed, or already observed). ↩
3. Odds do have a tremendous advantage in (mostly) epidemiological contexts: if you are trying to estimate the odds ratio of a disease conditional on some sort of exposure, an important property is that the odds ratio is symmetric with respect to the roles of disease and exposure. This implies that you can estimate the odds of disease conditional on exposure the same way you would estimate the odds of exposure conditional on disease, i.e. you can use retrospective studies to estimate quantities you’d normally be interested in from prospective studies! Note odds ratios are NOT the same as relative risk, another common quantity in epidemiology. ↩
4. Likelihood also differs from posterior probability in Bayesian statistics. In a Bayesian framework, we do think about parameters as being random values, and thus we are able to make probabilistic statements about them. Bayes’ Theorem describes the relationship between the prior probability, the posterior probability, and the likelihood function. ↩