It’s all about how you encode knowledge into a mathematical model, and turn it into an algorithmic system.


# Feature and learner

Finding a good feature for a data-analysis task is far more important than a sophisticated model.

A good feature, crafted from expert experience, captures the essence of the data, and far less effort is then needed to model it.

The recent trend of “feature learning” is definitely the way to go.

# How to hire a data scientist or statistician

I am obviously biased with respect to the importance of statistics based on my education, though other people seem to agree with me. During interviews, we tend to either ask questions that play to our individual strengths or brainteasers. Though easier, this approach is fundamentally wrong. Interviewers should outline the skills required for the role and ask questions to ensure the candidate possesses all the necessary qualifications. **If your interview questions don’t have a specific goal in mind, they are shitty interview questions**. This means, by definition, that most brainteaser and probability questions are shitty interview questions.

**A data scientist or statistician should be able to:**

- Pull data from, create, and understand SQL and NoSQL databases (and the relative advantages of each)
- understand and construct a good regression
- write their own map/reduce
- understand CART, boosting, and Random Forests (and maybe SVMs), and fit them in R or some other open-source implementation
- take a project from start to finish, without the help of an engineer, and create actionable recommendations

### Good Interview Questions

Below are a few of the interview questions I’ve heard or used over the past few years. Each has a very specific goal in mind, which I enumerate in the answer. Some are short and very easy, some are very long and can be quite difficult.

**Q: How would you calculate the variance of the columns of a matrix (called mat) in R without using for loops?**

**A:** This question establishes familiarity with R by indirectly asking about one of the biggest flaws of the language. If the candidate has used it for any non-trivial application, they will know the apply function and will bitch about the slowness of for loops in R. The solution is:
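```r
# Column-wise variance of the matrix `mat`, no explicit for loop needed
apply(mat, 2, var)
```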

**Q: Suppose you have a .csv file with two columns, the first containing first names and the second containing last names. Write some code to create a .csv file with last names as the first column and first names as the second column.**

**A:** You should know basic cat, awk, grep, sed, etc.
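For instance, a candidate comfortable with awk can do this in one line; a rough R equivalent is sketched below (the file name names.csv is hypothetical, and I assume no header row and no commas embedded in the names):

```r
# Shell one-liner version: awk -F',' '{print $2","$1}' names.csv > swapped.csv
# The same column swap done in R:
names_df <- read.csv("names.csv", header = FALSE,
                     col.names = c("first", "last"),
                     stringsAsFactors = FALSE)
write.table(names_df[, c("last", "first")], "swapped.csv",
            sep = ",", row.names = FALSE, col.names = FALSE, quote = FALSE)
```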

**Q: Explain map/reduce and then write a simple one in your favorite programming language.**

**A:** This establishes familiarity with map/reduce. See my previous blog post.
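A toy sketch of the kind of answer you might expect, using R’s built-in Map and Reduce to count words (the input lines below are made up):

```r
# Toy word count written in map/reduce style
lines <- c("the quick brown fox", "the lazy dog", "the fox")

# Map: split each line into words (emitting one key per word)
mapped <- unlist(Map(function(line) strsplit(line, " ")[[1]], lines))

# Shuffle/sort: group a count of 1 under each distinct word
grouped <- split(rep(1, length(mapped)), mapped)

# Reduce: sum the counts within each group
counts <- sapply(grouped, function(ones) Reduce(`+`, ones))
counts
```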

**Q: Suppose you are Google and want to estimate the click through rates (CTR) on your ads. You have 1000 queries, each of which has been issued 1000 times. Each query shows 10 ads and all ads are unique. Estimate the CTR for each ad.**

**A:** This is my favorite interview question for a statistician. It doesn’t tackle one specific area, but gets at the depth of statistical knowledge they possess. Only good candidates receive this question. The candidate should immediately recognize this as a binomial trial, so the maximum likelihood estimator of the CTR is simply (# clicks)/(# impressions). This question is easily followed up by mentioning that click through rates are empirically very low, so this will estimate many CTRs at 0, which doesn’t really make sense. The candidate should then suggest altering the estimate by adding pseudo counts: (# clicks + 2)/(# impressions + 4). This is called the Wilson estimator and shrinks your estimate towards .5. Empirically, this does much better than the MLE. You should then ask if this can be interpreted in the context of Bayesian priors, to which they should respond, “Yes, this is equivalent to a prior of beta(2,2), which is the conjugate prior for the binomial distribution.”
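To make the contrast concrete, here is a tiny R sketch of the MLE versus the add-2/add-4 shrunken estimate (the click counts below are made up):

```r
# MLE vs. shrunken CTR estimates for a handful of ads
clicks      <- c(0, 1, 3, 0, 12)
impressions <- rep(1000, length(clicks))

ctr_mle    <- clicks / impressions              # many ads estimated at exactly 0
ctr_shrunk <- (clicks + 2) / (impressions + 4)  # Beta(2,2) posterior mean

rbind(ctr_mle, ctr_shrunk)
```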

The discussion can be led multiple places from here. You can discuss: a) other shrinkage estimators (this is an actual term in statistics, not a Seinfeld reference; see Stein estimators for further reading), b) pooling results from similar queries, c) use of covariates (position, ad text, query length, etc.) to assist in prediction, d) methods for prediction: logistic regression, complicated ML models, etc. A strong candidate can talk about this problem for at least 15 or 20 minutes.

**Q: Suppose you run a regression with 10 variables and 1 is significant at the 95% level. Suppose you then find 10% of the data had been left out randomly and had their y values deleted. How would you predict their y values?**

**A:** I would be very careful about doing this unless it’s sensationally predictive. If one generates 10 variables of random noise and regresses them against white noise, there is a ~40% chance (1 − 0.95^10 ≈ 0.40) that at least one will be significant at a 95% confidence level. This question helps me understand whether the individual understands regression. I also usually ask about regression diagnostics and assumptions.
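A quick simulation sketch of that false-positive rate (sample size, number of replications, and seed are arbitrary):

```r
# How often does pure noise yield at least one "significant" coefficient?
set.seed(1)
hits <- replicate(2000, {
  x <- matrix(rnorm(100 * 10), ncol = 10)       # 10 noise predictors
  y <- rnorm(100)                               # white-noise response
  p <- summary(lm(y ~ x))$coefficients[-1, 4]   # p-values, intercept dropped
  any(p < 0.05)
})
mean(hits)  # roughly 0.40, matching 1 - 0.95^10
```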

**Q: Suppose you have the option to go into one of two bank branches. Branch one has 10 tellers, each with a separate queue of 10 customers, and branch two has 10 tellers, sharing one queue of 100 customers. Which do you choose?**

**A:** This question establishes familiarity with a wide range of basic stat concepts: mean, variance, waiting times, the central limit theorem, and the ability to model and then analyze a real-world situation. Both options have essentially the same mean wait time. The latter option has smaller variance, because your wait is, in effect, determined by the average service time of the 100 customers ahead of you (spread across 10 tellers) rather than by just 10 service times at a single teller. One can fairly argue about utility functions and the merits of risk-seeking behavior over risk-averse behavior, but I’d go for the same mean with smaller variance (think about how maddening it is when another line at the grocery store is faster than your own).
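A small simulation sketch of the two branches, with service times drawn as Exponential with mean 1 purely for illustration:

```r
set.seed(1)

# Branch one: 10 customers ahead of you at a single teller
wait_separate <- replicate(10000, sum(rexp(10)))

# Branch two: 100 customers ahead of you, dispatched in order to 10 tellers
wait_shared <- replicate(10000, {
  teller_free <- rep(0, 10)                 # time each teller next frees up
  for (cust in 1:100) {
    i <- which.min(teller_free)             # next customer goes to the first free teller
    teller_free[i] <- teller_free[i] + rexp(1)
  }
  min(teller_free)                          # when a teller frees up for you
})

c(mean(wait_separate), mean(wait_shared))   # means are close
c(sd(wait_separate), sd(wait_shared))       # shared queue has much smaller sd
```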

**Q: Explain how a Random Forest differs from a normal regression tree.**

**A:** This question establishes familiarity with two popular ML algorithms. “Normal” regression trees have some splitting rule based on the decrease in mean squared error or some other measure of error or misclassification. The tree grows until the next split decreases error by less than some threshold. This often leads to overfitting, and trees fit on data sets with large numbers of variables can leave many variables out entirely. Random Forests are an ensemble of fully grown trees. For each tree, a subsample of the variables and a bootstrap sample of the data are taken, the tree is fit, and the trees are then averaged together. Generally this prevents overfitting and allows all variables to “shine”. If the candidate is familiar with Random Forests, they should also know about partial dependence plots and variable importance plots. I generally ask this question of candidates who I fear may not be up to speed with modern techniques. Some implementations do not grow trees fully, but the original implementation of Random Forests does.
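Here is a short sketch of the contrast on a built-in data set, assuming the rpart and randomForest packages are installed:

```r
library(rpart)
library(randomForest)

# A single regression tree: one set of greedy splits
tree <- rpart(mpg ~ ., data = mtcars)
tree

# A Random Forest: many trees on bootstrap samples with random variable subsets
set.seed(1)
forest <- randomForest(mpg ~ ., data = mtcars, ntree = 500, importance = TRUE)

importance(forest)                  # variable importance
partialPlot(forest, mtcars, "wt")   # partial dependence on one variable
```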

### Bad Interview Questions

The following are probability and intro-stat questions that are not appropriate for a data scientist or statistician role. They should have learned this in intro statistics. These would be like asking an engineering candidate the complexity of binary search (O(log n)).

**Q: Suppose you are playing a dice game: you roll a single die, then are given the option to re-roll it once after observing the outcome. What is the expected value of your final roll?**

**A:** The expected value of a single die roll is 3.5 = (1+2+3+4+5+6)/6, so you should opt to re-roll only if the initial roll is a 1, 2, or 3. You keep the first roll if it is a 4, 5, or 6; if you re-roll (which happens with probability .5), the expected value of that roll is 3.5, so the overall expected value is:

4 * 1/6 + 5 * 1/6 + 6 * 1/6 + 3.5 * .5 = 4.25
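A quick simulation sketch confirming the 4.25 (the number of draws and the seed are arbitrary):

```r
set.seed(1)
first  <- sample(1:6, 1e6, replace = TRUE)
second <- sample(1:6, 1e6, replace = TRUE)
mean(ifelse(first >= 4, first, second))  # keep a 4, 5, or 6; otherwise re-roll; ~4.25
```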

**Q: Suppose you have two independent variables, X and Y, each with standard deviation 1. Then, X + Y has standard deviation 2. If instead X and Y had standard deviations of 3 and 4, what is the standard deviation of X + Y?**

**A:** Variances of independent variables are additive, not standard deviations. The first statement was a trick: sd(X+Y) = sqrt(1+1) = sqrt(2), not 2. For the second part, sd(X+Y) = sqrt(Var(X+Y)) = sqrt(Var(X) + Var(Y)) = sqrt(sd(X)*sd(X) + sd(Y)*sd(Y)) = sqrt(3*3 + 4*4) = 5.

### A few closing notes

Don’t ask anything about traversing a tree or graph structure that you learned in your algorithms class. This is a question for a software engineer, not a data scientist or statistician. If you are a software engineer interviewing a data scientist, ask your data scientist friends for questions beforehand. I do this when I interview software engineers, and it’s a much better experience for everyone involved. If you don’t know any data scientists, feel free to steal these or email me for more. Finally, I’d love to hear about your favorite interview questions, worst interview experiences, or anything else related to this topic.

# Why I like ensemble learning

It’s democratic machine learning.

It checks the consistency among all of the committee learners:

if an item is ranked or classified highly by only one committee learner, the prediction is not reliable (not robust) enough;

but if an item gets good scores from all the learners, the prediction is consistent and highly reliable.
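A tiny sketch of the committee idea, with made-up scores from three hypothetical learners:

```r
# Rows are learners, columns are items being scored
scores <- rbind(
  learner1 = c(item_a = 0.9, item_b = 0.2, item_c = 0.8),
  learner2 = c(item_a = 0.8, item_b = 0.9, item_c = 0.7),
  learner3 = c(item_a = 0.9, item_b = 0.1, item_c = 0.8)
)

committee_mean <- colMeans(scores)       # the ensemble score
committee_sd   <- apply(scores, 2, sd)   # low sd = learners agree = more reliable
rbind(committee_mean, committee_sd)
```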

# Key to the success of the Generalized Linear Model (GLM)

The GLM is actually connected to kernel methods in the machine learning community.

One key to the success of the GLM is that the “linearity” of the model means:

“linear in the model parameters”,

not to be confused with:

“linear in the input variables”.

Example 1.

The input variable lives in a two-dimensional space, denoted (x1, x2). We have a data set D of size N, { d1, d2, …, dN }. Now, for a simple GLM, we can set up a model like this:

Y = (1 ; X) * w + e;

where (1 ; X) stacks all the data points with a column of ones augmented on the left, a trick that folds the intercept into the model parameters: w = [w0; w1; w2].

Trivially, this model can be solved by the generalized (pseudo-)inverse, which gives us the optimal model parameters w*.
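A minimal sketch of Example 1 in R, with simulated data (the sample size and true coefficients are chosen arbitrarily):

```r
set.seed(42)
N <- 100
X <- cbind(x1 = rnorm(N), x2 = rnorm(N))
y <- 1 + 2 * X[, "x1"] - 3 * X[, "x2"] + rnorm(N, sd = 0.1)

A <- cbind(1, X)                          # the (1 ; X) design matrix
w_star <- solve(t(A) %*% A, t(A) %*% y)   # normal equations (pseudo-inverse solution here)
w_star                                    # estimates of [w0; w1; w2]
```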

Example 2.

Same notation as above, but this time we are playing with a more sophisticated GLM: polynomial fitting within the GLM.

You might say, “Hey, hold on a second! A polynomial is not linear any more! You are still using a G-Linear-M, right?”

Good catch. Yes, we are still using the GLM, but with the “kernel trick”!

Say we are using a 2nd-order polynomial; then basically we are constructing a new “feature space” from the original space (x1, x2). Now it is (x1^2, x1*x2, x2^2).

Hmm.. technically, for each data point di = [di.x1, di.x2], you compute its new coordinates in the new feature space, leading to this:

di_new = [(di.x1)^2 , di.x1*di.x2, (di.x2)^2].

The GLM is now:

Y = [1; Z]*wz + e;

Z is the 3D feature-space mapping of the original 2D input space; the model parameters are wz = [wz0; wz1; wz2; wz3].

See? The new model is still “linear” in the parameters, but “non-linear” in the input variables!
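A matching sketch of Example 2, reusing the same least-squares machinery on the mapped feature space Z (again with simulated data):

```r
set.seed(42)
N  <- 100
x1 <- rnorm(N); x2 <- rnorm(N)
y  <- 1 + 0.5 * x1^2 - 2 * x1 * x2 + 3 * x2^2 + rnorm(N, sd = 0.1)

Z <- cbind(z1 = x1^2, z2 = x1 * x2, z3 = x2^2)   # the new feature space
A <- cbind(1, Z)                                  # the [1; Z] design matrix
wz_star <- solve(t(A) %*% A, t(A) %*% y)          # still linear in wz
wz_star                                           # estimates of [wz0; wz1; wz2; wz3]
```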

That’s why GLM is quite successful and powerful!