Knuth's call for a literate programming style has spawned a new genre of statistical exposition, the R vignette, and thereby raised the dreary task of documenting computer code to the level of a minor art form, like finger painting or tap dancing. These vignettes are intended to reveal something of the author's contribution to the greater glory of data analysis, usually in the form of an R package.

This development has been enormously successful, and yet there is a general unease within the research community, a feeling that many of the almost 10,000 packages currently on CRAN have not received adequate vetting, or vignetting. In this spirit I would like to propose a new genre, the R vinaigrette. These would be brief communications that expose some feature, or bug, in the collective enterprise of statistical software. As the name suggests, there should be something piquant about a vinaigrette, some lemon juice to balance the oils, or mustard, or vinegar. I would only insist that, like the vignette, the vinaigrette must be reproducible. Ideally, they should also satisfy the Kolmogorov dictum that every single discovery should fit in a four-page Doklady note, since "the human brain is not capable of creating anything more complicated at one time."

An example is now available at http://www.econ.uiuc.edu/~roger/research/ebayes/Bdecon.pdf

# Da Void of Meaning


## Monday, November 28, 2016

### Optimal Transport on the London Tube

I've been reading Alfred Galichon's terrific new monograph on optimal transportation, and was inspired over the fall break to look into his example in Section 8.4 on routes for the Paris metro. Data for the London Underground was more easily accessible, so I made a toy tube router function for R that takes an origin and destination and computes an "optimal" path by minimizing the cumulative distance between stops. An example path is illustrated in the figure below with the lines color coded. Unfortunately, my current data sources don't account for links that are served by multiple lines, so the routes typically overstate the number of line changes. (It would be nice to penalize line changes with a fixed cost, but this would have extended the project beyond the fall break.)

Data and code are available here: http://www.econ.uiuc.edu/~roger/research/OT/tube.tar.gz

It is all very simple, just a linear program, but it makes you think about how one might scale it up to the scheme used by Google Maps.
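
The problem is small enough that one can write the whole LP down explicitly. Here is a minimal sketch using the lpSolve package, assuming an edge list with columns from, to, and dist; the data layout in tube.tar.gz may differ, and the function name tube_route is mine:

```r
# Shortest path as a min-cost network flow LP: ship one unit of flow from
# the origin to the destination, paying 'dist' on each arc used.
library(lpSolve)

tube_route <- function(edges, origin, destination) {
    # treat each undirected link as a pair of directed arcs
    arcs <- rbind(edges, data.frame(from = edges$to, to = edges$from, dist = edges$dist))
    nodes <- sort(unique(c(arcs$from, arcs$to)))
    m <- nrow(arcs)
    # node-arc incidence matrix: +1 where an arc leaves a node, -1 where it enters
    A <- matrix(0, length(nodes), m)
    A[cbind(match(arcs$from, nodes), 1:m)] <- 1
    A[cbind(match(arcs$to, nodes), 1:m)] <- -1
    # flow conservation: net outflow +1 at the origin, -1 at the destination
    rhs <- rep(0, length(nodes))
    rhs[match(origin, nodes)] <- 1
    rhs[match(destination, nodes)] <- -1
    fit <- lp("min", arcs$dist, A, rep("=", length(nodes)), rhs)
    arcs[fit$solution > 0.5, ]  # the arcs on the optimal route
}
```

Since the node-arc incidence matrix is totally unimodular, the LP solution is automatically integral, which is why a plain linear program suffices with no resort to integer programming.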



## Friday, September 16, 2016

### Dawn of the δ-method

Several years ago my colleague Steve Portnoy wrote a letter to the editor of the American Statistician in response to an article that they had published called "Who invented the δ-method?" The article claimed priority for Robert Dorfman on the basis of an article appearing in 1938 called "A Note on the δ-method for Finding Variance Formulae," published in the Biometric Bulletin. Portnoy pointed out that Joe Doob had written about the δ-method in a 1935 Annals paper titled "On the limiting distribution of certain statistics," referring to it as the "well-known δ-method," citing prior work by T. L. Kelley and Sewall Wright, and noting rather modestly that his Theorem 1 "shows an interpretation which can be given to the results obtained by this method." It seems plausible that Doob's is the first formal justification for the method, and it is puzzling, to put it euphemistically, that Dorfman made no mention of Doob's article. Perhaps this oversight can be forgiven as a juvenile mistake, since the Dorfman paper was written shortly after he finished his undergraduate studies at Columbia, while working at the Worcester State Hospital, pictured above. This august institution was reputed to be the first asylum for the insane in New England, and also happened to be the publisher of the Biometric Bulletin. Dorfman later went on to earn a PhD at Berkeley, and taught at Harvard, where he coauthored an influential book about linear programming with Paul Samuelson and Robert Solow.
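
For readers who know the method only by name, its now-standard univariate form (a textbook fact, not either paper's notation): if $\sqrt{n}\,(T_n - \theta) \rightsquigarrow N(0, \sigma^2)$ and $g$ is differentiable at $\theta$, then

$$\sqrt{n}\,\bigl(g(T_n) - g(\theta)\bigr) \rightsquigarrow N\bigl(0,\; g'(\theta)^2 \sigma^2\bigr),$$

the statement that Doob's Theorem 1 puts on a rigorous footing.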

## Thursday, September 15, 2016

### Bag of Little Bootstrap for QR

In the never-ending quest to speed up inference for large quantile regression problems, I have started to look into the Kleiner et al. Bag of Little Bootstraps proposal. After a serious confusion on my part was corrected with the help of Xiaofeng Shao, I've come to the following code fragment added to summary.rq in my quantreg package:

else if (se == "BLB"){ # Bag of Little Bootstraps

n <- length(y)

b <- ceiling(n^gamma)

S <- n %/% b

U <- matrix(sample(1:n, b * S), b, S)

Z <- matrix(0, NCOL(x), S)

for(i in 1:S){

u <- U[,i]

B <- matrix(0, NCOL(x), R)

for(j in 1:R){

w <- c(rmultinom(1, n, rep(1/b, b)))

B[,j] <- rq.wfit(x[u,], y[u], tau, weights = w, method = "fnb")$coef

}

Z[,i] <- sqrt(diag(cov(t(B))))

}

serr <- apply(Z, 1, mean)

}
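
For orientation, the intended call looks something like the sketch below. Note that exposing gamma and R as arguments to summary.rq is an assumption on my part, so treat those parameter names as hypothetical pass-throughs:

```r
library(quantreg)
# a small illustration only; BLB is meant for much larger n
fit <- rq(stack.loss ~ ., data = stackloss, tau = 0.5)
summary(fit, se = "BLB", gamma = 0.7, R = 100)  # gamma, R: hypothetical arguments
```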


In the eventual implementation I managed to embed the inner loop in Fortran, which helps to speed things up a bit, although it would eventually be helpful to allow the computation to be distributed across cluster nodes.

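In the meantime, the outer loop over subsets is embarrassingly parallel on a single machine. A minimal multicore sketch, assuming only the rq.wfit call used above; the function blb_se and its defaults are mine, not part of quantreg:

```r
# Hypothetical multicore version of the BLB loop: each of the S subsets is
# processed independently, so they can be farmed out with parallel::mclapply.
library(parallel)
library(quantreg)

blb_se <- function(x, y, tau = 0.5, gamma = 0.7, R = 100, cores = 2) {
    # x: design matrix, y: response
    n <- length(y)
    b <- ceiling(n^gamma)                 # subset size
    S <- n %/% b                          # number of subsets
    U <- matrix(sample(1:n, b * S), b, S)
    one_subset <- function(i) {
        u <- U[, i]
        B <- replicate(R, {
            w <- c(rmultinom(1, n, rep(1/b, b)))
            rq.wfit(x[u, ], y[u], tau, weights = w, method = "fnb")$coef
        })
        sqrt(diag(cov(t(B))))             # standard errors from subset i
    }
    Z <- mclapply(1:S, one_subset, mc.cores = cores)
    Reduce(`+`, Z) / S                    # average over the S subsets
}
```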

I should also mention that the implicit assumption here, that BLB works for moments, appears at present to be beyond the scope of the existing theory.

## Friday, March 25, 2016

### Round of 16: What's Sweet about It?

The Statistics Department is running a March Madness contest, and I couldn't resist entering. It is organized a little differently than the usual bracket picking, which made it more fun to prepare an entry. You are given a budget of 100 units and may pick as many teams as you want, subject to the budget constraint: teams seeded 1 cost 25, seeds 2 cost 19, ..., and seeds 16 cost 1. I simulated 10,000 brackets, recorded the survival probabilities as in the earlier survival plot on this blog, computed the expected number of wins for each team normalized by its cost, and ordered the teams to produce the following list. The winner is the entry whose teams accumulate the largest number of wins.
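
The selection rule itself fits on one screen. A minimal sketch, assuming a data frame teams with columns Team, EWins (expected wins from the simulated brackets), and Cost; the names are mine, not those of the contest:

```r
# Greedy knapsack by value: order teams by expected wins per unit cost
# ("Bang") and take the longest affordable prefix of the list.
pick_entry <- function(teams, budget = 100) {
    teams$Bang <- teams$EWins / teams$Cost
    teams <- teams[order(-teams$Bang), ]
    teams$CumCost <- cumsum(teams$Cost)
    teams$CumEWins <- cumsum(teams$EWins)
    teams[teams$CumCost <= budget, ]
}
```

This is only a prefix heuristic for the underlying knapsack problem, but as the table below shows, it spends 99 of the 100 available units here.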


| Team | EWins | Seed | Cost | Bang | CumCost | CumEWins |
|---|---|---|---|---|---|---|
| Gonzaga | 1.1619 | 11 | 4 | 0.2904750 | 4 | 1.1619 |
| Pittsburgh | 0.9041 | 10 | 4 | 0.2260250 | 8 | 2.0660 |
| Cincinnati | 1.0757 | 9 | 5 | 0.2151400 | 13 | 3.1417 |
| Iowa | 1.6385 | 7 | 8 | 0.2048125 | 21 | 4.7802 |
| Syracuse | 0.8112 | 10 | 4 | 0.2028000 | 25 | 5.5914 |
| VA Commonwealth | 0.7855 | 10 | 4 | 0.1963750 | 29 | 6.3769 |
| West Virginia | 2.4625 | 3 | 13 | 0.1894231 | 42 | 8.8394 |
| Duke | 2.2013 | 4 | 12 | 0.1834417 | 54 | 11.0407 |
| Purdue | 1.9888 | 5 | 11 | 0.1808000 | 65 | 13.0295 |
| Connecticut | 0.9040 | 9 | 5 | 0.1808000 | 70 | 13.9335 |
| Butler | 0.8602 | 9 | 5 | 0.1720400 | 75 | 14.7937 |
| Indiana | 1.8908 | 5 | 11 | 0.1718909 | 86 | 16.6845 |
| Texas A&M | 1.8507 | 3 | 13 | 0.1423615 | 99 | 18.5352 |

