Wednesday, April 5, 2017

Florida's bizarre attempt to buy a biotech industry

Context: I moved to Miami for the year of 2017 while continuing to work remotely at Mount Sinai in NYC. The contrast between the academic and biotech ecosystems of Miami and New York was startling, which inspired me to dig a little into the strange history of Florida's attempts to duplicate the magic of the biotech districts that have been built up in North Carolina and San Diego.

In 2003, Florida's famously fiscally conservative governor, Jeb Bush, went on a biotech spending spree, committing ~$1.5 billion to entice a motley crew of research institutes into opening franchises around the state. Most prominently, a Florida branch of the Scripps Research Institute was opened in Jupiter. The researchers were promised lavish funding, and Florida taxpayers were promised suspiciously precise numbers of "jobs created". More than a decade later, Scripps Florida is still clinging on, lonely between a golf club and a strip mall. Other research centers were opened in similarly isolated locations and have since died (or are in the process of dying). The Torrey Pines Institute for Molecular Studies has been defunded because it "failed to meet job-creation targets". The Vaccine and Gene Therapy Institute in Port St. Lucie has not only closed, but is being sued and was even the subject of a police investigation into trashed documents. The Burnham Institute in Orlando is being pursued by the state of Florida for $80 million, again for not "creating" enough jobs. The lone exception to this pattern of failure seems to be the independently thriving research & biotech scene around the University of Florida in Gainesville.

Sunday, January 3, 2016

How much faster is a truncated singular value decomposition?

The Singular Value Decomposition is an important matrix operation which enables many other numerical algorithms. The SVD lets you tame seemingly unwieldy matrices by uncovering their reduced "low rank" representation. A matrix which can be accurately approximated by a low-rank decomposition actually contains much less information than suggested by its dimensions.

A rank-1 matrix you might see often in the wild is an elementary school multiplication table. The multiplication table contains nxn entries which are intimately intertwined (i.e. one entry can't be changed independent of the others). A singular value decomposition of a multiplication table would reveal that the essential information is just the sequence of numbers 1, 2, ..., n (whose outer product reconstructs the table).
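
As a minimal sketch of that example (using NumPy and a hypothetical 10x10 table), the singular values make the rank-1 structure obvious:

import numpy as np

# A 10x10 multiplication table is just the outer product of 1..10 with itself.
n = 10
table = np.outer(np.arange(1, n + 1), np.arange(1, n + 1))

U, s, Vt = np.linalg.svd(table)
print(np.round(s, 2))
# Only the first singular value is meaningfully non-zero, so all n*n entries
# are captured by a single pair of singular vectors.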

This kind of data compression can be very useful in machine learning, where we prefer to learn predictors which use a small number of informative features (rather than thousands of highly correlated features). In addition to letting us boil down redundant features into more informative combinations, the SVD can also be used for denoising samples and imputing missing data.

What does the output of an SVD look like? From a given input matrix X, you get back a triplet U, s, V. The two matrices (U and V) are often called the left and right singular vectors. The outer products of these singular vectors are the rank-1 building blocks from which X gets reconstructed. The vector s contains the singular values, each of which acts as a weight on its corresponding pair of singular vectors.

To reconstruct your original matrix just add up the outer products of U and V, weighted by the entries of s. That is, you can get back a matrix whose entries are approximately equal to the original by running:

np.sum([np.outer(U[:, i], V[i, :]) * si for i, si in enumerate(s)], axis=0)

Alternatively, you can upgrade the vector s into a diagonal matrix and express the reconstruction of X purely through matrix multiplications:

np.dot(U, np.dot(np.diag(s), V))

In many cases, when we are dealing with low-rank underlying data, many of the singular values will be extremely small or exactly zero. In this case, it is wasteful to perform the "full" SVD and can be advantageous to instead do a "truncated" decomposition (computing only k singular vectors).

Exactly how much slower is a full SVD vs. a truncated SVD? I performed the following quick experiment to find out:

  1. Generate 100 random matrices with rank 50, 1000 columns and a random number of rows between 100 and 5000
  2. Record how long it takes to decompose each matrix with a full SVD, an exact truncated SVD, and an approximate randomized truncated SVD (a rough sketch of this setup appears just after this list)
  3. Ensure that the reconstructed matrix matches the original
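
Here's a rough sketch of this kind of benchmark (not the exact script I ran): SciPy's svds stands in for the exact truncated solver and scikit-learn's randomized_svd for the randomized one.

import time
import numpy as np
from scipy.sparse.linalg import svds                # exact truncated SVD (ARPACK)
from sklearn.utils.extmath import randomized_svd    # approximate randomized truncated SVD

def low_rank_matrix(n_rows, n_cols=1000, rank=50, rng=np.random):
    # The product of two thin random factors has exactly the desired rank.
    return rng.randn(n_rows, rank).dot(rng.randn(rank, n_cols))

def time_call(fn):
    start = time.time()
    fn()
    return time.time() - start

X = low_rank_matrix(n_rows=np.random.randint(100, 5000))
print("full SVD               %0.2fs" % time_call(lambda: np.linalg.svd(X, full_matrices=False)))
print("truncated (exact)      %0.2fs" % time_call(lambda: svds(X, k=50)))
print("truncated (randomized) %0.2fs" % time_call(lambda: randomized_svd(X, n_components=50)))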

Results

The truncated SVD (using either an exact solver or approximate randomized solver) can be many times faster than a full SVD. If you're decomposing a large dataset with known low rank, go ahead and use a truncated SVD. On the other hand, if you're uncertain about the rank of your matrix (and want to inspect a large spectrum of singular values), you'll have to wait for a full SVD instead.

Source code:

Wednesday, May 28, 2014

Spark should be better than MapReduce (if only it worked)

Spark is the user-friendly face of Big Data: a distributed programming framework which lets you write collection-oriented algorithms in Scala that (theoretically) execute seamlessly across many machines. Spark has an elegant API (parallel collections with methods like map/reduce/groupByKey) that feels like programming locally. Unlike MapReduce, Spark can cache partial results across the memory of its distributed workers, allowing for significantly faster/lower-latency computations. If Spark worked as promised, it would be a huge productivity boost over writing MapReduce pipelines.

Unfortunately, as I've learned over the past month, Spark will happily generate programs that mysteriously grind to a halt-- and tracking down the source of these problems can be a numbingly opaque process. There are at least two distinct problems: Spark's lazy evaluation makes it hard to know which parts of your program are the bottleneck and, even if you can identify a particularly slow expression, it's not always obvious why it's slow or how to make it faster.

Lazy Evaluation

Spark's API lets you express your intentions clearly via many fine-grained calls to transformations such as map, filter, flatMap, distinct, &c. If you ran a long chain of transformations one at a time, you'd incur a large communication overhead and clog each worker's local cache with useless partial results. Spark reconciles its functional style with performance via delayed execution: transformations get bundled together and only run on demand. The unruly downside to Spark's execution model is that big swaths of your program will run as a monolithic glob of computation. And if that computation runs slowly...well, good luck figuring out which of its constituent parts is the culprit. Spark could ameliorate some of this confusion with a non-lazy debug mode or some built-in tooling assistance (i.e. a distributed profiler). For now, however, you're stuck (1) trying to reason your way through the lazy dependency graph ("Oh! This one is a shuffle dependency!") and (2) forcing computations and checking how long they take.
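
A toy PySpark sketch of what that bundling looks like (the code I've actually been fighting with is Scala, but the evaluation semantics are the same; the input path below is hypothetical):

from pyspark import SparkContext

sc = SparkContext("local[4]", "lazy-demo")
lines = sc.textFile("hdfs:///some/large/input")    # hypothetical input path

# None of these transformations do any work yet; they just extend the lineage graph.
tokens = lines.flatMap(lambda line: line.split())
cleaned = tokens.filter(lambda tok: tok.isalpha())
pairs = cleaned.map(lambda tok: (tok.lower(), 1))
counts = pairs.reduceByKey(lambda a, b: a + b)

# Only this action forces the whole chain to execute, so a "slow count" may really be
# a slow flatMap, filter, map, or shuffle hiding somewhere upstream.
total = counts.count()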

Too Many Parameters, Too Many Ways to Fail

Though Spark makes it look like you're programming on your laptop, it has many performance gotchas you must vigilantly guard against. Make sure your objects aren't using Java serialization (Kryo is faster), pull that object out of the closure, broadcast this array to keep it from being copied repeatedly. Though annoying for beginners, these rules of thumb at least have some consistency which can be learned.

More frustrating, however, is the inevitability with which Spark's many (initially invisible) default parameters will be wrong for your application. Buffers and heap sizes will turn out to be too small. Your reductions will use too few (or too many) workers. Too much memory gets used for data caching, or maybe it's too little. There's no rigorous or systematic way to set these parameters: you wait until things fail and supplicate at the feet of Spark's heuristic knob pantheon. 
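
For a flavor of the knob-twiddling, here's roughly what my configuration ended up looking like (the keys are standard Spark 1.x settings, but the values are placeholders rather than recommendations):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.executor.memory", "12g")              # heap per worker
        .set("spark.storage.memoryFraction", "0.3")       # how much heap the RDD cache may use
        .set("spark.default.parallelism", "200"))         # partitions used by reductions/shuffles
sc = SparkContext(conf=conf)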

My Experience Thus Far

Nothing I do can save my cluster from spending hours thrashing its way through a modest input size (before dying under a flood of obscure exceptions). I have recently tried all of the following (and more) to keep Spark from its stubborn predilection toward dying:
After a month of "stab in the dark" debugging, I've learned a lot about the internals of Spark but still don't have a working application. In some weird twist of Stockholm syndrome, I've come to like expressing my algorithms in Spark's abstractions: if only they would actually run! 

Has anyone else had a similar experience with Spark? Alternatively, has anyone had a positive experience with a non-trivial Spark codebase (bigger than tutorial wordcount examples)? If so, what were your tricks for avoiding the death-by-a-thousand-shuffles I've been experiencing? And how about Spark's close cousins (Scalding, Scoobi, and Scrunch): are they substantially better (or worse)?



Wednesday, March 5, 2014

Big speedup for training Random Forests in scikit-learn 0.15

Until recently, wiseRF was the obviously fastest Random Forest implementation for Python (and thus, the best library for dealing with larger in-memory datasets). Though scikit-learn has had tree ensembles for the past several years, their performance was typically at least an order of magnitude worse than wiseRF (a boon to wiseRF's marketing team). The sklearn developers seemed to shake off their tree-building sluggishness with a Cython rewrite in the 0.14 release.

Unfortunately, as Yisheng and I discovered while working on CudaTree, even the faster Cython tree builder can still be significantly slower than wiseRF. Why is there still a performance gap when both libraries now use native implementations? wiseRF is probably doing something smarter with their choice of algorithms and/or data layout but the iron curtain of closed source software keeps us from finding out exactly what's going on.

It turns out that one important choice for building trees efficiently is the algorithm used to sort candidate splitting thresholds. The upcoming 0.15 release of scikit-learn will include some cache-friendly changes to how their algorithm sorts data. These modifications seem to have finally closed the gap with wiseRF.
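
If you want to see the difference on your own data, a rough timing harness is easy to write (synthetic stand-in data below; run the same script under 0.14 and under the 0.15 branch):

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: swap in your own feature matrix and labels.
rng = np.random.RandomState(0)
X = rng.randn(10000, 100).astype(np.float32)
y = rng.randint(0, 10, size=10000)

forest = RandomForestClassifier(n_estimators=100, n_jobs=-1)
start = time.time()
forest.fit(X, y)
print("training 100 trees took %0.1fs" % (time.time() - start))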

Below are the benchmark times from the CudaTree paper, with the current branch of scikit-learn included under the label scikit-learn 0.15. The takeaway is that the new release will build Random Forests 2x-6x faster than the old one and that the performance differences between scikit-learn, wiseRF, and CudaTree are not significant.

Training times for 100 trees grown on a 6-core Xeon E5-2630 machine with an NVIDIA Titan graphics card:

Dataset           wiseRF 1.5.11   scikit-learn 0.14   scikit-learn 0.15   CudaTree 0.6
ImageNet subset   23s             50s                 13s                 25s
CIFAR-100 (raw)   160s            502s                181s                197s
covertype         107s            463s                73s                 67s
poker             117s            415s                99s                 59s
PAMAP2            1,066s          7,630s              1,683s              934s
intrusion         667s            1,528s              241s                199s

Information about the datasets used above:

Name              Features   Samples   Classes   Description
ImageNet subset   4,096      10,000    10        Random subset of 10 labels from the 1000 category ImageNet data set, processed by the convolutional filters of a trained convolutional neural network (amazingly attains same accuracy!)
CIFAR-100         3,072      50k       100       Same as CIFAR-10, but with more samples and more labels.
covertype         57         581k      7         Identify tree cover from domain-specific features.
poker             11         1M        10        Poker hands
PAMAP2            52         2.87M     13        Physical activity monitoring
intrusion         41         5M        24        Network intrusion

Wednesday, December 11, 2013

NIPS and the Zuckerberg Visit

This past weekend, several hundred researchers, students, and hobbyists streamed into Lake Tahoe to attend a conference called Neural Information Processing Systems. NIPS is one of the two machine learning conferences of note (the other is ICML). Acceptance rates are low; prestige is high. Anyone interested in machine learning, statistics, applied math, or data can come to the conference, but expect to be bombarded by 10,000 terms you don't know, even if you have a PhD.

When we arrived and cracked open the workshop schedule, we found something very peculiar: 

“3:00pm – 3:30pm: Q&A with Mark Zuckerberg”

What is the CEO of Facebook doing speaking at an academic conference on machine learning (and, nominally, neuroscience)? There's obviously a porous boundary between the corporate and academic worlds, but has it ever been this porous?

Ordinarily, when employees of Google, Microsoft, or Facebook show up at NIPS, they either (a) keep to the recruiting room or (b) are there to discuss & present research. The latter group, though they might be wearing their employer's logo on a t-shirt, engage with the conference as academics. Their status is derived from their research accomplishments. And this status does not shut down discourse: they will still field questions and suggestions in-person from any passing student. These kinds of interactions are encouraged at conferences like NIPS. As a professor once told me when I was a graduate student, “We need you guys, you’re the lifeblood of new ideas.”

In contrast, consider the presence of Mark Zuckerberg. I'm sure someone saw a legitimate need to encircle his Q&A session with armed guards, but nothing screams hierarchy like police at the door. The tone changed rapidly: accomplished professors became little more than lowly researchers shuffling into the Deep Learning workshop to see a Very Important Person speak. Zuckerberg couldn't help but disrupt the conference; the spectacle drew so many people that an adjacent workshop was paused to make room for the overflow. Equally distasteful is what went on behind the scenes. The conference was full of whispered rumors of one-on-one meetings and secret acquisitions. This is the first academic conference I have attended where there was this much talk about getting rich or being bought out, something that is actually happening to a number of researchers whose work appeals to Facebook's ambitions.

As for the content of the Q&A itself?  My distrust of excessive power will show itself here (note: Soviet childhood), but I can think of Mark Zuckerberg only as a tunnel visionary.  He wants Facebook to connect all the people in the world & have a personalized theory of mind for each user.  As far as he sees, this is for the good.  Some of the questions asked by the incisive audience were polite versions of “What are the dangers of having this much data about so many people?” and “What does Facebook as a company do to help society?”  These Zuckerberg dodged so expertly that by the time he was done “answering” (with a hefty & convincing confidence), I had forgotten exactly what the question was.

Facebook could have easily sent some high-ranking folks to give an interesting & technical talk instead of Zuckerberg coming himself.  His presence was jarring because it subverted the spirit of the conference, and injected into it the distinct aroma of big money.  Was it anything more than a glamorous & sanctioned recruiting visit?  I would have expected the NIPS organizers to decline to endorse such industrial overreach.

The barriers between Silicon Valley and academia are blurry and getting blurrier. Maybe this is to be expected in Zuckerberg's "knowledge economy", where the largest data sets and greatest computational resources are destined to be locked behind corporate doors. However, if academia has any hope of maintaining an atmosphere of open inquiry (rather than just proprietary R&D), academics have to protect their culture. Otherwise, the resulting decline in high-quality reproducible research will be a loss for everyone involved, and for society at large.
 
In the future, Mark Zuckerberg should be welcome to attend NIPS just like anyone else, assuming he has paid the appropriate registration fee (or obtained a scholarship).  But it is the job of academics (here, the organizers of NIPS) to uphold the necessary boundary between academia and Silicon Valley.  They have failed to do so, and I sincerely hope that this flirtation with Silicon Valley won’t turn into a marriage.
  
This post was written jointly with Alex Rubinsteyn and a bottle of Scotch.

Friday, October 11, 2013

Training Random Forests in Python using the GPU

Random Forests have emerged as a very popular learning algorithm for tackling complex prediction problems. Part of their popularity stems from how remarkably well they work as "black-box" predictors to model nearly arbitrary variable interactions (as opposed to models which are more sensitive to noise and variable scaling). If you want to construct random forests in Python, then the two best options at the moment are scikit-learn (which uses some carefully optimized native code for tree learning) and the slightly speedier closed-source wiseRF.
Both of these implementations use coarse-grained parallelization to split work across multiple cores, but neither of them can make use of an available graphics card for computation. In fact, aside from an implementation custom-tailored for vision tasks and a few research papers of questionable worth, I haven't been able to find a general-purpose GPU implementation of Random Forests at all. So, for the past few months I've worked with Yisheng Liao to see if we could dramatically lower the training time of a Random Forest by off-loading as much computation to the GPU as possible.
We made a GPU Random Forest library for Python called CudaTree, which, for recent generation NVIDIA GPUs, can be 2x - 6x faster than scikit-learn.
CudaTree is available on PyPI and can be installed by running pip install cudatree. It's written for Python 2.7 and depends on NumPy, PyCUDA (to compile and launch CUDA kernels), and Parakeet (to accelerate CPU-side computations).
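
Usage is meant to mirror scikit-learn's fit/predict interface; the sketch below is from memory, so check the CudaTree README for the exact constructor arguments.

import numpy as np
from cudatree import RandomForestClassifier   # interface assumed to mirror scikit-learn

# Hypothetical data: any float32 feature matrix with integer class labels should do.
x_train = np.random.randn(10000, 100).astype(np.float32)
y_train = np.random.randint(0, 10, size=10000)

forest = RandomForestClassifier(n_estimators=50)
forest.fit(x_train, y_train)
predictions = forest.predict(x_train)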

Benchmarks

We trained scikit-learn Random Forests (with fifty trees) on four medium-sized datasets (the largest takes up ~500mb of memory) on a 6-core Xeon E5-2630. We then compared this baseline with the training times for CudaTree on machines with a variety of NVIDIA graphics processors:

Dataset           scikit-learn   CudaTree (C1060)   CudaTree (C2075)   CudaTree (K20x)   CudaTree (Titan)   Speedup (Titan)
CIFAR-10 (raw)    114s           52s                40s                24s               20s                5.7x faster
CIFAR-100         800s           707s               308s               162s              136s               5.8x faster
ImageNet subset   115s           80s                60s                35s               28s                4.1x faster
covertype         138s           -                  86s                -                 50s                2.7x faster

edit: Thanks to everyone who nudged me to validate the accuracy of CudaTree's generated models: there was actually a bug which resulted in the GPU code stopping early on the covertype data. We had unit tests for training error but none for held-out data. Noob mistake. The latest version of CudaTree fixes this bug and the performance impact is negligible on all the other datasets. I'll add new covertype timings once the machines we used are free.

Information about the datasets used above:

Name              Features   Samples   Classes   Description
CIFAR-10          3,072      10,000    10        Raw pixel values used to classify images of objects.
CIFAR-100         3,072      50,000    100       Same as CIFAR-10, but with more samples and more labels.
ImageNet subset   4,096      10,000    10        Random subset of 10 labels from the 1000 category ImageNet data set, processed by the convolutional filters of a trained convolutional neural network (amazingly attains same accuracy!)
covertype         54         581,012   7         Identify tree cover type from domain-specific features.

Limitations

CudaTree currently only implements a RandomForestClassifier, though regression trees are forthcoming. Furthermore, CudaTree's performance degrades when the number of target labels exceeds a few hundred and it might stop working altogether when the number of labels creeps into the thousands.

Also, there are some datasets which are too large for use with CudaTree. Not only does your data have to fit in the GPU's comparatively small memory, it also has to share that space with CudaTree's auxiliary data structures, which are proportional in size to your data. The exact specification for the largest dataset you can use is a little hairy, but in general try not to exceed about half the size of your GPU's memory.
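
As a rough sanity check (this is only a heuristic restating the rule of thumb above, not an exact bound), you can compare your array's size against free GPU memory via PyCUDA:

import numpy as np
import pycuda.autoinit            # creates a CUDA context
import pycuda.driver as drv

free_bytes, total_bytes = drv.mem_get_info()

x = np.random.randn(100000, 100).astype(np.float32)   # stand-in for your dataset
if x.nbytes > free_bytes / 2:
    print("dataset is probably too big for CudaTree on this GPU")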

If your data & task fit within these restrictions, then please give CudaTree a spin and let us know how it works for you.

Due credit: Yisheng did most of the coding, debugging, and profiling. So, you know, the actual hard work. I got to sit back and play advisor/"ideas man", which was a lot of fun. Also, the Tesla K20 used for this research was kindly donated by the NVIDIA Corporation, thanks NVIDIA!

Sunday, July 22, 2012

Expensive lessons in Python performance tuning

Over the past year I've done a lot of number crunching in Python with the wonderful ipython/numpy/pandas/sklearn stack and using picloud to parallelize my computations. Picloud is an amazing service which lets you run your code in Amazon's behemoth data centers while hiding most of the configuration headache. Their computational model is essentially a parallel map: you submit a list of inputs and specify what function to map over them and picloud automagically launches that function on multiple machines. This is less powerful than full-blown MapReduce (no parallel reductions) but, luckily, a lot of the computations machine learning people want to perform are trivial to parallelize. These include:
  • parameter searches (parallelize over distinct parameters)
  • k-fold cross validation (parallelize over the folds)
  • training random forests (trees don't need to know about each other)
  • random starts for locally converging hill climbers like k-means, EM, etc.
As long as you have multiple things to do in parallel and the work for each item greatly exceeds the communication time, then picloud can be a miracle. However, they do charge a premium over the standard AWS instance prices and even Amazon's baseline prices can add up quickly if you're using powerful instance types. When you're running on someone else's hardware the inefficiencies of your code translate very directly into added costs. One month, I managed to rack up an extremely high bill from an embarrassingly small amount of actual work because (1) my Python code created millions of bloated objects in memory, forcing me to use the more expensive high-memory instance type, and (2) I had overlooked some very simple optimizations that later cut my runtime by an order of magnitude.  
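
For concreteness, the pattern looked roughly like this (picloud's client API, written from memory since the service has since shut down, so treat the exact function names as a sketch):

import cloud   # picloud's client library

def slow_square(x):
    # stand-in for real work: training one fold, one parameter setting, one tree, ...
    return x * x

# cloud.map ships the function and its inputs off to remote workers and returns job ids;
# cloud.result blocks until all the jobs finish and gathers their return values.
job_ids = cloud.map(slow_square, range(100))
results = cloud.result(job_ids)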

So, I thought I might save someone else time and money by sharing some tips which made my Python code more efficient without sacrificing flexibility or maintainability. 
  1. Profile everything! Profile mercilessly and relentlessly. Let no assumption about efficiency pass without being put under the cold and uncaring gaze of a profiler. This applies to both runtime (use ipython's prun command) and memory consumption (which you can inspect with the over-engineered guppy). Don't "optimize" a damn thing until you know for sure it's a top bottleneck. 
  2. Be suspicious of loops (but have faith in Python's core operations). If your bottleneck is a loop performing numerical computations you can almost always speed it up by using some equivalent vectorized NumPy operations. On the other hand, once the obviously vectorizable stuff is off-loaded to library code, then I'm often surprised just how efficient Python's operations can be. I can build lists of startling length, index into dictionaries all day long, construct millions of simple objects, and it all adds up to a small fraction of my total runtime. 
  3. Simplest is often fastest. I spent a whole day concocting elaborate buffering and pre-fetching schemes to traverse the lines of a file as quickly as possible without reading it all into memory. I neglected, however, to compare with the trivial solution of simply writing "for line in file:", which of course turned out to be faster and more efficient. Similarly, I spent a while tweaking a CSV parser before finding that using "string.split" with the standard parsing routines int and float was just as fast. 
  4. If you're creating millions of objects, make them namedtuples. In one feature extraction pipeline, I take a multi-million line CSV and parse each line into a Python object. My ever-frustrated internal micro-optimizer screams at the obvious inefficiency. And, sadly, he's right. Python objects are extremely heavy-weight compared to the simple struct we would create in C. Luckily, Guido and Friends foresaw the "zillions of simple objects" use-case and thus created namedtuples. They have named fields like objects, but are laid out contiguously and take up a lot less space. Alternatively, if you're using an object for more than just data storage and want to use non-trivial methods, you can still lay out a full-blown object compactly by filling out its __slots__ attribute. How efficient are namedtuples? As a totally unscientific demonstration I constructed a million little vectors with field names "x", "y", and "z". When implemented as ordinary Python objects they took up 400MB of memory, whereas their namedtuple cousins only required 114MB. Still not nearly as compact as they could be, but a dramatic improvement nonetheless! (A small demonstration of this tip and the next two appears right after this list.)
  5. Don't search linearly through a sorted array, use binary search (duh). On the one hand, this is CS 101 kid's stuff. On the other, I didn't know about the bisect library and had code littered with np.nonzero(sorted<=x)[0]. Once I switched to using bisect_left/bisect_right I saw a huge performance improvement. edit: In the comments, Peter pointed out that NumPy implements a faster binary search called searchsorted; you (and I) should probably use that instead!
  6. Numpy's "fancy indexing" is slow. If you index into a NumPy array with a range then you get a very lightweight view over the original array; this kind of indexing doesn't copy any data and is almost free. If, however, you index with a boolean vector or a sequence of integers ("fancy indexing"), then you actually incur a significant cost. If your code does a lot of fancy indexing then you should either switch to using the take method or rewrite your indexing as an explicit loop in native code (see #8).
  7. Wes McKinney is a genius. If you're implementing anything Wes McKinney has already put in his library pandas, just stop. His code is faster, more robust, and more likely to be correct than anything you're about to write. Want rolling window aggregators? Use pandas. Need to handle missing data? Use pandas. Are you writing some sort of unbelievably ugly hack which attempts to implement joins and group-by's over NumPy arrays but actually manages to spend 3 hours computing a subtly incorrect result? (I have done this). Jesus Christ, just stop and use pandas.
  8. When all else fails, try weave. Yes, weave is unmaintained, ugly, and hard to debug. But, if you find yourself thinking "if only I could just inline 5 lines of C right in the middle of this Python code", then weave lets you actually do that. If, however, that 5 lines turns into 50 then maybe it's time to learn about Cython.
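
Here's the promised mini-demonstration of tips 4-6; the numbers are unscientific and the data is made up, but the pattern is the point:

import sys
from collections import namedtuple
import numpy as np

# Tip 4: namedtuples are far lighter than ordinary objects.
Point = namedtuple("Point", ["x", "y", "z"])

class PointObj(object):
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

print(sys.getsizeof(Point(1.0, 2.0, 3.0)))                 # a small fixed-size tuple
print(sys.getsizeof(PointObj(1.0, 2.0, 3.0).__dict__))     # each instance also drags a dict around

# Tip 5: binary search beats scanning the whole sorted array.
sorted_vals = np.sort(np.random.randn(10 ** 6))
x = 0.5
idx_slow = np.nonzero(sorted_vals <= x)[0][-1]                 # touches every element
idx_fast = np.searchsorted(sorted_vals, x, side="right") - 1   # binary search
assert idx_slow == idx_fast

# Tip 6: 'take' is usually cheaper than fancy indexing with an integer array.
arr = np.random.randn(10 ** 6)
idx = np.random.randint(0, len(arr), size=10 ** 5)
subset = arr.take(idx)                                         # same values as arr[idx], less overhead
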
Also, a few last words specific to machine learning: climb the complexity ladder carefully. It's tempting to throw a fancy tool at some new data, and doubly tempting to try every possible combination of preprocessing steps and hyperparameters. Brute force search and a muscular learning algorithm let you sidestep the need for thinking, experimentation, or understanding. Of course, in my experience, this rarely works out and ends up costing quite a bit of money.

So, do the more prudent, mature thing and start simple. Collect simple statistics, plot each feature, and try to understand the data a little before you launch 100,000 jobs on picloud. Even then, start with simpler learners like SGD logistic regression or small Random Forests. These will let you explore your choices of preprocessing steps and target functions without eating up too many resources. Once you're more comfortable with the problem, go ahead and train a 1000-layer deep belief network (if that's your thing).

Postscript: After almost a year, this article seems out of date with regard to the various ways you can generate native code. Particularly, before you go reaching for weave, Cython, or (gasp!) even C itself, it might be worthwhile to try selective just-in-time compilation. Both Numba and my own Parakeet will generate efficient native implementations for array-oriented programs. Copperhead is a little more limited semantically (it's really a subset of Haskell masquerading as Python), but if you can squeeze your algorithm into Copperhead's model then you'll get a blazing fast GPU kernel as a reward. Alternatively, you can skip the "pretending to be Python" charade and build data-parallel expression trees directly using Theano. It can be a little rough around the edges but, like Copperhead, if you can get it working then the (many orders of magnitude faster) GPU kernel performance is well worth the trouble.