(Why) Has Kohonen-style SOM fallen out of favor?

by Wayne   Last Updated April 05, 2017 20:19

As far as I can tell, Kohonen-style SOMs had a peak back around 2005 and haven't seen as much favor recently. I haven't found any paper that says that SOMs have been subsumed by another method, or proven equivalent to something else (at higher dimensions, anyhow). But it seems like t-SNE and other methods get a lot more ink nowadays, for example in Wikipedia or in scikit-learn, and SOM is mentioned more as a historical method.

(Actually, a Wikipedia article seems to indicate that SOMs continue to have certain advantages over competitors, but the SOM entry is also the shortest one in the list. EDIT: Per gung's request, the article I'm thinking of is Nonlinear Dimensionality Reduction. Note that SOM has less written about it than the other methods. I can't find the article that mentioned an advantage that SOMs seem to retain over most other methods.)

Any insights? Someone else asked why SOMs are not being used and got references from a while ago, and I have found proceedings from SOM conferences, but I was wondering whether the rise of SVMs, t-SNE, et al. simply eclipsed SOMs in popular machine learning.

EDIT 2: By pure coincidence, I was just reading a 2008 survey on nonlinear dimensionality reduction this evening, and as examples it mentions only: Isomap (2000), locally linear embedding (LLE) (2000), Hessian LLE (2003), Laplacian eigenmaps (2003), and semidefinite embedding (SDE) (2004).
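As an aside, t-SNE, Isomap, and LLE all ship in scikit-learn's manifold module, which may be part of why they get more attention. Here is a minimal sketch of running them side by side; the digits dataset is just a stand-in of mine, not anything from the survey:

    # Minimal comparison sketch: a few nonlinear dimensionality reduction
    # methods from scikit-learn applied to the same data. The digits
    # dataset is an illustrative stand-in, not from the original post.
    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE, Isomap, LocallyLinearEmbedding

    X, _ = load_digits(return_X_y=True)

    # t-SNE: the method that gets "more ink" these days.
    X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

    # Isomap (2000) and LLE (2000), both named in the 2008 survey.
    X_iso = Isomap(n_components=2).fit_transform(X)
    X_lle = LocallyLinearEmbedding(n_components=2).fit_transform(X)

    print(X_tsne.shape, X_iso.shape, X_lle.shape)  # (1797, 2) each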



2 Answers


I think you are on to something by noting the influence of what the machine learning community currently touts as the 'best' algorithms for dimensionality reduction. While t-SNE has shown its efficacy in competitions such as the Merck Viz Challenge, I personally have had success implementing SOM for both feature extraction and binary classification. While there are certainly some who dismiss SOMs without justification beyond the algorithm's age (check out this discussion), a number of articles published within the last few years have implemented SOMs and achieved positive results (see Mortazavi et al., 2013; Frenkel et al., 2013, for instance). A Google Scholar search will reveal that SOMs are still utilized within a number of application domains.

As a general rule, however, the best algorithm for a particular task is exactly that: the best algorithm for a particular task. Where a random forest may have worked well for one binary classification task, it may perform horribly on another. The same applies to clustering, regression, and optimization tasks. This phenomenon is tied to the No Free Lunch Theorem, but that is a topic for another discussion. In sum, if SOM works best for you on a particular task, that is the algorithm you should use for that task, regardless of what's popular.
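For readers who haven't seen one, here is a minimal NumPy sketch of the classic Kohonen update (one sample per step, Gaussian neighborhood, linearly decaying learning rate and radius). It is an illustrative toy under those assumptions, not the implementation used in the work cited above:

    import numpy as np

    def train_som(data, grid_w=10, grid_h=10, n_iters=1000,
                  lr0=0.5, sigma0=3.0, seed=0):
        """Toy Kohonen SOM: returns a (grid_h, grid_w, n_features) weight grid."""
        rng = np.random.default_rng(seed)
        n, d = data.shape
        weights = rng.random((grid_h, grid_w, d))
        # Precompute grid coordinates for neighborhood distances.
        ys, xs = np.mgrid[0:grid_h, 0:grid_w]
        for t in range(n_iters):
            # Linearly decay learning rate and neighborhood radius.
            frac = t / n_iters
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-2
            x = data[rng.integers(n)]
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the grid.
            grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            # Kohonen update: pull the BMU and its neighbors toward the sample.
            weights += lr * h[:, :, None] * (x - weights)
        return weights

    # Example: map 3-D points onto a 2-D grid.
    data = np.random.default_rng(1).random((500, 3))
    w = train_som(data)
    print(w.shape)  # (10, 10, 3)

After training, each sample can be summarized by the grid coordinates of its best-matching unit, which is one way SOMs end up being used for feature extraction.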

Dirigo
January 09, 2016 17:27 PM

I have done research comparing SOMs with t-SNE and other methods, and have also proposed an improvement to SOM that takes it to a new level of efficiency. Please check it out here and let me know your feedback. I would love to hear what people think about it and whether it is worth publishing in Python for people to use.

IEEE link to paper: http://ieeexplore.ieee.org/document/6178802/

MATLAB implementation: https://www.mathworks.com/matlabcentral/fileexchange/35538-cluster-reinforcement--cr--phase

Thanks for your feedback.

Narine Hall
April 05, 2017 20:16 PM
