We introduce a very general method for high dimensional classification, based on careful combination of the results of applying an arbitrary base classifier to random projections of the feature vectors into a lower dimensional space. In one special case that we study in detail, the random projections are divided into disjoint groups, and within each group we select the projection yielding the smallest estimate of the test error. Our random-projection ensemble classifier then aggregates the results of applying the base classifier on the selected projections, with a data-driven voting threshold to determine the final assignment. Our theoretical results elucidate the effect on performance of increasing the number of projections. Moreover, under a boundary condition that is implied by the sufficient dimension reduction assumption, we show that the test excess risk of the random-projection ensemble classifier can be controlled by terms that do not depend on the original data dimension and a term that becomes negligible as the number of projections increases. The classifier is also compared empirically with several other popular high dimensional classifiers via an extensive simulation study, which reveals its excellent finite sample performance.
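The grouped-selection scheme described in the abstract can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' implementation: the base classifier (here a simple nearest-centroid rule), the resubstitution error used as the test-error estimate, the Gaussian projection matrices, and the fixed 1/2 voting threshold (in place of the paper's data-driven threshold) are all simplifying assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_predict(X_tr, y_tr, X):
    """Stand-in base classifier: assign each row of X to the nearer class centroid."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    return (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

def rp_ensemble_predict(X_tr, y_tr, X_te, b1=20, b2=10, d=2):
    """b1 disjoint groups of b2 random projections into R^d; within each group,
    keep the projection with the smallest error estimate, then vote."""
    p = X_tr.shape[1]
    votes = np.zeros((b1, len(X_te)))
    for i in range(b1):
        best_err, best_a = np.inf, None
        for _ in range(b2):
            a = rng.standard_normal((p, d))  # Gaussian random projection (assumption)
            # resubstitution error as a crude proxy for the paper's test-error estimate
            err = np.mean(centroid_predict(X_tr @ a, y_tr, X_tr @ a) != y_tr)
            if err < best_err:
                best_err, best_a = err, a
        votes[i] = centroid_predict(X_tr @ best_a, y_tr, X_te @ best_a)
    # simple majority vote; the paper instead chooses the threshold from the data
    return (votes.mean(axis=0) > 0.5).astype(int)

# toy high-dimensional example: signal in the first two of 20 coordinates
n, p = 200, 20
y_tr = np.repeat([0, 1], n // 2)
X_tr = rng.standard_normal((n, p))
X_tr[y_tr == 1, :2] += 4.0
y_te = np.repeat([0, 1], 50)
X_te = rng.standard_normal((100, p))
X_te[y_te == 1, :2] += 4.0

pred = rp_ensemble_predict(X_tr, y_tr, X_te)
accuracy = np.mean(pred == y_te)
```

Because each group keeps only its lowest-error projection, the ensemble votes over projections that are individually informative, which is what drives the improvement over a single random projection.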
Publication date: 2017
Title: A discussion on: Random-projection ensemble classification by T. Cannings and R. Samworth
Journal: JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B STATISTICAL METHODOLOGY
Digital Object Identifier (DOI): http://dx.doi.org/10.1111/rssb.12228
Record type: 2.1 Journal article
Files in this record:
Discussion_Final.pdf (jrssbclass) — Pre-print — Closed access (staff only) — Restricted