accepted/submitted = rate | inside the 5%-tail interval for a 0.27 base acceptance rate? | area |
18/66 = 0.27 | in (0.18, 0.36) | Reinforcement Learning |
10/52 = 0.19 | in (0.17,0.37) | Supervised Learning |
9/51 = 0.18 | not in (0.18, 0.37) | Clustering |
12/46 = 0.26 | in (0.17, 0.37) | Kernel Methods |
11/40 = 0.28 | in (0.15, 0.4) | Optimization Algorithms |
8/33 = 0.24 | in (0.15, 0.39) | Learning Theory |
14/33 = 0.42 | not in (0.15, 0.39) | Graphical Models |
10/32 = 0.31 | in (0.15, 0.41) | Applications (+5 invited) |
8/29 = 0.28 | in (0.14, 0.41) | Probabilistic Models |
13/29 = 0.45 | not in (0.14, 0.41) | NN & Deep Learning |
8/26 = 0.31 | in (0.12, 0.42) | Transfer and Multi-Task Learning |
13/25 = 0.52 | not in (0.12, 0.44) | Online Learning |
5/25 = 0.20 | in (0.12, 0.44) | Active Learning |
6/22 = 0.27 | in (0.14, 0.41) | Semi-Supervised Learning |
7/20 = 0.35 | in (0.1, 0.45) | Statistical Methods |
4/20 = 0.20 | in (0.1, 0.45) | Sparsity and Compressed Sensing |
1/19 = 0.05 | not in (0.11, 0.42) | Ensemble Methods |
5/18 = 0.28 | in (0.11, 0.44) | Structured Output Prediction |
4/18 = 0.22 | in (0.11, 0.44) | Recommendation and Matrix Factorization |
7/18 = 0.39 | in (0.11, 0.44) | Latent-Variable Models and Topic Models |
1/17 = 0.06 | not in (0.12, 0.47) | Graph-Based Learning Methods |
5/16 = 0.31 | in (0.13, 0.44) | Nonparametric Bayesian Inference |
3/15 = 0.20 | in (0.07, 0.47) | Unsupervised Learning and Outlier Detection |
7/12 = 0.58 | not in (0.08, 0.50) | Gaussian Processes |
5/11 = 0.45 | not in (0.09, 0.45) | Ranking and Preference Learning |
2/11 = 0.18 | in (0.09, 0.45) | Large-Scale Learning |
0/9 = 0.00 | in [0, 0.56) | Vision |
3/9 = 0.33 | in [0, 0.56) | Social Network Analysis |
0/9 = 0.00 | in [0, 0.56) | Multi-agent & Cooperative Learning |
2/9 = 0.22 | in [0, 0.56) | Manifold Learning |
4/8 = 0.50 | not in [0, 0.5) | Time-Series Analysis |
2/8 = 0.25 | in [0, 0.5) | Large-Margin Methods |
2/8 = 0.25 | in [0, 0.5) | Cost Sensitive Learning |
2/7 = 0.29 | in [0, 0.57) | Recommender Systems |
3/7 = 0.43 | in [0, 0.57) | Privacy, Anonymity, and Security |
0/7 = 0.00 | in [0, 0.57) | Neural Networks |
0/7 = 0.00 | in [0, 0.57) | Empirical Insights |
0/7 = 0.00 | in [0, 0.57) | Bioinformatics |
1/6 = 0.17 | in [0, 0.5) | Information Retrieval |
2/6 = 0.33 | in [0, 0.5) | Evaluation Methodology |
I usually find these numbers hard to interpret. At the grossest level, all areas have significant selection. At a finer level, one way to add further interpretation is to pretend that the acceptance rate of all papers is 0.27, then compute a 5% lower tail and a 5% upper tail for each area's number of submissions. With 40 categories and a 5% chance per tail, we would expect about 40 × 0.10 = 4 violations of these tail inequalities by chance. Instead, we have 9, so there is some evidence that individual areas are particularly hot or cold. In particular, the hot topics are Graphical Models, Neural Networks and Deep Learning, Online Learning, Gaussian Processes, Ranking and Preference Learning, and Time-Series Analysis. The cold topics are Clustering, Ensemble Methods, and Graph-Based Learning Methods.
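To make the tail computation concrete, here is a minimal sketch (mine, not from the original post) that, assuming accepted counts follow a Binomial(submitted, 0.27) model, finds the 5% lower- and upper-tail cutoffs for an area and flags a violation ("hot" or "cold") when its count falls outside them. The exact cutoffs and open/closed endpoint conventions may differ slightly from the intervals listed in the table above, depending on how the tails were originally computed.

```python
from math import comb

BASE_RATE = 0.27   # overall acceptance rate used as the null hypothesis
ALPHA = 0.05       # size of each tail

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); returns 0 when k < 0."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def tail_bounds(n, p=BASE_RATE, alpha=ALPHA):
    """Smallest and largest accepted counts that do NOT fall in either 5% tail."""
    lo = next(k for k in range(n + 1) if binom_cdf(k, n, p) > alpha)
    hi = next(k for k in range(n, -1, -1) if 1 - binom_cdf(k - 1, n, p) > alpha)
    return lo, hi

def is_violation(accepted, submitted):
    """True if the area's count lies in the lower or upper 5% tail."""
    lo, hi = tail_bounds(submitted)
    return not (lo <= accepted <= hi)

# Example: Online Learning, 13 accepted out of 25 submitted.
print(is_violation(13, 25))   # True -> outside the interval, a "hot" area
# Expected violations by chance alone: 40 areas * 2 tails * 0.05 = 4.
```

Running a check like this over all 40 areas should roughly reproduce the 9 violations counted above, against the ~4 expected by chance.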