Computational Trade-offs in Statistical Learning

1 Introduction
1.1 Classical statistics, big data and computational constraints
1.2 Connections to existing work
1.3 Main problems and contributions
1.4 Thesis overview
2 Background
2.1 Typical problem setup
2.2 Background on convex optimization
2.3 Background on stochastic convex optimization
2.4 Background on minimax theory in statistics
3 Oracle complexity of convex optimization
3.1 Background and problem formulation
3.2 Main results and their consequences
3.3 Proofs of results
3.4 Discussion
4 Computationally adaptive model selection
4.1 Motivation and setup
4.2 Model selection over nested hierarchies
4.3 Fast rates for model selection
4.4 Oracle inequalities for unstructured models
4.5 Discussion
5 Optimization for high-dimensional estimation
5.1 Motivation and prior work
5.2 Background and problem formulation
5.3 Main results and some consequences
5.4 Simulation results
5.5 Proofs
5.5.1 Proof of Theorem 5.1
5.6 Discussion
6 Asymptotically optimal distributed learning
6.1 Motivation and related work
6.2 Setup and algorithms
6.3 Convergence rates for delayed optimization of smooth functions
6.4 Distributed optimization
6.5 Numerical results
6.6 Delayed updates for smooth optimization
6.7 Proof of Theorem 6.3
6.8 Conclusion and Discussion
7 Conclusions and future directions
7.1 Summary and key contributions
7.2 Important open questions and immediate future directions
7.3 Other suggestions for future work