Personal Research Goals
I’m grateful for John Schulman’s Opinionated Guide to ML Research, if only for articulating the difference between idea-driven and goal-driven research. This is something that took me the first year and a half of my Ph.D. to figure out, and I’m still trying to pin down my own “research taste” and long-term goals.
I’ve decided to post a few of my goals here, forcing myself to organize and critique my thoughts while simultaneously putting myself on the public record (which will be fun to look back at in a few years). They’re vaguely ordered by the likelihood that I’ll spend time on them.
1. Develop more compression methods with provable approximation guarantees (a toy example of what such a guarantee looks like is sketched after this list).
2. Make AI Dungeon 2 more semantically coherent and game-like.
3. Achieve BERT-level performance from scratch for less than $100 of compute (currently about $7,000; see the back-of-the-envelope arithmetic below).
    - Using better over-parameterization bounds, optimization methods, and architectures.
4. Align the compression community on comparable metrics.
5. Solve MNIST with a quantum computer simulator (a minimal circuit sketch follows the list).
    - More of a personal goal than a research milestone.
6. Improve machine translation in low-resource domains.
    - Why does domain adaptation help?
    - What is a principled way of doing domain adaptation?
    - How is this related to fine-tuning pre-trained models? (A toy sketch of this view appears after the list.)
7. Bring theory back to DL (not my field).
    - Generalization bounds these days exploit the fact that the effective complexity of trained models is usually far lower than the maximum their parameter count allows (one proxy for this is sketched below).
    - Might a complete understanding of compression methods lead to better generalization bounds?
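
For goal 1, here is a toy picture of what a provable approximation guarantee looks like, so the phrase isn’t just hand-waving: truncating the SVD of a weight matrix gives the best rank-k approximation, and the Eckart–Young theorem tells you the error exactly. The matrix size and rank below are illustrative stand-ins, not anything from a real model.

```python
import numpy as np

# Stand-in for a layer's weight matrix.
W = np.random.randn(256, 256)
U, s, Vt = np.linalg.svd(W, full_matrices=False)

k = 32                                # illustrative target rank
W_k = (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approximation (Eckart-Young)

# The guarantee: the Frobenius error is exactly the norm
# of the discarded singular values.
bound = np.sqrt(np.sum(s[k:] ** 2))
actual = np.linalg.norm(W - W_k, "fro")
assert np.isclose(actual, bound)
```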
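For goal 3, the gap is easier to feel as arithmetic. The GPU-hour price below is an assumed round number, not a real quote; only the $7,000 and $100 figures come from the goal itself.

```python
# Back-of-the-envelope: what a $100 budget buys, under an assumed price.
price_per_gpu_hour = 2.50              # hypothetical cloud price
budget, current_cost = 100.0, 7000.0

print(budget / price_per_gpu_hour)     # ~40 GPU-hours to train in
print(current_cost / budget)           # a 70x cost reduction to find
```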
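For goal 5, a minimal sketch of the sort of variational circuit I’d start from, assuming PennyLane’s default.qubit simulator. The 4-qubit layout, layer count, and random stand-in features are all illustrative choices, not an actual MNIST pipeline (which would need to pool 28x28 images down to a handful of angles).

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    # Encode (heavily downsampled) pixel features as rotation angles.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers play the role of the classifier.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value as a binary class score.
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, (2, n_qubits, 3), requires_grad=True)
features = np.random.uniform(0, np.pi, n_qubits)  # stand-in for pooled pixels
print(circuit(weights, features))
```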
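On the last question under goal 6: the simplest framing is that domain adaptation is just continued training. A toy PyTorch sketch of that view, with a stand-in model rather than a real translation system: freeze the “pretrained” lower layer, then fine-tune the rest on an in-domain batch at a small learning rate.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model; a real MT system would be a seq2seq net.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

for p in model[0].parameters():
    p.requires_grad = False  # freeze the "pretrained" lower layer

opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

x, y = torch.randn(8, 16), torch.randn(8, 16)  # stand-in in-domain batch
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```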
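And for goal 7, one concrete version of “effective complexity is less than possible” is the stable rank, ||W||_F^2 / ||W||_2^2, which never exceeds the true rank and is exactly the kind of slack compression-based bounds exploit. A minimal sketch (on a random matrix; trained weights typically score far lower still):

```python
import numpy as np

def stable_rank(W: np.ndarray) -> float:
    # ||W||_F^2 / ||W||_2^2: at most rank(W), often much smaller.
    fro = np.linalg.norm(W, "fro")
    spec = np.linalg.norm(W, 2)  # largest singular value
    return float((fro / spec) ** 2)

W = np.random.randn(512, 512)
print(stable_rank(W), min(W.shape))  # ~128 vs. the worst case of 512
```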
Writing this list revealed how few real or novel problems I’m actually aware of. Most of the problems I know about (especially #6 and #7) come from reading the literature; they’re idea-driven rather than goal-driven research, almost by definition. I’m excited to get back out into the real world this summer and get some exposure to more novel problems.