About Me

I'm a PhD student at the University of Alberta in Edmonton, Alberta, Canada. I have a BS in Physics and an MS in Computer Science, both from Indiana University Bloomington. My PhD work focuses on reinforcement learning, and specifically on understanding how agents may perceive their world. I focus primarily on making predictions, but have been known to dabble in control from time to time. My active research interests include: predictions as a component of intelligence (both artificial and biological), off-policy prediction and policy evaluation, deep learning and the resulting learned representations in the reinforcement learning context, and the discovery of, or attention to, important abstractions (described as predictions) through interaction.

Recently Published Papers

  1. Jacobsen, A., Schlegel, M., Linke, C., Degris, T., White, A., & White, M. (2019). Meta-descent for Online, Continual Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence.
  2. Schlegel, M., Chung, W., Graves, D., Qian, J., & White, M. (2019). Importance Resampling for Off-policy Prediction. In Advances in Neural Information Processing Systems.
  3. Kumaraswamy, R., Schlegel, M., White, A., & White, M. (2018). Context-Dependent Upper-Confidence Bounds for Directed Exploration. In Advances in Neural Information Processing Systems.
  4. Schlegel, M., Pan, Y., Chen, J., & White, M. (2017). Adapting Kernel Representations Online Using Submodular Maximization. In Proceedings of the 34th International Conference on Machine Learning.