@ NVIDIA GTC 2017

One of the most exciting conferences I have ever attended!

Curriculum Vitae (CV)


My long form CV is available in PDF format here: CV in PDF format.

For better readability, you can also view the HTML version of my CV here: CV in HTML format.

Short Bio


Ammar Ahmad Awan is a Senior Researcher at Microsoft working on the DeepSpeed library with Yuxiong He and the DeepSpeed team. He is the lead developer of the DeepSpeed Mixture of Experts (MoE) system, which supports both training and inference of MoE models at scale. He received his B.S., M.S., and Ph.D. degrees in Computer Science from the National University of Sciences and Technology (NUST), Pakistan, Kyung Hee University (KHU), South Korea, and The Ohio State University, respectively. His current research focus lies at the intersection of high-performance systems and large-scale training and inference of deep learning (DL) models. He previously worked on a Java-based Message Passing Interface (MPI) and on nested parallelism with OpenMP and MPI for scientific applications. He has published several papers in conferences and journals related to these research areas. During his graduate studies at OSU, he actively contributed to projects including MVAPICH2-GDR (High-Performance MPI for GPU clusters), OMB (OSU Micro-Benchmarks), and HiDL (High-Performance Deep Learning). He is the lead author of the OSU-Caffe framework (part of the HiDL project), which enables efficient distributed training of deep neural networks.