I am a senior undergraduate in the Department of Computer Science and Engineering, IIT Kanpur. Currently, I am on a semester exchange at EPFL. I have been working on distributed stochastic optimization with Dr. Martin Jaggi and Dr. Aymeric Dieuleveut at the Machine Learning and Optimization Laboratory, EPFL, and on neural information retrieval with Dr. Navid Rekabsaz at Idiap, EPFL. Previously, I worked with Dr. Purushottam Kar at IIT Kanpur on robust bandit optimization. I am also working on our startup Socrazia, which leverages AI techniques to address social problems in India, including agriculture, healthcare, and education.
I am applying for doctoral positions starting Fall 2019; please contact me if you have a suitable position.
B.Tech. in Computer Science and Engineering, 2019
Indian Institute of Technology, Kanpur
Semester Exchange, 2018
École Polytechnique Fédérale de Lausanne
We are compiling a new dataset for the supervised training of neural information retrieval models. We are also working on improving existing neural IR techniques by combining new ideas from deep learning with established principles from classic IR. Our motivation is to leverage neural network techniques in relatively unexplored areas of IR.
Continuing our project on Local SGD, we are analysing the infinite-dimensional setting. Our motivation is to understand the asymptotic constants involved in the convergence of Local SGD, and hence to understand its behaviour more closely. We work in a possibly infinite-dimensional RKHS setting, which has previously been used to give tighter rates for SGD-like algorithms.
This project is to be submitted to AISTATS 2019! We give the first non-asymptotic analysis of the Local SGD algorithm and compare it with its extreme variants, one-shot and mini-batch averaging. We also provide lower bounds on the communication frequency required for Local SGD to perform optimally, i.e., within a constant factor of mini-batch averaging. Finally, we conduct experiments to verify our theoretical findings. Our work substantially alleviates the communication bottleneck faced by distributed optimization algorithms. arXiv version coming soon…
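To give a flavour of the algorithm being analysed, here is a minimal sketch of Local SGD on a toy least-squares problem: each worker takes several local SGD steps, and workers only communicate by averaging their iterates. All names and parameters (number of workers `M`, local steps `H`, learning rate) are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical Local SGD sketch on least squares; M, H, lr are illustrative.
rng = np.random.default_rng(0)
d, M, H, rounds, lr = 5, 4, 10, 50, 0.05
w_star = rng.normal(size=d)          # ground-truth weights

def stochastic_grad(w):
    x = rng.normal(size=d)           # random feature vector
    y = x @ w_star + 0.1 * rng.normal()
    return (x @ w - y) * x           # gradient of 0.5 * (x.w - y)^2

w = np.zeros(d)                      # shared starting point
for _ in range(rounds):
    local = []
    for _ in range(M):               # each worker runs H local steps
        w_m = w.copy()
        for _ in range(H):
            w_m -= lr * stochastic_grad(w_m)
        local.append(w_m)
    w = np.mean(local, axis=0)       # communication: average the iterates

print(float(np.linalg.norm(w - w_star)))   # distance to the optimum
```

Setting `H = 1` recovers mini-batch averaging, while a single round with large `H` recovers one-shot averaging; the analysis asks how large `H` can be before performance degrades.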
This project has produced a publication in the Springer Machine Learning journal! We developed novel algorithms for multi-armed and linear bandits with provably optimal minimax guarantees and corruption tolerance against a universally corrupting adversary. Our algorithms experimentally outperform state-of-the-art UCB- and EXP-style algorithms by a large margin.
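For context, here is a minimal sketch of the classic UCB1 baseline that such algorithms are compared against (our robust variants modify the confidence widths; they are not shown here). The arm means and horizon below are illustrative.

```python
import numpy as np

# Minimal UCB1 sketch for stochastic multi-armed bandits (baseline only;
# the corruption-tolerant algorithms in the paper are more involved).
rng = np.random.default_rng(1)
means = np.array([0.2, 0.5, 0.8])    # true (unknown) Bernoulli arm means
K, T = len(means), 2000

counts = np.zeros(K)
sums = np.zeros(K)
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                  # play each arm once to initialise
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))    # optimism in the face of uncertainty
    reward = float(rng.random() < means[arm])   # Bernoulli reward draw
    counts[arm] += 1
    sums[arm] += reward

print(int(np.argmax(counts)))        # the best arm gets the most pulls
```

A "universally corrupting" adversary may perturb the observed rewards, which is exactly the setting where plain UCB1's confidence intervals break down.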
In this project we study the urban heat island effect in the city of Ghent, Belgium, using standard techniques from time-series analysis. Our models successfully explain the behaviour observed in over a year of climate data from various locations within the city.
We first briefly review some recent attempts to model sparse learning in humans using non-parametric statistics. We then present our computational model of one-shot learning, an area broadly studied in machine learning using non-parametric Bayesian modelling, followed by experiments with human subjects to test part of this hypothesis. We verify an earlier hypothesis that links surprise to sparse learning of reward-penalty correlations with image stimuli.
We leverage variational auto-encoders to build a novel generative model for controlled paraphrasing, using attention and pre-specified latent variables. Our results are qualitatively and quantitatively comparable to existing state-of-the-art techniques. We also experiment with popular machine-translation evaluation metrics such as BLEU and demonstrate how they are insufficient for the paraphrasing task.
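A toy illustration of why n-gram overlap metrics undervalue paraphrases (this is not the paper's evaluation, just the core quantity inside BLEU): a faithful paraphrase can share very few n-grams with its reference, so its score is low despite preserving the meaning.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core quantity inside BLEU."""
    cand, ref = candidate.split(), reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / max(1, sum(cand_ngrams.values()))

ref = "the meeting was postponed until next week"
paraphrase = "they delayed the meeting by a week"
print(ngram_precision(paraphrase, ref, n=1))  # only 3 of 7 unigrams overlap
```

An identical sentence scores 1.0, while this valid paraphrase scores 3/7 on unigrams alone (and lower for higher n), which is the kind of mismatch we observe for paraphrasing.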
We build on an existing cognitive architecture based on fuzzy Z-numbers and recent work in cognitive science. Specifically, we add skills such as extraction and consolidation to the architecture by linking language-processing levels in linguistics to the atomic cognitive abilities and operators defined in the model. In doing so we leverage the existing Z*-algebra and knowledge representation techniques.