Ramansh Sharma
I am a PhD student at the Kahlert School of Computing at The University of Utah, advised by Professor Varun Shankar. Our research focuses on designing novel scientific machine learning methods to solve partial differential equations.
From 2021 to 2023, I was a visiting scholar at the School of Computing at The University of Utah supervised by Professor Shankar and a remote collaborator at the Approximate Bayesian Inference Team at RIKEN supervised by Dr. Gian Maria Marconi and Dr. Thomas Möllenhoff.
Warnock Engineering Building, Office 2883
ramansh [at] cs.utah.edu
Google Scholar /
GitHub /
CV /
Blog
Timeline
August 2023 - Present: PhD student at the University of Utah.
September 2021 - June 2023: Visiting scholar at The University of Utah focusing on physics-informed machine learning.
October 2021 - June 2023: Remote collaborator at the ABI team focusing on curriculum learning.
February 2021 - September 2021: Machine Learning Engineer at the World Resources Institute working on identifying economic and financial incentives for forest and landscape restoration using NLP.
October 2020 - December 2020: Data Scientist at SevaExchange working on recommender systems to connect volunteers with volunteer opportunities.
June 2020 - August 2020: Lead Machine Learning Engineer at Omdena's TrashOut project working on detecting illegal dumpsites (project article).
July 2019 - June 2023: Undergraduate degree in Computer Science and Engineering at SRM Institute of Science and Technology, Ramapuram, India.
Research
My interests lie in the field of physics-informed machine learning (PIML). My current work involves combining physics-informed neural networks with traditional scientific computing techniques. I strongly believe the two fields together hold great promise.
Accelerated Training of Physics Informed Neural Networks (PINNs) using Meshless Discretizations
Ramansh Sharma, Varun Shankar
Neural Information Processing Systems (NeurIPS), 2022
paper /
arxiv /
code /
video /
poster
We present a new technique for the accelerated training of physics-informed neural networks (PINNs): discretely-trained PINNs (DT-PINNs). DT-PINNs are trained by replacing exact spatial derivatives with high-order accurate numerical discretizations computed using meshless radial basis function-finite differences (RBF-FD) and applied via sparse-matrix vector multiplication. Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating-point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. Our results show that fp64 DT-PINNs offer a superior cost-accuracy profile to fp32 vanilla-PINNs.
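The core idea above can be sketched in a few lines: replace exact (autograd) spatial derivatives with a precomputed sparse differentiation matrix applied to the network's outputs at fixed collocation points via sparse matrix-vector multiplication. In this minimal sketch, a 1D second-order finite-difference Laplacian stands in for the true RBF-FD weights (which would be computed from local RBF interpolation systems); all names here are illustrative, not from the paper's code.

```python
import numpy as np
import scipy.sparse as sp

# Collocation points on [0, 1].
n = 64
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Sparse discrete Laplacian (stand-in for an RBF-FD differentiation matrix):
# applying it to a vector of function values approximates u'' at each point.
main = -2.0 * np.ones(n) / h**2
off = np.ones(n - 1) / h**2
L = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

def pde_residual(u_pred, f):
    """Discrete residual for the Poisson problem -u'' = f.

    u_pred would be the PINN's outputs at the collocation points; the
    residual is computed with a sparse matvec instead of autograd.
    """
    return -(L @ u_pred) - f

# Sanity check with the exact solution u(x) = sin(pi x), f = pi^2 sin(pi x):
# the residual at interior points should be small (O(h^2) truncation error).
u = np.sin(np.pi * x)
f = np.pi**2 * np.sin(np.pi * x)
r = pde_residual(u, f)
interior_err = np.max(np.abs(r[1:-1]))
```

In training, the squared norm of this residual (plus boundary terms) would serve as the loss; since `L` is fixed, each epoch costs one sparse matvec rather than repeated automatic differentiation through the network.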
Beyond modeling: NLP Pipeline for efficient environmental policy analysis
Jordi Planas, Daniel Firebanks-Quevedo, Galina Naydenova, Ramansh Sharma, Cristina Taylor, Kathleen Buckingham, Rong Fang
Fragile Earth proceedings - KDD, 2021
paper /
arxiv /
code /
video
We propose a Knowledge Management Framework based on NLP techniques that tackles challenges in policy analysis such as its resource-intensive nature, the lack of comprehensive information sources, and overlapping jurisdictions. The framework is designed to be platform-, language-, and policy-agnostic. To classify financial incentives in restoration policies, both Sentence-BERT and Cross-Encoders performed well. For sentence classification inference with Sentence-BERT, a random forest classifier can be used to assign a category to a given sentence using the learned Sentence-BERT embeddings.
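The inference step described above can be sketched as follows: a random forest assigns an incentive category to a sentence via its embedding. This is a minimal, self-contained illustration, not the paper's pipeline; real embeddings would come from a Sentence-BERT model (e.g. `SentenceTransformer.encode` in the sentence-transformers library), so random vectors with a class-dependent shift stand in here, and the category labels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
dim = 384  # a typical Sentence-BERT embedding dimension

# Toy "sentence embeddings" for two hypothetical incentive categories,
# made linearly separable by a mean shift.
X_train = np.vstack([rng.normal(0.0, 1.0, (50, dim)),
                     rng.normal(2.0, 1.0, (50, dim))])
y_train = np.array([0] * 50 + [1] * 50)  # 0 = "fine", 1 = "tax deduction"

# Fit the random forest on the embeddings.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A new sentence embedding near the second cluster is assigned category 1.
new_embedding = rng.normal(2.0, 1.0, (1, dim))
pred = int(clf.predict(new_embedding)[0])
```

The design point is that the (expensive) embedding model runs once per sentence, while the lightweight classifier on top can be retrained cheaply as new policy categories are added.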
Mentoring
I frequently mentor undergraduate and high school students who wish to pursue a career in Machine Learning. If you think my academic and professional experiences so far can help you in any way, please send me an email.