Shikhar Bahl

Hi there! I am a first-year PhD student at the Robotics Institute, within the School of Computer Science at Carnegie Mellon University. I am interested in artificial intelligence, machine learning, and robotics. I am currently advised by Abhinav Gupta.

Prior to CMU, I did my undergrad at UC Berkeley in Applied Math and Computer Science, where I was affiliated with Berkeley Artificial Intelligence Research (BAIR) and worked under Sergey Levine on problems in deep reinforcement learning and robotics.

Feel free to contact me via email! You can reach me at sbahl2 -at- andrew dot cmu dot edu

email  /  CV  /  Google Scholar  /  Twitter  /  GitHub

Research

I am broadly interested in creating robust autonomous agents that operate with minimal or no human supervision. My research focuses on combining machine learning, reinforcement learning, computer vision and perception for robotic control. Here is some of my work:

Contextual Imagined Goals for Self-Supervised Robotic Learning
Ashvin Nair*, Shikhar Bahl*, Alexander Khazatsky*, Vitchyr H. Pong, Glen Berseth, Sergey Levine
Conference on Robot Learning (CoRL), 2019

We propose a conditional goal-setting model that aims to propose only goals that are feasibly reachable from the robot's current state, and demonstrate that this enables self-supervised goal-conditioned learning with raw image observations, both in varied simulated environments and in a real-world pushing task.

Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards
Gerrit Schoettler*, Ashvin Nair*, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine
arXiv preprint
pdf | project page

We consider a variety of difficult industrial insertion tasks with visual inputs and different natural reward specifications, namely sparse rewards and goal images. We show that methods combining RL with prior information, such as classical controllers or demonstrations, can solve these tasks with a reasonable amount of real-world interaction.

Skew-Fit: State-Covering Self-Supervised Reinforcement Learning
Vitchyr H. Pong*, Murtaza Dalal*, Steven Lin*, Ashvin Nair, Shikhar Bahl, Sergey Levine
Task-Agnostic Reinforcement Learning Workshop at International Conference on Learning Representations (ICLR), 2019 (Contributed Talk)
pdf | project page

We present an algorithm called Skew-Fit for learning a maximum-entropy goal distribution, and show that under certain regularity conditions, our method converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand.


Residual Reinforcement Learning for Robot Control
Tobias Johannink*, Shikhar Bahl*, Ashvin Nair*, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine
IEEE Conference on Robotics and Automation (ICRA), 2019
pdf | project page

We study how to solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and a residual that is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.


Visual Reinforcement Learning with Imagined Goals
Ashvin Nair*, Vitchyr H. Pong*, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine
Advances in Neural Information Processing Systems (NeurIPS), 2018 (Spotlight Presentation)
pdf | project page

We propose an algorithm that acquires general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies.


Teaching

EECS 127 - Fall 2018 (uGSI)


Credit to this great repo for providing the source code for the website!