I’m in the first year of pursuing a PhD in Computer Science under the supervision of Prof. Christopher Amato, specifically in artificial intelligence with a focus on uncertain, multi-agent scenarios.
My particular interest is in human-agent teaming. Given arbitrary numbers of people and AI agents, each with their own unique capabilities, how can such teams learn and perform complex tasks across domains? How can AI agents interpret human intent and behavior, and how can humans more effectively guide and teach those agents, as well as understand and trust their capabilities?
Networks of self-driving cars. The ideal case for AI would be an entirely autonomous grid of self-driving cars that communicate with each other. But people aren’t all just going to stop driving, and even if they did, that doesn’t eliminate all uncertainty. Autonomous cars will need to operate in a world with unforeseen events and uncertain drivers.
I find the interdisciplinary nature of AI fascinating. We inevitably create AI based on our own internal beliefs and biases, and there’s naturally a lot of overlap with fields like neuroscience, psychology, economics, and even (or especially) philosophy. I don’t just want to know how to accomplish something or which parameters to tune. I want to know why it works and whether it can tell us something about our own thought processes.
Under the assumption that self-guided general AI will not be developed in the near future, let alone superintelligence, my goal is to make human-compatible (human-aided) AI. In addition to the self-driving car networks, I’d like to see AI leveraged for public policy. Can we develop sufficient trust in autonomy to turn over decision-making in areas with fundamentally broken incentive structures?
I’m not yet sure how to accomplish this, but after my PhD, I’m especially interested in working for a group like 80,000 Hours or the Center for Human-Compatible AI.
Conowingo. That’s OK, you haven’t heard of it; it’s a small farming town right on the Susquehanna River in Maryland. The census claims 4,200 people, but I have a hard time believing it’s ever been that big. I spent most of my life before and after undergrad there, and I definitely still carry the sense of family and community from my school (my graduating class was 21 people).
University of Pittsburgh. In all honesty, I went there because, like the University of Maryland, they gave me a full ride, and unlike the University of Maryland, they were located outside of Maryland! That said, the city really grew on me over time, and it ranks high on the list of places I’d love to live long-term. It’s a revitalizing city with a blue-collar attitude, small and unknown enough to still have plenty of hole-in-the-wall gems.