805 Columbus Avenue
513 Interdisciplinary Science & Engineering Complex (ISEC)
Boston, MA 02120
ATTN: Lawson Wong, 5th Floor ISEC
360 Huntington Avenue
Boston, MA 02115-5000
Lawson L.S. Wong is an assistant professor in the College of Computer and Information Science at Northeastern University. His research focuses on learning, representing, and estimating knowledge about the world that an autonomous robot may find useful.
Prior to Northeastern, Lawson was a postdoctoral fellow at Brown University. He completed his PhD at the Massachusetts Institute of Technology. He has received a Siebel Fellowship, an AAAI Robotics Student Fellowship, and a Croucher Foundation Fellowship for Postdoctoral Research.
- PhD, Massachusetts Institute of Technology
Nakul Gopalan, Dilip Arumugam, Lawson L.S. Wong, Stefanie Tellex. Sequence-to-sequence language grounding of non-Markovian task specifications. In Robotics: Science and Systems (RSS), 2018.
Often, natural language commands issued to robots not only specify a particular target configuration or goal state but also outline constraints on how the robot goes about its execution. That is, the path taken to achieving some goal state is as important as the goal state itself. One example of this could be instructing a wheeled robot to go to the living room but avoid the kitchen, in order to avoid scuffing the floor. This class of behaviors poses a serious obstacle to existing language-understanding approaches for robotics that map to either action sequences or goal state representations. Due to the non-Markovian nature of the objective, approaches in the former category must map to potentially unbounded action sequences, whereas approaches in the latter category would require folding the entirety of a robot’s trajectory into a (traditionally Markovian) state representation, resulting in an intractable decision-making problem. To resolve this challenge, we use a recently introduced probabilistic variant of Linear Temporal Logic (LTL) as a goal specification language for a Markov Decision Process (MDP). While demonstrating that standard neural sequence-to-sequence learning models can successfully ground language to this semantic representation, we also provide analysis that highlights generalization to novel, unseen logical forms as an open problem for this class of model. We evaluate our system within two simulated robot domains as well as on a physical robot, demonstrating accurate language grounding alongside a significant expansion in the space of interpretable robot behaviors.
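The living-room example above corresponds to the LTL specification "eventually reach the living room, and always avoid the kitchen." A minimal sketch (illustrative only, not the paper's code) of why such a constraint is non-Markovian — it can only be evaluated over the whole trajectory, not a single state:

```python
# Illustrative sketch: evaluating an LTL-style specification
# "F goal AND G !avoid" (eventually reach the goal, always avoid
# a region) over a complete trajectory. Because the "always avoid"
# part depends on the entire path, no single-state (Markovian)
# goal test can capture it.

def satisfies_avoid_then_reach(trajectory, goal, avoid):
    """trajectory: sequence of room labels visited by the robot."""
    reaches_goal = any(room == goal for room in trajectory)    # F goal
    avoids_region = all(room != avoid for room in trajectory)  # G !avoid
    return reaches_goal and avoids_region

path_ok = ["hallway", "bedroom", "living_room"]
path_bad = ["hallway", "kitchen", "living_room"]  # cuts through the kitchen

print(satisfies_avoid_then_reach(path_ok, "living_room", "kitchen"))   # True
print(satisfies_avoid_then_reach(path_bad, "living_room", "kitchen"))  # False
```

Both paths end in the same goal state, so a goal-state-only representation cannot distinguish them; the LTL specification can.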
Lawson L.S. Wong, Leslie Pack Kaelbling, Tomás Lozano-Pérez. Data association for semantic world modeling from partial views. International Journal of Robotics Research (IJRR), 34(7):1064-1082, 2015.
Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. To achieve such tasks, robots need semantic world models, defined as object-based representations of the world involving task-level attributes. In this work, we address the problem of estimating world models from semantic perception modules that provide noisy observations of attributes. Because attribute detections are sparse, ambiguous, and aggregated across different viewpoints, it is unclear which attribute measurements are produced by the same object, so data association issues are prevalent. We present novel clustering-based approaches to this problem, which are more efficient and require less severe approximations compared with existing tracking-based approaches. We apply these approaches to data containing object type-and-pose detections from multiple viewpoints, demonstrating comparable quality using a fraction of the computation time.
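To convey the core idea of clustering-based data association — measurements from different viewpoints are grouped into putative objects rather than tracked over time — here is a toy sketch using greedy distance-based clustering (the paper itself uses a principled probabilistic model; all details below are illustrative):

```python
# Illustrative sketch of clustering-based data association:
# 2D object-position detections from multiple viewpoints are
# greedily assigned to the nearest existing cluster centroid
# within `radius`, or start a new cluster (a new putative object).

def cluster_detections(detections, radius):
    clusters = []  # each cluster is a list of (x, y) detections
    for x, y in detections:
        best, best_d = None, radius
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = c, d
        if best is None:
            clusters.append([(x, y)])  # no nearby cluster: new object
        else:
            best.append((x, y))        # associate with existing object
    return clusters

# Two noisy views of each of two objects -> two clusters
views = [(0.0, 0.0), (0.05, -0.02), (3.0, 1.0), (2.95, 1.03)]
print(len(cluster_detections(views, radius=0.5)))  # 2
```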
Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L.S. Wong, Stefanie Tellex. Accurately and efficiently interpreting human-robot instructions of varying granularities. In Robotics: Science and Systems (RSS), 2017.
Humans can ground natural language commands to tasks at both abstract and fine-grained levels of specificity. For instance, a human forklift operator can be instructed to perform a high-level action, like ‘grab a pallet’, or a low-level action, like ‘tilt back a little bit’. While robots are also capable of grounding language commands to tasks, previous methods implicitly assume that all commands and tasks reside at a single, fixed level of abstraction. Additionally, methods that do not use multiple levels of abstraction encounter inefficient planning and execution times as they solve tasks at a single level of abstraction with large, intractable state-action spaces closely resembling real-world complexity. In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity ranging from coarse to more granular. We show that the accuracy of the grounding procedure is improved when simultaneously inferring the degree of abstraction in language used to communicate the task. Leveraging hierarchy also improves efficiency: our proposed approach enables a robot to respond to a command within one second on 90% of our tasks, while baselines take over twenty seconds on half the tasks. Finally, we demonstrate that a real, physical robot can ground commands at multiple levels of abstraction, allowing it to efficiently plan different subtasks within the same planning hierarchy.
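A hypothetical sketch of the joint inference described above: the command is scored against candidate tasks at every abstraction level simultaneously, so the level of specificity is inferred along with the task. Scores here are toy word-overlap counts; the paper uses learned language-grounding models, and all names below are illustrative:

```python
# Candidate tasks at two abstraction levels (toy forklift domain).
CANDIDATES = {
    "high": ["grab a pallet", "go to the storage room"],
    "low": ["tilt back a little bit", "move forward slightly"],
}

def ground(command):
    """Score the command against every (level, task) pair jointly
    and return the best pair, inferring the abstraction level."""
    words = set(command.lower().split())
    best = max(
        ((level, task, len(words & set(task.split())))
         for level, tasks in CANDIDATES.items()
         for task in tasks),
        key=lambda t: t[2],
    )
    return best[0], best[1]

print(ground("please tilt back a bit"))  # ('low', 'tilt back a little bit')
```

The key design point is that abstraction level is an output of grounding, not a fixed assumption, which is what lets one model handle both coarse and fine-grained commands.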
Nakul Gopalan, Marie desJardins, Michael L. Littman, James MacGlashan, Shawn Squire, Stefanie Tellex, John Winder, Lawson L.S. Wong. Planning with abstract Markov decision processes. In International Conference on Automated Planning and Scheduling (ICAPS), 2017.
David Whitney, Eric Rosen, James MacGlashan, Lawson L.S. Wong, Stefanie Tellex. Reducing errors in object-fetching interactions through social feedback. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
Lawson L.S. Wong, Thanard Kurutach, Tomás Lozano-Pérez, Leslie Pack Kaelbling. Object-based world modeling in semi-static environments with dependent Dirichlet process mixtures. In International Joint Conference on Artificial Intelligence (IJCAI), 2016.
To accomplish tasks in human-centric indoor environments, agents need to represent and understand the world in terms of objects and their attributes. We consider how to acquire such a world model via noisy perception and maintain it over time, as objects are added, changed, and removed in the world. Previous work framed this as a multiple-target tracking problem, where objects are potentially in motion at all times. Although this approach is general, it is computationally expensive. We argue that such generality is not needed in typical world modeling tasks, where objects only change state occasionally. More efficient approaches are enabled by restricting ourselves to such semi-static environments. We consider a previously proposed clustering-based world modeling approach that assumed static environments, and extend it to semi-static domains by applying a dependent Dirichlet process (DDP) mixture model. We derive a novel MAP inference algorithm under this model, subject to data association constraints. We demonstrate that our approach improves computational performance for world modeling in semi-static environments.
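The semi-static assumption — objects change state only occasionally, between observation epochs — can be conveyed with a toy sketch (not the paper's DDP inference): given last epoch's object estimates and this epoch's detections, a simple nearest-neighbor match labels each object as persisting, new, or removed.

```python
# Illustrative sketch of a semi-static world-model update.
# prev_objects: (x, y) object estimates from the previous epoch.
# detections:   (x, y) detections from the current epoch.
# Returns (persisting, new, removed) under nearest-neighbor matching.

def diff_epochs(prev_objects, detections, radius=0.5):
    persisting, new = [], []
    unmatched = list(prev_objects)
    for d in detections:
        match = next(
            (o for o in unmatched
             if abs(o[0] - d[0]) + abs(o[1] - d[1]) < radius),
            None,
        )
        if match is not None:
            unmatched.remove(match)   # object persists at the new detection
            persisting.append(d)
        else:
            new.append(d)             # no prior object nearby: new object
    return persisting, new, unmatched  # unmatched objects were removed

prev = [(0.0, 0.0), (2.0, 2.0)]
now = [(0.1, 0.0), (5.0, 5.0)]  # first object stayed; second was replaced
p, n, r = diff_epochs(prev, now)
print(len(p), len(n), len(r))  # 1 1 1
```

Restricting change to epoch boundaries is what makes this cheaper than full multiple-target tracking, where every object may move at every timestep.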