Detect Espionage Using Active Indicators
This project investigates whether active indicators (stimulus and response pairs) can be developed as measures to detect people who prioritize individual goals when faced with conflicting loyalties between individual and team goals in different virtual contexts. Alternate reality games are used as the methodological platform for this purpose.
The significant advances realized in recent years in the study of complex networks are severely limited by an almost exclusive focus on the behavior of single networks. However, most networks in the real world are not isolated but are coupled and hence depend upon other networks, which in turn depend upon other networks. Real networks communicate with each other and may exchange information, or, more importantly, may rely upon one another for their proper functioning. A simple but real example is a power station network that depends on a computer network, and the computer network depends on the power network. Our social networks depend on technical networks, which, in turn, are supported by organizational networks. Surprisingly, analyzing complex systems as coupled interdependent networks alters the most basic assumptions that network theory has relied on for single networks. This multidisciplinary, data-driven research project will: 1) Study the microscopic processes that rule the dynamics of interdependent networks, with a particular focus on the social component; 2) Define new mathematical models/foundational theories for the analysis of the robustness/resilience and contagion/diffusive dynamics of interdependent networks. This project will afford the opportunity of greatly expanding the understanding of realistic complex networks by joining theoretical analysis of coupled networks with extensive analysis of appropriately chosen large-scale databases. These databases will be made publicly available, except for special cases where it is illegal to do so.
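The robustness analysis of interdependent networks described above can be illustrated with a toy version of the standard cascading-failure iteration for two one-to-one coupled networks. This is a minimal sketch under simplifying assumptions (identity coupling between the two networks, and survival requiring membership in the giant component), not the project's actual models:

```python
from collections import defaultdict

def giant_component(nodes, edges):
    """Largest connected component of the subgraph induced by `nodes`."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(nodes, edges_a, edges_b, initial_failures):
    """Iterate failures between two interdependent networks that share a
    node set: node i in network A depends on node i in network B. A node
    functions only while it sits in the giant component of BOTH networks
    (restricted to currently functioning nodes)."""
    alive = set(nodes) - set(initial_failures)
    while True:
        ga = giant_component(alive, edges_a)   # prune by network A
        gb = giant_component(ga, edges_b)      # then by network B
        if gb == alive:                        # fixed point reached
            return gb
        alive = gb
```

Removing a single node can fragment network A, which then disconnects further nodes in network B, and so on until a mutually connected core remains.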
This research has important implications for understanding the social and technical systems that make up modern society. A recent US Scientific Congressional Report concludes: “No currently available modeling and simulation tools exist that can adequately address the consequences of disruptions and failures occurring simultaneously in different critical infrastructures that are dynamically inter-dependent.” Understanding the interdependence of networks and its effect on system robustness and on structural and functional behavior is crucial for properly modeling many real-world systems and applications, from disaster preparedness, to building effective organizations, to comprehending the complexity of the macro economy. In addition to these intellectual objectives, the research project includes the development of an extensive outreach program to the public, especially K-12 students.
Easy Alliance, a nonprofit initiative, has been instituted to solve complex, long-term challenges in making the digital world a more accessible place for everyone.
Society’s increasingly complex cyberinfrastructure creates a concern for software robustness and reliability. Yet, this same complex infrastructure is threatening the continued use of fault tolerance. Consider when a single application or hardware device crashes. Today, in order to resume that application from the point where it crashed, one must also consider the complex subsystem to which it belongs. While in the past, many developers would write application-specific code to support fault tolerance for a single application, this strategy is no longer feasible when restarting the many interconnected applications of a complex subsystem. This project will support a plugin architecture for transparent checkpoint-restart. Transparency implies that the software developer does not need to write any application-specific code. The plugin architecture implies that each software developer writes the necessary plugins only once. Each plugin takes responsibility for resuming any interrupted sessions for just one particular component. At a higher level, the checkpoint-restart system employs an ensemble of autonomous plugins operating on all of the applications of a complex subsystem, without any need for application-specific code.
The plugin architecture is part of a more general approach called process virtualization, in which all subsystems external to a process are virtualized. It will be built on top of the DMTCP checkpoint-restart system. One simple example of process virtualization is virtualization of ids. A plugin maintains a virtualization table and arranges for the application code of the process to see only virtual ids, while the outside world sees the real id. Any system calls and library calls using this real id are extended to translate between real and virtual id. On restart, the real ids are updated with the latest value, and the process memory remains unmodified, since it contains only virtual ids. Other techniques employing process virtualization include shadow device drivers, record-replay logs, and protocol virtualization. Some targets of the research include transparent checkpoint-restart support for the InfiniBand network, for programmable GPUs (including shaders), for networks of virtual machines, for big data systems such as Hadoop, and for mobile computing platforms such as Android.
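The id-virtualization idea above can be sketched as a small translation table that sits between the application and the operating system. This is a hypothetical illustration (real DMTCP plugins are written in C/C++, and the class and method names here are invented for exposition):

```python
class IdVirtualizationPlugin:
    """Sketch of id virtualization: the application stores only virtual
    ids; the plugin translates to real ids at the system-call boundary,
    and refreshes the table on restart while process memory is untouched."""

    def __init__(self):
        self._virt_to_real = {}
        self._next_virtual = 1

    def register(self, real_id):
        """Called when the process acquires a new real id (e.g. a pid)."""
        virt = self._next_virtual
        self._next_virtual += 1
        self._virt_to_real[virt] = real_id
        return virt  # the application sees and stores only this value

    def to_real(self, virt):
        """Wraps outgoing system/library calls: virtual -> real."""
        return self._virt_to_real[virt]

    def on_restart(self, new_real_ids):
        """After restart the real ids change, but the process memory holds
        only virtual ids, so updating the table suffices."""
        for virt, real in new_real_ids.items():
            self._virt_to_real[virt] = real
```

Because only the table changes on restart, the application's saved state never needs to be patched.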
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Kapil Arya and Gene Cooperman. “DMTCP: Bringing Interactive Checkpoint-Restart to Python,” Computational Science & Discovery, v.8, 2015, 16 pages. doi:10.1088/issn.1749-4699
Jiajun Cao, Matthieu Simoni, Gene Cooperman, and Christine Morin. “Checkpointing as a Service in Heterogeneous Cloud Environments,” Proc. of 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid’15), 2015, p. 61–70. doi:10.1109/CCGrid.2015.160
This project investigates the use of game analytics to evaluate games designed for citizen science and to understand problem-solving strategies in crowdsourced games.
Type-safe programming languages report errors when a program applies operations to data of the wrong type—e.g., a list-length operation expects a list, not a number—and they come in two flavors: dynamically typed (or untyped) languages, which catch such type errors at run time, and statically typed languages, which catch type errors at compile time before the program is ever run. Dynamically typed languages are well suited for rapid prototyping of software, while static typing becomes important as software systems grow since it offers improved maintainability, code documentation, early error detection, and support for compilation to faster code. Gradually typed languages bring together these benefits, allowing dynamically typed and statically typed code—and more generally, less precisely and more precisely typed code—to coexist and interoperate, thus allowing programmers to slowly evolve parts of their code base from less precisely typed to more precisely typed. To ensure safe interoperability, gradual languages insert runtime checks when data with a less precise type is cast to a more precise type. Gradual typing has seen high adoption in industry, in languages like TypeScript, Hack, Flow, and C#. Unfortunately, current gradually typed languages fall short in three ways. First, while normal static typing provides reasoning principles that enable safe program transformations and optimizations, naive gradual systems often do not. Second, gradual languages rarely guarantee graduality, a reasoning principle helpful to programmers, which says that making types more precise in a program merely adds in checks and the program otherwise behaves as before. Third, time and space efficiency of the runtime casts inserted by gradual languages remains a concern. This project addresses all three of these issues. 
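The runtime checks mentioned above can be sketched as explicit casts inserted where less precisely typed data flows into a more precisely typed context. This is an illustrative toy, not any particular gradual language's semantics; the `cast` function and its type tags are invented for exposition:

```python
def cast(value, type_tag):
    """Runtime check inserted at the boundary between dynamically and
    statically typed code (a sketch; real gradual languages insert these
    casts during compilation)."""
    if type_tag is int:
        if not isinstance(value, int):
            raise TypeError(f"expected int, got {value!r}")
        return value
    if type_tag == "int -> int":
        if not callable(value):
            raise TypeError(f"expected a function, got {value!r}")
        # Higher-order cast: a function cannot be checked eagerly, so it
        # is wrapped and its argument and result are checked at each call.
        def wrapped(x):
            return cast(value(cast(x, int)), int)
        return wrapped
    raise ValueError(f"unknown type tag {type_tag}")

# Untyped code produces a value; typed code demands an `int -> int`.
untyped_f = lambda x: x + 1
typed_f = cast(untyped_f, "int -> int")
```

The higher-order case illustrates why cast efficiency is a concern: every call through `typed_f` pays for two extra checks, and naive wrapping can even accumulate layers of wrappers.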
The project’s novelties include: (1) a new approach to the design of gradual languages by first codifying the desired reasoning principles for the language using a program logic called Gradual Type Theory (GTT), and from that deriving the behavior of runtime casts; (2) compiling to a non-gradual compiler intermediate representation (IR) in a way that preserves these principles; and (3) the ability to use GTT to reason about the correctness of optimizations and efficient implementation of casts. The project has the potential for significant impact on industrial software development since gradually typed languages provide a migration path from existing dynamically typed codebases to more maintainable statically typed code, and from traditional static types to more precise types, providing a mechanism for increased adoption of advanced type features. The project will also have impact by providing infrastructure for future language designs and investigations into improving the performance of gradual typing.
The project team will apply the GTT approach to investigate gradual typing for polymorphism with data abstraction (parametricity), algebraic effects and handlers, and refinement/dependent types. For each, the team will develop cast calculi and program logics expressing better equational reasoning principles than previous proposals, with certified elaboration to a compiler intermediate language based on Call-By-Push-Value (CBPV) while preserving these properties, and design convenient surface languages that elaborate into them. The GTT program logics will be used for program verification, proving the correctness of program optimizations and refactorings.
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
This project develops a general model of just-in-time compilation that subsumes deployed systems and allows systematic exploration of the design space of dynamic compilation techniques. The research questions that will be tackled in this work lie along two dimensions: Experimental—explore the design space of dynamic compilation techniques and gain an understanding of trade-offs; Foundational—formalize key ingredients of a dynamic compiler and develop techniques for reasoning about correctness in a modular fashion.
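The core mechanism shared by the deployed systems such a model must subsume can be sketched as counter-based tiering: code starts out interpreted, and a function that crosses a hotness threshold is swapped for a compiled version. All names and the threshold below are illustrative assumptions, not the project's formalization:

```python
HOT_THRESHOLD = 3  # illustrative; real systems tune this per tier

class ToyJIT:
    """Toy model of just-in-time compilation: run interpreted until a
    function becomes hot, then 'compile' it (here: install a faster
    precomputed version) and dispatch to the compiled tier thereafter."""

    def __init__(self):
        self.counts = {}
        self.compiled = {}

    def run(self, name, interpret, compile_fn, arg):
        if name in self.compiled:
            return self.compiled[name](arg)       # fast tier
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= HOT_THRESHOLD:
            self.compiled[name] = compile_fn()    # tier-up on hot code
        return interpret(arg)                     # slow tier
```

Even this toy exposes design-space questions the project targets: when to tier up, what state the compiled code may assume, and how to argue the two tiers agree.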
Understanding the geographic nature of Internet paths and their implications for performance, privacy and security.
This study sheds light on this issue by measuring how and when Internet traffic traverses national boundaries. To do this, we ask you to run our browser applet, which visits various popular websites, measures the paths taken, and identifies their locations. By running our tool, you will help us understand if and how Internet paths traverse national boundaries, even when two endpoints are in the same country. And we’ll show you these paths, helping you understand where your Internet traffic goes.
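The path analysis described above reduces to a simple check once each hop has been geolocated. The sketch below assumes hops arrive as (IP, country) pairs from some geolocation step; the function name and data shapes are invented for illustration, and real geolocation of router IPs is considerably noisier:

```python
def foreign_countries_on_path(hops, home_country):
    """Return the sorted list of foreign countries a path traverses.
    `hops` is a sequence of (ip, country) pairs, e.g. a geolocated
    traceroute; hops with unknown location (None) are skipped."""
    return sorted({country for _, country in hops
                   if country is not None and country != home_country})

# A hypothetical domestic path that "trombones" through another country
# (addresses drawn from documentation-reserved IP ranges):
hops = [("10.0.0.1", "BR"), ("203.0.113.7", "US"),
        ("198.51.100.2", "US"), ("10.9.8.7", "BR")]
```

A non-empty result for a path whose endpoints share `home_country` is exactly the case the study is looking for: domestic traffic that detours abroad.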
This project addresses an urgent, emergent need at the intersection of software maintenance and programming language research. Over the past 20 years, working software engineers have embraced so-called scripting languages for a variety of tasks. Software engineers choose these languages because they make prototyping easy, and before the engineers realize it, these prototypes evolve into large, working systems and escape into the real world. Like all software, these systems need to be maintained—mistakes must be fixed, their performance requires improvement, security gaps call for fixes, their functionality needs to be enhanced—but scripting languages render maintenance difficult. The intellectual merits of this project are to address all aspects of this real-world software engineering problem.
A few years ago, the PIs launched programming language research efforts to address this problem. They diagnosed the lack of sound types in scripting languages as one of the major factors. With types in conventional programming languages, programmers concisely communicate design information to future maintenance workers; soundness ensures the types are consistent with the rest of the program. In response, the PIs explored the idea of gradual typing, that is, the creation of a typed sister language (one per scripting language) so that (maintenance) programmers can incrementally equip systems with type annotations. Unfortunately, these efforts have diverged over the years and would benefit from systematic cross-pollination.
With support from this grant, the PIs will systematically explore the spectrum of their gradual typing systems with a three-pronged effort. First, they will investigate how to replicate results from one project in another. Second, they will jointly develop an evaluation framework for gradual typing projects with the goal of diagnosing gaps in the efforts and needs for additional research. Third, they will explore the creation of new scripting languages that benefit from the insights of gradual typing research.