805 Columbus Avenue
614 Interdisciplinary Science and Engineering Complex (ISEC)
Boston, MA 02120
ATTN: Alan Mislove, 202 WVH
360 Huntington Avenue
Boston, MA 02115
Network measurement, networking, security/privacy issues associated with online social networks
- PhD in Computer Science, Rice University
- MS in Computer Science, Rice University
- BA in Computer Science, Rice University
Alan Mislove is an Associate Professor at the College of Computer and Information Science at Northeastern University. He received a BA, MS, and PhD from Rice University in 2002, 2005, and 2009, respectively. Professor Mislove’s research concerns distributed systems and networks, with a focus on using social networks to enhance the security, privacy, and efficiency of newly emerging systems. He was a recipient of the NSF CAREER Award in 2011, and his work has been covered by the Wall Street Journal, the New York Times, and the CBS Evening News.
Chen, Le, Alan Mislove, and Christo Wilson. "Peeking Beneath the Hood of Uber." Proceedings of the 2015 ACM Conference on Internet Measurement Conference. ACM, 2015.
Recently, Uber has emerged as a leader in the “sharing economy”. Uber is a “ride sharing” service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque “surge pricing” algorithm. This lack of transparency has led to concerns about whether Uber artificially manipulates prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber’s surge price algorithm. Our observations about Uber’s surge price algorithm raise important questions about the fairness and transparency of this system.
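The measurement idea behind this study can be sketched simply: emulated clients sample the surge multiplier at fixed points over time, and the samples are aggregated per location. A minimal illustration (all location names, timestamps, and multiplier values below are invented, not from the paper's dataset):

```python
from statistics import mean

def summarize_surge(samples):
    """samples: dict mapping point_id -> list of (timestamp, multiplier)
    observations collected by emulated app instances. Returns per-point
    summary statistics of the observed surge multipliers."""
    summary = {}
    for point, obs in samples.items():
        mults = [m for _, m in obs]
        summary[point] = {
            "mean": round(mean(mults), 2),
            "peak": max(mults),
            # fraction of observations during which the point was surging
            "surging_fraction": round(sum(m > 1.0 for m in mults) / len(mults), 2),
        }
    return summary

# Invented example data: two measurement points, four samples each
samples = {
    "sf_soma": [(0, 1.0), (60, 1.5), (120, 2.1), (180, 1.0)],
    "sf_mission": [(0, 1.0), (60, 1.0), (120, 1.0), (180, 1.3)],
}
print(summarize_surge(samples))
```

A real deployment would face the noise sources the paper grapples with (app-visible prices changing between queries, geographic surge-area boundaries), which this sketch ignores.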
Zhang, Liang, et al. "Analysis of SSL certificate reissues and revocations in the wake of Heartbleed." Proceedings of the 2014 Conference on Internet Measurement Conference. ACM, 2014.
Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users’ security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.
In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers’ private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.
Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
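The core classification in this kind of analysis is straightforward: a vulnerable site whose certificate still predates the disclosure date has not been reissued. A minimal sketch of that logic (hostnames, dates, and the field names are illustrative, not the paper's actual data or code):

```python
from datetime import date

# Heartbleed was publicly disclosed on April 7, 2014
HEARTBLEED_DISCLOSURE = date(2014, 4, 7)

def classify(cert_not_before, vulnerable):
    """Classify a host's certificate: a vulnerable host whose certificate's
    validity period began before the disclosure has not been reissued."""
    if not vulnerable:
        return "not_vulnerable"
    if cert_not_before >= HEARTBLEED_DISCLOSURE:
        return "reissued"
    return "not_reissued"

# Invented example hosts
certs = [
    ("example-a.com", date(2014, 4, 10), True),
    ("example-b.com", date(2013, 11, 2), True),
    ("example-c.com", date(2014, 1, 15), False),
]
for host, not_before, vuln in certs:
    print(host, classify(not_before, vuln))
```

The harder part of the actual study, determining *whether* a host was vulnerable and whether the old certificate was also revoked (via CRLs/OCSP), is outside this sketch.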
Hannak, Aniko, et al. "Measuring price discrimination and steering on e-commerce web sites." Proceedings of the 2014 Conference on Internet Measurement Conference. ACM, 2014.
Today, many e-commerce websites personalize their content, including Netflix (movie recommendations), Amazon (product suggestions), and Yelp (business reviews). In many cases, personalization provides advantages for users: for example, when a user searches for an ambiguous query such as “router,” Amazon may be able to suggest the woodworking tool instead of the networking device. However, personalization on e-commerce sites may also be used to the user’s disadvantage by manipulating the products shown (price steering) or by customizing the prices of products (price discrimination). Unfortunately, today, we lack the tools and techniques necessary to be able to detect such behavior.
In this paper, we make three contributions towards addressing this problem. First, we develop a methodology for accurately measuring when price steering and discrimination occur and implement it for a variety of e-commerce web sites. While it may seem conceptually simple to detect differences between users’ results, accurately attributing these differences to price discrimination and steering requires correctly addressing a number of sources of noise. Second, we use the accounts and cookies of over 300 real-world users to detect price steering and discrimination on 16 popular e-commerce sites. We find evidence for some form of personalization on nine of these e-commerce sites. Third, we investigate the effect of user behaviors on personalization. We create fake accounts to simulate different user features including web browser/OS choice, owning an account, and history of purchased or viewed products. Overall, we find numerous instances of price steering and discrimination on a variety of top e-commerce sites.
Zhang, Liang, and Alan Mislove. "Building confederated web-based services with Priv.io." Proceedings of the First ACM Conference on Online Social Networks. ACM, 2013.
With the increasing popularity of Web-based services, users today have access to a broad range of free sites, including social networking, microblogging, and content sharing sites. In order to offer a service for free, service providers typically monetize user content, selling results to third parties such as advertisers. As a result, users have little control over their data or privacy. A number of alternative approaches to architecting today’s Web-based services have been proposed, but they suffer from limitations such as relying on the creation and installation of additional client-side software, providing insufficient reliability, or imposing an excessive monetary cost on users.
In this paper, we present Priv.io, a new approach to building Web-based services that offers users greater control and privacy over their data. We leverage the fact that today, users can purchase storage, bandwidth, and messaging from cloud providers at fine granularity: in Priv.io, each user provides the resources necessary to support their use of the service using cloud providers such as Amazon Web Services. Users still access the service using a Web browser, all computation is done within users’ browsers, and Priv.io provides rich and secure support for third-party applications. An implementation demonstrates that Priv.io works today with unmodified versions of common Web browsers on both desktop and mobile devices, is both practical and feasible, and is cheap enough for the vast majority of users.
Liu, Yabing, et al. "Analyzing facebook privacy settings: user expectations vs. reality." Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference. ACM, 2011.
The sharing of personal data has emerged as a popular activity over online social networking sites like Facebook. As a result, the issue of online social network privacy has received significant attention in both the research literature and the mainstream media. Our overarching goal is to improve defaults and provide better tools for managing privacy, but we are limited by the fact that the full extent of the privacy problem remains unknown; there is little quantification of the incidence of incorrect privacy settings or the difficulty users face when managing their privacy.
In this paper, we focus on measuring the disparity between the desired and actual privacy settings, quantifying the magnitude of the problem of managing privacy. We deploy a survey, implemented as a Facebook application, to 200 Facebook users recruited via Amazon Mechanical Turk. We find that 36% of content remains shared with the default privacy settings. We also find that, overall, privacy settings match users’ expectations only 37% of the time, and when incorrect, almost always expose content to more users than expected. Finally, we explore how our results have potential to assist users in selecting appropriate privacy settings by examining the user-created friend lists. We find that these have significant correlation with the social network, suggesting that information from the social network may be helpful in implementing new tools for managing privacy.
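The disparity the survey measures can be expressed as a simple per-item comparison of the audience a user *wants* for a piece of content versus the audience the current setting *actually* grants. A toy version of that audit (the group names and example data are invented, not the survey instrument):

```python
def audit(items):
    """items: list of (desired_audience, actual_audience) pairs, where each
    audience is a set of viewer groups. Returns the fraction of items whose
    settings match expectations, and the fraction exposed to viewers the
    user did not expect."""
    matches = sum(1 for desired, actual in items if desired == actual)
    overexposed = sum(1 for desired, actual in items
                      if actual - desired)  # visible to unexpected groups
    return matches / len(items), overexposed / len(items)

# Invented example: three content items for one user
items = [
    ({"friends"}, {"friends"}),               # matches expectation
    ({"friends"}, {"friends", "everyone"}),   # overexposed
    ({"only_me"}, {"friends"}),               # wrong and overexposed
]
match_rate, overexposure_rate = audit(items)
print(match_rate, overexposure_rate)
```

The paper's finding that incorrect settings "almost always expose content to more users than expected" corresponds to the second metric dominating the mismatch cases.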
Chung, Taejoong, Roland van Rijswijk-Deij, Balakrishnan Chandrasekaran, David Choffnes, Dave Levin, Bruce M. Maggs, Alan Mislove, and Christo Wilson. "A Longitudinal, End-to-End View of the DNSSEC Ecosystem." Proceedings of the 26th USENIX Security Symposium (USENIX Security '17). Vancouver, Canada, Aug 2017.
The Domain Name System’s Security Extensions (DNSSEC) allow clients and resolvers to verify that DNS responses have not been forged or modified in-flight. DNSSEC uses a public key infrastructure (PKI) to achieve this integrity, without which users can be subject to a wide range of attacks. However, DNSSEC can operate only if each of the principals in its PKI properly performs its management tasks: authoritative name servers must generate and publish their keys and signatures correctly, child zones that support DNSSEC must be correctly signed with their parent’s keys, and resolvers must actually validate the chain of signatures.
This paper performs the first large-scale, longitudinal measurement study of how well DNSSEC’s PKI is managed. We use data from all DNSSEC-enabled subdomains under the .com, .org, and .net TLDs over a period of 21 months to analyze DNSSEC deployment and management by domains; we supplement this with active measurements of more than 59K DNS resolvers worldwide to evaluate resolver-side validation.
Our investigation reveals pervasive mismanagement of the DNSSEC infrastructure. For example, we found that 31% of domains that support DNSSEC fail to publish all relevant records required for validation; 39% of the domains use insufficiently strong key-signing keys; and although 82% of resolvers in our study request DNSSEC records, only 12% of them actually attempt to validate them. These results highlight systemic problems, which motivate improved automation and auditing of DNSSEC management.
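The "publish all relevant records" check above has a compact core: for a signed zone to be validatable, the zone must publish its DNSKEY and the corresponding RRSIGs, and its parent must publish a matching DS record. A minimal sketch of that completeness check, operating on an already-collected set of observed record types (a real measurement would fetch these with live DNS queries):

```python
# Record types required for a validatable DNSSEC chain:
# DNSKEY and RRSIG in the zone itself, DS in the parent zone.
REQUIRED = {"DNSKEY", "RRSIG", "DS"}

def dnssec_missing(records):
    """records: set of DNSSEC-related record types observed for a domain
    (its own zone plus the DS in its parent). Returns the set of record
    types still missing; an empty set means validation is at least possible."""
    return REQUIRED - records

print(dnssec_missing({"DNSKEY", "RRSIG", "DS"}))  # complete chain
print(dnssec_missing({"DNSKEY", "RRSIG"}))        # DS absent from parent
```

The paper's 31% figure counts domains for which a check like this finds at least one required record missing; assessing key-signing-key strength is a separate check on the DNSKEY contents.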
Molavi Kakhki, Arash, Samuel Jero, David Choffnes, Alan Mislove, and Cristina Nita-Rotaru. "Taking a Long Look at QUIC: An Approach for Rigorous Evaluation of Rapidly Evolving Transport Protocols." Proceedings of the 2017 ACM Internet Measurement Conference (IMC '17). London, United Kingdom, Nov 2017.
Google’s QUIC protocol, which implements TCP-like properties at the application layer atop a UDP transport, is now used by the vast majority of Chrome clients accessing Google properties, but has no formal state machine specification, limited analysis, and ad-hoc evaluations based on snapshots of the protocol implementation in a small number of environments. Further frustrating attempts to evaluate QUIC is the fact that the protocol is under rapid development, with extensive rewriting occurring over the scale of months, making individual studies of the protocol obsolete before publication.

Given this unique scenario, there is a need for alternative techniques for understanding and evaluating QUIC when compared with previous transport-layer protocols. First, we develop an approach that allows us to conduct analysis across multiple versions of QUIC to understand how code changes impact protocol effectiveness. Next, we instrument the source code to infer QUIC’s state machine from execution traces. With this model, we run QUIC in a large number of environments, including desktop and mobile devices on both wired and wireless networks, and use the state machine to understand differences in transport- and application-layer performance across multiple versions of QUIC and in different environments. We find that QUIC generally outperforms TCP, but we also identify performance issues related to window sizes, re-ordered packets, and multiplexing large numbers of small objects; further, we find that QUIC’s performance diminishes on mobile devices and over cellular networks.
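Inferring a state machine from execution traces, as described above, amounts to collecting the observed (state, event) → next-state transitions across many runs. A minimal sketch of that idea (the state and event names below are simplified stand-ins loosely based on Google QUIC's handshake messages, not the paper's instrumentation output):

```python
from collections import defaultdict

def infer_state_machine(traces):
    """Build a transition relation from execution traces.
    Each trace is a list of (state, event) pairs in the order observed;
    the event of the final pair is None (no further transition)."""
    transitions = defaultdict(set)
    for trace in traces:
        # pair each (state, event) with the state that followed it
        for (state, event), nxt in zip(trace, (s for s, _ in trace[1:])):
            transitions[(state, event)].add(nxt)
    return dict(transitions)

# Invented traces: one successful handshake, one rejected handshake
traces = [
    [("idle", "CHLO_sent"), ("handshake", "SHLO_recv"), ("established", None)],
    [("idle", "CHLO_sent"), ("handshake", "REJ_recv"), ("idle", None)],
]
for key, nexts in sorted(infer_state_machine(traces).items()):
    print(key, "->", sorted(nexts))
```

Aggregating transitions as *sets* of successor states is what makes cross-version comparison possible: a new QUIC version that adds or removes a transition shows up as a diff between the inferred relations.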