I graduated! I am now a software engineer at Arbor Networks.
I have begun releasing pieces of my research code as open source. As I finish this process, I will work on writing more thorough documentation and description of how the pieces fit together, but in the meantime, my progress is visible on my GitHub page.
I was a graduate student at the University of Michigan from September 2007 until April 2014, studying under Professors Brian Noble and Jason Flinn. I was a member of the Mobility, Pervasive Computing, and AutoMedia groups here at Michigan. The latter group is a collaboration with Ford Motor Company, focusing on vehicular and mobile networking.
In my dissertation research, I focused on building programming abstractions to simplify common mobile application tasks, such as making better use of all available networking options, or deciding when and how aggressively to prefetch application content from remote servers.
Occasionally, I will post some of the painful lessons I've learned on my blog, over here. Hopefully this saves someone out there some time and frustration. This started out fairly Android-focused (as was my research), but has since become more general.
From Fall 2012 to Winter 2014, I was a graduate student instructor for EECS 482 (Operating Systems). Before that, I was an undergraduate teaching assistant for EECS 280 (Programming and Data Structures) for four semesters, from Fall 2005 until Winter 2007.
I also attended Michigan for my bachelor's and master's degrees. Michigan is awesome (state and institution).
Mobile applications often predict the future to make decisions in the present. Although such predictions are inherently uncertain, applications typically assume that they are completely accurate. This assumption can lead to incorrect decisions resulting in unnecessary delays, wasted resources, or worse.
Instead, prediction error should be a fundamental consideration in mobile systems. Applications should consider uncertainty when weighing alternatives. When one alternative is not clearly superior to another, redundant strategies are often appropriate, resulting in much better performance at a very modest cost.
To illustrate these ideas, we describe and implement several methods for quantifying uncertainty in mobile environments. Our system allows applications to explicitly weigh the tradeoff between the performance gained via redundancy and the cost of extra energy and cellular data resources spent, tailoring decisions to their relative importance. We adapt two systems to use this approach. Compared to both simple and adaptive strategies that do not reflect prediction error, our library improves application performance by up to a factor of two.
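The redundancy tradeoff described above can be sketched as a small cost-benefit check: simulate both strategies against draws from the prediction-error distribution, and go redundant only when the expected latency savings outweigh the extra resource cost. This is just an illustration of the idea; the function and parameter names are mine, not the library's actual API.

```python
def expected_time(samples):
    """Mean of sampled transfer times (seconds)."""
    return sum(samples) / len(samples)

def should_be_redundant(wifi_samples, cell_samples, extra_cost, time_value):
    """Decide whether to send on both networks at once.

    wifi_samples / cell_samples: paired draws from each network's
    predicted-transfer-time distribution, reflecting prediction error.
    extra_cost: cost of the redundant send (energy + cellular data),
    in the same units as time_value * seconds.
    time_value: how much one second of saved latency is worth.
    """
    # Best single-network strategy: pick the network with the
    # lower expected completion time.
    single = min(expected_time(wifi_samples), expected_time(cell_samples))
    # Redundant strategy: the transfer finishes as soon as the
    # faster of the two concurrent sends completes.
    redundant = expected_time([min(w, c)
                               for w, c in zip(wifi_samples, cell_samples)])
    benefit = (single - redundant) * time_value
    return benefit > extra_cost
```

Note how uncertainty drives the decision: when one network is occasionally very slow, the min over paired outcomes is much better than either mean, so redundancy pays for itself; when predictions are reliable, the benefit shrinks and the extra cost dominates.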
In Proceedings of the Sixth International Conference on Mobile Computing, Applications, and Services (MobiCASE),
Austin, Texas (November 2014)
Building quality mobile applications requires developers to understand complex interactions between network usage, performance, and resource consumption. Because of this difficulty, developers commonly choose simple but suboptimal approaches that strictly prioritize performance or resource conservation. In this dissertation, we explore the various limited resources involved in mobile applications - battery energy, cellular data usage, and, critically, user attention - and we devise principled methods for managing the tradeoffs involved in creating a good user experience.
These extremes are symptoms of a lack of system-provided abstractions for managing the complexity inherent in performance/resource tradeoffs. By providing abstractions that help applications manage these tradeoffs, mobile systems can significantly improve user-visible performance without exhausting resource budgets. This dissertation explores three such abstractions in detail. We first present Intentional Networking, a system that provides synchronization primitives and intelligent scheduling for multi-network traffic. Next, we present Informed Mobile Prefetching, a system that helps applications decide when to prefetch data and how aggressively to spend limited battery energy and cellular data resources toward that end. Finally, we present Meatballs, a library that helps applications consider the cloudy nature of predictions when making decisions, selectively employing redundancy to mitigate uncertainty and provide more reliable performance. Overall, experiments show that these abstractions can significantly reduce interactive delay without overspending the available energy and data resources.
PhD Dissertation (also available here)
Committee: Brian Noble, Jason Flinn (co-chairs), Z. Morley Mao, and Mingyan Liu
Prefetching is a double-edged sword. It can hide the latency of data transfers over poor and intermittently connected wireless networks, but the costs of prefetching in terms of increased energy and cellular data usage are potentially substantial, particularly for data prefetched incorrectly. Weighing the costs and benefits of prefetching is complex, and consequently most mobile applications employ simple but sub-optimal strategies.
Rather than leave the job to applications, we argue that the underlying mobile system should provide explicit prefetching support. Our prototype, IMP, presents a simple interface that hides the complexity of the prefetching decision. IMP uses a cost-benefit analysis to decide when to prefetch data. It employs goal-directed adaptation to try to minimize application response time while meeting budgets for battery lifetime and cellular data usage. IMP opportunistically uses available networks while ensuring that prefetches do not degrade network performance for foreground activity. It tracks hit rates for past prefetches and accounts for network-specific costs in order to dynamically adapt its prefetching strategy to both the network conditions and the accuracy of application prefetch disclosures. Experiments with email and news reader applications show that IMP provides predictable usage of budgeted resources, while lowering application response time compared to the oblivious strategies used by current applications.
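The cost-benefit analysis and goal-directed adaptation described above can be sketched in a few lines: prefetch when the expected latency savings (weighted by the tracked hit rate) exceed the weighted energy and data costs, and raise a resource's weight whenever its budget is draining ahead of schedule. The names and numeric scales here are mine for illustration, not IMP's actual interface.

```python
def should_prefetch(hit_prob, latency_saved_s, energy_mj, data_mb,
                    energy_weight, data_weight):
    """Cost-benefit check for one candidate prefetch.

    hit_prob: tracked hit rate for past application prefetch hints.
    latency_saved_s: estimated on-demand fetch time avoided on a hit.
    energy_mj / data_mb: network-specific cost of prefetching now.
    energy_weight / data_weight: current scarcity of each budget.
    """
    benefit = hit_prob * latency_saved_s
    cost = energy_mj * energy_weight + data_mb * data_weight
    return benefit > cost

def adapt_weight(weight, spent_fraction, elapsed_fraction, step=0.1):
    """Goal-directed adaptation: if a budget (battery lifetime or
    monthly data cap) is being spent faster than time is elapsing,
    make that resource more expensive; otherwise relax it."""
    if spent_fraction > elapsed_fraction:
        return weight * (1 + step)
    return weight * (1 - step)
```

Over time the weights settle at values that spend each budget at roughly the rate its goal allows, which is how predictable resource usage falls out of a purely local per-prefetch decision.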
In Proceedings of the 10th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys),
Low Wood Bay, United Kingdom (June 2012)
Mobile devices face a diverse and dynamic set of networking options. Using those options to the fullest requires knowledge of application intent. This paper describes Intentional Networking, a simple but powerful mechanism for handling network diversity. Applications supply a declarative label for network transmissions, and the system matches transmissions to the most appropriate network. The system may also defer and re-order opportunistic transmissions subject to application-supplied mutual exclusion and ordering constraints. We have modified three applications to use Intentional Networking: BlueFS, a distributed file system for pervasive computing, Mozilla's Thunderbird e-mail client, and a vehicular participatory sensing application. We evaluated the performance of these applications using measurements obtained by driving a vehicle through WiFi and cellular 3G network coverage. Compared to an idealized solution that makes optimal use of all aggregated available networks but that lacks knowledge of application intent, Intentional Networking improves the latency of interactive messages by 48% to 13x, while adding no more than 7% throughput overhead.
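The core matching step above - a declarative label steering each transmission to the most appropriate network - can be sketched as follows. The label values, class, and field names here are illustrative assumptions, not Intentional Networking's real API:

```python
# Hypothetical labels: interactive traffic wants low latency,
# bulk traffic wants high throughput.
SMALL = "small"
LARGE = "large"

class Network:
    """One available network option and its measured properties."""
    def __init__(self, name, latency_ms, bandwidth_mbps):
        self.name = name
        self.latency_ms = latency_ms
        self.bandwidth_mbps = bandwidth_mbps

def pick_network(label, networks):
    """Match a labeled transmission to the most suitable network:
    latency-sensitive traffic goes to the lowest-latency option,
    bulk traffic to the highest-throughput option."""
    if label == SMALL:
        return min(networks, key=lambda n: n.latency_ms)
    return max(networks, key=lambda n: n.bandwidth_mbps)
```

The real system layers more on top of this (deferring opportunistic sends, honoring ordering and mutual-exclusion constraints), but the label-to-network match is the heart of why application intent helps.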
In Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (MobiCom), Chicago, IL (September 2010)
Brett Higgins, Azarias Reda, Timur Alperovich, Jason Flinn, T.J. Giuli, Brian Noble, and David Watson
The 7th Annual Microsoft Research Networking Summit (June 2010)
Wireless infrastructures are increasingly diverse, complex, and difficult to manage. Those who restrict themselves to homogeneous, managed campus or corporate networks are a vanishing breed. In the wild, users are confronted with many overlapping infrastructures with a broad variety of strengths and weaknesses. Such diversity of infrastructure is both a challenge and an opportunity. The challenge lies in presenting the alternatives to applications and users in a way that provides the best possible utility to both. However, by managing these many alternatives, we can provide significant benefits, exploiting multiple networks concurrently and planning future transmissions intelligently.
To this end, we are developing Intentional Networking---a set of interfaces and mechanisms that allow applications, users, and the operating system to proactively manage current and expected future connectivity. We do this through extensions to the networking API: applications can classify sockets or individual transmissions with a declarative label describing their intent.
The Tenth Workshop on Mobile Computing Systems and Applications (HotMobile 2009)
Effective search over a user's distributed data set is crucial, as user-created content is increasingly stored on multiple devices away from home. Conventional desktop search and distributed file systems have relied on kernel modules and practically unlimited resources to organize and search user content; these designs do not consider the complex set of constraints and challenges in the distributed search domain. We propose a distributed architecture, DSearch, to manage the complexities of a mobile data set and improve query performance across all the devices in a user's personal area network. First, we provide a lightweight infrastructure that can effectively organize and search a set of devices. Second, we develop a membership system that records the current set of active devices and distributes that information to the group, managing the dynamics of multiple devices in a network. Third, we examine three search-index replication schemes - no replication, centralized replication, and device-based replication - to improve query performance. We have implemented DSearch and evaluated its performance.
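One way to see how the replication scheme shapes query cost is to ask which devices a search must contact. The sketch below encodes one plausible reading of the three schemes named above (the function and scheme semantics are my interpretation, not DSearch's definitions):

```python
def devices_to_query(scheme, querier, active_devices, central):
    """Devices a search must contact under each index-replication
    scheme, given the membership system's current active set.

    no_replication:  each device indexes only its own content, so a
                     query fans out to every active device.
    centralized:     one designated device aggregates all indexes.
    device_based:    the full index is replicated to each device, so
                     the querying device can answer locally.
    """
    if scheme == "no_replication":
        return set(active_devices)
    if scheme == "centralized":
        return {central}
    if scheme == "device_based":
        return {querier}
    raise ValueError("unknown scheme: %s" % scheme)
```

Under this reading, the schemes trade query-time fan-out (and thus latency over a personal area network) against the storage and update traffic needed to keep replicas current.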
Available as University of Michigan Technical Report CSE-TR-549-08, October 30, 2008
Sprockets are a lightweight method for extending the functionality of distributed file systems. They specifically target file systems implemented at user level and small extensions that can be expressed with up to several hundred lines of code. Each sprocket is akin to a procedure call that runs inside a transaction that is always rolled back on completion, even if sprocket execution succeeds. Sprockets therefore make no persistent changes to file system state; instead, they communicate their result back to the core file system through a restricted format using a shared memory buffer. The file system validates the result and makes any necessary changes if the validations pass. Sprockets use binary instrumentation to ensure that a sprocket can safely execute file system code without making changes to persistent state. We have implemented sprockets that perform type-specific handling within file systems such as querying application metadata, application-specific conflict resolution, and handling custom devices such as digital cameras. Our evaluation shows that sprockets can be up to an order of magnitude faster to execute than extensions that utilize operating system services such as fork. We also show that sprockets allow fine-grained isolation and, thus, can catch some bugs that a fork-based implementation cannot.
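The sprocket execution pattern - run the extension against file-system state, discard every change it makes, then let the core apply only a validated result - can be illustrated with a toy model. The real system uses an always-rolled-back transaction plus binary instrumentation rather than the state copy used here, and all names below are mine:

```python
import copy

def run_sprocket(state, extension, validate):
    """Toy model of the sprocket pattern.

    state: the core file system's mutable state (a dict here).
    extension: untrusted code that may read and write state and
               returns a result for the core to consider.
    validate: the core's check on that result.
    """
    # The extension runs against a throwaway view of state, so any
    # writes it makes are discarded (mimicking transaction rollback).
    scratch = copy.deepcopy(state)
    result = extension(scratch)
    # Only a validated result is applied by the core file system.
    if validate(result):
        state.update(result)
        return True
    return False
```

The point of the pattern is that the extension can freely call into (a view of) file-system state without being trusted: persistence flows only through the narrow, validated result channel.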
Proceedings of the 2007 USENIX Annual Technical Conference, Santa Clara, CA, June 2007.