My main area of interest is programming abstractions that simplify application development, with a particular interest in parallel systems, both multiprocessor machines and networks of machines. I am currently interested in programming models that allow web applications to scale seamlessly from a single server to a cluster and beyond to the cloud.

Below I highlight some of my past research including links to selected talks, publications, and software. I also have a research-oriented resume available and a complete list of publications.

Stanford PhD

At Stanford I was part of the Transactional Coherence and Consistency (TCC) project, led by my principal advisor, Prof. Kunle Olukotun, and co-advisor, Prof. Christos Kozyrakis. The project focused on how transactional memory can both improve the performance of parallel programs and make them easier to write. Since the group was mostly composed of computer architects, the project centered on hardware transactional memory systems, but we explored various trade-offs in the hardware-software interface, including software-based handlers for compensating actions and contention management, nested-transaction programming models such as open-nested transactions, software-supported virtualization of hardware resources, and hybrid hardware-software transactional memory systems.

My research at Stanford focused on programming with transactional memory. My first work was a trace-based analysis of Java programs using the project's TCC protocol, described in Transactional Coherence and Consistency: Simplifying Parallel Hardware and Software. My early work then evaluated the execution of Java programs under a transactional execution model, described in Executing Java programs with transactional memory. After that I explored what a programming language built entirely around transactions might look like, described in The Atomos Transactional Programming Language. Part of the Atomos work involved defining the right supporting hardware interfaces, described in Transactional Memory: The Hardware-Software Interface. With the foundation of the Atomos programming language in place, I looked at how transactional memory could make programming easier, not just faster, by studying programs consisting of a few transactions covering most of the runtime, as opposed to programs with many short transactions covering a small part of it. This study led to the development of Transactional Collection Classes, which allow such long transactions to modify shared data structures without rolling back non-conflicting changes. My research is summarized in my PhD dissertation, Programming with Transactional Memory.
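The flavor of optimistic transactional execution can be sketched in plain Java with a read-snapshot, compute, commit-or-retry loop. This is only a minimal software analogue for illustration, not the Atomos or TCC mechanism; the `TxCounter` class and its method names are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of optimistic, transaction-style updates:
// read a snapshot, compute speculatively, then commit atomically,
// retrying if another thread committed first. Hypothetical example,
// not code from Atomos or the TCC simulators.
public class TxCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // "Atomic" increment: speculate, then commit or retry.
    public void increment() {
        while (true) {
            int snapshot = value.get();        // begin: take a snapshot
            int updated = snapshot + 1;        // speculative computation
            if (value.compareAndSet(snapshot, updated)) {
                return;                        // commit succeeded
            }
            // conflict: another thread committed first, so retry
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        TxCounter c = new TxCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(c.get()); // 4000: every increment commits exactly once
    }
}
```

The conflict-and-retry loop is the key contrast with lock-based code: threads proceed optimistically in parallel and only lose work when an actual conflict occurs.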

Although my research was on making parallel programming easier with transactional memory, much of my time went into our various architecture simulators. While the first few papers from the TCC project were based on generating and analyzing program traces, within the first year we moved to an execution-driven PowerPC simulator based on SimOS-PPC, which allowed us to compare traditional cache coherence protocols with our TCC hardware transactional memory protocol. I later ported this simulator to support x86 alongside PowerPC, integrating a fellow group member's x86 interpreter into our simulated memory hierarchy. After that, we refactored the simulator to support different cache protocols and hierarchies more flexibly, allowing us to compare several competing hardware transactional memory proposals and different combinations of private and shared caches within a single simulator. Since 2006 the simulator has also been used in the Stanford graduate class CS315A: Parallel Computer Architecture and Programming to teach students about MESI protocol implementation.
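The MESI protocol that students implement can be pictured as a per-cache-line state machine. The sketch below is an illustrative simplification (the class and method names are mine, not the simulator's); a real coherence controller would also issue bus messages, supply data, and handle write-backs and races:

```java
// Simplified per-line MESI state machine: Modified, Exclusive, Shared, Invalid.
// Illustrative only, not code from the CS315A simulator.
public class MesiLine {
    public enum State { MODIFIED, EXCLUSIVE, SHARED, INVALID }

    private State state = State.INVALID;

    public State state() { return state; }

    // Local processor reads the line.
    public void localRead(boolean otherCachesHaveCopy) {
        if (state == State.INVALID) {
            // Miss: fetch the line; Exclusive if no other sharer, else Shared.
            state = otherCachesHaveCopy ? State.SHARED : State.EXCLUSIVE;
        }
        // M, E, S: read hit, no state change.
    }

    // Local processor writes the line.
    public void localWrite() {
        // E upgrades to M silently; from S or I the controller must first
        // invalidate other copies (elided here), then install Modified.
        state = State.MODIFIED;
    }

    // Another cache reads this line (observed on the bus).
    public void remoteRead() {
        if (state == State.MODIFIED || state == State.EXCLUSIVE) {
            state = State.SHARED; // flush/supply data and downgrade to Shared
        }
    }

    // Another cache writes this line (observed on the bus).
    public void remoteWrite() {
        state = State.INVALID; // our copy is now stale
    }
}
```

Tracing a line through localRead, localWrite, and the remote events exercises every transition students need to reason about.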

Selected Talks

Programming with Transactional Memory [PDF] [PPT]
PhD Thesis Defense, Stanford University, June 2008.

Transactional Collection Classes [PDF] [PPT]
PPoPP, San Jose, CA, March 2007.

The Atomos Transactional Programming Language [PDF]
PLDI, Ottawa, Canada, 12 June 2006.

Selected Publications

Programming with Transactional Memory,
Brian David Carlstrom,
Doctor of Philosophy Dissertation, Stanford University, June 2008.

Transactional Collection Classes,
Brian D. Carlstrom, Austen McDonald, Michael Carbin, Christos Kozyrakis, Kunle Olukotun,
ACM SIGPLAN 2007 Symposium on Principles and Practice of Parallel Programming, San Jose, California, March 2007.

Transactional Memory: The Hardware-Software Interface,
Austen McDonald, Brian D. Carlstrom, JaeWoong Chung, Chi Cao Minh, Hassan Chafi, Christos Kozyrakis, Kunle Olukotun,
Micro's Top Picks, IEEE Micro January/February 2007 (Vol. 27, No. 1).

Executing Java programs with transactional memory,
Brian D. Carlstrom, JaeWoong Chung, Hassan Chafi, Austen McDonald, Chi Cao Minh, Lance Hammond, Christos Kozyrakis, Kunle Olukotun,
Science of Computer Programming, Volume 63, Issue 2, 1 December 2006, Pages 111-129.

The Atomos Transactional Programming Language,
Brian D. Carlstrom, Austen McDonald, Hassan Chafi, JaeWoong Chung, Chi Cao Minh, Christos Kozyrakis, Kunle Olukotun,
ACM SIGPLAN 2006 Conference on Programming Language Design and Implementation, Ottawa, Canada, 12 June 2006.

Transactional Coherence and Consistency: Simplifying Parallel Hardware and Software,
Lance Hammond, Brian D. Carlstrom, Vicky Wong, Michael Chen, Christos Kozyrakis, Kunle Olukotun,
Micro's Top Picks, IEEE Micro November/December 2004 (Vol. 24, No. 6).


While working at Ariba, I learned that my research advisor, Olin Shivers, would be leaving MIT for Georgia Tech and that if I ever wanted to complete my MEng degree, I needed to file a thesis proposal before he left. Since I was working full time, I decided to write about something I had already done.

Before I started working at Ariba, I had begun implementing a Scheme interpreter in Java as a sort of library learning project: although I had implemented a Java bytecode interpreter in my spare time while working at General Magic, I had not done much actual Java application programming. When, early on at Ariba, I needed a scripting language for the approval rules engine, I took my existing code and fleshed it out significantly under the guidance of Schemer Norman Adams, thus satisfying a generalized version of Greenspun's Tenth Rule:

Any sufficiently complicated platform contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a functional programming language.

My completed thesis refers to the system simply as Script, but in 2002 it was open sourced under the name BDC Scheme.
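The shape of such an embedding can be conveyed by a toy s-expression evaluator in Java. This is a deliberately minimal sketch handling only integer arithmetic, and it is not BDC Scheme's actual implementation; the class name `MiniScheme` is hypothetical:

```java
import java.util.*;

// Toy s-expression evaluator for integer arithmetic, to convey the shape
// of embedding a Scheme-like interpreter in Java. Not BDC Scheme's code.
public class MiniScheme {
    public static long eval(String src) {
        return eval(parse(new LinkedList<>(tokenize(src))));
    }

    // Split source into "(", ")", and atom tokens.
    private static List<String> tokenize(String src) {
        return Arrays.asList(
            src.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+"));
    }

    // Parse one expression: an atom or a parenthesized list.
    private static Object parse(Deque<String> tokens) {
        String tok = tokens.removeFirst();
        if (tok.equals("(")) {
            List<Object> list = new ArrayList<>();
            while (!tokens.peekFirst().equals(")")) list.add(parse(tokens));
            tokens.removeFirst(); // discard ")"
            return list;
        }
        return tok; // atom: a number or an operator symbol
    }

    // Evaluate: numbers evaluate to themselves; lists apply an operator
    // left-to-right over the evaluated arguments.
    @SuppressWarnings("unchecked")
    private static long eval(Object expr) {
        if (expr instanceof String) return Long.parseLong((String) expr);
        List<Object> list = (List<Object>) expr;
        String op = (String) list.get(0);
        long acc = eval(list.get(1));
        for (int i = 2; i < list.size(); i++) {
            long arg = eval(list.get(i));
            switch (op) {
                case "+": acc += arg; break;
                case "-": acc -= arg; break;
                case "*": acc *= arg; break;
                default: throw new IllegalArgumentException("unknown op: " + op);
            }
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(eval("(+ 1 (* 2 3))")); // 7
    }
}
```

A real embedding adds environments, closures, tail calls, and Java interoperation, but the read-parse-eval core above is where such a project starts.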


Embedding Scheme in Java,
Brian D. Carlstrom,
MIT Master of Engineering Thesis, Cambridge, MA, February 2001.


BDC Scheme
Release 0.0, January 2002.


MIT's Undergraduate Research Opportunities Program (UROP) allowed me to work with three different research groups over my undergraduate years at MIT. I started out working at the Flight Transportation Laboratory for my freshman advisor in the aero-astro department, Prof. Robert Simpson. We worked with McDonnell Douglas for NASA to evaluate the economic viability of a proposed hypersonic transport aircraft. I worked primarily as a programmer, and although the pay was better in aero-astro, I eventually decided I wanted to move on to more interesting computer science research work.

My next UROP was with Prof. Tom Knight of the Transit Project. I worked on Liquid, a compiler and runtime system to automatically parallelize mostly-functional sequential programs, as described in Steven Davis's master's thesis. I worked largely on the Liquid Abstract Machine part of the system as well as the runtime environment. The compiler focused on extracting procedure-level parallelism from Scheme programs, but otherwise used concepts similar to later Thread-Level Speculation (TLS) and transactional memory systems, which is part of what later attracted me to transactional memory work at Stanford.

My final UROP was with Prof. Olin Shivers in the Personal Information Architecture group, which at the time was a joint group between the Laboratory for Computer Science and the Media Laboratory. My primary focus was on Scsh, the Scheme Shell; my main contribution was the networking interface. As part of our group's focus on Personal Information Architecture, Edward Olebe and I built a web browser for the Apple Newton using Scsh. The browser functionality was split between a NewtonScript client, implemented by Ed, that displayed preformatted lines, and a Scsh server, implemented by me, that acted as the actual HTTP client and parsed and formatted HTML+. The Newton connected to the server by dialing into a modem via an OKI cell phone, and we implemented our own SLIP-like protocol to help us identify legitimate requests and responses on the very noisy connection. Later I added a static linker that converted Scheme 48 byte code image files to C that could be compiled and linked into the virtual machine executable to reduce startup time. I was responsible for the first few years of releases and several ports to new operating systems and architectures. Scsh can be found prepackaged for systems such as Cygwin, Debian, and Ubuntu.


The scsh manual, release 0.3,
Olin Shivers, Brian D. Carlstrom,
MIT Laboratory for Computer Science, Cambridge, MA, 25 December 1994.


Scsh: The Scheme Shell
POSIX programming in the Scheme language.


My friend David LaMacchia and I were troubled by the poor scalability of MIT's Zephyr instant messaging servers. We started development on what might now be called a peer-to-peer system, with me focusing on the server and Dave working on the client. Further development happened as part of Prof. David Tennenhouse's 6.853 Computer Systems class. Eventually I wrote up my part for my Advanced Undergraduate Project, and David used his for his MEng thesis the following year.

Conversational messaging for the Internet,
Brian D. Carlstrom,
MIT Advanced Undergraduate Project.