I spent about a
full day at UW and a full day at MS Research (1/25-26, 2001). In both
places I gave a "state of the iRoom" overview talk, accompanied by
video clips. In both cases the talk was too long, but we're getting there; and
it was very well received. Here are some highlights from the visit.
Please read this in case there are implied action items for you here.
:-) Even if there aren't, I gleaned some good high level info about
what's going on in EasyLiving and Portolano that may save you from having to
read some papers.
Arnstein and perhaps one or two others from Portolano are scheduled to spend
the day of 3/14 with us. I would like to know if it's OK to invite
one or two EasyLiving people down as well - not just Steve Shafer, but some
of the others too.
- Maybe we
should invite Nuria Oliver (MSR) to give a talk about her work on using
dynamic (time-evolving) Bayesian networks to do fusion of sensor inputs and
infer user intentions; this seems relevant to the human-centered interaction
work in the iRoom.
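To make the DBN idea concrete, here's a rough sketch of the simplest version - a two-state hidden-Markov forward update that fuses noisy sensor events into a belief about what the user is doing. All the states, probabilities, and event names here are made up by me for illustration, not taken from Nuria's actual models:

```python
# Toy forward-filtering step in the spirit of dynamic Bayesian networks:
# infer a hidden user state ("working" vs "away") from noisy sensor events.
# Every number below is an invented illustration value.

STATES = ["working", "away"]
TRANS = {"working": {"working": 0.9, "away": 0.1},   # P(next | current)
         "away":    {"working": 0.2, "away": 0.8}}
EMIT = {"keystroke": {"working": 0.8, "away": 0.05},  # P(observation | state)
        "silence":   {"working": 0.2, "away": 0.95}}

def forward_step(belief, obs):
    # Predict forward through the transition model, then weight by the
    # likelihood of the observation, then renormalize.
    new = {}
    for s in STATES:
        predicted = sum(belief[p] * TRANS[p][s] for p in STATES)
        new[s] = EMIT[obs][s] * predicted
    total = sum(new.values())
    return {s: v / total for s, v in new.items()}

belief = {"working": 0.5, "away": 0.5}
for obs in ["keystroke", "keystroke", "silence"]:
    belief = forward_step(belief, obs)
print(max(belief, key=belief.get))  # -> working
```

Two keystroke events followed by one silence still leave "working" as the most probable state, which is the kind of smoothing over noisy sensors that makes this approach attractive.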
Portolano's One.world platform may be useful if we ever reimplement the
Event Heap. It would give us the possibilities of replication,
migration, and other nice reliability semantics we don't have. It also
makes some interesting assumptions about failure semantics for programs.
It's the subject of a HotOS submission, which I brought back; or email email@example.com
(Robert Grimm). On the other hand, Steve Shafer (MSR, EasyLiving)
thinks we may be able to reimplement the EH as a layer on top of a
"real" database, and this would solve all our performance problems
while still giving us all the same behaviors we get now. This is worth
discussing at a future iw-infra; Brad, sounds like you should think
about this since you will have to address this issue in your thesis...
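To make Shafer's suggestion concrete, here's a rough sketch of what an Event Heap-style put/get layered on a real database might look like (SQLite here; the schema, event types, and matching rule are my own illustration, not how the EH actually works):

```python
import json
import sqlite3
import time

# Hypothetical sketch: an Event Heap-style store layered on a real DB.
# Events are typed field dicts; get() matches on a template of field values.

class DbEventHeap:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(id INTEGER PRIMARY KEY, type TEXT, posted REAL, fields TEXT)")

    def put(self, event_type, **fields):
        self.db.execute(
            "INSERT INTO events (type, posted, fields) VALUES (?, ?, ?)",
            (event_type, time.time(), json.dumps(fields)))
        self.db.commit()

    def get(self, event_type, **template):
        # Return the oldest event of this type whose fields match the template.
        for _id, raw in self.db.execute(
                "SELECT id, fields FROM events WHERE type=? ORDER BY posted",
                (event_type,)):
            fields = json.loads(raw)
            if all(fields.get(k) == v for k, v in template.items()):
                return fields
        return None

heap = DbEventHeap()
heap.put("ProjectorEvent", action="on", room="iRoom")
evt = heap.get("ProjectorEvent", room="iRoom")
print(evt["action"])  # -> on
```

The point of the layering is that transactions, indexing, and persistence come for free from the DB instead of being reimplemented in the EH server.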
Sigurdsson is working on service composition for Labscape, Portolano's bio
lab scenario analogous to the iRoom (some of you saw the demo at WMCSA).
Some of what he's doing sounds like "event macros".
- The two major
themes/metaphors of Portolano:
(1) Associating physical objects with time,
place, and person using them. E.g. a smart pipette records not
only the volume of fluid moved, but who moved it, from which beaker,
and when. This makes later data mining, accounting, etc. possible.
(2) A purely data-centric
networking model - almost like dataflow with history.
"The devices just put data out there, and the data finds its
own way." This is very similar to the vision for ADS,
and we should explore crossover between those projects.
EasyLiving has an "event like"
system for sensor fusion, but it's not as general or as exposed to
applications as ours. See description below.
I have beta software from MS for
".net", their new service composition framework. We
should play around with it to compare it to other industrial approaches. I
imagine a lot of what's in it is SOAP or XML-RPC type support.
Supposedly, terraserver.microsoft.net (or maybe http://www.terraserver.net)
is a .net-based implementation of the TerraServer, available for public
use now. This would be a great element to use in service
composition and/or CS241 projects.
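For reference, the XML-RPC style of service call that I suspect .net wraps looks roughly like this (a toy self-hosted service using Python's standard library; the "compose" method is purely hypothetical):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Toy XML-RPC service and client in one process. The "compose" method is
# a hypothetical stand-in for a real composable service endpoint.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]  # OS-assigned free port
server.register_function(lambda a, b: a + b, "compose")

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# The client just calls a remote method as if it were local; arguments
# and results travel as XML over HTTP.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.compose(2, 3)
print(result)  # -> 5
server.shutdown()
```

Whatever .net adds on top, the basic shape - typed method calls marshalled over HTTP - is likely similar, so this is a useful baseline for comparison.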
Everybody was interested in learning more
about the HCI/Mural aspects of the iRoom (one guy at MSR was formerly of
Kai Li's DisplayWall group at Princeton), and the
video/classroom-of-the-future aspects. They were also pretty
impressed by how far we have come.
We need to do a better job of trying to
articulate the research questions in each area as part of the overview
talk. Perhaps a list of PhD thesis titles/topics would help here.
But overall the talk was well received. I will be posting my
slides (both long and short versions; 1:15 and 0:45) on the iRoom web
site. The videos were very helpful and will be
even better once integrated as clips into the slides. Thanks,
I got to see a
short demo of EasyLiving. Imagine a living room with couches, a wall
size display, a coffee table, a stereo, and a "sign in area"
consisting of a pressure sensitive mat, a small display, and a thumbprint
reader. There are triocular stereo cameras at various places around the
room, and one overhead looking down at the coffee table.
- Person steps
on mat and puts thumb on reader. "Hello, Brian". If
person is not known, they are a generic "guest".
- Brian walks
over to couch and sits down. Pressure pads on couch detect this and
Brian's "terminal session" (Windows desktop) appears on screen,
since the camera has been tracking him and recognizes he is the one sitting
down.
- Similarly, if
he takes the keyboard, cameras detect that it is he who has come close to
the keyboard and keystrokes are sent to his session. If he passes the
keyboard to Steve, cameras detect this as well, and screen switches to
Steve's session and keystrokes go there.
- Sessions can
also explicitly be sent to other displays by the user. This overrides
"implicit" behaviors in the demo.
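Here's a rough sketch of the routing rule the demo seemed to implement (positions collapsed to one dimension and all names invented by me):

```python
# Hypothetical sketch of the demo's input routing: keystrokes go to the
# session of whoever the cameras say is nearest the keyboard, unless a
# user has explicitly pinned a session, which overrides implicit behavior.

class Room:
    def __init__(self):
        self.tracker = {}           # person -> tracked position (1-D here)
        self.explicit_owner = None  # set when a user explicitly routes a session

    def route_keystroke(self, keyboard_pos):
        if self.explicit_owner:     # explicit routing wins over implicit
            return self.explicit_owner
        # Implicit: the nearest tracked person owns the keyboard right now.
        return min(self.tracker, key=lambda p: abs(self.tracker[p] - keyboard_pos))

room = Room()
room.tracker = {"Brian": 1.0, "Steve": 4.0}
owner1 = room.route_keystroke(keyboard_pos=1.5)
room.tracker["Steve"] = 1.2        # Brian hands the keyboard to Steve
owner2 = room.route_keystroke(keyboard_pos=1.5)
print(owner1, owner2)  # -> Brian Steve
```

The interesting design point is that the session follows the person via tracking, with explicit commands as an escape hatch.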
Basically the way
this works is that all the sensors feed sensor fusion algorithms connected to
various models. E.g. there is a geometric model of where things are in
the room, a model of which persons are in the room, etc. Individual apps
can query these models to ask various questions; also the models can
extract/synthesize interesting "state change" events from raw sensor
inputs, e.g. "Brian has just moved within the extent of the
keyboard". Apps can also subscribe to receive events from specific
sensors, but this isn't the general way apps are written since events must be
explicitly addressed to destination(s) by the publisher. The distributed
system that all this stuff is built on is called InConcert. It wasn't
100% clear how it works beyond this simple description; my sense is that it is
not as general a framework at this point as the iRoom, and the apps/demos are
correspondingly more limited.
Robert Grimm, One.world (subject of a HotOS
submission, worth reading)
systems support for pervasive computing. 1)
traditional distrib sys/obj approaches don't scale to worldwide pervasive
computing. 2) programming for change: existing DS abstractions like RPC
don't work. instead want
to write apps that "are prepared to re-acquire any resource at any
time". (related to leases; and implies apps are prepared to lose any
resource at any time! a new programming discipline required of app
writers.) 2 ways to help app
writers meet this challenge:
1. as INS demonstrated, late binding &
discovery are a good idea in dynamic systems. PvC is a 2d space where
axes are "degree of dynamism" and "size of network"; their
API's express this. (HotOS paper has a
graph with these axes showing how existing distrib sys and PvC frameworks cover
different regions of this space.) eg discovery vs. lookup
is just treated as different API's to the lookup mechanism, depending on where
you sit in this space.
2. if programmers write apps this way, we can do
checkpointing and process migration. an "environment" can contain
active computation thread(s) and persistently stored tuples (and environments
nest). a checkpoint is taking the active computation tree, serializing
it, and checkpointing the tuples. each env has a main event loop that
calls you; the apps are written to respond to the main event loop.
things like connections to outside services ("External state") are
not checkpointed, since it's expected that the app can reacquire those later
when it comes back up. current impl serializes all the java objects.
checkpoint waits for the environment to quiesce before checkpointing and
killing it.
Interesting ugrad project course: each project is
implemented in parallel in one.world and something else (jini, etc) to
compare them.
Potential collaboration: new
Eheap server implementation on top of One.world; would allow scaling,
replication, failover, checkpointing/migration, etc.
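A rough sketch of the checkpoint discipline as I understood it (pickle standing in for One.world's Java serialization; the environment structure and field names are invented by me):

```python
import pickle

# Hypothetical sketch of the One.world checkpoint discipline: an
# "environment" holds persistent tuples plus app state; external state
# (connections etc.) is deliberately NOT checkpointed and must be
# reacquired by the app when it comes back up.

class Environment:
    def __init__(self):
        self.tuples = []            # persistent tuples: checkpointed
        self.counter = 0            # app state: checkpointed
        self.connection = object()  # external state: dropped on checkpoint

    def checkpoint(self):
        # Serialize only the durable parts once the env has quiesced.
        return pickle.dumps({"tuples": self.tuples, "counter": self.counter})

    @classmethod
    def restore(cls, blob):
        env = cls()
        state = pickle.loads(blob)
        env.tuples, env.counter = state["tuples"], state["counter"]
        # Apps are written to reacquire external resources at any time.
        env.connection = object()
        return env

env = Environment()
env.tuples.append(("sensor", 42))
env.counter = 7
blob = env.checkpoint()
restored = Environment.restore(blob)
print(restored.counter, restored.tuples)  # -> 7 [('sensor', 42)]
```

Migration falls out of the same mechanism: ship the blob to another host and restore there.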
Stebbi Sigurdsson, Service composition for LabScape
RPC queues on top of events: higher level tasks
are broken down into event lists, which are executed in strict serial (not
async) order to maintain a sense of where the locus of control is.
second goal: separate concerns of applications,
administration (data formats of different vendors), etc. use an
event-based approach and install "user handlers" to do whatever the
task at hand is. the user handler is just the abovementioned task.
reference impl of pub/sub substrate is one.world. the demo at wmcsa used
an older version of this framework.
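My rough reading of the "RPC queues on top of events" idea, as a sketch (the event names and handlers are invented by me, not Stebbi's):

```python
# Hypothetical sketch of RPC queues on top of events: a high-level task
# is broken into an event list executed in strict serial order, so each
# step completes before the next begins and the locus of control is
# always well defined.

def run_task(event_list, handlers, log):
    for event in event_list:
        log.append(f"start {event}")
        handlers[event]()          # synchronous: no overlap between steps
        log.append(f"done {event}")

log = []
handlers = {
    "aliquot": lambda: log.append("  moved 10ul"),
    "record":  lambda: log.append("  wrote metadata"),
}
run_task(["aliquot", "record"], handlers, log)
print(log[0], log[-1])  # -> start aliquot done record
```

The "user handler" in Stebbi's framework would plug in where the handlers dict is here, keeping the application logic separate from the sequencing machinery.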
Gaetano Borriello - Portolano
technologies, scenarios, themes
Experimental preschool - active monitoring
(badges, etc) to collect data (today your kid spent 30% of her time doing X),
data mining, cross correlation to video feed, etc. (better than just a webcam).
Distributed CSCW - some iRoom-like apps, but where
people are not geographically collocated. building a space on Univ Way
to explore this. They would consider using our stuff. Might be a
forcing function to look beyond connected-room stuff.
Should point Milton to look at the virtual-spaces work.
The Portolano world: 3 types of people.
Device makers, high level app makers, administrator/coordinator of the
"space" (physical or virtual). usually these will be separate
parties who don't know about each other, so must separate concerns in systems
infrastructure. example: the Cell Structure Initiative at UW is using
Labscape as their prototype env for experiments. Main impediment in
large-scale genomics etc. is that people don't do well at leveraging each
others' work in the community. one possibility is to use Labscape to
audit/log everything: then you can reproduce experiments, do data mining, etc.
Current challenges: (a) data representation (currently a knowledge model for
biology experiments, called Metagraph); (b) experiment capture, which
populates the Metagraph DB - that's what Labscape does. can also capture
menial things ("The results are screwed up
because I was using a contaminated pipette, since I can see this
pipette was used for experiment X last week").
One Portolano Theme: associating
physical objects with time, place, and person using them.
Second theme: purely data-centric
networking model - almost like dataflow with history. "The devices
just put data out there, and the data finds its own way."
A lot of similarities between
Portolano/Labscape and ADS! See Gaetano's November 2000 IEEE
Computer article (2-pager on invisible computing); main difference is the
aggregation of data to account for the fact that things like devices might not
know the userid of who is using them, etc.
Mobicom submission: using a tuplespace as an
active cache for intermittently-connected mobile devices. Will be built
on one.world eventually but not right now.
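The active-cache idea, sketched (a plain list stands in for the shared tuplespace; all names are my invention, not theirs):

```python
# Hypothetical sketch of a tuplespace used as an active cache for an
# intermittently connected device: writes land in the local space while
# offline and are drained to the shared space on reconnect.

class CachingSpace:
    def __init__(self, shared):
        self.shared = shared    # the well-connected tuplespace (a list here)
        self.local = []         # device-local cache of pending tuples
        self.connected = False

    def write(self, tup):
        if self.connected:
            self.shared.append(tup)
        else:
            self.local.append(tup)   # buffer until connectivity returns

    def reconnect(self):
        # Drain the cache in order once the device is back on the network.
        self.connected = True
        self.shared.extend(self.local)
        self.local.clear()

shared = []
dev = CachingSpace(shared)
dev.write(("reading", 1))       # offline: cached locally
dev.write(("reading", 2))
dev.reconnect()
print(shared)  # -> [('reading', 1), ('reading', 2)]
```

Because tuples are self-describing and order-insensitive at the application level, this kind of store-and-forward works without the app having to know it was ever disconnected.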
A hypothesis - the only time we
REALLY do re-use in CS is when the reuse is in the form of reusable code
artifacts. but, we haven't succeeded in finding a common representation
that is open enough to tweak, but closed enough to work off-the-shelf.
hence people rarely reuse others' code, and leverage doesn't happen.
this is now improving (the Java VM is a standard environment, Perl CPAN, etc.)
but still a long way to go. the HotOS composition paper is really about this
problem. Think about Perl: it focused on keeping datatypes
simple, having a lot of esoteric casting rules, etc, and loose calling
conventions! that's what made modules work!
New research lab: Intel/UW.
Gaetano is 50/50. The lab will work exclusively by finding projects on
which it can collaborate with a university. the first such is Portolano.
Facility will house ~50 people three blocks from
UW. They are actively looking for
recruits immediately, ie new PhD's.
Comments from MSR: we should make the research
questions more explicit - maybe a list of PhD thesis topics in each area.
More about the event system:
EasyLiving has a similar mechanism that they use for "traditional"
pub/sub (ie. publishing is activated when subscriptions come, because it might
be expensive) - but they built everything on top of a traditional DB.
Claim - even if you have to work around data model deficiencies, that's easier
than implementing your own and it gives you everything else you need (and some
things you don't). Question: should the EH be a layer
on top of a DB?
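A sketch of the subscription-activated publishing pattern they described (the source-startup hook and names are my invention):

```python
# Hypothetical sketch of subscription-activated publishing: the (possibly
# expensive) event source is only turned on once a subscriber exists, so
# nobody pays for events no one is listening to.

class LazyTopic:
    def __init__(self, start_source):
        self.start_source = start_source  # e.g. spin up a sensor pipeline
        self.subscribers = []
        self.active = False

    def subscribe(self, callback):
        self.subscribers.append(callback)
        if not self.active:
            self.active = True
            self.start_source()   # pay the cost only when someone cares

    def publish(self, event):
        if self.active:
            for cb in self.subscribers:
                cb(event)

started, got = [], []
topic = LazyTopic(lambda: started.append("source on"))
topic.publish("dropped")        # no subscribers yet: source off, nothing sent
topic.subscribe(got.append)
topic.publish("delivered")
print(started, got)  # -> ['source on'] ['delivered']
```

On a DB-backed Event Heap the same trigger could gate expensive queries or sensor polling on the existence of matching subscriptions.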
Ross Cutler: Cool
camera demo: multiple cameras mounted around a circle, (soon) with
integrated mikes. Automatic scene-stitching software can create a
panorama; they can also do sound localization and use that to automatically
control multiple-frame views (zoom in on speaker, etc) using 1394/DirectShow
interface. A few universities are slated to get these for Internet-2
work - Ross Cutler (the guy working on this) thinks we may be one of them.
This strikes me as a potentially very cool thing to have for experimenting
with sensor fusion, identifying context about users that could be used for
the interface, etc. Similarly, the algorithms and filters for sound
source localization, etc. are stacked using DirectShow - everything is just
expressed as a filter graph. More info: Anoop's group homepage at MSR
should contain something about this by about Monday.
A lot of
sensor-collection, sensor-fusion work going on, including reasoning about user
actions based on sensed events using dynamic Bayesian networks (Nuria Oliver),
tracking the positions of people, objects and furniture in the EasyLiving room
using stereo and overhead-mounted cameras (John Krumm), etc. They have a
distributed system called InConcert that lets you write apps that can
"receive events" (sort of). 2 methods:
"events" (messages) can be generated by sensors/observers, but must
be addressed to specific entities (essentially, to specific apps) that have
registered callback functions to react to those messages. Similar to the
old Event model for the iRoom.
- there are
various models (geometry model of where things are, model of where people are,
etc.) that can be passively queried by applications that would care.
analog sensors generate data streams received by these models; the models
extract "interesting" state change events from these, i.e. person X
has come within the extent of keyboard Y.
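A sketch of the model-layer idea - raw position updates come in, and the model emits only the interesting state changes (the extent, names, and API are all invented by me):

```python
# Hypothetical sketch of a geometric model that synthesizes state-change
# events ("Brian entered the extent of the keyboard") from raw position
# updates, so apps subscribe to the model rather than to raw sensors.

KEYBOARD_EXTENT = (0.0, 1.0)  # assumed 1-D extent for illustration

class GeometryModel:
    def __init__(self):
        self.inside = {}        # person -> currently within the extent?
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def raw_update(self, person, x):
        lo, hi = KEYBOARD_EXTENT
        now_inside = lo <= x <= hi
        # Emit an event only on a state *change*, not on every raw reading.
        if now_inside != self.inside.get(person, False):
            self.inside[person] = now_inside
            verb = "entered" if now_inside else "left"
            for cb in self.subscribers:
                cb(f"{person} {verb} the extent of the keyboard")

events = []
model = GeometryModel()
model.subscribe(events.append)
model.raw_update("Brian", 2.0)   # outside: no event
model.raw_update("Brian", 0.5)   # enters: one event
model.raw_update("Brian", 0.6)   # still inside: no event
print(events)  # -> ['Brian entered the extent of the keyboard']
```

The filtering is the whole point: apps see a handful of meaningful transitions instead of a firehose of raw camera coordinates.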