Ubicomp 01 notes
UbiTools 01 workshop talks
- Generalization (Danny Soroker, IBM): to make apps with consistent but
device-specific UI's, first generalize the app's behavior at a pretty
high level. This is similar to what iCrafter does, but these guys do
it by viewing the app as a series of tasks among which the user navigates; it
is the task boundaries and navigations that are captured in the
generalizations, and from which device-specific UI's are generated.
Their "interaction elements" include things like
"TextInput", "MultipleSelection", etc. rather than
specific widgets. They have done some work in "reverse
engineering" (extracting the generality from) existing HTML interfaces.
- Phidgets: physical widgets, for facilitating creation of physical
UI's. The actual physical thing (sensor, actuator, whatever) is
mounted on a controller board which exports a standardized HW (+USB comm)
and SW interface to talk to a PC; a screen representation of it can be
manipulated just like one of several widgets in a collection (screenshot
showed this using MS Visual Basic widget-based GUI editor), and you
can attach little scripts to it, etc. In essence it behaves just like
a soft widget. Demo showed a rotating servomotor; the on-screen
representation of the widget has controls that let you control the motor
directly or cause scripts to control it. Very cool. It shouldn't
be hard for someone with a little hardware/digital design know-how to
replace the USB comm with Bluetooth or 802.11 transceiver+chipset.
Even with the wired interface, this would make integrating proximity
sensors, etc. a lot easier. Interestingly, during the demo one of the
USB cables came unplugged and put all the S/W into a weird state, suggesting that
there are stability issues. Video clips of more phidget demos using
servomotor phidgets: http://www.cpsc.ucalgary.ca/grouplab/developers/USBPhidgets
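A minimal sketch of the "soft widget" behavior described above — a servo phidget exposed as an object with a settable property and attachable scripts. All names here (Servo, on_change) are hypothetical illustrations, not the actual Phidgets API:

```python
# Sketch of the "physical widget" idea: a servomotor phidget that behaves
# like a soft widget. Setting .position would be forwarded over USB to the
# controller board in a real system; here it just clamps and notifies.

class Servo:
    """Stand-in for a servomotor phidget (hypothetical API)."""
    def __init__(self):
        self._position = 0.0
        self._listeners = []

    @property
    def position(self):
        return self._position

    @position.setter
    def position(self, degrees):
        # clamp to the servo's physical range, then run attached scripts
        self._position = max(0.0, min(180.0, degrees))
        for fn in self._listeners:
            fn(self._position)

    def on_change(self, fn):
        """Attach a little script, as in the VB demo."""
        self._listeners.append(fn)

servo = Servo()
servo.on_change(lambda deg: print(f"servo moved to {deg} degrees"))
servo.position = 90    # direct control, as in the demo
servo.position = 200   # out-of-range request gets clamped to 180
```

The point of the abstraction is that the same object works under direct manipulation or under script control, exactly like any other widget in the GUI editor's palette.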
- m-p@gent ("emPAYjent"): to make an
app that works across big, small, etc devices, split
app into 3 parts. Core
implements app logic; add-on modules implement platform-dependent behavior for
each platform/env; profiles define the mapping from each platform/env to
which modules to use. Maybe related to Danny Soroker's Generalization
talk? Concept seems useful but demo examples (migrating an email
composition-in-progress to a Palm, and using a Palm to turn off a light
remotely) are uncompelling. They seem to move the whole execution
state in Java (rather than just moving the application state into an
appropriate app on the small device); for the examples they showed, there
are much easier ways to do this; for other examples, it seems like it
wouldn't be any harder to write separate apps and migrate the state than to
use their approach of also migrating the code.
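The three-part split (core / add-on modules / profiles) might be sketched like this; the platform and module names are invented for illustration:

```python
# Sketch of m-p@gent's split: the core is platform-independent, and a
# profile table tells it which add-on modules to load on each
# platform/env. All names below are hypothetical.

PROFILES = {
    "desktop": ["rich_ui", "local_mail_store"],
    "palm":    ["small_screen_ui", "sync_on_cradle"],
}

def modules_for(platform):
    """Return the add-on modules the core should load on this platform."""
    return PROFILES.get(platform, [])

print(modules_for("palm"))  # the Palm gets the small-screen adaptations
```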
- One.world: every component consists of fairly short-lived event
handlers. To checkpoint, stop dispatching events, wait for current
handlers to complete, then save state of eventing system. The unit of
checkpointing is the environment (in the Scheme sense), which holds
code (components) and data (tuples). Environments nest, and migrating
an environment migrates all its children. For this to work, you have
to structure your apps as a group of event handlers whose only persistent
storage is tuples. For things like connections, etc, there are leases;
so if you move, the leases eventually expire and you are forced to have
code that deals with this and renews your connection. Event handler
bindings are also leased, so if after a while you can't reach a receiver,
the lease goes away. (Why leases for this? Seems like a
disruptive move would have to cause an upcall.) Main concept in
coding event handlers: "logic" denotes computations that "do not
fail" (eg CPU-local stuff); "operations" are things that may
fail (read from I/O channel, remote event, etc; implementation provides
timeouts and retries).
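A toy Python rendering of the logic/operation split (one.world itself is Java; the `operation` wrapper and its retry policy below are my own sketch of the described behavior, not its actual API):

```python
# "Logic" runs directly; an "operation" (anything that can fail, like a
# read from an I/O channel or a remote event) gets timeouts and retries
# from the runtime. Here the runtime is reduced to a retry wrapper.

import time

def operation(fn, retries=3, delay=0.01):
    """Run fn, retrying on failure, the way the one.world runtime is
    described as doing for operations."""
    last = None
    for _ in range(retries):
        try:
            return fn()
        except Exception as exc:
            last = exc
            time.sleep(delay)
    raise last

attempts = {"n": 0}
def flaky_read():
    """Simulated I/O channel that only succeeds on the third try."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("channel not ready")
    return "tuple-data"

print(operation(flaky_read))   # succeeds after two retried failures
```

The design consequence mentioned in the talk follows directly: since any operation can fail (eg after a migration invalidates a lease), apps must keep their persistent state in tuples and be prepared to re-acquire connections.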
- UIUC Active Spaces has a lot of the same design/scenario goals as the
iRoom. They argue for Gaia, a new OS that exports and coordinates
resources contained in a physical space (house, car, etc). They
lost me right here, since he didn't say why he thought a whole new OS was
needed. But it turns out it's not really an OS, it's middleware that
runs on top of another OS; like what we call the "meta-OS" that is
the iRoom. Gaia addresses everything from lowest level loadable
kernel modules to app components and a coord mechanism layered on top of the
apps. Evt manager (like EH), discovery service (like part of
icrafter), Space Repository (like RoomDB/CMX), Security service (we have no
equivalent), Data object service (like Data Heap). App model is
M-V-C. Would be instructive for someone to do a point-by-point
comparison! Maybe as part of an interactive spaces survey paper.
- Cliff Randell and Henk Muller, U. Bristol (UK): Low-cost indoor
positioning system to give location-in-a-room accuracy to within ~60cm, at a
cost of US$150. Each room is fitted with a cheap FM shortrange
transmitter (can sit anywhere) and four ultrasonic "chirpers" (one
in each corner). The receiving device is fitted with an FM receiver
and ultrasonic receivers. Under software control, every few millisecs
the FM transmitter sends a "sync" blip, followed in precise timing
sequence by a chirp from each ultrasonic transmitter. The receiver
receives all of these, and uses relative timings of the chirpers to
determine 3D position. (Four data points are used to mitigate
occlusion, smooth noise, etc.) Because of the sync pulse, the system
requires no recalibration after initial install. Positioning ability
seems superior to systems using (eg) multiple WaveLAN readings only; one of
these was presented as well, and it has a lot of noise/stability/confidence
problems. But of course the present system requires support in each
receiver as well as in the room. Gaetano points out that in an
empty room this is great, but in a full or cluttered room, since
ultrasonic requires LOS, you get all kinds of bouncing off of people, chairs,
etc. The talk/paper don't present results that explicitly take this
into account.
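The geometry behind the four-chirper scheme can be sketched as follows: the sync blip gives a common time origin, so each chirp's arrival delay yields a distance (speed of sound ~343 m/s), and subtracting one distance equation from the other three linearizes 3D position recovery into a 3x3 system. The beacon coordinates below are invented; the paper's actual solver may differ:

```python
# Position from chirp timings: distances d_i = c * t_i from four known
# chirper positions over-determine (x, y, z). Subtracting beacon 0's
# distance equation from the others gives 3 linear equations:
#   2 p . (p_i - p_0) = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2

C = 343.0  # speed of sound in air, m/s

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    out = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = b[i]
        out.append(det(m) / d)
    return out

def locate(beacons, delays):
    """beacons: four (x,y,z) chirper positions (m); delays: seconds from
    the sync blip to each chirp's arrival."""
    d = [C * t for t in delays]
    x0, y0, z0 = beacons[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(beacons[1:], d[1:]):
        A.append([2*(xi - x0), 2*(yi - y0), 2*(zi - z0)])
        b.append(d[0]**2 - di**2
                 + xi*xi + yi*yi + zi*zi - (x0*x0 + y0*y0 + z0*z0))
    return solve3(A, b)

# Synthetic check: exact delays from a known position are inverted exactly.
beacons = [(0, 0, 2.5), (4, 0, 2.5), (0, 4, 2.5), (4, 4, 0)]
true_p = (1.5, 2.0, 1.0)
delays = [sum((p - q)**2 for p, q in zip(true_p, bc))**0.5 / C
          for bc in beacons]
x, y, z = locate(beacons, delays)
print(round(x, 2), round(y, 2), round(z, 2))
```

With real chirp timings the four distances are noisy and inconsistent, which is presumably where the fourth beacon's redundancy (smoothing, occlusion mitigation) comes in.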
- KTH Active Space has a lot of very similar functions as well; they have
something that looks like Multibrowsing, something that is like the Butler,
etc. They focus more on automatically doing some things by inference
of context -- eg if 2 people are in the room who are supposed to be having a
meeting, any "active documents" tagged as being relevant to that
meeting automatically bring themselves forward and display themselves on a
big screen. They have done some usability observations (if not
rigorous studies), main conclusions being that people find the env. fun and
intuitive, that technical breakdowns are frustrating, and that having the
facility alters people's work patterns. We should learn more about
this from our Swedish guests at the retreat. See http://www.dsv.su.se/fuse
for talk and details.
- Privacy beacon. Marc
Langheinrich, ETH Zurich, Privacy in ubicomp. What's different about
ubicomp? Don't know when you are being surveilled (sensors/cameras
invisible/unobtrusive), or giving out data (eg aware coffee cup records your
fingerprint), and data collection is more precise, finer-grained, and more
continuous in coverage than in the past. Sometimes the system knows your moods
better than you (eg biomonitoring to detect that you're nervous); also large
data collection enables data mining, which exposes patterns not obvious in
individual data. How to approach privacy in ubicomp:
- not just true consent, but true choice (more than "take it or
leave it"), for systems/interfaces that collect data; provide
conditional service (lower-grade service baseline, higher-grade service
if user willingly allows data collection).
- Need UI's that can ask w/o a screen.
- Can use anonymity/pseudonymity as baseline; but sometimes (eg cameras,
mikes) cannot mask identity except deliberately.
- No spying: aware devices enabled only when used by owner (or owner
nearby)
- Local information stays local; appliances can talk to each other but
not to outside world (eg)
- Context-specific security (eg certain personnel need free access to
medical data during emergency)
- Personally-identifying data must be accessible (esp to owner) and
auditable (who collected & why, who has access & why); consider
how much such data is really needed for a given app
- Example design: a "privacy beacon" in a ubicomp room
could broadcast (to each user's PDA as they walk in) what is being
collected and why; user can then (via Internet) get more details about
what is being collected, by talking to the infrastructural counterpart of
each ubicomp sensor (eg), interact w/more detailed/finer-grained privacy
policies, etc.
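As one guess at what such a beacon broadcast might contain — the fields, room name, and policy URLs below are entirely invented for illustration, not from the talk:

```python
# Hypothetical "privacy beacon" announcement: per-sensor records of what
# is collected and why, plus a URL the user's PDA can follow for
# finer-grained policy negotiation.

import json

announcement = {
    "room": "ubicomp-lab-1",
    "collectors": [
        {"sensor": "ceiling-camera-3", "data": "video",
         "purpose": "meeting capture",
         "details": "http://example.org/policies/camera-3"},
        {"sensor": "badge-reader", "data": "identity",
         "purpose": "access logging",
         "details": "http://example.org/policies/badge"},
    ],
}

# Broadcast to each PDA as its user walks in.
payload = json.dumps(announcement)
```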
- Keith Edwards, PARC: Challenges of ubicomp in the
home
- Homes will become smart gradually and accidentally, not be designed
that way; technology accretes slowly. So avoid
"invisible" interfaces, provide affordances at least equal to
physical systems.
- Unplanned/serendipitous i14y should be a design goal; given that
desktop computing is a mess, it's unlikely that systems relying on
"designed-in" interoperability will be robust.
- No administration: simple single-function devices, obvious
affordances, call expert if something wrong (this model doesn't even
work today for consumer electronics! Works for CATV/cable
modem/Tivo: dumb appliance administered over network)
- People adopt technologies in unforeseen ways: eg phone. How to
design technology for "unlikely uses"? Do field studies
w/prototypes. Becky Grinter (coauthor) is doing user studies on
how teenagers use IM; ??Hughes (coauthor) investigating how people use
set-top boxen.
- Social implications of domestic tech: cell phone use, TV
ratings...lesson: even "simple" technologies can have
far-reaching effects.
- Robustness/reliability. We should cite
this in future iwork papers. Differences in design
culture, expectation setting (Consumer Reports), regulatory
oversight. "Reliable systems do exist but they tend to be
those that have reliability designed in, from the ground up."
Challenge: bring culture of reliability to our systems, broaden that
debate beyond our research community.
- Inference in presence of ambiguity: how "smart" will smart
homes really be? Most systems don't DWIM even in their limited
domains! Challenge: understand design constraints in absence of
oracle.
- Joe Paradiso, MIT Media Lab: wireless self-powered
remote-control button - powered by its own activation. Piezo
striker (adapted from butane lighter) sends a spark thru a coil, that's
where the power is extracted from (similar to mechanical remotes).
Provides stable +3V/0.5mJ on one push, enough to send a 12-bit RFID 6-8 times
over a 50ft radius. Impressive demo - he sent a signal from
podium to back of room (~75ft) with two different RFID's to make two
different sounds on a receiver. Weighs 8g, built for $5, could be
made smaller w/surface mount, being productized by an unnamed MLM partner.
- Brad Myers, CMU, Semantic Snarfing. Motivation: want to allow use of
laser pointer to do tracking, pointing, etc, but laser pointers are jittery
and camera systems for this don't work well. (User study on their
website, but not in paper, on laser pointer use.) Semantic snarfing:
use laser pointer or Palm or whatever to circumscribe area of interest, grab
contents to clipboard, edit in another app, move data back to main
display. Note funny and appropriate definition of "snarf"
in New Hackers Dictionary - grabbing a chunk of a file/doc without the
approval/knowledge of its owner. Semantic snarfing grabs the
underlying structured content, not the screen bits; if you snarf text (works
with most Win apps), you get a text editor widget on handheld; if you snarf
a menu, the menus are reproduced on the handheld and you can actually use
them (they route the appropriate command back to Windows using either OLE/OA
or posting phony events when you make a menu choice). We should
download and use this if possible, and verify it coexists with our
stuff; it's useful enough that I can imagine a lot of people wanting
it. Future work: snarfing physical control panels of A/V
equipt. Snarfing is faster than mouse, but higher user error rate
in selecting region. We should be able to do
something like this w/Data Heap/Smart Clip, and should definitely cite this
if we haven't already in papers where we talk about Smart Clip. Currently
their version is not modular, but a hack, with special case code (not
plug-ins) for each datatype.
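Their special-case code is exactly what a plug-in registry would fix; a minimal sketch of what "modular snarfing" could look like (all handler and field names invented):

```python
# Sketch of a plug-in registry for snarfers: one handler per datatype,
# registered by decorator, instead of special-case code in the core.

SNARFERS = {}

def snarfer(content_type):
    """Register a handler that turns snarfed screen content of one type
    into editable structure for the handheld."""
    def register(fn):
        SNARFERS[content_type] = fn
        return fn
    return register

@snarfer("text")
def snarf_text(region):
    # text gets a text-editor widget on the handheld
    return {"widget": "text_editor", "content": region["chars"]}

@snarfer("menu")
def snarf_menu(region):
    # menus are reproduced on the handheld; choices would be routed
    # back to the host app (eg via OLE or posted events)
    return {"widget": "menu", "items": region["items"]}

def snarf(region):
    """Dispatch a circumscribed region to the right plug-in."""
    return SNARFERS[region["type"]](region)

print(snarf({"type": "text", "chars": "hello"}))
```

Adding a new datatype is then just registering a new handler, with no changes to the dispatch core.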
- IBM Research (Hawthorne? Almaden?) creating large touch-sensitive
displays anywhere: (Some aspects of this were presented at SIGGRAPH
2001) Use projectors to project images onto walls, floors, etc. Use
oblique projection angle (rather than perpendicular) to decrease the volume
in which an obstacle would occlude image; to compensate, do image
processing on the original image to correct for the oblique projector angle.
Could also control rotation and scale this way, according to user
preference. This is homography-based correction and can be shown to
require only 4 points to do the 3D correction. Use camera and similar
homography technique to allow using your hand as a pointer: use motion
detection to track a moving thing, edge-detect and grayscale-histogram the
region bounding the moving thing, match a grayscale template to a known
pattern for fingertip. This sounds hard, and it wasn't clear how well
it works for them in practice; I had to leave early, before the questions.
Steelcase is a partner in their "Bluespace - Office of the
future" prototype, in which a corner-mounted projector in a cubicle
can turn any cubicle surface into a display. Other uses: silent
message notification; project patterns on cubicle floor entranceway to
indicate out of office, no interrupts, etc.
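The 4-point claim is standard: a planar homography has 8 degrees of freedom, so four point correspondences pin it down. A self-contained sketch (not their code; plain Gaussian elimination, invented example coordinates):

```python
# Homography from 4 correspondences: with h22 fixed to 1, each pair
# (x,y) -> (u,v) gives two linear equations in the 8 unknowns:
#   h0 x + h1 y + h2 - u(h6 x + h7 y) = u
#   h3 x + h4 y + h5 - v(h6 x + h7 y) = v

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """src, dst: four (x, y) pairs. Returns 3x3 H mapping src to dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    """Apply H to point p with the projective divide."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Map the unit square onto a keystoned quadrilateral, as an oblique
# projector would see it; pre-warping with H's inverse undoes this.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.2), (1.8, 1.4), (0.1, 1.2)]
H = homography(src, dst)
u, v = apply_h(H, (1, 1))
print(round(u, 3), round(v, 3))
```

The projector-side correction is just this map (or its inverse) applied to the framebuffer; rotation and scale to user preference fall out of choosing different target corners.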
Recommendations for the iRoom project.
- We need more publicity! We're in very good shape compared to a lot
of projects I saw, and are one of the relatively few that has a working
facility (and even fewer that have outside customers), but we must be more
proactive in both evangelizing and getting other facilities using our
software. Implication: it remains very important for releases to be
stable and to incorporate as much as possible of the functionality we
ourselves use most, and for us to support our "semi-external"
(other depts. in Stanford) customers to make them happy showcases for our
stuff.
- We should emphasize which aspects of our project distinguish it from
similar projects, particularly UIUC Active Space, KTH Active Space, and MIT
Intelligent Room. Reusable software, first-class systems concerns, and
support for both creation of new software and integration of legacy stuff
come to mind for me (with my systems bias). Ideally, our website
should have links to related work with a one-sentence summary of how we are
different.
- We need a snappy website including video clip based demos, etc. The
best talks had either live demos or video clips, which quickly convey what
is cool/new about a piece of work. Implication: we should make sure
this is prioritized high enough that we dedicate real (read: paid) resources
to it.
- We need a reading/discussion group of some sort to go over progress like
this, so iWork folks are aware of what others are doing...work is being
produced at a rapid rate and it's a challenge to keep up, so we should
distribute responsibility.