SuperWeb: Research Issues in Java-Based Global Computing
Albert D. Alexandrov, Maximilian Ibel, Klaus E. Schauser, and Chris J. Scheiman
Summary by: Steve Gribble
SuperWeb is a Java-based "Internet computing" brokerage
infrastructure that allows hosts to register available computational cycles
with broker agents, and clients to access those available computational
cycles. Security, trust, cost, communications, etc. are identified as
the main technical challenges to this infrastructure.
- SuperWeb infrastructure elements:
- Hosts offer their resources to the world by
registering them (along with resource consumption
constraints) to brokers. This registration is treated
as a hint: the host may in fact be unavailable. It is
not clear what entity enforces these constraints.
- Clients are the consumers of hosts' resources.
Clients can be individuals, large organizations that
need supercomputing power, network computers that have
no resources of their own, etc. Consumption is based
on the execution of untrusted Java applets. Clients
are responsible for parallelizing and robustifying their applications.
- Brokers coordinate resource consumption across
clients and hosts. Brokers are essentially a directory
of available hosts, bundled along with some economic
model for estimating cost based on supply and demand
and verification services to ensure hosts are up and
provide resources as advertised. The brokers are composed of three modules:
- interface - registration/directory service
- scheduler - assigns incoming tasks to
available hosts' resources
- accounting module - keeps track of charges
incurred by clients and credits earned by hosts.
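The three broker modules listed above might be sketched as follows. This is purely illustrative; the paper gives no API, and all class and method names here are invented.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of the broker's three modules: a
// registration/directory service, a scheduler, and an accounting module.
class HostOffer {
    final String hostId;
    final int maxCpuPercent;   // resource-consumption constraint set by the host
    boolean available = true;  // treated as a hint: the host may be gone

    HostOffer(String hostId, int maxCpuPercent) {
        this.hostId = hostId;
        this.maxCpuPercent = maxCpuPercent;
    }
}

class Task {
    final String clientId;
    final String appletUrl;    // clients submit work as untrusted Java applets

    Task(String clientId, String appletUrl) {
        this.clientId = clientId;
        this.appletUrl = appletUrl;
    }
}

class Broker {
    // Interface: registration/directory service for host offers.
    private final List<HostOffer> directory = new ArrayList<>();
    // Scheduler: queue of incoming client tasks.
    private final Queue<Task> pending = new ArrayDeque<>();
    // Accounting: charges incurred by clients, credits earned by hosts.
    private final Map<String, Integer> charges = new HashMap<>();
    private final Map<String, Integer> credits = new HashMap<>();

    void register(HostOffer offer) { directory.add(offer); }

    void submit(Task task) { pending.add(task); }

    // Assign the next task to the first available host; charge one unit
    // to the client and credit one unit to the host.
    String scheduleNext() {
        Task task = pending.poll();
        if (task == null) return null;
        for (HostOffer offer : directory) {
            if (offer.available) {
                charges.merge(task.clientId, 1, Integer::sum);
                credits.merge(offer.hostId, 1, Integer::sum);
                return offer.hostId;
            }
        }
        pending.add(task);  // no host available; retry later
        return null;
    }

    int chargesFor(String clientId) { return charges.getOrDefault(clientId, 0); }
    int creditsFor(String hostId) { return credits.getOrDefault(hostId, 0); }
}
```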
- Technical challenges:
- Interoperability: Problem is heterogeneous
environments, solution is Java, mumble.
- Execution speed: Java is slow. Solution is JIT,
fast interpreters, mumble.
- Security: Problem is that clients, hosts, and
submitted tasks are all untrusted. Solution is
Java, mumble, encrypted algorithms,
mumble, replicated execution for reliability, mumble.
- Communication: Problem is that communication
speed across hosts on the Internet is slow and unreliable,
vastly limiting the cross-task communication that is
possible. Solution: execute coarse-grained parallel
jobs or independent jobs only (e.g., raytracers).
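The coarse-grained decomposition that the communication constraint forces could look like the sketch below: split an embarrassingly parallel job (here, the scanlines of a raytraced image) into independent chunks that need no cross-host communication while they run. The class and method names are illustrative, not from the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of coarse-grained job decomposition: each chunk
// of scanlines can be shipped to a different host as a standalone task.
class ChunkedJob {
    // Split [0, totalRows) into chunks of at most chunkSize rows.
    static List<int[]> split(int totalRows, int chunkSize) {
        List<int[]> chunks = new ArrayList<>();
        for (int start = 0; start < totalRows; start += chunkSize) {
            int end = Math.min(start + chunkSize, totalRows);
            chunks.add(new int[] { start, end });  // [start, end) row range
        }
        return chunks;
    }
}
```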
- Another claim is that although the prototype is in Java (Javelin),
the SuperWeb architecture could support arbitrary untrusted binaries
using something like SFI.
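The replicated-execution idea mentioned under the security challenge could be sketched as a majority vote over results returned by several untrusted hosts. This is a hedged illustration of the general technique; the paper does not spell out a mechanism, and the names below are invented.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: run the same task on several hosts and accept
// the result reported by a strict majority of them.
class MajorityVote {
    // Return the most common result, or null on a tie or empty input.
    static String decide(List<String> results) {
        Map<String, Integer> counts = new HashMap<>();
        for (String r : results) counts.merge(r, 1, Integer::sum);
        String best = null;
        int bestCount = 0;
        boolean tie = false;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
                tie = false;
            } else if (e.getValue() == bestCount) {
                tie = true;
            }
        }
        return tie ? null : best;
    }
}
```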
It would be absolutely wonderful to harness all of those idle cycles out
there on the Internet. Now is the time to design a resource brokerage
architecture for harnessing available wide-area resources; SuperWeb is a
decent first stab at identifying the issues.
- Architectural flaws:
- Are brokers trustworthy? Why? What can happen if a
broker is evil?
- There is no directory of brokers, so how do clients find
brokers? It's the usual resource location bootstrap problem.
- Nothing in the architecture can possibly reliably track
resource consumption. Clients simply cannot be
assured that they're getting enough bang for their
buck. Maybe this is OK (e.g. Visa fraud model).
- It's unfortunate that I as an application writer only
get a list of hosts to execute on (effectively), as
opposed to some infrastructure tools for storing state,
rendezvousing, estimating job progress, receiving
notification of failed or aborted jobs. Such tools could be
layered on top of the basic architecture, though.
- It's also too bad that I don't get help in writing my
parallel jobs, but it's hard to provide a
parallelization toolkit that helps in the general case.
- Paper flaws: way too wide and not nearly deep enough. Again,
IMHO, the architecture they advocate and their associated
technical challenges are quite obvious, although it is nice to
have the list of problems explicitly written out. None of their
suggested solution paths seem to be immediately applicable or novel.