Living Worlds Technical Tutorial

This paper is designed to give an overview of the proposed Living Worlds standard.

This paper is intended as a tutorial on the technology of Living Worlds; it assumes familiarity with VRML2.0. Readers might find it useful to read the Living Worlds concepts paper first, which goes into more detail about what we are trying to achieve. For more technical detail, read the rest of the standard at http://www.livingworlds.com.

Since the standard is currently (Nov '96) an evolving document, if there are differences between this paper and the standard, the standard should take precedence. This tutorial was last updated 19th November '96 and reflects the current draft of the standard as of that date, plus the proposed extensions for managing ASAs.

Please send any comments or corrections to Mitra, mitra@mitra.biz. This paper reflects my own views, and not necessarily those of the other Living Worlds editors.


Background

The standardization of VRML has been described as having three phases.

  1. Geometry - static worlds; the world is downloaded and displayed statically on your screen
  2. Moving Worlds - worlds with behaviors, that move and can be interacted with
  3. Living Worlds - multi-user worlds, shared between viewers and supporting avatars

The Living Worlds proposal is intended to meet the third phase of this progression. The concepts paper covers in much more detail what we are trying to achieve, but in summary Living Worlds provides the ability to open a world, have your avatar appear in that world and be seen by other users, interact with the world, and have those interactions appear in other users' views of the world.

Requirements

The design goals led us to the following technical requirements.

Persistent Objects - Objects had to be able to be introduced to a scene, and to remain even after the user who created them disconnected.
MUtech independence - It had to be possible to build an object or avatar that did not depend on the particular multi-user technology being used.
Personal persistent avatars - Avatars had to belong to a user, and be usable in any world supporting the Living Worlds spec.
Object-Object communication - Objects had to have a means to send events to each other.
State sharing - It had to be possible to specify a set of arbitrary states for the MUtech to share.
Control over what gets shared - It had to be possible to specify, at both a coarse and a fine grain, what gets shared and when, so that MUtechs can optimise for low bandwidth.
VRML2.0 - The specification had to work entirely within VRML2.0 without requiring, at least in the first phase, any extensions to the browsers.

Standard Nodes

To achieve this we defined three nodes: Zone, SharedObject and Pilot.

The Zone node usually represents an area of physical space (although it could be more abstract). The Zone is usually the unit of sharing, i.e. either everything in a Zone is shared or nothing is. The Zone contains SharedObjects, which may correspond to physical objects in the space, or to something more abstract. SharedObjects can have Pilot nodes; a Pilot node typically holds the code that changes the shared state of the SharedObject.

MUtech Nodes

In order to separate out the information relevant only to specific networking technologies, we define a concept called a MUtech (pronounced mew-tek). The MUtech is the code which manages sharing the state of the world; its exact characteristics will vary from supplier to supplier. The choice of MUtech is made by the author of the world, who adds at least one Zone node, and a MUtechZone, to their world. The term "MUtechZone" means any one of the vendor-specific nodes for handling networking, for example a ParaGraphZone node. At run-time the MUtechZone will add corresponding MUtechSharedObject and MUtechPilot nodes to the various SharedObjects and Pilots in the Zone, generating a scene graph structured roughly like this:
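
    Zone
        MUtechZone              (e.g. a ParaGraphZone, added by the world author)
        SharedObject
            MUtechSharedObject  (attached at run-time by the MUtechZone)
            Pilot
                MUtechPilot     (attached at run-time by the MUtechZone)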


Zone

The Zone node is the top node of any shared portion of the tree. It essentially acts as a Group that contains each of the SharedObject nodes; in fact it is PROTOtyped as a Group. Look at the spec for a detailed description of all its fields, but a few things deserve attention here.
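
For example, a world author might place a SharedObject in a Zone like this (a minimal sketch; the names Room and Ball are illustrative, and the children field follows from Zone being PROTOtyped as a Group):

DEF Room Zone {
  children [
    DEF Ball SharedObject {
      visibleDefinition Shape { geometry Sphere { radius 0.5 } }
    }
  ]
}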

SharedObject and Pilot

The SharedObject node, and its companion - the Pilot node - are authored independently of the MUtech; that is, a SharedObject could be introduced as a child of a Zone in any world and still work. This is crucial for scenarios involving either libraries of objects, or avatars, which must be able to exist in any world. Look at the spec for a detailed description of all their fields, but a few things deserve attention here.

Objects and Avatars

A significant part of the debate around Living Worlds centered on whether Avatars and Objects are the same thing or different. At first glance they appear different: Avatars are usually human controlled, leave the scene when the user disconnects, are controlled from a single machine, and move around quickly. Objects are usually under program control, persist even when the user who created them leaves, can be controlled from multiple locations (e.g. a light-switch will always want to turn on the light without waiting for network latency), and are static.

However all the "usually"s muck up the nice clean distinction - where, for example, do robots that walk around fit in? So instead the author is given access to a number of fields to tune the behavior of their object or avatar.

Shared Library

In order to make it easier to author these multi-user worlds, we defined a set of nodes as a library. Their definitions will be made publicly available so that they can be used confidently by authors, and they will be used by the Living Worlds prototypes.

Some are trivial - look at the spec if you think an object might be useful to you.

Other nodes take more explanation.

AssociativeStringArray

The AssociativeStringArray is a critical component in constructing a SharedObject; it is used to package up an arbitrary collection of states for sharing across the network.

To work effectively it is usually used in combination with some related nodes that enable each of the VRML2.0 event types to be stored in, or extracted from, the array without complicated custom scripts. The example below illustrates this.

SharedObject {
  currentState DEF ASA1 AssociativeStringArray { }
  pilot DEF P Pilot {
    children [                   # assuming Pilot, like Zone, groups its contents
      DEF S Script {
        eventOut SFColor newColor
        url "http://foo.com/spheres.wrl"
      }
      DEF C1 SFColorToASA { tag "myColor" }
    ]
    ROUTE S.newColor TO C1.in    # pack the new color into the array...
    ROUTE C1.out TO ASA1.in      # ...under the tag "myColor"
  }
  visibleDefinition Group {      # a Group, so the converter node has a legal parent
    children [
      Shape {
        geometry Sphere { radius 2.3 }
        appearance Appearance {
          material DEF M Material { }
        }
      }
      DEF C2 ASAToSFColor { tag "myColor" }
    ]
    ROUTE ASA1.out TO C2.in           # unpack the "myColor" entry...
    ROUTE C2.out TO M.emissiveColor   # ...and drive the sphere's color with it
  }
}


In this example, the pilot's script "spheres.wrl" sets a new color, which is converted for storage in the array, then read back in the visibleDefinition and ROUTEd to the emissiveColor of the sphere.

Selector and SelectorItem

The Selector and its SelectorItems can be used to create a menu. These menus are attached to each SharedObject's selector field. The intention is that browsers will eventually implement these nodes themselves, in order to display the menu in a User Interface consistent with that of the browser. In the short run they will be supplied as a set of freely available definitions using Java menus or something similar. At run-time the MUtechSharedObject will add a TouchSensor so that when an object is clicked on, its menu is displayed. There is some discussion of generalising this concept to support other forms of UI, so watch the spec for changes.
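
For example, attaching a two-item menu to a SharedObject might look like this (a sketch; the items and label field names, and the selected eventOut, are assumptions - only the selector field comes from the draft):

SharedObject {
  selector Selector {
    items [
      DEF WaveItem SelectorItem { label "Wave" }
      DEF CardItem SelectorItem { label "Give card" }
    ]
  }
  # picking an item would send an event to the script handling
  # that capability, e.g. ROUTE WaveItem.selected TO WaveScript.wave
}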

EventHandler

The EventHandler is used to dispatch events sent to a SharedObject or Pilot. EventHandlers are typically used for Inter-Object Communication (see below). The EventHandler contains an MFString listing event names, and an MFNode pointing at the Script nodes that define the handling of each event. This may be generally useful in other applications.
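
A sketch of what this might look like (the field names eventNames and handlers are assumptions; only the types, an MFString of names and an MFNode of Scripts, come from the draft):

DEF Handler EventHandler {
  eventNames [ "wave" "giveCard" ]      # one entry per event
  handlers [                            # one Script per event name
    DEF WaveScript Script { url "http://foo.com/wave.class" }
    DEF CardScript Script { url "http://foo.com/card.class" }
  ]
}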

SmoothMover

It is expected that a lot of experimentation will happen to find the best way to predict motion, and some fields are provided in the SharedObject for that purpose. The SmoothMover node can be patched into those fields to implement trivial interpolation between the current position and a predicted one.
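
Patching one in might look like this (a sketch; the SmoothMover's eventIn names are assumed to mirror the SharedObject's to* fields, which are assumed to be routable):

DEF SO SharedObject {
  visibleDefinition Shape { geometry Sphere { radius 1 } }
}
DEF SM SmoothMover { }
# feed the network-updated targets into the interpolator...
ROUTE SO.toPosition    TO SM.toPosition
ROUTE SO.toOrientation TO SM.toOrientation
ROUTE SO.toTime        TO SM.toTime
# ...and route the interpolated values back to the displayed position
ROUTE SM.position      TO SO.position
ROUTE SM.orientation   TO SO.orientation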

An example - Motion

Living Worlds took the same approach as VRML2.0 of defining lots of small nodes that can be composed in different ways, rather than a few mega-nodes. This can make event flow difficult to follow, so here is an example of event flow for the common case of motion.

  1. The user moves the mouse or other input device; the browser's UI detects this.
  2. The browser's UI moves the VRML camera, which is detected by a ProximitySensor that is part of the Zone's VRML file.
  3. The ProximitySensor's position_changed and orientation_changed eventOuts are routed in that file to the Zone's avatarPosition and avatarOrientation eventIns (see the sketch after this list).
  4. The MUtechZone will have routed the Zone's avatarPosition and avatarOrientation events to the position and orientation fields of the SharedObject pointed to by the avatar field of the Zone.
  5. The MUtechPilot will typically have routed the position and orientation fields of the SharedObject to itself.
  6. The MUtechPilot will send these over the network to the MUtechSharedObjects in all the other places this avatar is visible.
    At this point there is plenty of room for innovation - predicting positions, handling crowd-control and visibility issues etc.
  7. Each of the MUtechSharedObjects will send these to the SharedObject's toPosition, toOrientation and toTime fields.
  8. The SharedObject may have specialized interpolation or animation built in, or it might just be hooked to the SmoothMover node.
  9. The SmoothMover node performs some trivial interpolation and sends the results via its position and orientation eventOuts to the SharedObject's position and orientation fields.
  10. The SharedObject's position and orientation fields are directly (via the prototype definition) coupled to the translation and rotation fields of the Transform which positions the avatar geometry relative to the Zone.

There are some other control flow examples in the spec.

Capabilities

Capabilities are another key concept of Living Worlds. The general idea is that an Avatar should be able to carry around some functionality which can be used to interact with other avatars. For example, if two avatars both have IBM Connection Phone then they should be able to open a voice connection, whether or not the World author has ever heard of this product.

The implementation of this has three parts, which are described in more detail in the spec.

  1. Initialization - the Avatar will arrive in the world with some capabilities, but the MUtechSharedObject is responsible for initializing the selectors and event handlers to handle capabilities that are part of the Client, Zone or MUtech.
  2. When the SharedObject is clicked on (or selected in some other UI-dependent manner), its selector is triggered and a menu is displayed to the user. Picking one of these items causes an event to be sent to the appropriate Script node handling the capability.
  3. The script can then launch the functionality in a capability-dependent manner. The key point is that the capability may need to negotiate with the pilot of this object in order to do this, for example to find out the IP address and port to connect to. However, the Script for the capability has to be independent of the MUtech it is running on, so this requires a set of services to be available to these scripts which implement Inter-Object Communication.

Inter-Object Communication

Objects have a number of ways to communicate with each other. Critical to any inter-object communication is the notion of trust, i.e. an object is going to react differently to something that is part of itself, or written as part of the same world, than to events coming from some arbitrary avatar that happens to have walked into the scene. VRML2.0 has a limited security model, based on total access to any exposedField of anything you have a pointer to, and no access to anything else. So to handle this, we define public and private event handlers for the SharedObject and Pilot nodes. The idea is that pointers to public event handlers can be freely passed around, while those to private event handlers are only available to the MUtechs.

In addition, each Living Worlds MUtech must supply three events: "eventToPilot", "eventToLocalPilot", and "replyToDrone". These can be used by drones to communicate with their pilots, and to communicate with the local avatar. Because VRML2.0 doesn't have any indication of where an event comes from, we adopt the convention that the second field of the MFString that makes up the event indicates the originator in a MUtech-dependent way, and is set by the MUtech as it passes the event along. The event replyToDrone can then be used to respond to the originator.
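
A sketch of how a drone might ask its pilot for connection details (everything except the eventToPilot and replyToDrone names is an assumption; MUSO and MUP stand for the MUtechSharedObject and MUtechPilot that the MUtech attaches at run-time):

DEF Drone Script {
  eventOut MFString request     # e.g. [ "getVoicePort", "" ]
  url "http://foo.com/voiceDrone.class"
}
# the MUtech fills in the second string (the originator) in transit
ROUTE Drone.request TO MUSO.eventToPilot

DEF PilotScript Script {
  eventIn  MFString request     # arrives with the originator filled in
  eventOut MFString answer      # e.g. [ "voicePort 3456", <originator> ]
  url "http://foo.com/voicePilot.class"
}
# replyToDrone delivers the answer to whichever drone originated the request
ROUTE PilotScript.answer TO MUP.replyToDrone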

Object insertion


Several of the scenarios envisioned for Living Worlds include the concept of adding objects to a scene. Typically this occurs in one of two ways: by the user dragging and dropping an object into the scene, or by a script inserting the object.

In a single-user world, the implementation of this (for example in an editor) is outside the scope of standards. However, in a multi-user world the insertion of the object must be communicated to other users viewing the scene. This is done by only attaching SharedObjects to Zones, not by adding arbitrary VRML at arbitrary points in the scene graph.

For "Drag-and-Drop" insertion, the browser should use the same rules as are used for determining when a TouchSensor is activated to determine a Zone which is the parent for the Geometry at the insertion point. It should then add the SharedObject to the Zone and adjust the position and orientation fields of the SharedObject to correctly position it relative to the Zone. There are several parts of this process which are UI-dependent, for example:

Script-initiated object insertion is harder in VRML2.0, since there is currently no way for a Script to convert a 3D point to a path in the scene graph. Workarounds are TBD.

In either case, the object is added by an addChildren event being sent to the Zone. This event can be intercepted by a script - which can check permissions etc. - and can also be intercepted by the MUtechZone to share across the network with other instances of this Zone.
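
For example, a script-driven insertion might bottom out like this (a sketch; the Inserter script and its names are hypothetical, while addChildren follows from Zone being PROTOtyped as a Group):

DEF Z Zone { }
DEF Inserter Script {
  eventOut MFNode newObjects    # the SharedObject(s) to insert
  url "http://foo.com/insert.class"
}
# both the Zone's own script and the MUtechZone can intercept this event,
# to check permissions and to propagate the insertion to other instances
ROUTE Inserter.newObjects TO Z.addChildren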

Conclusion

I hope this tutorial explains some of the complicated technology of Living Worlds. I expect at some point to put together another version aimed at content developers, who don't need the implementation details. For more information I recommend the following sources.