Moving Worlds Specification

A Proposal for VRML 2.0

Submitted by Silicon Graphics, Inc.

Last modified: February 2, 1996. This document can be found at http://webspace.sgi.com/moving-worlds/spec/spec.main.html

This document describes the complete specification for VRML 2.0.

Key Concepts

February 2, 1996

This section describes key concepts related to the use of VRML, including how nodes are combined into scene graphs, how nodes receive and generate events, how to create node types using prototypes, how to add node types to VRML and export them for use by others, and how to incorporate programmatic scripts into a VRML file.

File Syntax and Structure

For easy identification of VRML files, every VRML 2.0 file must begin with the characters:

#VRML V2.0 utf8

The identifier utf8 allows for international characters to be displayed in VRML using the UTF-8 encoding of the ISO 10646 standard. Unicode is an alternate encoding of ISO 10646. UTF-8 is explained under the Text node.

Any characters after these on the same line are ignored. The line is terminated by either the ASCII newline or carriage-return characters.

The # character begins a comment; all characters until the next newline or carriage return are ignored. The only exception to this is within double-quoted SFString and MFString fields, where the # character will be part of the string.
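
For instance (a minimal sketch; the WorldInfo node's title field and its value are illustrative assumptions):

#VRML V2.0 utf8
# This entire line is a comment and is ignored.
WorldInfo {
  title "Puzzle #7"   # This # is inside a quoted SFString, so it is part of the string
}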

Note: Comments and whitespace may not be preserved; in particular, a VRML document server may strip comments and extra whitespace from a VRML file before transmitting it. WorldInfo nodes should be used for persistent information such as copyrights or author information. To extend the set of existing nodes in VRML 2.0, use prototypes or external prototypes rather than named information nodes.

Blanks, tabs, newlines and carriage returns are whitespace characters wherever they appear outside of string fields. One or more whitespace characters separate the syntactical entities in VRML files, where necessary.

After the required header, a VRML file contains nodes, prototypes, and ROUTE statements.

Field names start with lowercase letters; node types start with uppercase letters. The remaining characters may be any printable ASCII characters (0x21-0x7E) except curly braces {}, square brackets [], single ' or double " quotes, sharp #, backslash \, plus +, period ., or ampersand &.

Node names (specified using the DEF keyword; see the "Instancing" section of this document for details) must not begin with a digit, but they may begin with and contain any UTF-8 character except those below 0x21 (control characters and whitespace) and the characters {} [] ' " # \ + . and &.

VRML is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."

URLs and URNs

A URL (Uniform Resource Locator) specifies a file located on a particular server and accessed through a specified protocol. A URN (Uniform Resource Name) provides a more persistent way to refer to data than is provided by a URL. The exact definition of a URN is currently under debate. See the discussion at http://www.w3.org/hypertext/WWW/Addressing/Addressing.html for further details.

All fields in VRML 2.0 that have URLs are of type MFString. The strings in such a field indicate multiple places to look for files, in decreasing order of preference. If the browser can't locate the first file or doesn't know how to deal with a URN given as the first file, it can try the second location, and so on.
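
For example, using the DirectedSound node's MFString name field (described later in this document; the URN and URLs are hypothetical):

DirectedSound {
  name [ "urn:example:sounds/chime",              # tried first; skipped by browsers without URN support
         "http://example.com/sounds/chime.aiff",  # tried next
         "http://mirror.example.com/chime.aiff" ] # last resort
}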

VRML 2.0 browsers are not required to support URNs. If they do not support URNs, they should ignore any URNs that appear in MFString fields along with URLs. URN support is specified in a separate document at http://www.mitra.biz/vrml/vrml-urn.html, which may undergo minor revisions to keep it in line with parallel work happening at the IETF.

Relative URLs are handled as described in RFC 1808, "Relative Uniform Resource Locators."

File Extension

The file extension for VRML files is .wrl (for world).

MIME Type

The MIME type for VRML files is defined as follows:

x-world/x-vrml

The MIME major type for 3D world descriptions is x-world. The MIME minor type for VRML documents is x-vrml. Other 3D world descriptions, such as oogl, for The Geometry Center's Object-Oriented Geometry Language, or iv, for SGI's Open Inventor ASCII format, can be supported by using different MIME minor types.

It is anticipated that the official type will change to "model/vrml". At this time, servers should present files as being of type x-world/x-vrml. Browsers should recognize both x-world/x-vrml and model/vrml.

IETF work-in-progress on this subject can be found in "The Model Primary Content Type for Multipurpose Internet Mail Extensions."

Nodes, Fields, and Events

At the highest level of abstraction, VRML is just a file format for describing objects. Theoretically, the objects can contain anything--3D geometry, MIDI data, JPEG images, and so on. VRML defines a set of objects useful for doing 3D graphics. These objects are called nodes. Nodes contain data, which is stored in fields.

VRML defines several different classes of nodes. Most of the nodes can be classified into one of two categories: grouping nodes or leaf nodes. Grouping nodes gather other nodes together, allowing collections of nodes (specified in a grouping-node field called children) to be treated as a single object. Some grouping nodes also control which of their children are drawn. Grouping nodes can have only other grouping nodes or leaf nodes as children.

Leaf nodes may not have children. Nodes that are considered leaf nodes include shapes, lights, viewpoints, sounds, scripts, sensors, interpolators, and nodes that provide information to the browser.

Shape nodes contain two kinds of additional information: geometry and appearance. For purposes of discussion, this specification uses a third node category, subsidiary nodes, for nodes that are always used within fields of other nodes and cannot be used alone. These nodes include geometry (for example, Cone and Cube), geometric property (for example, Coordinate3 and Normal), appearance (Appearance) and appearance property nodes (for example, Material and Texture2).

Nodes can be prototyped and shared. Nodes are arranged in hierarchical structures called scene graphs. A Transform node is a kind of grouping node that defines a coordinate system for its child (leaf) nodes. Each Transform node defines a coordinate system relative to its parent nodes (see Coordinate Systems and Transformations).

Applications that interpret VRML files need not maintain the scene graph structure internally; the scene graph is merely a convenient way of describing objects.

General Node Characteristics

A node has the following characteristics: a node type (such as Transform, Sphere, or DirectionalLight), a set of fields and events, and, optionally, a name given with the DEF keyword.

The syntax for representing these pieces of information is as follows:

nodetype { fields }

Only the node type and braces are required; nodes may or may not have fields.

Sample File Format

For example, this file contains a simple scene defining a view of a red sphere and a blue cube, lit by a directional light:

#VRML V2.0 utf8
Transform {
  children [

    DirectionalLight {
        direction 0 0 -1  # Light shining into scene
    },

    Transform {   # The red sphere
      translation 3 0 1
      children [
        Shape {
          geometry Sphere {radius 2.3}
          appearance Appearance { 
             material Material {diffuseColor 1 0 0}  # Red
          }
        }
      ]
    },

    Transform {  # The blue cube
      translation -2.4 .2 1
      rotation     0 1 1  .9
      children [
        Shape {
          geometry Cube {}
          appearance Appearance { 
             material Material {diffuseColor 0 0 1}  # Blue
          }
        }
      ]
    }

  ]
}

The Structure of the Scene Graph

This section describes the general scene graph hierarchy, how to reuse nodes within a file, coordinate systems and transformations in VRML files, and the general model for viewing and interaction within a VRML world.

Grouping Nodes and Leaves

A scene graph consists of grouping nodes and leaf nodes. Grouping nodes, such as Transform, LOD, and Switch, can have child nodes. These children can be other grouping nodes or leaf nodes, such as shapes, browser information nodes, lights, viewpoints, and sounds. Appearance, appearance properties, geometry, and geometric properties are contained within Shape nodes.

Transformations are stored within Transform nodes. Each Transform node defines a coordinate space for its children. This coordinate space is relative to the parent (Transform) node's coordinate space--that is, transformations accumulate down the scene graph hierarchy. Geometric sensors are contained within a Transform node.

Instancing

A node may be referenced in a VRML file multiple times. This is called instancing (using the same instance of a node multiple times; called "aliasing" or "multiple references" by other systems) and is accomplished by using the DEF and USE keywords.

The DEF keyword gives a node a name and creates an instance of the node. The USE keyword indicates that a previously named node should be used again. If several nodes were given the same name, then the last DEF encountered during parsing "wins." DEF/USE is limited to a single file. There is no mechanism for using nodes that are defined in other files. Nodes cannot be shared between files. For example, if a node is defined inside a file referenced by a WWWInline node, the file containing the WWWInline node cannot USE that node.

Rendering the following scene results in three spheres being drawn: two different Sphere nodes are both named "Joe," and the second (smaller) sphere is drawn twice, once on either side of the first (larger) sphere:

#VRML V2.0 utf8
Transform {
  children [
    DEF Joe Sphere { },
    Transform {
      translation 2 0 0
      children [
        DEF Joe Sphere { radius .2 }
      ]
    },
    Transform {
      translation -2 0 0
      children [
        USE Joe  # radius .2 sphere will be used here; most recent one defined
      ]
    }
  ]
}

Coordinate Systems and Transformations

VRML uses a Cartesian, right-handed, 3-dimensional coordinate system. By default, objects are projected onto a 2-dimensional display device by projecting them in the direction of the positive Z axis, with the positive X axis to the right and the positive Y axis up. A modeling transformation can be used to alter this default projection.

The standard unit for lengths and distances is meters. The standard unit for angles is radians.

VRML scenes may contain an arbitrary number of local (or object-space) coordinate systems, defined by the transformation fields of the Transform node. These fields are translation, rotation, scale, scaleOrientation, and center.

Given a vertex V and a series of transformations such as:

Transform {
  translation T
  rotation    R
  scale       S
  children [
    Shape {
      geometry PointSet { ... }
    }
  ]
}

the vertex is transformed into vertex V' in world-space by first scaling, then rotating, and finally translating. In matrix-transformation notation, thinking of T, R, and S as the equivalent transformation matrices,

V' = T·R·S·V

(if you think of vertices as column vectors)

or

V' = V·S·R·T

(if you think of vertices as row vectors).

Conceptually, VRML also has a world coordinate system. The various local coordinate transformations map objects into the world coordinate system, which is where the scene is assembled. Transformations accumulate downward through the scene graph hierarchy, with each Transform inheriting the transformations of its parents. (Note, however, that this series of transformations takes effect from the leaf nodes up through the hierarchy. The local transformations closest to the Shape object take effect first, followed in turn by each successive transformation upward in the hierarchy.)
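
For example, in the following sketch the translation closest to the shape is applied first, then the outer one:

Transform {
  translation 0 5 0          # applied second
  children [
    Transform {
      translation 2 0 0      # applied first (closest to the shape)
      children [
        Shape { geometry Sphere { } }   # ends up centered at (2, 5, 0) in world coordinates
      ]
    }
  ]
}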

Viewing Model

This specification assumes that there is a user viewing and interacting with the VRML world. It is expected that a future extension to this specification will provide mechanisms for creating multi-participant worlds. The viewing and interaction model that should be used for the single-participant case is described here.

The world creator may place any number of viewpoints in the world -- interesting places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoints exist in a particular coordinate system, and either the viewpoint or the coordinate system may be animated.

It is expected that browsers will support user-interface mechanisms by which users may "teleport" themselves from one viewpoint to another, and scripting-language mechanisms by which a viewer can be bound to a viewpoint which can then be animated. If a user teleports to a viewpoint that is moving (one of its parent coordinate systems is being animated), then the user should move along with that viewpoint.

The browser may provide a user interface that allows the user to change his or her viewing position or orientation, which will also change the currently bound viewpoint.

Time

The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will roughly correspond to "real" time.

A world's creator must make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will be greater than any previous time event.

Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per "frame," where a "frame" is a single rendering of the world or one time-step in a simulation.

Events

Most nodes can receive events, which have names and types corresponding to their fields, with the effect that the corresponding field is changed to the value of the event received. For example, the Transform node can receive set_translation events (of type SFVec3f) that change the Transform's translation field (it may also receive set_rotation events, set_scale events, and so on).

Nodes can also generate events that have names and types corresponding to their fields when those fields are changed. For example, the Transform node generates a translation_changed event when its translation field changes.

Routes

The connection between the node generating the event and the node receiving the event is called a route. A node that produces events of a given name (and a given type) can be routed to a node that receives events of the same type using the following syntax:

ROUTE NodeName.eventOutName TO NodeName.eventInName

Routes are not nodes; ROUTE is merely a syntactic construct for establishing event paths between nodes.
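
For example, the following sketch keeps one object moving in step with another by routing the Transform events described above (node names are hypothetical):

DEF Leader   Transform { children [ Shape { geometry Sphere { } } ] }
DEF Follower Transform { children [ Shape { geometry Cube { } } ] }

# Whenever Leader's translation field changes, send the new value to Follower:
ROUTE Leader.translation_changed TO Follower.set_translation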

Sensors

Sensor nodes generate events. Geometric sensor nodes (BoxProximitySensor, ClickSensor, CylinderSensor, DiskSensor, PlaneSensor, and SphereSensor) generate events based on user actions, such as a mouse click or navigating close to a particular object. TimeSensor nodes generate events at regular intervals, as time passes.

Prototypes

Prototyping is a mechanism that allows the set of node types to be extended from within a VRML file. It allows the encapsulation and parameterization of geometry, behaviors, or both.

A prototype definition consists of the prototype's name, declarations of its events and fields, and the scene graph that implements it.

Square brackets enclose the list of events and fields, and braces enclose the definition itself:

PROTO prototypename [ eventIn      eventtypename name
                      eventOut     eventtypename name
                      exposedField fieldtypename name defaultValue
                      field        fieldtypename name defaultValue
                      ... ] {
  Scene graph
  (nodes, prototypes, and routes)
}

A prototype is not a node; it merely defines a prototype (named prototypename) that can be instantiated later in the same file as if it were a built-in node. The implementation of the prototype is the scene graph contained in the braces, rooted at its first node. That node may be followed by Script and/or ROUTE declarations, as necessary to implement the prototype.

The eventIn and eventOut declarations export events from the prototype's scene graph. Specifying the type of each event in the prototype is intended to prevent errors when the implementation of prototypes is changed and to provide consistency with external prototypes. Events generated or received by nodes in the prototype's implementation are associated with the prototype using the keyword IS. For example, the following statement exposes a Transform node's built-in set_translation event by giving it a new name (set_position) in the prototype interface:

Transform {
  set_translation IS set_position
}

Fields hold the persistent state of VRML objects. Allowing a prototype to export fields allows the initial state of a prototyped object to be specified when an instance of the prototype is created. The fields of the prototype are associated with fields in the implementation using the IS keyword. For example:

Transform {
  translation IS position
}

A prototype is instantiated as if prototypename were a built-in node. For example, a simple chair with variable colors for the leg and seat might be prototyped as:

PROTO TwoColorChair [ field MFColor legColor  .8 .4 .7
                      field MFColor seatColor .6 .6 .1 ] {
  Transform {
    children [

      Transform { # chair seat
        children [
          Shape {
            appearance Appearance {
              material Material { diffuseColor IS seatColor }
            }
            geometry Cube { ... }
          }
        ]
      },

      Transform { # chair leg
        translation ...
        children [
          Shape {
            appearance Appearance {
              material Material { diffuseColor IS legColor }
            }
            geometry Cylinder { ... }
          }
        ]
      }

    ] # End of root Transform's children
  } # End of root Transform
} # End of prototype

The prototype is now defined. Although it contains a number of nodes, only the legColor and seatColor fields are public. Instead of using the default legColor and seatColor, this instance of the chair has red legs and a green seat:

TwoColorChair {
  legColor 1 0 0
  seatColor 0 1 0
}

A prototype instance can be used in the scene graph wherever its root node can be used. For example, a prototype defined as

PROTO MyObject [ ... ] {
  Transform { ... }
}

can be instantiated wherever a Transform can be used, since the root node of this prototype's implementation is a Transform node.

Prototype definitions can be nested. A prototype instance may be DEF'ed or USE'ed. Prototype or DEF names declared inside the prototype are not visible outside the prototype.

Defining Prototypes in External Files

The syntax for defining prototypes in external files is as follows:

EXTERNPROTO prototypename [ eventIn eventtypename name
                            eventOut eventtypename name
                            field fieldtypename name
                            ... ]
  "URL" or [ "URL", "URL", ... ]

The external prototype is then given the name prototypename in this file's scope. It is an error if the eventIn/eventOut declaration in the EXTERNPROTO is not a subset of the eventIn/eventOut declarations specified in the PROTO referred to by the URL. If multiple URLs are specified, the first one found should be used.

Unlike a prototype, an external prototype does not contain an inline implementation of the node type. Instead, the prototype definition and implementation are found in a set of URLs. The other difference between a prototype and an external prototype is that external prototypes do not contain default values for fields. The external prototype points to a file that contains the prototype implementation, and this file contains the default values.

Extensibility

The set of built-in VRML nodes can be extended using either prototypes or external prototypes. External prototypes provide a way to extend VRML in a manner that all browsers will understand. If a new node type is defined as an external prototype, other browsers can parse it and understand what it looks like, or they can ignore it. An external prototype uses the URL syntax to refer to an internal or built-in implementation of a node. For example, suppose your system has a Torus geometry node. This node can be exported to other systems using an external prototype:

EXTERNPROTO Torus [ field SFFloat bigRadius
                    field SFFloat smallRadius ]
  ["urn:yourdomain:Torus", "http://machine/directory/protofile" ]

The browser can recognize the URN and look for its own internal implementation of the Torus node. If it does not recognize the URN, it goes to the next URL and searches for the specified prototype file. In this case, if the file is not found, it ignores the Torus. If more URLs are listed, the browser tries each one until it succeeds in locating an implementation for the node or it reaches the end of the list.

Naming Conventions

Check the "File Syntax and Structure" section of this standard for the rules on valid characters in names.

To avoid namespace collisions with nodes defined by other people, one of the following conventions should be followed.

  1. Anyone can pick names that end with an underscore followed by a domain name they own, with the periods changed into underscores. For example, a company owning foo.com could create an extension node "Cube_foo_com".
  2. If you are building a product--for example, an authoring tool or a browser--or defining a lot of new nodes, then you can apply for a short prefix. Email type_registry@vrml.org to register for the prefix. This will normally be accepted if it is the most significant part of a .com, .org, or .net address. In the above example, foo.com could register the extension "_foo" and create nodes of the form "Cube_foo".
  3. Extensions supported by several companies should be registered and use the "_X" extension.

Scripting

Logic is often necessary to decide what effect an event should have on the scene -- "if the vault is currently closed AND the correct combination is entered, THEN open the vault." These kinds of decisions are expressed as Script nodes that take in events, process them, and generate other events. A Script node can also keep track of some information between invocations, "remembering" what its internal state is over time.

The event processing is done by a program contained in (or referenced by) the Script node's behavior field. This program can be written in any programming language that the browser supports.

A Script node is activated when it receives an event. At that point the browser executes the program in the Script node's behavior field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions: sending out events (and thereby changing the scene), performing calculations, communicating with servers elsewhere on the Internet, and so on.

Two of the most common uses for scripts will probably be animation (using interpolators to smoothly move objects from one position to another) and network operations (connecting to servers to allow multi-user interaction).

Script Languages

Scripts can be written in a variety of languages, including Java, C, and Perl. Moving Worlds does not require browsers to support any particular language. See appendices to this specification for bindings to Java and C.

Execution Model

Every time a Script node receives an eventIn, it executes its script. (Scripts aren't executed at any other time, but they may start asynchronous threads that run concurrently with the browser.) First, all pending eventIn values are queued. For each queued event, in timestamp order from oldest to newest, the eventIn method or function that has the same name is called. (Any given eventIn calls exactly one method.) When the queue is empty, the eventsProcessed() method of the script is called to do any final post-processing that might be needed. For instance, the eventIn methods can simply collect data, leaving eventsProcessed() to process all the data at once, in order to prevent duplication of work.

After execution of the eventsProcessed() method, values stored during script execution as eventOuts are sent as events, one for each eventOut that was set at least once during script execution. At most one message is sent for each eventOut value, and all eventOuts have the same time stamp.

In languages that allow multiple threads, such as Java, you can use the standard language mechanisms to start new threads. When the browser disposes of the Script node (as, for instance, when the current world is unloaded), it calls the shutdown() method for each currently active thread, to give threads a chance to shut down smoothly.

If you want to keep static data in a script (that is, to retain values from one invocation of the script to the next), you can use instance variables--local variables within the script, declared private. However, the value of such variables can't be relied on if the script is unloaded from the browser's memory; to guarantee that values will be retained, you have to store them in fields of the Script node.

Nodes and Fields

The API provides a data type in the scripting language for every field type in VRML. For instance, the Java bindings contain a class called SFFloat, which defines methods for getting and setting the value of variables of type SFFloat. A script can get and set the value of its own fields using these data types and methods.

The API also provides a way to access other nodes in the scene. It allows getting the value of any exposed field of any node that the Script has access to.

Browser Interface

The API provides ways for scripts to find out and change information about the browser. When a browser reads in a scene, it determines certain information based on the fields of the scene's NavigationInfo node. If you want to change that information later, use these browser calls -- changing the fields of the NavigationInfo node via routes wouldn't work even if it were possible.

Here are descriptions of the functions/methods that the browser API supports. The syntax given is the Java syntax; bindings for other languages are not necessarily supported by all browsers.

  public static String getName();
  public static String getVersion();

The getName() and getVersion() methods get the "name" and "version" of the browser currently in use. These values are defined by the browser writer, and identify the browser in some (unspecified) way. They are not guaranteed to be unique or to adhere to any particular format, and are for information only. If the information is unavailable these methods return empty strings.

   public static float getCurrentSpeed();

The getCurrentSpeed() method returns the speed at which the viewpoint is currently moving, in meters per second. If speed of motion is not meaningful in the current navigation type, or if the speed cannot be determined for some other reason, 0.0 is returned.

  public static float getCurrentFrameRate();

The getCurrentFrameRate() method returns the current frame rate in frames per second. The way in which this is measured and whether or not it is supported at all is browser dependent. If frame rate is not supported, or can't be determined, 100.0 is returned.

  public static String getWorldURL();
  public static void loadWorld(String [] url);

The getWorldURL() method returns the URL for the root of the currently loaded world. loadWorld() loads one of the URLs in the passed string and replaces the current scene root with the VRML file loaded. The browser first attempts to load the first URL in the list; if that fails, it tries the next one, and so on until a valid URL is found or the end of list is reached. If a URL cannot be loaded, some browser-specific mechanism is used to notify the user. It's up to the browser whether to block on a loadWorld() until the new URL finishes loading, or whether to return immediately and at some later time (when the load operation has finished) replace the current scene with the new one.

  public static Node createVrmlFromURL( String[] url );
  public static Node createVrmlFromString( String vrmlSyntax );

The createVrmlFromString() method takes a string consisting of a VRML scene description and returns the root node of the corresponding VRML scene. The createVrmlFromURL() method asks the browser to load a VRML scene description from the given URL or URLs, returning the root node of the corresponding VRML scene.

  public void addRoute(Node fromNode, String fromEventOut,
    Node toNode, String toEventIn);
  public void deleteRoute(Node fromNode, String fromEventOut,
    Node toNode, String toEventIn);

These methods respectively add and delete a route between the given event names for the given nodes. An exception is generated if the given nodes do not have the given event names or if an attempt is made to delete a route that does not exist.

  public void bindBackground(Node background);
  public void unbindBackground();
  public boolean isBackgroundBound(Node background);

bindBackground() allows a script to specify which Background node should be used to provide a backdrop for the scene. Once a Background node has been bound, isBackgroundBound() indicates whether a given Background node is the currently bound one, and unbindBackground() restores the Background node in use before the previous bind. If unbindBackground() is called when nothing is bound, nothing happens. Changing the fields of a currently bound Background node changes the currently displayed background.

  public void bindNavigationInfo(Node navigationInfo);
  public void unbindNavigationInfo();
  public boolean isNavigationInfoBound(Node navigationInfo);

bindNavigationInfo() allows a script to specify which NavigationInfo node should be used to provide hints to the browser about how to navigate through a scene. Once a NavigationInfo node has been bound, isNavigationInfoBound() indicates whether a given node is the currently bound one, and unbindNavigationInfo() restores the NavigationInfo node in use before the previous bind. If unbindNavigationInfo() is called when nothing is bound, nothing happens. A script can change the fields of a NavigationInfo node using events and routes. Changing the fields of a currently bound NavigationInfo node changes the associated parameters used by the browser.

  public void bindViewpoint(Node viewpoint);
  public void unbindViewpoint();
  public boolean isViewpointBound(Node viewpoint);

In some cases, a script may need to manipulate the user's current view of the scene. For instance, if the user enters a vehicle (such as a roller coaster or elevator), the vehicle's motion should also be applied to the viewer. bindViewpoint() provides a way to bind the viewer to a given Viewpoint node. This binding doesn't itself change the viewer location or orientation; instead, it changes the fields of the given Viewpoint node to correspond to the current viewer location and orientation. (It also places the viewer in the coordinate space of the given Viewpoint node.) Once a Viewpoint is bound, the script can animate the transformation fields of the Transform that the Viewpoint is in (probably using an interpolator to generate values) and move the viewer through the scene.

Note that scripts should animate the Viewpoint's frame of reference (the transformation of the enclosing Transform) rather than the Viewpoint itself, in order to allow the user to move the viewer a little during transit (for instance, to let the user walk around inside the elevator while it's between floors). Fighting with the user for control of the viewer is a bad idea.
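
A scene set up for this technique might look like the following sketch (all names are hypothetical); a script binds the viewer to ElevatorView and then animates ElevatorFrame:

DEF ElevatorFrame Transform {
  children [
    DEF ElevatorView Viewpoint {
      description "Elevator"
    }
  ]
}
# A script calls bindViewpoint() with ElevatorView, then sends set_translation
# events to ElevatorFrame (typically from an interpolator) to move the viewer
# smoothly between floors.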

Note also that results are undefined for vehicle travel if the user is allowed to move out of the vehicle while the animation is running. This problem is best resolved by using collision detection to prevent the user leaving the vehicle while it's in motion. Another option is to turn off the browser's user interface during animation by setting the current navigation type to "none".

When the script has finished transporting the user, unbindViewpoint() releases the viewer from the influence of the currently bound Viewpoint, returning the viewer to the coordinate space of the previous viewpoint binding (or the base coordinate system of the scene if there's no previous binding). The fields of the now-unbound Viewpoint node return to the values they had before the binding.

And of course isViewpointBound() returns TRUE if the specified Viewpoint node is currently bound to the viewer (which implies that the fields of that Viewpoint node indicate the current position and orientation of the viewer). The method returns FALSE if the specified Viewpoint is not bound.

System and Networking Libraries

Scripts that need to use system and networking calls should use the scripting language's system and networking libraries. The VRML API doesn't provide such calls.

Scripting Example

A Script node that decided whether or not to open a bank vault might receive vaultClosed and combinationEntered messages, produce openVault messages, and remember the correct combination and whether or not the vault is currently open. The VRML for this Script node might look like this:

DEF OpenVault Script {
  # Declarations of what's in this Script node:
  eventIn SFBool vaultClosed
  eventIn SFString combinationEntered
  eventOut SFBool openVault
  field SFString correctCombination "43-22-9"
  field SFBool currentlyOpen FALSE

  # Implementation of the logic:
  scriptType "javabc"
  behavior "data:java bytecodes in base64 format go here"
}

The bytecodes in the behavior field might be a compiled version of the following Java source code:

import vrml.*;

class VaultScript extends Script {

  // Declare fields
  private SFBool currentlyOpen = (SFBool) getField("currentlyOpen");
  private SFString correctCombination = (SFString) getField("correctCombination");

  // Declare eventOuts
  private SFBool openVault = (SFBool) getEventOut("openVault");

  // Handle eventIns
  public void vaultClosed(ConstSFBool value, SFTime ts) {
    currentlyOpen.setValue(false);
  }

  public void combinationEntered(ConstSFString combo, SFTime ts) {
    // Open only if the vault is closed and the combination matches:
    if (!currentlyOpen.getValue() &&
        combo.getValue().equals(correctCombination.getValue())) {
      currentlyOpen.setValue(true);
      openVault.setValue(true);
    }
  }
}


Node Reference

February 2, 1996

This section provides a detailed description of each node in VRML 2.0. It is organized by functional group. Nodes within each group are listed alphabetically. (An alphabetical Index of Nodes and Fields is also available.)

Intrinsic Node Types

Intrinsic nodes are nodes whose functionality cannot be duplicated by any combination of other nodes; they form the core functionality of VRML. The functional groups used in this section are as follows:

Grouping Nodes

Leaf Nodes

Viewpoints
Lights and Lighting
Sounds
Shapes

Subsidiary Nodes

Geometry
Geometric Properties
Appearance
Appearance Properties
Geometric Sensors

Special Nodes

Other Required Node Types

These nodes provide common functionality that all VRML implementations are required to support, but that can be created using one or more of the intrinsic nodes. A reference PROTO implementation is given for these nodes. (Note: we didn't have time before the VRML 2.0 RFP to do all implementations; for several nodes we just sketch out what the PROTO would look like.)

Other Grouping Nodes

Other Leaf Nodes: Sound

Other Subsidiary Nodes: Geometry

Other Special Nodes

Interpolators

The last item in each node description is the public interface for the node, with default values. (The syntax for the public interface is the same as that for prototypes.) For example:

DirectionalLight {
  exposedField SFBool  on         TRUE 
  exposedField SFFloat intensity  1 
  exposedField SFFloat ambientIntensity 0
  exposedField SFColor color      1 1 1
  exposedField SFVec3f direction  0 0 -1
  }

Fields that have associated implicit set_ and _changed events are labeled exposedField. For example, the on field has a set_on input event and an on_changed output event. Exposed fields may be connected using ROUTE statements, and may be read and/or written by Script nodes.
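
For example, since on is an exposedField of DirectionalLight, the OpenVault script from the "Scripting Example" section above could switch a light on through the implicit set_on event (the light's name here is hypothetical):

DEF VaultLight DirectionalLight { on FALSE }

ROUTE OpenVault.openVault TO VaultLight.set_on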

Note that this information is arranged in a slightly different manner in the file format for each node. The keywords "field" or "exposedField" and the types of the fields are not specified when instantiating a node in the file format. For example, the file format for the above example is:

DirectionalLight {
  on        TRUE
  intensity 1
  ambientIntensity 0
  color     1 1 1
  direction 0 0 -1
  }

Grouping Nodes

Grouping nodes can contain other grouping nodes or leaf nodes as children. Grouping nodes include the Collision and Transform nodes.

The children of a grouping node are specified using an MFNode field.


Collision

The Collision grouping node specifies to a browser what objects in the scene should not be navigated through. It is useful to keep viewers from walking through walls in a building, for instance. Collision response is browser-defined. For example, when the user comes sufficiently close to an object to register as a collision, the browser may have the user bounce off the object or simply come to a stop.

The children of a Collision node are always drawn, just as the children of a simple Group are drawn. These children are the objects that are checked for collision. If desired, a proxy object can be supplied, and this proxy object will be checked for collision in place of the actual child objects (see description of the proxy field, below).

By default, collision detection is ON. The collide field in this node allows collision detection to be turned off, in which case the children of the Collision node will not be checked for collision, even though they will be drawn.

Since collision with arbitrarily complex geometry is computationally expensive, one method of increasing efficiency is to be able to define an alternate geometry that could serve as a proxy for colliding against. This collision proxy, contained in the proxy field, could be as crude as a simple bounding box or bounding sphere, or could be more sophisticated (for example, the convex hull of a polyhedron).

If the value of the collide field is FALSE, then no collision is performed with the affected geometry. If the value of the collide field is TRUE, then the proxy field defines the geometry against which collision testing is done. If the proxy value is NULL, the children of the collision node are collided against. If the proxy value is not NULL, then it contains the geometry that is used in collision computations.

If children is empty, collide is TRUE, and a proxy is specified, then collision detection is done against the proxy but nothing is displayed--this is a way of colliding against "invisible" geometry.

The collision eventOut will generate an event containing the time when the path of the user through the scene intersected a geometry in this collision node against which collisions are being checked. An ideal implementation would compute the exact moment of intersection, but implementations may approximate the ideal by sampling the positions of geometries and the viewer.

Collision {
  exposedField SFBool collide  TRUE
  field        SFNode proxy    NULL 
  exposedField MFNode children []
  eventOut     SFTime collision
}
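
For example, a minimal sketch that collides against a simple sphere in place of the drawn geometry (assuming a bare geometry node is acceptable as the proxy value):

Collision {
  collide TRUE
  proxy Sphere { radius 3 }            # crude stand-in used for collision tests
  children [
    Shape {
      geometry Sphere { radius 2.8 }   # geometry that is actually drawn
    }
  ]
}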

Transform

A Transform is a grouping node that defines a coordinate system for its children that is relative to the coordinate systems of its parents. A Transform's children can include any grouping or leaf nodes: lights, viewpoints, sounds, shapes, and browser information nodes. See also "Coordinate Systems and Transformations."

The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside this Transform. These are hints to the browser that it may use to optimize certain operations such as determining whether or not the Transform needs to be drawn. If the specified bounding box is smaller than the true bounding box of the Transform, results are undefined. The bounding box should be large enough to completely contain the effects of all sounds, lights and fog nodes that are children of this Transform. If the size of this Transform may change over time because its children are animating (moving), then the bounding box must also be large enough to contain all possible animations (movements).
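
For example, a minimal sketch of a conservative bounding-box hint:

Transform {
  bboxCenter 0 0 0
  bboxSize   3 3 3     # children are guaranteed to stay inside this 3x3x3-meter box
  children [
    Shape { geometry Sphere { radius 1 } }
  ]
}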

The add_children event adds the nodes passed in to the Transform's children field. Any nodes passed in the add_children event that are already in the Transform's children list are simply ignored. The remove_children event removes the nodes passed in from the Transform's children field. Any nodes passed in the remove_children event that are not in the Transform's children list are simply ignored.

The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order) a (possibly) non-uniform scale about an arbitrary point, a rotation about an arbitrary point and axis, and a translation. The Transform node:

Transform {
    translation T1
    rotation R1
    scale S
    scaleOrientation R2
    center T2

    ...
}

is equivalent to the nested sequence of:

Transform { translation T1 
 Transform { translation T2 
  Transform { rotation R1 
   Transform { rotation R2 
     Transform { scale S 
     Transform { rotation -R2 
      Transform { translation -T2
              ... 
}}}}}}}


Transform {
  field        SFVec3f     bboxCenter       0 0 0
  field        SFVec3f     bboxSize         0 0 0
  exposedField SFVec3f     translation      0 0 0
  exposedField SFRotation  rotation         0 0 1  0
  exposedField SFVec3f     scale            1 1 1
  exposedField SFRotation  scaleOrientation 0 0 1  0
  exposedField SFVec3f     center           0 0 0
  exposedField MFNode      children         [ ]
  eventIn      MFNode      add_children
  eventIn      MFNode      remove_children
}


Leaf Nodes

This section describes the leaf nodes in detail and is organized into the following subsections:

Viewpoints


This functional group includes the Viewpoint node.


Viewpoint

The Viewpoint node defines an interesting location in a local coordinate system from which the user might wish to view the scene. Viewpoints may be animated, and Script nodes may "bind" the user to a particular viewpoint using Script API calls to the browser. A world creator can automatically move the user's view through the world by binding the user to a viewpoint and then animating that viewpoint.

The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation; the default orientation has the user looking down the -Z axis with +X to the right and +Y straight up. Note that the single orientation rotation (which is a rotation about an arbitrary axis) is sufficient to completely specify any combination of view direction and "up" vector.

The fieldOfView field specifies a preferred field of view from this viewpoint, in radians. A smaller field of view corresponds to a telephoto lens on a camera; a larger field of view corresponds to a wide-angle lens on a camera. The field of view should be greater than zero and smaller than PI; the default value corresponds to a 45 degree field of view. fieldOfView is a hint to the browser and may be ignored.

A viewpoint can be placed in a VRML world to specify the initial location of the viewer when that world is entered. Browsers should recognize the URL syntax "..../scene.wrl#ViewpointName" as specifying that the user's initial view when entering the "scene.wrl" world should be the first viewpoint in file "scene.wrl" that appears as "DEF ViewpointName Viewpoint { ... }".
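
For example, if the file scene.wrl contains the following sketch (the name and field values are hypothetical), a browser following the URL "scene.wrl#FrontDoor" should start the user at this viewpoint:

DEF FrontDoor Viewpoint {
  position    0 1.6 10
  description "Front door"
}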

The description field of the viewpoint may be used by browsers that provide a way for users to travel between viewpoints. The description should be kept brief, since browsers will typically display lists of viewpoints as entries in a pull-down menu, etc.

Viewpoint {
  exposedField SFVec3f    position       0 0 0
  exposedField SFRotation orientation    0 0 1  0
  exposedField SFFloat    fieldOfView    0.785398
  field        SFString   description    ""
}

Lights and Lighting

This grouping includes nodes that light the scene (DirectionalLight, PointLight, and SpotLight) as well as nodes that affect the lighting within the scene, such as the Fog node.

Lighting is additive, so objects are illuminated by the sum of all of the direct and ambient illumination impinging upon them. Ambient illumination results from scattering and reflection of direct illumination, so logically ambient light is tied to lights in the scene, with each having an ambientIntensity. The contribution of a light to the overall ambient lighting is computed as ambientLight[i] = on ? (intensity * ambientIntensity * color[i]) : 0, i=0,1,2. This allows the light's overall brightness, both direct and ambient, to be controlled by changing the intensity. Renderers that do not support per-light ambient illumination may simply use this information to set the ambient lighting parameters when the world is loaded.


DirectionalLight

The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector.

A directional light source illuminates only the objects in its enclosing grouping node. The light illuminates everything within this coordinate system, including the objects that precede it in the scene graph--for example:

Transform {
  children [
    Shape { ... },
    DirectionalLight { .... } # lights the preceding shape
  ]
}

Some low-end renderers do not support the concept of per-object lighting. This means that placing DirectionalLights inside local coordinate systems, which implies lighting only the objects beneath the Transform with that light, is not supported in all systems. For the broadest compatibility, lights should be placed at outermost scope.

DirectionalLight {
  exposedField SFBool  on                TRUE 
  exposedField SFFloat intensity         1 
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1
  exposedField SFVec3f direction         0 0 -1
}

Fog

The Fog node defines an axis-aligned ellipsoid of dense, colored atmosphere. The size field defines the size of this foggy region in the local coordinate system. The maxVisibility field specifies the distance at which an object is completely obscured by the fog. This distance is specified in the local coordinate system (by default, in meters). The color field may be used to simulate different kinds of atmospheric effects by changing the fog's color.

An ideal implementation of fog would compute exactly how much attenuation occurs between the viewer and every object in the world and render the scene appropriately. However, implementations are free to approximate this ideal behavior, perhaps by computing the intersection of the viewing direction vector with any foggy regions and computing overall fogging parameters each time the scene is rendered.

Fog {
  exposedField SFVec3f size           0 0 0
  exposedField SFFloat maxVisibility  1
  exposedField SFColor color          1 1 1
}

PointLight

The PointLight node defines a point light source at a fixed 3D location. A point source illuminates equally in all directions; that is, it is omni-directional.

A PointLight illuminates everything within radius of its location. A PointLight's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance of the light to the surface being illuminated. The default is no attenuation. Renderers that do not support a full attenuation model may approximate as necessary.

PointLight {
  exposedField SFBool  on                TRUE 
  exposedField SFFloat intensity         1  
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1 
  exposedField SFVec3f location          0 0 0
  exposedField SFFloat radius            1 
  exposedField SFVec3f attenuation       1 0 0
}

SpotLight

The SpotLight node defines a light source that is placed at a fixed location in 3-space and illuminates in a cone along a particular direction.

The cone of light extends a maximum distance of radius from its location. The light's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance of the light to the surface being illuminated. The default is no attenuation. Renderers that do not support a full attenuation model may approximate as necessary.

The intensity of the illumination may drop off as the ray of light diverges from the light's direction toward the edges of the cone. The angular distribution of light is controlled by the cutOffAngle, beyond which the illumination is zero, and the beamWidth, the angle at which the beam starts to fall off. Renderers that support a two cone model with linear fall off from full intensity at the inner cone to zero at the cutoff cone should use beamWidth for the inner cone angle. Renderers that attenuate using a cosine raised to a power should use an exponent of exponent = 0.5*log(0.5)/log(cos(beamWidth)). (For example, beamWidth = PI/4 gives an exponent of approximately 1.) When beamWidth >= PI/2 the illumination is uniform up to the cutoff angle, which is the default.

SpotLight {
  exposedField SFBool  on                TRUE  
  exposedField SFFloat intensity         1  
  exposedField SFFloat ambientIntensity  0 
  exposedField SFColor color             1 1 1 
  exposedField SFVec3f location          0 0 0  
  exposedField SFVec3f direction         0 0 -1
  exposedField SFFloat beamWidth         1.570796
  exposedField SFFloat cutOffAngle       0.785398 
  exposedField SFFloat radius            1 
  exposedField SFVec3f attenuation       1 0 0
}

Sounds

The Sound functional grouping includes the DirectedSound node.

ISSUE: What sound file formats should be required?


DirectedSound

The DirectedSound node describes a sound that emits primarily in the direction defined by the direction vector. Where minRange and maxRange determine the extent of a PointSound, the extent of a DirectedSound is determined by four fields: minFront, minBack, maxFront, and maxBack.

Around the location of the emitter, minFront and minBack determine the extent of the ambient region in front of and behind the sound. If the location of the sound is taken as a focus of an ellipse, and the minBack and minFront values (in combination with the direction vector) as determining the two vertices, these three points describe an ellipse bounding the ambient region of the sound. Similarly, maxFront and maxBack determine the limits of audibility in front of and behind the sound; they describe a second, outer ellipse.

The inner ellipse is analogous to the sphere determined by the minRange field in the PointSound definition: within this ellipse, the sound is non-directional, with constant and maximal intensity. The outer ellipse is analogous to the sphere determined by the maxRange field in the PointSound definition and represents the limits of audibility of the sound. Between the two ellipses, the intensity drops off proportionally with distance and the sound is localized in space.

One advantage of this model is that a DirectedSound behaves as expected when approached from any angle; the intensity increases smoothly even if the emitter is approached from the back.

See the PointSound node for a description of all other fields.

DirectedSound {
  field        MFString name          [ ]
  field        SFString description   ""
  exposedField SFFloat  intensity     1 
  exposedField SFVec3f  location      0 0 0
  exposedField SFVec3f  direction     0 0 1 
  exposedField SFFloat  minFront      10
  exposedField SFFloat  maxFront      10
  exposedField SFFloat  minBack       10 
  exposedField SFFloat  maxBack       10
  exposedField SFBool   loop          FALSE 
  exposedField SFTime   start         0 
  exposedField SFTime   pause         0 
}

Shapes

This functional group includes only one node, the Shape node.


Shape

A Shape node has two fields: appearance and geometry. These fields, in turn, contain other nodes. The appearance field contains an Appearance node that has material, texture, and textureTransform fields (see the Appearance node). The geometry field contains a geometry node. See Subsidiary Nodes.

Shape {
  field SFNode appearance NULL
  field SFNode geometry NULL
}

Subsidiary Nodes

The following groups of nodes are used only in fields within other nodes. They cannot stand alone in the scene graph.

Geometry

A Shape node contains one geometry node in its geometry field. This node can be an IndexedFaceSet, IndexedLineSet, PointSet, or Text node. A geometry node can appear only in the geometry field of a Shape node. Geometry nodes usually contain Coordinate3, Normal, and TextureCoordinate2 nodes in specified SFNode fields. All geometry nodes are specified in a local coordinate system determined by the parent node(s) of the geometry.

Application of material, texture, and colors:
The final rendered look of a piece of geometry depends on the Material and Texture in the associated Appearance node, along with any Color node specified with the geometry (such as per-vertex colors for an IndexedFaceSet node).

Shape Hints Fields:
The ElevationGrid, GeneralCylinder, and IndexedFaceSet nodes all have three SFBool fields that provide hints about the shape--whether it contains ordered vertices, whether the shape is solid, and whether it contains convex faces. These fields are ccw, solid, and convex.

The ccw field indicates whether the vertices are ordered in a counter-clockwise direction when the shape is viewed from the outside (TRUE). If the order is clockwise or unknown, this field value is FALSE. The solid field indicates whether the shape encloses a volume (TRUE). If nothing is known about the shape, this field value is FALSE. The convex field indicates whether all faces in the shape are convex (TRUE). If nothing is known about the faces, this field value is FALSE.

These hints allow VRML implementations to optimize certain rendering features. Optimizations that may be performed include enabling backface culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, an implementation may turn on backface culling and turn off two-sided lighting. If the object is not solid but has ordered vertices, it may turn off backface culling and turn on two-sided lighting.

Crease Angle Field:
The creaseAngle field, used by the ElevationGrid, GeneralCylinder, and IndexedFaceSet nodes, affects how default normals are generated. For example, when an IndexedFaceSet has to generate default normals, it uses the creaseAngle field to determine which edges should be smoothly shaded and which ones should have a sharp crease. The crease angle is the angle between surface normals on adjacent polygons. For example, a crease angle of .5 radians (the default value) means that an edge between two adjacent polygonal faces will be smooth shaded if the normals to the two faces form an angle that is less than .5 radians (about 30 degrees). Otherwise, it will be faceted.

IndexedFaceSet

The IndexedFaceSet node represents a 3D shape formed by constructing faces (polygons) from vertices listed in the coord field. The coord field must contain a Coordinate3 node. IndexedFaceSet uses the indices in its coordIndex field to specify the polygonal faces. An index of -1 indicates that the current face has ended and the next one begins. Because indices begin at zero, the Coordinate3 node must contain at least one more vertex coordinate than the greatest index in the coordIndex field.

For descriptions of the coord, normal, and texCoord fields, see the Coordinate3, Normal, and TextureCoordinate2 nodes.

If the color field is not NULL, then it must contain a Color node, whose colors are applied to the vertices or faces of the IndexedFaceSet.

If the normal field is NULL, then the browser should automatically generate normals, using creaseAngle to determine if and how normals are smoothed across shared vertices.

If the normal field is not NULL, then it must contain a Normal node, whose normals are applied to the vertices or faces of the IndexedFaceSet in a manner exactly equivalent to that described above for applying colors to vertices/faces.

If the texCoord field is not NULL, then it must contain a TextureCoordinate2 node. The texture coordinates in that node are applied to the vertices of the IndexedFaceSet as follows:

If the texCoord field is NULL, a default texture coordinate mapping is calculated using the bounding box of the shape. The longest dimension of the bounding box defines the S coordinates, and the next longest defines the T coordinates. If two or all three dimensions of the bounding box are equal, then ties should be broken by choosing the X, Y, or Z dimension in that order of preference. The value of the S coordinate ranges from 0 to 1, from one end of the bounding box to the other. The T coordinate ranges between 0 and the ratio of the second greatest dimension of the bounding box to the greatest dimension.

See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.

IndexedFaceSet {
  exposedField  SFNode  coord             NULL
  exposedField  SFNode  color             NULL
  exposedField  SFNode  normal            NULL
  exposedField  SFNode  texCoord          NULL
  field         MFInt32 coordIndex        [ ]
  field         MFInt32 colorIndex        [ ]
  field         SFBool  colorPerFace      FALSE
  field         MFInt32 normalIndex       [ ]
  field         SFBool  normalPerFace     FALSE
  field         MFInt32 textureCoordIndex [ ]
  field         SFBool  ccw               TRUE
  field         SFBool  solid             TRUE
  field         SFBool  convex            TRUE
  field         SFFloat creaseAngle       0.5
}
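
For example, here is a minimal sketch of a single square face in the XY plane (the Shape and Appearance nodes are described later in this section):

Shape {
  appearance Appearance { material Material { } }
  geometry IndexedFaceSet {
    coord Coordinate3 {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
    }
    coordIndex [ 0, 1, 2, 3, -1 ]   # one counter-clockwise face
    solid FALSE                     # not a closed volume; draw both sides
  }
}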

IndexedLineSet

This node represents a 3D shape formed by constructing polylines from vertices listed in the coord field. IndexedLineSet uses the indices in its coordIndex field to specify the polylines. An index of -1 indicates that the current polyline has ended and the next one begins.

For descriptions of the coord field, see the Coordinate3 node.

Lines are not texture-mapped or affected by light sources.

If the color field is not NULL, it must contain a Color node, and the colors are applied to the line(s) as follows:

IndexedLineSet {
  exposedField  SFNode  coord             NULL
  exposedField  SFNode  color             NULL
  field         MFInt32 coordIndex        [ ]
  field         MFInt32 colorIndex        [ ]
  field         SFBool  colorPerLine      FALSE
}
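
For example, this sketch draws the outline of a square as two polylines:

Shape {
  geometry IndexedLineSet {
    coord Coordinate3 {
      point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
    }
    coordIndex [ 0, 1, 2, -1,    # first polyline
                 2, 3, 0, -1 ]   # second polyline
  }
}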

PointSet

The PointSet node represents a set of points listed in the coord field. PointSet uses the coordinates in order. The number of points in the set is specified by the numPoints field.

Points are not texture-mapped or affected by light sources.

If the color field is not NULL, it must contain a Color node that contains at least numPoints colors. Colors are always applied to each point in order.

PointSet {
  exposedField  SFNode  coord      NULL
  field         SFInt32 numPoints  0
  field         SFNode  color      NULL
}
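
For example, a sketch of three colored points:

Shape {
  geometry PointSet {
    coord     Coordinate3 { point [ 0 0 0,  1 1 1,  2 0 1 ] }
    numPoints 3
    color     Color { rgb [ 1 0 0,  0 1 0,  0 0 1 ] }   # one color per point
  }
}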

Text

The Text node represents one or more text strings specified using the UTF-8 encoding of the ISO10646 character set (UTF-8 encoding is described below). Note that ASCII is a subset of UTF-8, so all ASCII strings are also UTF-8.

The text strings are contained in the string field. The fontStyle field contains one FontStyle node that specifies the font size, font family and style, direction of the text strings, and any specific language rendering techniques that must be used for non-English text.

The justify field determines where the text is positioned in relation to the origin (0,0,0) of the object coordinate system. The values for the justify field are "BEGIN", "MIDDLE", and "END". For a left-to-right direction, "BEGIN" would specify left-justified text, "END" would specify right-justified text, and "MIDDLE" would specify centered text. See the FontStyle node for details of text placement.

The spacing field determines the spacing between multiple text strings. The size field of the FontStyle node specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either X or Y by -(size * spacing). A value of 0 for spacing causes all strings to be rendered at the same position. A value of -1 causes subsequent strings to advance in the opposite direction.

The maxExtent field limits and scales the text string if the natural length of the string is longer than the maximum extent. If the text string is shorter than the maximum extent, it is not scaled. The maximum extent is measured horizontally for horizontal text (FontStyle node: horizontal=TRUE) and vertically for vertical text (FontStyle node: horizontal=FALSE).

The width field contains an MFFloat value that specifies the width of each text string. If the string is too short, it is stretched (either by scaling the text itself or by adding space between the characters). If the string is too long, it is compressed. If a width value is missing--for example, if there are four strings but only three width values--the missing values are considered to be 0.

For both the maxExtent and width fields, a value of 0 indicates that the string may be any width.

Textures are applied to 3D text as follows. The texture origin is at the origin of the first string, as determined by the justification. The texture is scaled equally in both S and T dimensions, with the font height representing 1 unit. S increases to the right, T increases up.

UTF-8 Character Encodings 
The 2 byte (UCS-2) encoding of ISO 10646 is identical to the Unicode standard.
In order to allow standard ASCII text editors to continue to work with most VRML files, we have chosen to support the UTF-8 encoding of ISO 10646. This encoding allows ASCII text (0x0..0x7F) to appear without any changes and encodes all characters from 0x80..0x7FFFFFFF into a series of six or fewer bytes.
If the most significant bit of the first character is 0, then the remaining seven bits are interpreted as an ASCII character. Otherwise, the number of leading 1 bits will indicate the number of bytes following. There is always a 0 bit between the count bits and any data.
The first byte takes one of the following forms, where each X indicates a bit available to encode the character:
  0XXXXXXX  only one byte        0..0x7F (ASCII)
  110XXXXX  two bytes            Maximum character value is 0x7FF 
  1110XXXX  three bytes          Maximum character value is 0xFFFF
  11110XXX  four bytes           Maximum character value is 0x1FFFFF
  111110XX  five bytes           Maximum character value is 0x3FFFFFF
  1111110X  six bytes            Maximum character value is 0x7FFFFFFF
All following bytes have this format: 10XXXXXX
A two-byte example: the symbol for a registered trademark is "circled R registered sign" or 174 in both ISO/Latin-1 (8859/1) and ISO 10646. In hexadecimal, it is 0xAE. In HTML, it is written &#174;. In UTF-8 it has the following two-byte encoding: 0xC2, 0xAE.
Text {
  exposedField  MFString  string     [ ]
  field         SFNode    fontStyle  NULL
  field         SFString  justify    "BEGIN" # "BEGIN","MIDDLE", "END"
  field         SFFloat   spacing    1.0
  exposedField  SFFloat   maxExtent  0.0
  field         MFFloat   width      [ ]
}
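
For example, a sketch of two centered lines of text with extra line spacing:

Shape {
  geometry Text {
    string  [ "Moving", "Worlds" ]
    justify "MIDDLE"    # center each string on the origin
    spacing 1.5         # 1.5 times the font size between baselines
  }
}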

Geometric Properties

Geometric properties are always contained in the corresponding SFNode fields of geometry nodes such as the IndexedFaceSet, IndexedLineSet, and PointSet nodes.


Color

This node defines a set of RGB colors to be used in the color fields of an IndexedFaceSet, IndexedLineSet, or PointSet node.

Color nodes are only used to specify multiple colors for a single piece of geometry, such as a different color for each face or vertex of an IndexedFaceSet. A Material node is used to specify the overall material parameters of a geometry. If both a Material and a Color node are specified for a geometry, the colors should ideally replace the diffuse component of the material.

Textures take precedence over colors; specifying both a Texture and a Color node for a geometry will result in the Color node being ignored.

Note that some browsers may not support per-face or per-vertex colors, in which case an average color should be computed and used instead.

Color {
  exposedField MFColor rgb  []
}

Coordinate3

This node defines a set of 3D coordinates to be used in the coord field of an IndexedFaceSet, IndexedLineSet, or PointSet node.

Coordinate3 {
  exposedField MFVec3f point  []
}

Normal

This node defines a set of 3D surface normal vectors to be used in the normal field of vertex-based shape nodes (IndexedFaceSet, IndexedLineSet, PointSet, ElevationGrid). This node contains one multiple-valued field that contains the normal vectors.

To save network bandwidth, it is expected that implementations will be able to automatically generate appropriate normals if none are given. However, the results will vary from implementation to implementation.

Normal {
  exposedField MFVec3f vector []
}

TextureCoordinate2

This node defines a set of 2D coordinates to be used in the texCoord field to map textures to the vertices of PointSet, IndexedLineSet, IndexedFaceSet, and ElevationGrid objects.

Texture coordinates range from 0 to 1 across the texture. The horizontal coordinate, S, is specified first, followed by the vertical coordinate, T.

TextureCoordinate2 {
  exposedField MFVec2f point []
}

Appearance

The Appearance node occurs only within the appearance field of a Shape node. The value for any of the fields in this node can be NULL. However, if the field contains anything, it must contain one specific type of node. Specifically, the material field, if specified, must contain a Material node. The texture field, if specified, must contain a Texture2 node. The textureTransform field, if specified, must contain a Texture2Transform node.

Appearance {
  exposedField SFNode material          Material {}
  exposedField SFNode texture           NULL
  exposedField SFNode textureTransform  NULL
}
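
For example, a sketch combining all three appearance properties (the texture file name is hypothetical; Texture2 and Texture2Transform are described below):

Shape {
  appearance Appearance {
    material         Material { diffuseColor 0.8 0.2 0.2 }
    texture          Texture2 { filename "brick.png" }
    textureTransform Texture2Transform { scaleFactor 2 2 }
  }
  geometry IndexedFaceSet { ... }
}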

Appearance Properties

The Material, Texture2, and Texture2Transform appearance property nodes are always contained within fields of an Appearance node. The FontStyle node is always contained in the fontStyle field of a Text node.


FontStyle

The FontStyle node, which may only appear in the fontStyle field of a Text node, defines the size, font family, and style of the text font, as well as the direction of the text strings and any specific language rendering techniques that must be used for non-English text.

The size field specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either X or Y by -(size*spacing). (See the Text node for a description of the spacing field.)

Font Family and Style: Font attributes are defined with the family and style fields. It is up to the browser to assign specific fonts to the various attribute combinations.

The family field contains an SFString value that can be "SERIF" (the default) for a serif font such as Times Roman; "SANS" for a sans-serif font such as Helvetica; or "TYPEWRITER" for a fixed-pitch font such as Courier.

The style field contains an SFString value that can be an empty string (the default); "BOLD" for boldface type; "ITALIC" for italic type; or "BOLD ITALIC" for bold and italic type.

Direction: The horizontal, leftToRight, and topToBottom fields indicate the direction of the text. The horizontal field indicates whether the text is horizontal (specified as TRUE, the default) or vertical (FALSE). The leftToRight field indicates whether the text progresses from left to right (specified as TRUE, the default) or from right to left (FALSE). The topToBottom field indicates whether the text progresses from top to bottom (specified as TRUE, the default), or from bottom to top (FALSE).

The justify field of the Text node determines where the text is positioned in relation to the origin (0,0,0) of the local coordinate system. The values for the justify field are "BEGIN", "MIDDLE", and "END". For a left-to-right direction (leftToRight = TRUE), "BEGIN" would specify left-justified text, "MIDDLE" would specify centered text, and "END" would specify right-justified text.

For horizontal text (horizontal = TRUE), the first line of text is positioned with its baseline (bottom of capital letters) at Y = 0. The text is positioned on the positive side of the X origin when leftToRight is TRUE and justify is "BEGIN"; the same positioning is used when leftToRight is FALSE and justify is "END". The text is on the negative side of the X origin when leftToRight is TRUE and justify is "END" (and when leftToRight is FALSE and justify is "BEGIN"). For justify = "MIDDLE" and horizontal = TRUE, each string will be centered at X = 0.

For vertical text (horizontal is FALSE), the first line of text is positioned with the left side of the glyphs along the Y axis. When topToBottom is TRUE and justify is "BEGIN" (or when topToBottom is FALSE and justify is "END"), the text is positioned with the top left corner at the origin. When topToBottom is TRUE and justify is "END" (or when topToBottom is FALSE and justify is "BEGIN"), the bottom left is at the origin. For justify = "MIDDLE" and horizontal = FALSE, the text is centered vertically at Y = 0.

In the following tables, each small cross indicates where the X and Y axes should be in relation to the text.

horizontal = TRUE: [Horizontal Text Table]

horizontal = FALSE: [Vertical Text Table]

Text Language: There are many languages in which the proper rendering of the text requires more than just a sequence of glyphs. The language field allows the author to specify which, if any, language specific rendering techniques to use. For simple languages, such as English, this field may be safely ignored.

The tag used to specify languages will follow RFC1766, "Tags for the Identification of Languages." This RFC specifies that a language tag may simply be a two-letter ISO 639 tag, for example "en" for English, "ja" for Japanese, or "sv" for Swedish. This may be optionally followed by a hyphen and a two-letter country code from ISO 3166. American English, for instance, could be specified as "en-US".

FontStyle {
  field SFFloat  size        1.0
  field SFString family      "SERIF" # "SERIF", "SANS", "TYPEWRITER"
  field SFString style       ""      # "BOLD", "ITALIC", "BOLD ITALIC"
  field SFBool   horizontal  TRUE
  field SFBool   leftToRight TRUE
  field SFBool   topToBottom TRUE
  field SFString language    ""
}
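
For example, a sketch of bold sans-serif text rendered vertically:

Shape {
  geometry Text {
    string    [ "VERTICAL" ]
    fontStyle FontStyle {
      family     "SANS"
      style      "BOLD"
      horizontal FALSE    # render top-to-bottom
    }
  }
}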

Material

The Material node defines surface material properties for associated geometry nodes.

The fields in the Material node determine the way light reflects off an object to create color:

The lighting parameters defined by the Material node are the same parameters defined by the OpenGL lighting model. For a rigorous mathematical description of how these parameters should be used to determine how surfaces are lit, see the description of lighting operations in the OpenGL Specification. Note also that OpenGL specifies the specular exponent as a non-normalized value in the range 0-128, whereas VRML specifies it as a normalized 0-1 value; multiply the VRML value by 128 to obtain the OpenGL parameter.

For rendering systems that do not support the full OpenGL lighting model, the following simpler lighting model is recommended:

A transparency value of 0 is completely opaque, a value of 1 is completely transparent. Browsers need not support partial transparency, but should support at least fully transparent and fully opaque surfaces, treating transparency values >= 0.5 as fully transparent.

Issues for Low-End Rendering Systems. Many low-end PC rendering systems are not able to support the full range of the VRML material specification. For example, many systems do not render individual red, green and blue reflected values as specified in the specularColor field. The following table describes which Material fields are typically supported in popular low-end systems and suggests actions for browser implementors to take when a field is not supported.

Field           Supported?      Suggested Action

ambientIntensity No             Ignore
diffuseColor     Yes            Use
specularColor    No             Ignore
emissiveColor    No             Use in place of diffuseColor if != 0 0 0
shininess        Yes            Use
transparency     No             Ignore

Rendering systems which do not support specular color may nevertheless support a specular intensity. This should be derived by taking the dot product of the specified RGB specular value with the vector [.32 .57 .11]. This adjusts the color value to compensate for the variable sensitivity of the eye to colors.

Likewise, if a system supports ambient intensity but not color, the same thing should be done with the ambient color values to generate the ambient intensity. If a rendering system does not support per-object ambient values, it should set the ambient value for the entire scene at the average ambient value of all objects.

It is also expected that simpler rendering systems may be unable to support both diffuse and emissive objects in the same world.

Material {
  exposedField SFColor diffuseColor      0.8 0.8 0.8
  exposedField SFFloat ambientIntensity  0.2
  exposedField SFColor specularColor     0 0 0
  exposedField SFColor emissiveColor     0 0 0
  exposedField SFFloat shininess         0.2
  exposedField SFFloat transparency      0
}
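
For example, a sketch of a shiny, partially transparent red material (low-end systems may render it fully opaque, as noted above):

Material {
  diffuseColor  0.8 0.1 0.1
  specularColor 0.7 0.7 0.7
  shininess     0.4           # multiply by 128 for the OpenGL exponent
  transparency  0.25
}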

Texture2

The Texture2 node defines a texture map and parameters for that map.

The texture can be read from the URL specified by the filename field. To turn off texturing, set the filename field to an empty string (""). Implementations should support the JPEG and PNG image file formats. Support for the GIF format and for MPEG is also recommended. If MPEG is supported, the fraction field specifies which frame of the sequence should be used as the texture. A fraction of 0 indicates that the first frame is displayed, and a fraction of 1 indicates that the last frame is displayed. Connecting this field to the fraction eventOut of a TimeSensor allows the MPEG sequence to be played as an animated texture.

If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL while the higher-preference file is not available. See the section on URLs and URNs.

Textures can also be specified inline by setting the image field to contain the texture data. Supplying both image and filename fields will result in undefined behavior.

Texture images may be one-component (greyscale), two-component (greyscale plus transparency), three-component (full RGB color), or four-component (full RGB color plus transparency). An ideal VRML implementation will use the texture image to modify the diffuse color and transparency of an object's material (specified in a Material node), then perform any lighting calculations using the rest of the object's material properties with the modified diffuse color to produce the final image. The texture image modifies the diffuse color and transparency depending on how many components are in the image, as follows:

  1. Diffuse color is multiplied by the greyscale values in the texture image.
  2. Diffuse color is multiplied by the greyscale values in the texture image; material transparency is multiplied by transparency values in texture image.
  3. RGB colors in the texture image replace the material's diffuse color.
  4. RGB colors in the texture image replace the material's diffuse color; transparency values in the texture image replace the material's transparency.

Browsers may approximate this ideal behavior to increase performance. One common optimization is to calculate lighting only at each vertex and combine the texture image with the color computed from lighting (performing the texturing after lighting). Another common optimization is to perform no lighting calculations at all when texturing is enabled, displaying only the colors of the texture image.

The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the 0-to-1 texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the 0-to-1 range. The repeatT field is analogous to the repeatS field.

Texture2 {
  exposedField MFString filename   [ ]
  exposedField SFImage  image      0 0 0
  exposedField SFFloat  fraction   0
  field        SFBool   repeatS    TRUE
  field        SFBool   repeatT    TRUE
}
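
For example, a sketch of a texture specified with a preferred URN and a fallback URL (both hypothetical), clamped in T:

Texture2 {
  filename [ "urn:inet:foo.com:textures/wood.png",
             "http://foo.com/textures/wood.png" ]
  repeatS  TRUE     # tile horizontally
  repeatT  FALSE    # clamp vertically to the 0-to-1 range
}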

Texture2Transform

The Texture2Transform node defines a 2D transformation that is applied to texture coordinates. This node is used only in the textureTransform field of the Appearance node and affects the way textures are applied to the surfaces of the associated Geometry node. The transformation consists of (in order) a nonuniform scale about an arbitrary center point, a rotation about that same point, and a translation. This allows a user to change the size and position of the textures on shapes.

Texture2Transform {
  field SFVec2f translation 0 0
  field SFFloat rotation    0
  field SFVec2f scaleFactor 1 1
  field SFVec2f center      0 0
}
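
For example, a sketch that tiles a texture four times in each direction and rotates it about the middle of texture space:

Texture2Transform {
  scaleFactor 4 4       # scale texture coordinates: 4 x 4 tiling
  rotation    0.785     # about 45 degrees
  center      0.5 0.5   # scale and rotate about the texture's middle
}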

Geometric Sensors

Geometric sensor nodes are children of a Transform node. They generate events with respect to the Transform's coordinate system and children.


BoxProximitySensor

Proximity sensors are nodes that generate events when the viewpoint enters, exits, and moves inside a space. A proximity sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.

A BoxProximitySensor generates isActive TRUE/FALSE events as the viewer enters/exits the region defined by its center and size fields. Ideally, implementations will interpolate viewpoint positions and timestamp the isActive events with the exact time the viewpoint first intersected the volume.

A BoxProximitySensor with a (0 0 0) size field (the default) will sense the region defined by the objects in its coordinate system. The axis-aligned bounding box of the Transform containing the BoxProximitySensor should be computed and used instead of the center and size fields in this case.

position and orientation events giving the position and orientation of the viewer in the BoxProximitySensor's coordinate system are generated when either the user or the coordinate system of the sensor moves and the viewer is inside the region being sensed.

Multiple BoxProximitySensors will generate events at the same time if the regions they are sensing overlap. Unlike ClickSensors, there is no notion of a BoxProximitySensor lower in the scene graph "grabbing" events.

A BoxProximitySensor that surrounds the entire world will have an enter time equal to the time that the world was entered and can be used to start up animations or behaviors as soon as a world is loaded.

BoxProximitySensor {
  exposedField SFVec3f    center      0 0 0
  exposedField SFVec3f    size        0 0 0
  exposedField SFBool     enabled     TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    position
  eventOut     SFRotation orientation
}
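
For example, a sketch that reports the viewer's presence in a room-sized region to a script (the script URL and eventIn name are hypothetical):

Transform {
  children [
    DEF ENTRY BoxProximitySensor { center 0 2 0  size 10 4 10 },
    ... room geometry ...
  ]
}
DEF ROOM Script {
  scriptType "javabc"
  behavior   "http://foo.com/room.class"
  eventIn    SFBool viewerPresent
}
ROUTE ENTRY.isActive TO ROOM.viewerPresent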

ClickSensor

A ClickSensor tracks the pointing device with respect to some geometry. This sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.

The ClickSensor generates events as the pointing device passes over some geometry. When the pointing device is over the geometry, this sensor will also generate button press and release events for the button associated with the pointing device. Typically, the pointing device is a mouse and the button is a mouse button.

isOver TRUE/FALSE events are generated as the pointing device moves over the ClickSensor's geometry. When the pointing device is unobstructed by any other surface and moves on top of the ClickSensor's geometry, an isOver TRUE event should be generated. When the pointing device moves and is no longer on top of the geometry, or some other geometry is obstructing the ClickSensor's geometry, an isOver FALSE event should be generated.

All of these events are generated only when the pointing device moves or the user clicks the button.

If the user presses the button associated with the pointing device while the cursor is located over its geometry, the ClickSensor will grab all further motion events from the pointing device until the button is released (other Click or Drag sensors will not generate events during this time). isActive TRUE/FALSE events are generated along with the press/release events. Motion of the pointing device while it has been grabbed by a ClickSensor is referred to as a "drag".

As the user drags the cursor over the ClickSensor's geometry, the point on that geometry which lies directly underneath the cursor is determined. When isOver and isActive are TRUE, hitPoint, hitNormal, and hitTexture events are generated whenever the pointing device moves. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the ClickSensor's coordinate system. hitNormal events contain the surface normal at the hitPoint. hitTexture events contain the texture coordinates of that surface at the hitPoint, which can be used to support the 3D equivalent of an image map.

ClickSensor {
  exposedField SFBool  enabled TRUE
  eventOut     SFBool  isOver
  eventOut     SFBool  isActive
  eventOut     SFVec3f hitPoint
  eventOut     SFVec3f hitNormal
  eventOut     SFVec2f hitTexture
}
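
For example, a sketch that makes a cube clickable (the script URL and eventIn name are hypothetical):

Transform {
  children [
    DEF TOUCH ClickSensor { },
    Shape { geometry Cube { } }    # the geometry being sensed
  ]
}
DEF BUTTON Script {
  scriptType "javabc"
  behavior   "http://foo.com/button.class"
  eventIn    SFBool pressed
}
ROUTE TOUCH.isActive TO BUTTON.pressed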

CylinderSensor

The CylinderSensor maps dragging motion into a rotation around the Y axis of its local space. The feel of the rotation is as if you were turning a rolling pin.

CylinderSensor {
  exposedField SFFloat    minAngle   0
  exposedField SFFloat    maxAngle   -1
  exposedField SFBool     enabled    TRUE
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
  eventOut     SFBool     onCylinder
}

minAngle and maxAngle may be set to clamp rotation events to a range of values (measured in radians about the Y axis). If minAngle is greater than maxAngle, rotation events are not clamped.

Upon the initial click down on the CylinderSensor's geometry, the specific point clicked determines the radius of the cylinder used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this cylinder, or in the plane perpendicular to the view vector if the cursor moves off this cylinder. An onCylinder TRUE event is generated at the initial click down; thereafter, onCylinder FALSE/TRUE events are generated if the pointing device is dragged off/on the cylinder.


DiskSensor

The DiskSensor maps dragging motion into a rotation around the Z axis of its local space. The feel of the rotation is as if you were scratching on a record turntable.

DiskSensor {
  exposedField SFFloat    minAngle   0
  exposedField SFFloat    maxAngle   -1
  exposedField SFBool     enabled    TRUE
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
}

minAngle and maxAngle may be set to clamp rotation events to a range of values as measured in radians about the Z axis. If minAngle is greater than maxAngle, rotation events are not clamped. trackPoint events provide unclamped drag position in the XY plane.


PlaneSensor

The PlaneSensor maps dragging motion into a translation in two dimensions, in the XY plane of its local space.

PlaneSensor {
  exposedField SFVec2f minPosition 0 0
  exposedField SFVec2f maxPosition -1 -1
  exposedField SFBool  enabled     TRUE
  eventOut     SFBool  isOver
  eventOut     SFBool  isActive
  eventOut     SFVec3f hitPoint
  eventOut     SFVec3f hitNormal
  eventOut     SFVec2f hitTexture
  eventOut     SFVec3f trackPoint
  eventOut     SFVec3f translation
}

minPosition and maxPosition may be set to clamp translation events to a range of values as measured from the origin of the XY plane. If the X or Y component of minPosition is greater than the corresponding component of maxPosition, translation events are not clamped in that dimension. If the X or Y component of minPosition is equal to the corresponding component of maxPosition, that component is constrained to the given value; this technique provides a way to implement a line sensor that maps dragging motion into a translation in one dimension. (There is no built-in line sensor node.)
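
For example, this sketch constrains dragging to a horizontal line segment by giving minPosition and maxPosition equal Y components:

PlaneSensor {
  minPosition 0 0
  maxPosition 10 0    # Y is fixed at 0; X translation is clamped to 0..10
}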

trackPoint events provide unclamped drag position in the XY plane.


SphereSensor

The SphereSensor maps dragging motion into a free rotation about its center. The feel of the rotation is as if you were rolling a ball.

SphereSensor {
  exposedField SFBool     enabled    TRUE
  eventOut     SFVec3f    trackPoint
  eventOut     SFRotation rotation
  eventOut     SFBool     onSphere
}

The free rotation of the SphereSensor is always unclamped.

Upon the initial click down on the SphereSensor's geometry, the point hit determines the radius of the sphere used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this sphere, or in the plane perpendicular to the view vector if the cursor moves off of the sphere. An onSphere TRUE event is generated at the initial click down; thereafter, onSphere FALSE/TRUE events are generated if the pointing device is dragged off/on the sphere.


Special Nodes

The Background, NavigationInfo, Script, TimeSensor, and WorldInfo nodes are not part of the world's transformational hierarchy.

The Background, NavigationInfo, and WorldInfo nodes are global nodes that affect everything in the scene. They can be used anywhere in the scene description and may appear in fields of a Script node. If more than one Background node appears in a file, the first Background node read is the one that is used; the same rule applies to the NavigationInfo and WorldInfo nodes.


Background

The Background node is used to specify a color-ramp backdrop that simulates ground and sky planes, as well as an environment texture, or panorama, that is placed behind all geometry in the scene and in front of the backdrop.

The backdrop is conceptually a sphere with an infinite radius, painted with a smooth gradation of ground colors (starting with a circle straight downward and rising in concentric bands up to the horizon) and a separate gradation of sky colors (starting with a circle straight upward and falling in concentric bands down to the horizon). (It's acceptable to implement the backdrop as a cube painted in concentric square rings instead of as a sphere.) The groundRanges field is a list of floating point values that indicate the cutoff for each groundColor. Its implicit initial value is 0 radians (downward), and the final value given indicates the elevation angle of the horizon, where the ground color ramp and the sky color ramp meet. The skyRanges field implicitly starts at 0 radians (upward) and works its way down to pi radians. If groundColors is NULL, no ground colors are used.

The panorama is the image that is to be wrapped around the user, between the backdrop and the world's geometry. The panorama is a tall texture map made up of six square submaps, each of which is mapped onto a face of a cube surrounding the world. Ideally, the texture map should be a power of 2 pixels wide, and six times that many pixels high. The top square in the texture is mapped onto the "ceiling" of the world-cube; the next four are mapped onto the sides of the cube; and the final square is mapped onto the "floor" of the cube. Transparency values in the panorama image specify that the panorama is transparent in particular places, allowing the groundColors and skyColors to show through. (Often, the top and bottom texture squares will be entirely transparent, to allow sky and ground to show; the other four texture squares may depict mountains or other distant scenery. If the textures are run-length encoded, making entire squares transparent significantly reduces the texture file size.) By default, there is no panorama.

If multiple URLs are specified for the panorama field, this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, a higher-preference file. See also the section on URLs and URNs.

The first Background node found during reading of the world is used as the initial background. Subsequent Background nodes are ignored. The background may be changed by Script node API calls.

Ground colors, sky colors, and panoramic images do not translate with respect to the viewer, though they do rotate with respect to the viewer. That is, the viewer can never get any closer to the background, but can turn to examine all sides of the panorama cube, and can look up and down to see the concentric rings of ground and sky (if visible).

Background {
  exposedField MFColor  groundColors [ 0.14 0.28 0.00, # light green
                                       0.09 0.11 0.00 ]# to dark green
  exposedField MFFloat  groundRanges [ .785 ]   # horizon = 45 degrees
  exposedField MFColor  skyColors    [ 0.02 0.00 0.26, # twilight blue
                                       0.02 0.00 0.65 ]# to light blue
  exposedField MFFloat  skyRanges    [ .785 ]   # horizon = 45 degrees
  exposedField MFString panorama     [ ]
}

NavigationInfo

The NavigationInfo node contains information for the viewer through several fields: type, speed, size, visibilityLimit, and headlight.

The type field specifies a navigation paradigm to use. The types that all VRML viewers should support are "WALK", "EXAMINE", "FLY", and "NONE". A walk viewer is used for exploring a virtual world; the viewer should (but is not required to) have some notion of gravity in this mode. A fly viewer is similar to walk, except that no notion of gravity is enforced; there should still be some notion of "up", however. An examine viewer is typically used to view individual objects and often includes (but does not require) the ability to spin the object and move it closer or farther away. The "NONE" choice removes all viewer controls; the user navigates using only controls provided in the scene, such as guided tours. Browser-specific viewer types are also allowed; these should include a suffix as described in the naming conventions section to prevent conflicts. The type field is multi-valued so that authors can specify fallbacks in case a browser does not understand a given type.

The speed is the rate at which the viewer travels through a scene in units per second. Since viewers may provide mechanisms to travel faster or slower, this should be the default or average speed of the viewer. In an examiner viewer, this only makes sense for panning and dollying--it should have no effect on the rotation speed.

The size field specifies parameters to be used in determining the camera dimensions for the purpose of collision detection and terrain following, if the viewer type allows these. It is a multi-valued field so that several dimensions can be specified. The first value should be the allowable distance between the user's position and any collision geometry (as specified by Collision) before a collision is detected. The second should be the height above the terrain at which the camera should be maintained. The third should be the height of the tallest object over which the camera can "step"; this allows staircases to be built with dimensions that can be ascended by all browsers. Additional values are browser dependent, and all values may be ignored, but if a browser interprets these values, the first three should be interpreted as described above.

The visibilityLimit field sets the furthest distance the viewer is able to see. The browser may clip all objects beyond this limit, fade them into the background or ignore this field. A value of 0.0 (the default) indicates an infinite visibility limit.

The headlight field specifies whether a browser should turn on a headlight. A headlight is a directional light that always points in the direction the user is looking. Setting this field to TRUE allows the browser to provide a headlight, possibly with user interface controls to turn it on and off. Scenes that use precomputed lighting (e.g., radiosity solutions) can specify here that the headlight is off. The headlight should have intensity 1, color 1 1 1, and direction 0 0 -1.

The first NavigationInfo node found during reading of the world supplies the initial navigation parameters. Subsequent NavigationInfo nodes are ignored. The browser may be told to use a different NavigationInfo node using Script node API calls.

NavigationInfo {
  exposedField MFString type             "WALK" 
  exposedField SFFloat  speed            1.0 
  exposedField MFFloat  size             1.0 
  exposedField MFFloat  visibilityLimit  0.0 
  exposedField SFBool   headlight        TRUE
}
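
For example, a sketch of an examine-style world that falls back to walking and supplies its own lighting:

NavigationInfo {
  type      [ "EXAMINE", "WALK" ]   # fallback if EXAMINE is unsupported
  speed     2.0
  size      [ 0.5, 1.6, 0.4 ]       # collision distance, camera height, step height
  headlight FALSE                   # scene uses precomputed lighting
}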

Script

Files that describe node behavior are referenced through a Script node. Each Script node has associated code in some programming language that is executed to carry out the Script node's function. That code will be referred to as "the script" in the rest of this description.

A Script node's scriptType field describes which scripting language is being used. The contents of the behavior field depend on which scripting language is being used; typically, the behavior field will contain URLs/URNs from which the script should be fetched.

Each scripting language supported by a browser defines bindings for the following functionality. See Appendices A and B for the standard Java and C language bindings.

The script is created, and any language-dependent or user-defined initialization is performed. The script should be able to receive and process events that are sent to it. Each event that can be received must be declared in the Script node using the same syntax as is used in a prototype definition:

    eventIn type name

"eventIn" is a VRML keyword. The type can be any of the standard VRML field types, and name must be an identifier that is unique for this Script node.

The Script node should be able to generate events in response to the incoming events. Each event that can be generated must be declared in the Script node using the following syntax:

    eventOut type name

If the Script node's mustEvaluate field is FALSE, the browser can delay sending input events to the script until its outputs are needed by the browser. If the mustEvaluate field is TRUE, the browser should send input events to the script as soon as possible, regardless of whether the outputs are needed. The mustEvaluate field should be set to TRUE only if the Script has effects that are not known to the browser (such as sending information across the network); otherwise, poor performance may result.

An example of a Script node is

    Script { 
      behavior   "http://foo.com/bar.class"   # MFString
      scriptType "javabc"
      eventIn    SFString name   
      eventIn    SFBool   selected
      eventOut   SFString lookto
      field      SFInt32  currentState 0
      mustEvaluate TRUE
    }

The script should be able to read and write the fields of the corresponding Script node. The Script node is responsible for implementing the behavior of exposed fields; the browser will not automatically update the value of an exposed field and will not automatically generate an eventOut when an exposed field changes.

Once the script has access to some VRML node (via an SFNode or MFNode value either in one of the Script node's fields or passed in as an eventIn), the script should be able to read the contents of that node's exposed field. If the Script node's directOutputs field is TRUE, the script may also send events directly to any node to which it has access.

A script should also be able to communicate directly with the VRML browser to get and set global information such as navigation information, the current time, the current world URL, and so on.

It is expected that all other functionality (such as networking capabilities, multi-threading capabilities, and so on) will be provided by the scripting language.

Script { 
  field MFString behavior      [ ] 
  field SFString scriptType    "" 
  field SFBool   mustEvaluate  FALSE
  field SFBool   directOutputs FALSE
  
  # And any number of:
  eventIn      eventTypeName eventName
  field        fieldTypeName fieldName initialValue
  exposedField fieldTypeName fieldName initialValue
  eventOut     eventTypeName eventName
}
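
For example, this sketch routes a continuously ticking TimeSensor (described below) into a script; the script URL and event names are hypothetical:

DEF CLOCK TimeSensor { cycleInterval 5  cycleCount 0 }   # ticks indefinitely
DEF LOGIC Script {
  scriptType "javabc"
  behavior   "http://foo.com/logic.class"
  eventIn    SFFloat fractionIn     # driven by the TimeSensor
  eventOut   SFVec3f positionOut    # computed by the script
}
ROUTE CLOCK.fraction TO LOGIC.fractionIn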

TimeSensor

TimeSensors generate events as time passes. TimeSensors remain inactive until their startTime is reached. At the first simulation tick when "now" is greater than or equal to startTime, the TimeSensor will begin generating time and fraction events, which may be routed to other nodes to drive continuous animation or simulated behaviors.

The length of time a TimeSensor generates events is controlled using cycleInterval and cycleCount; a TimeSensor stops generating time events at time startTime+cycleInterval*cycleCount. The time events contain times relative to startTime, so they will start at zero and increase up to cycleInterval*cycleCount.

The forward and back fields control the mapping of time to fraction values. If forward is TRUE and back is FALSE (the default), fraction events will rise from 0.0 to 1.0 over each interval. If forward is FALSE and back is TRUE, the opposite will happen (fraction events will fall from 1.0 to 0.0 during each interval). If they are both TRUE, fraction events will alternate 0.0 to 1.0, 1.0 to 0.0, reversing direction on each interval. If they are both FALSE, then fraction and time events will be generated only once per cycle (and the fraction values generated will always be 0).

pauseTime may be set to interrupt the progress of a TimeSensor. If pauseTime is greater than startTime, time and fraction events will not be generated after the pause time. pauseTime is ignored if it is less than or equal to startTime.

A TimeSensor will generate an isActive TRUE event when it begins generating times, and will generate an isActive FALSE event when it stops generating times (either because pauseTime was reached or because time startTime+cycleInterval*cycleCount was reached).

If cycleCount is less than or equal to 0, the TimeSensor will continue generating events indefinitely, as if cycleCount were infinite. This use of the TimeSensor should be approached with caution, since it incurs continuous overhead on the simulation.

Setting cycleCount to 1 and cycleInterval to 0 will result in a single event being generated at startTime; this can be used to build an alarm that goes off at some point in the future.

No guarantees are made with respect to how often a TimeSensor will generate time events, but TimeSensors are guaranteed to generate final fraction and time events at or after time (startTime+cycleInterval*cycleCount) if pauseTime is less than or equal to startTime.

TimeSensor {
  exposedField SFTime   startTime     0
  exposedField SFTime   pauseTime     0
  exposedField SFTime   cycleInterval 1
  exposedField SFInt32  cycleCount    1
  exposedField SFBool   forward       TRUE
  exposedField SFBool   back          FALSE
  eventOut     SFBool   isActive
  eventOut     SFTime   time
  eventOut     SFFloat  fraction
}
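
For example, following the rule above, this sketch generates a single alarm event (the startTime value is arbitrary, and WAKEUP is a hypothetical Script with an SFTime eventIn):

DEF ALARM TimeSensor {
  startTime     60    # an absolute time, chosen for illustration
  cycleInterval 0
  cycleCount    1     # one time event, generated at startTime
}
ROUTE ALARM.time TO WAKEUP.alarmTime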

WorldInfo

The WorldInfo node contains information about the world. The title of the world is stored in its own field, allowing browsers to display it--for instance, in their window border. Any other information about the world can be stored in the info field--for instance, the scene author, copyright information, and public domain information.

WorldInfo {
  field SFString title ""
  field MFString info  [ ]
}

Other Grouping Nodes

Group

A Group node is a lightweight grouping node that can contain any number of children. It is equivalent to a Transform node, without the transformation fields.

PROTO Group [
  field        SFVec3f bboxCenter  0 0 0
  field        SFVec3f bboxSize    0 0 0
  exposedField MFNode  children    [ ]
  eventIn      MFNode  add_children
  eventIn      MFNode  remove_children
] {
  Transform {
    bboxCenter IS bboxCenter
    bboxSize IS bboxSize
    children IS children
    add_children IS add_children
    remove_children IS remove_children
  }
}

LOD (Level of Detail)

The LOD node is used to allow browsers to switch between various representations of objects automatically. The children of this node typically represent the same object or objects at varying levels of detail, from highest detail to lowest.

First, the distance from the viewpoint to the center point of the LOD is calculated in the local coordinate space of the LOD node (taking into account any scaling transformations). If that distance is less than the first value in the range field, the first level of the LOD is drawn. If it is between the first and second values in the range field, the second level is drawn, and so on.

If there are N values in the range field, the LOD should have N+1 nodes in its levels field. Specifying too few levels will result in the last level being used repeatedly for the lowest levels of detail; if too many levels are specified, the extra levels will be ignored. The exception to this rule is to leave the range field empty, which is a hint to the browser that it should choose a level automatically to maintain a constant display rate.

Each value in the range field should be greater than the previous value; otherwise results are undefined. Not specifying any values in the range field (the default) is a special case that indicates that the browser may decide which child to draw to optimize rendering performance.

Authors should set LOD ranges so that the transitions from one level of detail to the next are barely noticeable. Browsers may adjust which level of detail is displayed to maintain interactive frame rates, to display an already-fetched level of detail while a higher level of detail (contained in a WWWInline node) is fetched, or may disregard the author-specified ranges for any other implementation-dependent reason. Authors should not use LOD nodes to emulate simple behaviors, because the results will be undefined. For example, using an LOD node to make a door appear to open when the user approaches probably will not work in all browsers. Use a BoxProximitySensor instead.

For best results, specify ranges only where necessary, and nest LOD nodes with and without ranges. For example:

LOD {
  range [100, 1000]
  levels [
    LOD {
      levels [
        Transform { ... detailed version...  },
        DEF LoRes Transform { ... less detailed version... }
      ]
    },
    USE LoRes,
    Shape { } # Display nothing
  ]
}

In this example, the browser is free to choose either a detailed or a less-detailed version of the object when the viewer is closer than 100 meters. The browser should display the less-detailed version of the object if the viewer is between 100 and 1,000 meters and should display nothing at all if the viewer is farther than 1,000 meters. Browsers should try to honor the hints given by authors, and authors should try to give browsers as much freedom as they can to choose levels of detail based on performance.

PROTO LOD [
  field        MFFloat range    [ ]  
  field        SFVec3f center   0 0 0 
  exposedField MFNode  levels   [ ]
] {
  DEF F Transform {
    DEF PS BoxProximitySensor { center IS center }
  }
  DEF LODSCRIPT Script {
    eventOut MFNode  remove
    eventOut MFNode  add
    eventOut SFVec3f maxRange
    eventIn  SFVec3f viewerPosition
    field MFFloat range IS range
    field MFNode  levels IS levels
    #
    # Script must:
    #   -- set maxRange to maximum value in range[] field
    #   -- get viewerPosition, figure out which level should
    #     be seen, add/remove appropriate children
  }
  ROUTE PS.position TO LODSCRIPT.viewerPosition
  ROUTE LODSCRIPT.maxRange TO PS.size
  ROUTE LODSCRIPT.remove TO F.remove_children
  ROUTE LODSCRIPT.add TO F.add_children
}

Switch

The Switch grouping node traverses zero or one of its children (which are specified in the choices field).

The whichChild field specifies the index of the child to traverse, where the first child has index 0. If whichChild is less than zero or greater than or equal to the number of nodes in the choices field, then nothing is chosen.

PROTO Switch [
  exposedField    SFInt32 whichChild -1
  exposedField    MFNode  choices   [ ]
] {
  DEF F Transform {
  }
  DEF SWITCHSCRIPT Script {
    eventOut MFNode remove
    eventOut MFNode add
    exposedField SFInt32 whichChild IS whichChild
    exposedField MFNode  choices    IS choices
    #
    # Script must:
    #   -- keep whichChild up-to-date
    #   -- figure out which child should
    #      be seen when whichChild changes, add/remove 
    #      appropriate children
  }
  ROUTE SWITCHSCRIPT.remove TO F.remove_children
  ROUTE SWITCHSCRIPT.add TO F.add_children
}  

WWWAnchor

The WWWAnchor grouping node causes some data to be fetched over the network when any of its children are chosen. If the data pointed to is a VRML world, then that world is loaded and displayed instead of the world of which the WWWAnchor is a part. If another data type is fetched, it is up to the browser to determine how to handle that data; typically, it will be passed to an appropriate, already-open (or newly spawned) general Web browser.

Exactly how a user "chooses" a child of the WWWAnchor is up to the VRML browser; typically, clicking on one of its children with the mouse will result in the new scene replacing the current scene. A WWWAnchor with an empty ("") name does nothing when its children are chosen.

The name is an arbitrary set of URLs. If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL if the higher-preference file is not available. See the section on URLs and URNs.

The description field in the WWWAnchor allows for a friendly prompt to be displayed as an alternative to the URL in the name field. Ideally, browsers will allow the user to choose the description, the URL, or both to be displayed for a candidate WWWAnchor.

A WWWAnchor may be used to take the viewer to a particular viewpoint in a virtual world by specifying a URL ending with "#viewpointName", where "viewpointName" is the name of a viewpoint defined in the world. For example:

WWWAnchor {
  name "http://www.school.edu/vrml/someScene.wrl#OverView"
  children [ Shape { geometry Cube { } } ]
}

specifies an anchor that puts the viewer in the "someScene" world looking from the viewpoint named "OverView" when the Cube is chosen. If no world is specified, then the current scene is implied; for example:

WWWAnchor {
  name "#Doorway"
  children [ Shape { geometry Sphere { } } ]
}

will take the viewer to the viewpoint named "Doorway" in the current world when the sphere is chosen.

PROTO WWWAnchor [
  field        MFString name        [ ]
  field        SFString description "" 
  exposedField MFNode   children    [ ]
] {
  Group {
    children [
      DEF CS ClickSensor { },
      Group { children IS children }
    ]
  }
  DEF ASCRIPT Script {
    mustEvaluate TRUE

    field MFString url IS name
    eventIn SFBool loadWorld
    #
    # Script must load new world (using Script API) when
    # ClickSensor is clicked
    #
  }
  ROUTE CS.isActive TO ASCRIPT.loadWorld
}

WWWInline

The WWWInline node is a light-weight grouping node like Group that reads its children from anywhere in the World Wide Web. Exactly when its children are read is not defined; reading the children may be delayed until the WWWInline is actually displayed. A WWWInline with an empty name does nothing. The name is an arbitrary set of URLs.

A WWWInline's URLs must refer to a valid VRML file that contains a grouping or leaf node. Referring to non-VRML files or VRML files that do not contain a grouping or leaf node is undefined.

If multiple URLs are specified, then this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URLs and URNs.

If the WWWInline's bboxSize field specifies a non-empty bounding box (a bounding box is non-empty if at least one of its dimensions is greater than zero), then the WWWInline's object-space bounding box is specified by its bboxSize and bboxCenter fields. This allows an implementation to quickly determine whether or not the contents of the WWWInline might be visible. This is an optimization hint only; if the true bounding box of the contents of the WWWInline is different from the specified bounding box, results will be undefined.
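
For example, a sketch of an inlined object with a bounding-box hint (the URL is hypothetical):

WWWInline {
  name       [ "http://www.school.edu/vrml/tree.wrl" ]
  bboxSize   2 5 2       # lets the browser cull before the file arrives
  bboxCenter 0 2.5 0
}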

PROTO WWWInline [
  field MFString name       [ ]
  field SFVec3f  bboxSize   0 0 0
  field SFVec3f  bboxCenter 0 0 0
] {
  DEF G Group {
    bboxSize IS bboxSize
    bboxCenter IS bboxCenter
  }
  DEF ISCRIPT Script {
    field MFString url IS name
    eventOut MFNode children
    #
    # Script's initialization code should call browser's
    # create node from URL function, then send resulting node out to
    # children eventOut.
  }
  ROUTE ISCRIPT.children TO G.add_children
}

Other Sound Nodes

PointSound

The PointSound node defines a sound source located at a specific 3D location. The name field specifies a URL from which the sound is read. Implementations should support at least the ??? ??? sound file formats. Streaming sound files may be supported by browsers; otherwise, sounds should be loaded when the sound node is loaded. Browsers may limit the maximum number of sounds that can be played simultaneously.

If multiple URLs are specified, then this expresses a descending order of preference. A browser may use a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URLs and URNs.

The description field is a textual description of the sound, which may be displayed in addition to or in place of playing the sound.

The intensity field adjusts the volume of each sound source; an intensity of 0 is silence, and an intensity of 1 is whatever intensity is contained in the sound file.

The sound source has a radius specified by the minRadius field. When the viewpoint is within this radius, the sound's intensity (volume) is constant, as indicated by the intensity field. Outside the minRadius, the intensity drops off to zero at a distance of maxRadius from the source location. If the two radii are equal, the drop-off is sharp and sudden. Otherwise, the drop-off should be proportional to the square of the distance of the viewpoint from the minRadius.
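
One possible reading of this drop-off rule, as a non-normative sketch (d is the distance from the viewpoint to the source location):

if:       d <= minRadius:             intensity(d) = intensity
else if:  minRadius < d < maxRadius:  intensity(d) = intensity * (1 - ((d - minRadius) / (maxRadius - minRadius))^2)
else:                                 intensity(d) = 0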

Browsers may also support spatial localization of sound. However, within minRadius, localization should not occur, so intensity is constant in all channels. Between minRadius and maxRadius, the sound location should be the point on the minRadius sphere that is closest to the current viewpoint. This ensures a smooth change in location when the viewpoint leaves the minRadius sphere. Note also that an ambient sound can therefore be created by using a large minRadius value.

The loop field specifies whether or not the sound is constantly repeated. By default, the sound is played only once. If the loop field is FALSE, the sound has length "length," which is not specified in the VRML file but is implicit in the sound file pointed to by the URL in the name field. If the loop field is TRUE, the sound has an infinite length.

The start field specifies the time at which the sound should start playing. The pause field may be used to make a sound stop playing some time after it has started.

With the start time "start," pause time "pause," and current time "now," the rules are as follows:

if:       now < start:                         OFF
else if:  now >= start + length:               OFF
else if:  (pause > start) AND (now >= pause):  OFF
else:                                          ON

Whenever start, pause, or "now" changes, the rules above must be applied to determine whether the sound is playing. If it is, it should be playing the portion of the sound at (now - start) or, if it is looping, at fmod(now - start, length), where length is the actual length of the sound file.

A sound's location in the scene graph determines its spatial location (the sound's location is transformed by the current transformation) and whether or not it can be heard. A sound can only be heard while it is part of the traversed scene; sound nodes underneath LOD nodes or Switch nodes will not be audible unless they are traversed. If it is later part of the traversal again, the sound picks up where it would have been had it been playing continuously.

PROTO PointSound [
  field          MFString name         [ ]
  field          SFString description  "" 
  exposedField   SFFloat  intensity    1
  exposedField   SFVec3f  location     0 0 0 
  exposedField   SFFloat  minRadius    10 
  exposedField   SFFloat  maxRadius    10
  exposedField   SFBool   loop         FALSE 
  exposedField   SFTime   start         0 
  exposedField   SFTime   pause         0 
] {
  DirectedSound {
    name IS name   description IS description  intensity IS intensity
    location IS location loop IS loop  start IS start  pause IS pause
    minFront IS minRadius
    minBack  IS minRadius
    maxFront IS maxRadius
    maxBack  IS maxRadius
  }
}
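
For illustration, here is a minimal sketch of a PointSound instance (the sound file URL is hypothetical, and the field values are chosen arbitrarily):

PointSound {
  name        [ "fountain.wav" ]   # hypothetical URL; later entries would be fallbacks
  description "Courtyard fountain"
  intensity   0.8
  location    0 1 0       # transformed by the current transformation
  minRadius   5           # full volume inside this radius
  maxRadius   50          # fades to silence at this radius
  loop        TRUE
}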

Other Geometry Nodes

Cone

This node represents a simple cone whose central axis is aligned with the Y axis. By default, the cone is centered at (0,0,0) and has a size of -1 to +1 in all three directions. The cone has a radius of 1 at the bottom and a height of 2, with its apex at Y = +1 and its base at Y = -1. The bottomRadius and height fields adjust these defaults.

The cone has two parts: the side and the bottom. Each part has an associated SFBool field that specifies whether it is visible (TRUE) or invisible (FALSE).

When a texture is applied to a cone, it is applied differently to the sides and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cone. The texture has a vertical seam at the back, intersecting the YZ plane. For the bottom, a circle is cut out of the texture square and applied to the cone's base circle. The texture appears right side up when the top of the cone is rotated towards the -Z axis.

PROTO Cone [
  field     SFFloat   bottomRadius 1
  field     SFFloat   height       2
  field     SFBool    side         TRUE
  field     SFBool    bottom       TRUE
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}
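
As a sketch of typical usage, assuming the Shape/Appearance structure described elsewhere in this specification, an open-bottomed cone might be written as:

Shape {
  appearance Appearance {
    material Material { diffuseColor 0.8 0.5 0.2 }
  }
  geometry Cone {
    bottomRadius 1.5
    height       3
    bottom       FALSE   # only the side is drawn
  }
}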

Cube

This node represents a cuboid aligned with the coordinate axes. By default, the cube is centered at (0,0,0) and measures 2 units in each dimension, from -1 to +1. A cube's width is its extent along its object-space X axis, its height is its extent along the object-space Y axis, and its depth is its extent along its object-space Z axis.

Textures are applied individually to each face of the cube; the entire texture goes on each face. On the front, back, right, and left sides of the cube, the texture is applied right side up. On the top, the texture appears right side up when the top of the cube is tilted toward the user. On the bottom, the texture appears right side up when the top of the cube is tilted towards the -Z axis.

PROTO Cube [
  field    SFFloat width  2
  field    SFFloat height 2
  field    SFFloat depth  2
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}

Cylinder

This node represents a simple capped cylinder centered around the Y axis. By default, the cylinder is centered at (0,0,0) and has a default size of -1 to +1 in all three dimensions. You can use the radius and height fields to create a cylinder with a different size.

The cylinder has three parts: the side, the top (Y = +1) and the bottom (Y = -1). Each part has an associated SFBool field that indicates whether the part is visible (TRUE) or invisible (FALSE).

When a texture is applied to a cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cylinder. The texture has a vertical seam at the back, intersecting the YZ plane. For the top and bottom, a circle is cut out of the texture square and applied to the top or bottom circle. The top texture appears right side up when the top of the cylinder is tilted toward the +Z axis, and the bottom texture appears right side up when the top of the cylinder is tilted toward the -Z axis.

PROTO Cylinder [
  field    SFFloat   radius  1
  field    SFFloat   height  2
  field    SFBool    side    TRUE
  field    SFBool    top     TRUE
  field    SFBool    bottom  TRUE
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}

ElevationGrid (new)

This node creates a rectangular grid of varying height, especially useful in modeling terrain. The model is primarily described by a scalar array of height values that specify the height of the surface above each point of the grid.

The verticesPerRow and verticesPerColumn fields indicate the number of grid points in the X and Z directions, respectively, defining a grid of (verticesPerRow-1) x (verticesPerColumn-1) rectangles. (Note that the number of columns of vertices is defined by verticesPerRow and the number of rows of vertices is defined by verticesPerColumn. Rows are numbered from 0 through verticesPerColumn-1; columns are numbered from 0 through verticesPerRow-1.)

The vertex locations for the rectangles are defined by the height field and the gridStep field: the height field is an array of scalar values giving the height of each grid point, listed row by row, and the gridStep field gives the distance between adjacent grid points along the X and Z axes.

Thus, the vertex corresponding to the ith row and jth column is placed at

( gridStep[0] * j, height[ i*verticesPerRow + j ], gridStep[1] * i )

in object space, where

0 <= i < verticesPerColumn, and

0 <= j < verticesPerRow.

All points in a given row have the same Z value, with row 0 having the smallest Z value. All points in a given column have the same X value, with column 0 having the smallest X value.

The default texture coordinates range from [0,0] at the first vertex to [1,1] at the far side of the diagonal. The S texture coordinate will be aligned with X, and the T texture coordinate with Z.

The colorPerQuad field determines whether colors (if specified in the color field) should be applied to each vertex or each quadrilateral of the ElevationGrid. If colorPerQuad is TRUE and the color field is not NULL, then the color field must contain a Color node containing at least (verticesPerColumn-1)*(verticesPerRow-1) colors. If colorPerQuad is FALSE and the color field is not NULL, then the color field must contain a Color node containing at least verticesPerColumn*verticesPerRow colors.

See the introductory Geometry section for a description of the ccw, solid, and creaseAngle fields.

By default, the rectangles are defined with a counterclockwise ordering, so the Y component of the normal is positive. Setting the ccw field to FALSE reverses the normal direction. Backface culling is enabled when the ccw field and the solid field are both TRUE (the default).

PROTO ElevationGrid [
  field        SFInt32  verticesPerColumn 0
  field        SFInt32  verticesPerRow    0
  field        SFVec2f  gridStep          [ 1 1 ]
  field        MFFloat  height            [ ]
  exposedField SFNode   color             NULL
  exposedField SFNode   normal            NULL
  exposedField SFNode   texCoord          NULL
  field        SFBool   colorPerQuad      FALSE
  field        SFBool   normalPerQuad     FALSE
  field        SFBool   ccw               TRUE
  field        SFBool   solid             TRUE
  field        SFFloat  creaseAngle       0
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}
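
As a concrete sketch (values chosen arbitrarily), the following 3 x 3 grid describes a small mound whose center vertex is raised; the vertex at row i, column j takes its height from height[ i*verticesPerRow + j ]:

ElevationGrid {
  verticesPerRow    3
  verticesPerColumn 3
  gridStep          [ 2 2 ]   # 2 units between grid points in X and Z
  height [ 0, 0, 0,           # row 0 (smallest Z)
           0, 1, 0,           # row 1: center vertex raised
           0, 0, 0 ]          # row 2
}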

GeneralCylinder

The GeneralCylinder node is used to parametrically describe numerous families of shapes: extrusions (along an axis or an arbitrary path), surfaces of revolution, and bend/twist/taper objects.

A GeneralCylinder is defined by a 2D crossSection piecewise linear curve (described as a series of connected vertices), a 3D spine piecewise linear curve (also described as a series of connected vertices), a list of floating-point width parameters, and a list of floating-point twist parameters (in radians). Shapes are constructed as follows: The cross-section curve, which starts as a curve in the XZ plane, is scaled about the origin by the first width parameter, twisted counter-clockwise about the origin by the first twist parameter, and translated by the vector given as the first vertex of the spine curve. It is then extruded through space along the first segment of the spine curve. Next, it is scaled and twisted by the second width and twist parameters and extruded by the second segment of the spine, and so on.

A transformed cross section is found for each joint (that is, at each vertex of the spine curve, where segments of the generalized cylinder connect), and the joints and segments are connected to form the surface. No check is made for self-penetration. Each transformed cross section is determined as follows:

  1. Start with the cross section as specified, in the XZ plane.
  2. Scale it about (0, 0, 0) by the value for width given for the current joint.
  3. Rotate it about the +Y axis (the cross section's local up vector) by the value for twist at that joint.
  4. Apply another rotation so that when the cross section is placed at its proper location on the spine it will be oriented properly. Essentially, this means that the cross section's Y axis (up vector coming out of the cross section) is rotated to align with an approximate tangent to the spine curve.

    For all points other than the first or last: The tangent for spine[i] is found by normalizing the vector defined by (spine[i+1] - spine[i-1]).

    If the spine curve is closed: The first and last points need to have the same tangent. This tangent is found as above, but using the points spine[0] for spine[i], spine[1] for spine[i+1] and spine[n-2] for spine[i-1], where spine[n-2] is the next to last point on the curve. The last point in the curve, spine[n-1], is the same as the first, spine[0].

    If the spine curve is not closed: The tangent used for the first point is just the direction from spine[0] to spine[1], and the tangent used for the last is the direction from spine[n-2] to spine[n-1].

    In the simple case where the spine curve is flat in the XY plane, these rotations are all just rotations about the Z axis. In the more general case where the spine curve is any 3D curve, you need to find the destinations for all 3 of the local X, Y, and Z axes so you can completely specify the rotation. The Z axis is found by taking the cross product of

    (spine[i-1] - spine[i]) and (spine[i+1] - spine[i]).

    If the three points are collinear then this value is zero, so take the value from the previous point. Once you have the Z axis (from the cross product) and the Y axis (from the approximate tangent), calculate the X axis as the cross product of the Y and Z axes.

5. Finally, the cross section is translated to the location of the spine point.

Surfaces of revolution: If the cross section is an approximation of a circle and the spine is straight, then the GeneralCylinder is equivalent to a surface of revolution, where the width parameters define the width of the cross section along the spine.

Cookie-cutter extrusions: If the width parameters are constant and the spine is straight, then the cross section acts like a cookie cutter, with the thickness of the cookie equal to the length of the spine.

Bend/twist/taper objects: These shapes are the result of using all fields. The spine curve bends the extruded shape defined by the cross section, the twist parameters twist it around the spine, and the width parameters taper it (by scaling about the spine).

GeneralCylinder has three parts: the sides, the beginCap (the surface at the initial end of the spine) and the endCap (the surface at the final end of the spine). Each part has an associated SFBool field that indicates whether the part exists (TRUE) or doesn't exist (FALSE).

When the beginCap or endCap fields are specified as TRUE, planar cap surfaces will be generated regardless of whether the crossSection is a closed curve. (If crossSection isn't a closed curve, the caps are generated as if it were -- equivalent to adding a final point to crossSection that's equal to the initial point. Note that an open surface can still have a cap, resulting (for a simple case) in a shape something like a soda can sliced in half vertically.) These surfaces are generated even if spine is also a closed curve. If a field value is FALSE, the corresponding cap is not generated.

GeneralCylinder automatically generates its own normals. Orientation of the normals is determined by the vertex ordering of the triangles generated by GeneralCylinder. The vertex ordering is in turn determined by the crossSection curve. If the crossSection is drawn counterclockwise, then the polygons will have counterclockwise ordering when viewed from the 'outside' of the shape (and vice versa for clockwise ordered crossSection curves).

Texture coordinates are automatically generated by general cylinders. Textures are mapped like the label on a soup can: the coordinates range in the U direction from 0 to 1 along the crossSection curve (with 0 corresponding to the first point in crossSection and 1 to the last) and in the V direction from 0 to 1 along the spine curve (again with 0 corresponding to the first listed spine point and 1 to the last). When crossSection is closed, the texture has a seam that follows the line traced by the crossSection's start/end point as it travels along the spine. If the endCap and/or beginCap exist, the crossSection curve is cut out of the texture square and applied to the endCap and/or beginCap planar surfaces. The beginCap and endCap textures' U and V directions correspond to the X and Z directions in which the crossSection coordinates are defined.

See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.

PROTO GeneralCylinder [
  field MFVec3f spine        [ 0 0 0, 0 1 0 ]
  field MFVec2f crossSection [ 1 1, -1 1, -1 -1, 1 -1 ]
  field MFFloat width        [ 1, 1 ]
  field MFFloat twist        [ 0, 0 ]
  field SFBool  sides        TRUE
  field SFBool  beginCap     TRUE
  field SFBool  endCap       TRUE
  field SFBool  ccw          TRUE
  field SFBool  solid        TRUE
  field SFBool  convex       TRUE
  field SFFloat creaseAngle  0
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}
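
For illustration, here is a sketch of a bend/twist/taper use of GeneralCylinder (values chosen arbitrarily): the default square cross section is extruded along +Y, tapered toward the top by the width parameters, and given a quarter twist by the twist parameters. Note that the width and twist lists each contain one value per spine point:

GeneralCylinder {
  spine [ 0 0 0, 0 1 0, 0 2 0 ]   # three joints along the Y axis
  width [ 1.0, 0.8, 0.3 ]         # tapers as the spine rises
  twist [ 0, 0.785, 1.57 ]        # radians; a quarter turn overall
}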

Sphere

The Sphere node represents a sphere. By default, the sphere is centered at the origin and has a radius of 1.

Spheres generate their own normals. When a texture is applied to a sphere, the texture covers the entire surface, wrapping counterclockwise from the back of the sphere. The texture has a seam at the back on the YZ plane.

PROTO Sphere [
  field SFFloat radius  1
] {
   ... equivalent to an IndexedFaceSet plus generator script...
}

Other Special Nodes

Interpolators

Interpolators are nodes that are useful for doing keyframed animation. Given a sufficiently powerful scripting language, all of these interpolators could be implemented using Script nodes (browsers might choose to implement these as pre-defined prototypes of appropriately defined Script nodes). We believe that keyframed animation will be common enough to justify the inclusion of these classes as built-in types.

Interpolator node names are defined based on the concept of what is to be interpolated: an index, orientation, coordinates, position, color, normals, etc. The fields for each interpolator provide the details on what the interpolators are affecting.


ColorInterpolator

This node interpolates among a set of MFColor values, to produce MFColor outValue events. The number of colors in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many colors will be contained in the outValue events. For example, if 7 keyframe times and 21 colors are given, each keyframe consists of 3 colors; the first keyframe will be colors 0,1,2, the second colors 3,4,5, etc. The color values are linearly interpolated in each coordinate.

[[The description of MF values in and out belongs in the general interpolator section above, or maybe we should split up the interpolators into single-valued and multi-valued sections.]]

PROTO ColorInterpolator [
  exposedField MFFloat keys      []
  exposedField MFColor values    []
  eventIn      SFFloat set_fraction
  eventOut     MFColor outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFColor values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     MFColor outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}
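
As a sketch of the keys/values layout (one color per key in this case, so each outValue event contains a single color):

ColorInterpolator {
  keys   [ 0, 0.5, 1 ]
  values [ 1 0 0,       # red at fraction 0
           1 1 0,       # yellow at fraction 0.5
           0 1 0 ]      # green at fraction 1
}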

Coordinate3Interpolator

This node linearly interpolates among a set of MFVec3f values. This would be appropriate for interpolating vertex positions for a geometric morph.

The number of coordinates in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many coordinates will be contained in the outValue events.

PROTO Coordinate3Interpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     MFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

NormalInterpolator

This node interpolates among a set of MFVec3f values, suitable for transforming normal vectors; all output vectors are normalized by the interpolator.

The number of normals in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many normals will be contained in the outValue events.

PROTO NormalInterpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     MFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     MFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}


OrientationInterpolator

This node interpolates among a set of SFRotation values. The rotations are absolute in object space and are, therefore, not cumulative. The values field must contain exactly as many rotations as there are keyframe times in the keys field, or an error will be generated and results will be undefined.

PROTO OrientationInterpolator [
  exposedField MFFloat    keys      []
  exposedField MFRotation values    []
  eventIn      SFFloat    set_fraction
  eventOut     SFRotation outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFRotation values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFRotation outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}

PositionInterpolator

This node linearly interpolates among a set of SFVec3f values. This would be appropriate for interpolating a translation.

PROTO PositionInterpolator [
  exposedField MFFloat keys      []
  exposedField MFVec3f values    []
  eventIn      SFFloat set_fraction
  eventOut     SFVec3f outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFVec3f values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFVec3f outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}
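
Putting an interpolator to work requires routing a time source into its set_fraction eventIn and its outValue onto a target. The sketch below animates a Transform's translation; the TimeSensor field names and its fraction eventOut are assumptions based on the sensor described elsewhere in this specification:

DEF MOVER Transform {
  children [ Shape { geometry Sphere { } } ]
}
DEF TIMER TimeSensor { cycleInterval 5  loop TRUE }   # assumed field names
DEF INTERP PositionInterpolator {
  keys   [ 0, 0.5, 1 ]
  values [ 0 0 0,  0 2 0,  0 0 0 ]   # rise two units, then return
}
ROUTE TIMER.fraction TO INTERP.set_fraction           # assumed eventOut name
ROUTE INTERP.outValue TO MOVER.translation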

ScalarInterpolator

This node linearly interpolates among a set of SFFloat values. This interpolator is appropriate for any parameter defined using a single floating point value, e.g., width, radius, intensity, etc. The values field must contain exactly as many numbers as there are keyframe times in the keys field, or an error will be generated and results will be undefined.

PROTO ScalarInterpolator [
  exposedField MFFloat keys      []
  exposedField MFFloat values    []
  eventIn      SFFloat set_fraction
  eventOut     SFFloat outValue
] {
  Script {
    exposedField MFFloat keys IS keys
    exposedField MFFloat values IS values
    eventIn      SFFloat set_fraction IS set_fraction
    eventOut     SFFloat outValue IS outValue
    #
    # Does the math to map input fraction into values based on keys...
  }
}



Field Reference

(complete alphabetical listing and description)

There are two general classes of fields: fields that contain a single value (where a value may be a single number, a vector, or even an image), and fields that contain multiple values. Single-valued fields all have names that begin with "SF"; multiple-valued fields have names that begin with "MF". Each field type defines the format for the values it writes.

Multiple-valued fields are written as a series of values separated by commas, all enclosed in square brackets; the last value may optionally be followed by a comma. If the field has zero values, only the square brackets ("[]") are written. If the field has exactly one value, the brackets may be omitted and just the value written. For example, all of the following are valid for a multiple-valued field containing the single integer value 1:

1
[1,]
[ 1 ]

SFBool

A field containing a single boolean (true or false) value. SFBools may be written as TRUE or FALSE.


SFColor/MFColor

Fields containing one (SFColor) or zero or more (MFColor) RGB colors. Each color is written to file as an RGB triple of floating point numbers in ANSI C floating point format, in the range 0.0 to 1.0. For example:

[ 1.0 0. 0.0, 0 1 0, 0 0 1 ]

is an MFColor field containing the three colors red, green, and blue.


SFFloat/MFFloat

Fields that contain one (SFFloat) or zero or more (MFFloat) single-precision floating point numbers. SFFloats are written to file in ANSI C floating point format. For example:

[ 3.1415926, 12.5e-3, .0001 ]

is an MFFloat field containing three values.


SFImage

A field that contains an uncompressed 2-dimensional color or greyscale image.

SFImages are written to file as three integers representing the width, height and number of components in the image, followed by width*height hexadecimal values representing the pixels in the image, separated by whitespace. A one-component image will have one-byte hexadecimal values representing the intensity of the image. For example, 0xFF is full intensity, 0x00 is no intensity. A two-component image puts the intensity in the first (high) byte and the transparency in the second (low) byte. Pixels in a three-component image have the red component in the first (high) byte, followed by the green and blue components (so 0xFF0000 is red). Four-component images put the transparency byte after red/green/blue (so 0x0000FF80 is semi-transparent blue). A value of 0xFF is completely transparent, 0x00 is completely opaque. Note: each pixel is actually read as a single unsigned number, so a 3-component pixel with value "0x0000FF" can also be written as "0xFF" or "255" (decimal). Pixels are specified from left to right, bottom to top. The first hexadecimal value is the lower left pixel of the image, and the last value is the upper right pixel.

For example,

1 2 1 0xFF 0x00

is a 1 pixel wide by 2 pixel high greyscale image, with the bottom pixel white and the top pixel black. And:

2 4 3 0xFF0000 0xFF00 0 0 0 0 0xFFFFFF 0xFFFF00

is a 2 pixel wide by 4 pixel high RGB image, with the bottom left pixel red, the bottom right pixel green, the two middle rows of pixels black, the top left pixel white, and the top right pixel yellow.


SFInt32/MFInt32

Fields containing one (SFInt32) or zero or more (MFInt32) 32-bit integers. SFInt32s are written to file as an integer in decimal or hexadecimal (beginning with '0x') format. For example:

[ 17, -0xE20, -518820 ]

is an MFInt32 field containing three values.


SFNode/MFNode (new)

A field containing one or several nodes. A node field's syntax is just the node that it contains; for example, this is valid syntax for an MFNode field:

[ Transform { translation 1 0 0 }, DEF CUBE Cube { }, USE SOME_NODE ]

An SFNode field may also contain the keyword NULL to indicate that it contains nothing.


SFRotation/MFRotation

Fields containing one (SFRotation) or zero or more (MFRotation) arbitrary rotations. SFRotations are written to file as four floating point values separated by whitespace. The four values represent an axis of rotation followed by the amount of right-handed rotation about that axis, in radians. For example, a 180 degree rotation about the Y axis is:

0 1 0  3.14159265

SFString/MFString

Fields containing one (SFString) or zero or more (MFString) UTF-8 strings (sequences of characters). Strings are written to file as a sequence of UTF-8 octets in double quotes. Any characters (including newlines and '#') may appear within the quotes. To include a double quote character within the string, precede it with a backslash. To include a backslash character within the string, type two backslashes. For example:

"One, Two, Three"
"He said, \"Immel did it!\""

are both valid strings.


SFTime/MFTime (new)

Fields containing one (SFTime) or zero or more (MFTime) time values. Each time value is written to file as a double-precision floating point number in ANSI C floating point format. An absolute SFTime is the number of seconds since Jan 1, 1970, 00:00:00 GMT.


SFVec2f/MFVec2f

Fields containing one (SFVec2f) or zero or more (MFVec2f) two-dimensional vectors. SFVec2fs are written to file as a pair of floating point values separated by whitespace.


SFVec3f/MFVec3f

Fields containing one (SFVec3f) or zero or more (MFVec3f) three-dimensional vectors. SFVec3fs are written to file as three floating point values separated by whitespace.



Appendix A: Java Bindings for the VRML API

January 31, 1996

This appendix describes the Java classes and methods that allow scripts to interact with associated scenes. It contains links to various Java pages as well as to certain sections of the Moving Worlds spec (including the general description of scripting and the API).

Language

Java(TM) is a portable, interpreted, object-oriented programming language developed at Sun Microsystems. It's likely to be the most common language supported by VRML browsers in Script nodes. A full description of Java is far beyond the scope of this appendix; see the Java web site for more information. This appendix describes only the Java bindings of the VRML API (the calls that allow the script in a VRML Script node to interact with the scene in the VRML file).

Exposed Classes and Methods for Nodes and Fields

Java classes for VRML are defined in the package vrml. (Package names are generally all-lowercase, in deference to UNIX file system naming conventions.)

The Field class extends Java's Object class by default (when declared without an explicit superclass, as below); thus, Field has the full functionality of the Object class, including the getClass() method. The rest of the package defines a "Const" read-only class for each VRML field type, with a getValue() method for each class; and another read/write class for each VRML field type, with both getValue() and setValue() methods for each class.

Most of the setValue() methods are listed as "throws exception," meaning that errors are possible -- you may need to write exception handlers (using Java's try/catch mechanism) when you use those methods. Any method not listed as "throws exception" is guaranteed to generate no exceptions. Each method that throws an exception is followed by a comment indicating what type of exception will be thrown.

package vrml;

class Field {
}


//
// Read-only (constant) classes, one for each field type:
//

class ConstSFBool extends Field {
  public boolean getValue();
}

class ConstSFColor extends Field {
  public float[] getValue();
}

class ConstMFColor extends Field {
  public float[][] getValue();
}

class ConstSFFloat extends Field {
  public float getValue();
}

class ConstMFFloat extends Field {
  public float[] getValue();
}

class ConstSFImage extends Field {
  public byte[] getValue(int[] dims);
}

class ConstSFInt32 extends Field {
  public int getValue();
}

class ConstMFInt32 extends Field {
  public int[] getValue();
}

class ConstSFNode extends Field {
  public Node getValue();
}

class ConstMFNode extends Field {
  public Node[] getValue();
}

class ConstSFRotation extends Field {
  public float[] getValue();
}

class ConstMFRotation extends Field {
  public float[][] getValue();
}

class ConstSFString extends Field {
  public String getValue();
}

class ConstMFString extends Field {
  public String[] getValue();
}

class ConstSFVec2f extends Field {
  public float[] getValue();
}

class ConstMFVec2f extends Field {
  public float[][] getValue();
}

class ConstSFVec3f extends Field {
  public float[] getValue();
}

class ConstMFVec3f extends Field {
  public float[][] getValue();
}

class ConstSFTime extends Field {
  public double getValue();
}


//
// And now the writeable versions of the above classes:
//

class SFBool extends Field {
  public boolean getValue();
  public void setValue(boolean value);
}

class SFColor extends Field {
  public float[] getValue();
  public void setValue(float[] value)
    throws ArrayIndexOutOfBoundsException;
}

class MFColor extends Field {
  public float[][] getValue();
  public void setValue(float[][] value)
    throws ArrayIndexOutOfBoundsException;
  public void setValue(ConstMFColor value);
  public void set1Value(int index, float[] value);
}

class SFFloat extends Field {
  public float getValue();
  public void setValue(float value);
}

class MFFloat extends Field {
  public float[] getValue();
  public void setValue(float[] value);
  public void setValue(ConstMFFloat value);
  public void set1Value(int index, float value);
}

class SFImage extends Field {
  public byte[] getValue(int[] dims);
  public void setValue(byte[] data, int[] dims)
    throws ArrayIndexOutOfBoundsException;
}

// In Java, the int class is a 32-bit integer
class SFInt32 extends Field {
  public int getValue();
  public void setValue(int value);
}

class MFInt32 extends Field {
  public int[] getValue();
  public void setValue(int[] value);
  public void setValue(ConstMFInt32 value);
  public void set1Value(int index, int value);
}

class SFNode extends Field {
  public Node getValue();
  public void setValue(Node node);
}

class MFNode extends Field {
  public Node[] getValue();
  public void setValue(Node[] node);
  public void setValue(ConstMFNode node);
  public void set1Value(int index, Node node);
}

class SFRotation extends Field {
  public float[] getValue();
  public void setValue(float[] value)
    throws ArrayIndexOutOfBoundsException;
}

class MFRotation extends Field {
  public float[][] getValue();
  public void setValue(float[][] value)
    throws ArrayIndexOutOfBoundsException;
  public void setValue(ConstMFRotation value);
  public void set1Value(int index, float[] value);
}

// In Java, the String class is a Unicode string
class SFString extends Field {
  public String getValue();
  public void setValue(String value);
}

class MFString extends Field {
  public String[] getValue();
  public void setValue(String[] value);
  public void setValue(ConstMFString value);
  public void set1Value(int index, String value);
}

class SFTime extends Field {
  public double getValue();
  public void setValue(double value);
}

class SFVec2f extends Field {
  public float[] getValue();
  public void setValue(float[] value)
    throws ArrayIndexOutOfBoundsException;
}

class MFVec2f extends Field {
  public float[][] getValue();
  public void setValue(float[][] value)
    throws ArrayIndexOutOfBoundsException;
  public void setValue(ConstMFVec2f value);
  public void set1Value(int index, float[] value);
}

class SFVec3f extends Field {
  public float[] getValue();
  public void setValue(float[] value)
    throws ArrayIndexOutOfBoundsException;
}

class MFVec3f extends Field {
  public float[][] getValue();
  public void setValue(float[][] value)
    throws ArrayIndexOutOfBoundsException;
  public void setValue(ConstMFVec3f value);
  public void set1Value(int index, float[] value);
}


//
// Interfaces (abstract classes that your classes can inherit from
// but that you can't instantiate) relating to events and nodes:
//

interface EventIn {
  public String getName();
  public SFTime getTimeStamp();
  public ConstField getValue();
}

interface Node {
  public ConstField getValue(String fieldName)
    throws InvalidFieldException;
  public void postEventIn(String eventName, Field eventValue)
    throws InvalidEventInException;
}


//
// This is the general Script class, to be subclassed by all scripts.
// Note that the provided methods allow the script author to explicitly
// throw tailored exceptions in case something goes wrong in the
// script; thus, the exception codes for those exceptions are to be
// determined by the script author.
//

class Script implements Node {
  public void processEvents(EventIn [] events)
    throws Exception; // Script:code is up to script author
  public void eventsProcessed()
    throws Exception; // Script:code is up to script author
  protected Field getEventOut(String eventName)
    throws InvalidEventOutException;
  protected Field getField(String fieldName)
    throws InvalidFieldException;
}

Browser Interface

This section lists the public Java interfaces to the Browser class, which allows scripts to get and set browser information. For descriptions of the methods, see the "Browser Interface" section of the "Scripting" section of the spec.

public class Browser {

  public static String getName();
  public static String getVersion();

  public static String getNavigationType();
  public static void setNavigationType(String type)
    throws InvalidNavigationTypeException;

  public static float getNavigationSpeed();
  public static void setNavigationSpeed(float speed);

  public static float getCurrentSpeed();

  public static float getNavigationScale();
  public static void setNavigationScale(float scale);

  public static boolean getHeadlight();
  public static void setHeadlight(boolean onOff);

  public static String getWorldURL();
  public static void loadWorld(String [] url);

  public static float getCurrentFrameRate();

  public static Node createVrmlFromURL(String[] url)
    throws InvalidVRMLException;
  public static Node createVrmlFromString(String vrmlSyntax)
    throws InvalidVRMLException;

  public void addRoute(Node fromNode, String fromEventOut,
    Node toNode, String toEventIn)
    throws InvalidRouteException;
  public void deleteRoute(Node fromNode, String fromEventOut,
    Node toNode, String toEventIn)
    throws InvalidRouteException;

  public void bindBackground(Node background);
  public void unbindBackground();
  public boolean isBackgroundBound(Node background);

  public void bindNavigationInfo(Node navigationInfo);
  public void unbindNavigationInfo();
  public boolean isNavigationInfoBound(Node navigationInfo);

  public void bindViewpoint(Node viewpoint);
  public void unbindViewpoint();
  public boolean isViewpointBound(Node viewpoint);

}

System and Networking Libraries

To perform system or networking calls, use the appropriate standard Java libraries.

Example

Here's an example of a Script node which determines whether a given color contains a lot of red. The Script node exposes a color field, an eventIn, and an eventOut:

Script {
  field SFColor currentColor 0 0 0
  eventIn SFColor colorIn
  eventOut SFBool isRed

  scriptType "javabc"
  behavior "ExampleScript.java"
}

[[should we rename colorIn to setCurrentColor, or would that imply that one was required to use this naming convention?]]

And here's the source code for the "ExampleScript.java" file that gets called every time an eventIn is routed to the above Script node:

import vrml.*;


class ExampleScript extends Script {

  // Declare field(s)
  private SFColor currentColor = (SFColor) getField("currentColor");

  // Declare eventOut field(s)
  private SFBool isRed = (SFBool) getEventOut("isRed");

  public void colorIn(ConstSFColor newColor, ConstSFTime ts) {
    // This method is called when a colorIn event is received
    currentColor.setValue(newColor.getValue());
  }

  public void eventsProcessed() {
    if (currentColor.getValue()[0] >= 0.5) // if red is at or above 50%
      isRed.setValue(true);
  }

}

For details on when the methods defined in ExampleScript are called, see the "Execution Model" section of the "Concepts" document.



Appendix B: C Bindings for the VRML API

January 30, 1996

This appendix describes the C datatypes and functions that allow scripts to interact with associated scenes.

Language

VRML browsers aren't required to support C in Script nodes; they're only required to support Java. In fact, supporting C is problematic: unlike Java bytecodes, compiled C code is not portable across platforms, and executing native code delivered over the network poses obvious safety problems. Therefore, the bindings given in this appendix to provide interaction between VRML Script nodes and the rest of a VRML scene are provided for reference purposes only.

Events In and Out (Prototyped Data Structures and Functions)

/*
 * vrml.h - vrml support procedures for C
 */

typedef void * Field;
typedef char * String;
typedef int boolean;
typedef void * Node;   /* declared early so the Const typedefs below can refer to it */

typedef struct {
  unsigned char *value;
  int dims[3];
} SFImageType;

/*
 * Read-only (constant) type definitions, one for each field type:
 */

typedef        const boolean       *ConstSFBool;
typedef        const float         *ConstSFColor;
typedef        const float         *ConstMFColor;
typedef        const float         *ConstSFFloat;
typedef        const float         *ConstMFFloat;
typedef        const SFImageType   *ConstSFImage;
typedef        const int           *ConstSFInt32;
typedef        const int           *ConstMFInt32;
typedef        const Node          *ConstSFNode;
typedef        const Node          *ConstMFNode;
typedef        const float         *ConstSFRotation;
typedef        const float         *ConstMFRotation;
typedef        const String        ConstSFString;
typedef        const String        *ConstMFString;
typedef        const float         *ConstSFVec2f;
typedef        const float         *ConstMFVec2f;
typedef        const float         *ConstSFVec3f;
typedef        const float         *ConstMFVec3f;
typedef        const double        *ConstSFTime;


/*
 * And now the writeable versions of the above types:
 */

typedef        boolean     *SFBool;
typedef        float       *SFColor;
typedef        float       *MFColor;
typedef        float       *SFFloat;
typedef        float       *MFFloat;
typedef        SFImageType *SFImage;
typedef        int         *SFInt32;
typedef        int         *MFInt32;
typedef        Node        *SFNode;
typedef        Node        *MFNode;
typedef        float       *SFRotation;
typedef        float       *MFRotation;
typedef        String      SFString;
typedef        String      *MFString;
typedef        float       *SFVec2f;
typedef        float       *MFVec2f;
typedef        float       *SFVec3f;
typedef        float       *MFVec3f;
typedef        double      *SFTime;

/*
 * Event-related types and functions
 */

typedef        void *EventIn;

String getEventInName(EventIn eventIn);
int    getEventInIndex(EventIn eventIn);
SFTime getEventInTimeStamp(EventIn eventIn);
void   *getEventInValue(EventIn eventIn);

/* Node is defined above, with the other basic types */

void *getNodeValue(Node *node, String fieldName);
void  postNodeEventIn(Node *node, String eventName, Field eventValue);

/*
 * C script
 */

typedef void *Script;

Field getScriptEventOut(Script script, String eventName);
Field getScriptField(Script script, String fieldName);

void exception(String error);

Browser Interface

This section lists the functions that allow scripts to get and set browser information. For descriptions of the functions, see the "Browser Interface" section of the "Scripting" section of the spec. Since these functions aren't defined as part of a "Browser" class in C, most of their names include the word "Browser" for clarity.

String  getBrowserName();
float   getBrowserVersion();

String  getBrowserNavigationType();
void    setBrowserNavigationType(String type);

float   getBrowserNavigationSpeed();
void    setBrowserNavigationSpeed(float speed);

float   getBrowserCurrentSpeed();

float   getBrowserNavigationScale();
void    setBrowserNavigationScale(float scale);

boolean getBrowserHeadlight();
void    setBrowserHeadlight(boolean onOff);

String  getBrowserWorldURL();
void    loadBrowserWorld(String url);

float   getBrowserCurrentFrameRate();

Node    createVrmlFromURL(String url);
Node    createVrmlFromString(String vrmlSyntax);

void    addRoute(Node fromNode, String fromEventOut,
                 Node toNode, String toEventIn);
void    deleteRoute(Node fromNode, String fromEventOut,
                    Node toNode, String toEventIn);

void    bindBrowserBackground(Node background);
void    unbindBrowserBackground();
boolean isBrowserBackgroundBound(Node background);

void    bindBrowserNavigationInfo(Node navigationInfo);
void    unbindBrowserNavigationInfo();
boolean isBrowserNavigationInfoBound(Node navigationInfo);

void    bindBrowserViewpoint(Node viewpoint);
void    unbindBrowserViewpoint();
boolean isBrowserViewpointBound(Node viewpoint);

System and Networking Libraries

[[anything special here, or do we just use standard C system and networking libraries?]]

Example

[[need to put in the actual Script node here... And I think the program needs to be completely rewritten to use new entrypoint model, with function named for each eventIn plus an eventsProcessed function. Is FooScriptType even necessary under new model?]]

/*
 * FooScript.c
 */

#include "vrml.h"

typedef struct {
    Script  parent;
    SFInt32 fooField;
    SFFloat barOutEvent;
} FooScriptType;

typedef FooScriptType *FooScript;

void constructFooScript(FooScript foo, Script p) {

    foo->parent = p;

    /* Initialize field(s) */
    foo->fooField = (SFInt32) getScriptField(foo->parent, "foo");

    /* Initialize eventOut field(s) */
    foo->barOutEvent = (SFFloat) getScriptEventOut(foo->parent, "bar");
}

void processFooScriptEvents(FooScript foo, EventIn *list, int length) {
    int i;
    for (i = 0; i < length; i++) {
       EventIn event = list[i];
       switch (getEventInIndex(event)) {
         case 0:
         case 1:
           *foo->barOutEvent = (float) *foo->fooField;  /* convert the int field value to a float event */
           break;
         default:
           exception("Unknown eventIn");
       }
    }
}


Index of Nodes and Fields

Appearance
Background
BoxProximitySensor
ClickSensor
Collision
Color
ColorInterpolator
Cone
Coordinate3
Coordinate3Interpolator
Cube
Cylinder
CylinderSensor
DirectionalLight
DirectedSound
DiskSensor
ElevationGrid
Fog
FontStyle
GeneralCylinder
Group
IndexedFaceSet
IndexedLineSet
IndexInterpolator
LOD
Material
MFColor
MFFloat
MFInt32
MFNode
MFRotation
MFString
MFTime
MFVec2f
MFVec3f
NavigationInfo
Normal
NormalInterpolator
OrientationInterpolator
PlaneSensor
PointLight
PointSet
PointSound
PositionInterpolator
ScalarInterpolator
Script
SFBool
SFColor
SFFloat
SFImage
SFInt32
SFNode
SFRotation
SFString
SFTime
SFVec2f
SFVec3f
Shape
Sphere
SphereSensor
SpotLight
Switch
Text
Texture2
Texture2Transform
TextureCoordinate2
TimeSensor
Transform
Viewpoint
WorldInfo
WWWAnchor
WWWInline