Sensors: input

Gavin Bell, <gavin@sgi.com>, Silicon Graphics
Mitra <mitra@mitra.biz>
Dave Mott, Silicon Graphics
Alan Norton, Silicon Graphics
Paul Isaacs, Silicon Graphics
Rob Meyers, Silicon Graphics

This document can be found at http://www.mitra.biz/vrml/vrml2/behaviors/sensors.html. It was last updated on Dec. 11, 1995.


General notes: Given a "thick enough" API for Logic nodes, Sensors could all be implemented as prototyped Logic nodes. Although it may make sense to define such a rich API eventually, in the short term we believe it makes more sense to build that functionality into pre-defined classes. Later versions of the VRML spec might choose to describe these as pre-defined prototypes, along with their implementation using Logic nodes.

TimeSensor

TimeSensors generate events as time passes. A TimeSensor remains inactive until its startTime is reached. At the first simulation tick where real time >= startTime, the TimeSensor begins generating time and alpha events, which may be routed to other nodes to drive continuous animation or simulated behaviors. The length of time a TimeSensor generates events is controlled using cycleInterval and cycleCount; a TimeSensor stops generating time events at time startTime+cycleInterval*cycleCount. The time events contain times relative to startTime, so they start at zero and increase up to cycleInterval*cycleCount. The cycleMethod field controls the mapping of time to alpha values (which are typically used to drive Interpolators). It may be set to FORWARD, causing alpha to rise from 0.0 to 1.0 over each interval; BACK, which is equal to 1-FORWARD; SWING, causing alpha to alternate 0.0 to 1.0, 1.0 to 0.0 on successive intervals; or TICK, which generates time and alpha events only once per cycle (in which case the alpha generated will always be 0).

pauseTime may be set to interrupt the progress of a TimeSensor. If pauseTime is greater than startTime, time and alpha events will not be generated after the pause time. pauseTime is ignored if it is less than or equal to startTime.

If cycleCount is <= 0, the TimeSensor will continue to tick continuously, as if cycleCount were infinite. This should be used with caution, since it incurs continuous overhead on the simulation.

Setting cycleCount to 1 and cycleInterval to 0 will result in a single event being generated at startTime; this can be used to build an alarm that goes off at some point in the future.
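
As a minimal sketch of such an alarm (the receiving Logic node, here called WAKEUP with a setTriggered eventIn, is hypothetical):

DEF ALARM TimeSensor {
    cycleCount    1    # one cycle...
    cycleInterval 0    # ...of zero length: a single time/alpha event at startTime
    # startTime would typically be set at run-time via setStartTime
}
ROUTE ALARM.time -> WAKEUP.setTriggered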

No guarantees are made with respect to how often a TimeSensor will generate time events, but TimeSensors are guaranteed to generate final events with alpha==1.0 and time==(cycleInterval*cycleCount) if startTime is set to a time in the future and pauseTime is less than startTime.

??? Clearer wording? The idea is to ensure that any animations or other things being driven by a TimeSensor finish cleanly, and aren't left in a "half-done" state just because the TimeSensor never generated a final alpha==1.0 event.

FILE FORMAT/DEFAULTS
     TimeSensor {
          startTime 0          # SFTime (double-precision seconds)
          pauseTime 0          # SFTime
          cycleInterval 1      # SFTime
          cycleCount    1      # SFLong
          cycleMethod  FORWARD # SFEnum FORWARD | BACK | SWING | TICK
          # eventIn  SFTime  setStartTime
          # eventIn  SFTime  setPauseTime
          # eventIn  SFTime  setCycleInterval
          # eventIn  SFLong  setCycleCount
          # eventIn  SFEnum  setCycleMethod    # FORWARD | BACK | SWING | TICK
          # eventOut SFTime  time
          # eventOut SFFloat alpha
     }
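
As a sketch of typical usage (the PositionInterpolator node and its setAlpha eventIn are assumptions here, not defined in this document), a two-second animation that swings back and forth five times might be driven like this:

DEF TIMER TimeSensor {
    cycleInterval 2        # each cycle lasts two seconds
    cycleCount    5        # run for five cycles, then stop
    cycleMethod   SWING    # alpha alternates 0.0 to 1.0, 1.0 to 0.0 on successive cycles
}
DEF MOVER PositionInterpolator { ... }   # hypothetical interpolator node
ROUTE TIMER.alpha -> MOVER.setAlpha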

Oops, need to add a description of the SFTime field. We could use SFFloats for time, except that if we expect to be dealing with absolute times (from Jan 1, 1970 0:00 GMT, perhaps) then 32-bit floats don't give enough precision. Inventor writes SFTime fields as 64-bit double-precision values, but stores time values internally as two 32-bit integers.

Issue: Do we use wall-clock, real-time, or simulation-time, or do we allow this to be selectable by the author, with a sensible default? Does this even need to be defined?

Proximity Sensors

Proximity sensors are nodes that generate events when the viewpoint enters, exits, or moves inside a space. A proximity sensor can be enabled or disabled by sending it a setEnabled event with a value of TRUE or FALSE. enter and exit events are generated when the viewpoint enters/exits the region and contain the time of entry/exit (ideally, implementations will interpolate viewpoint positions and compute exactly when the viewpoint first intersected the volume). As the viewpoint moves inside the region, position and orientation events are generated that report the position and orientation of the viewpoint in the local coordinate system of the proximity sensor.

There are two types of proximity sensors: BoxProximitySensor and SphereProximitySensor, differing only in the shape of region that they detect.

Gavin: Providing position and orientation when the user is outside the region would kill scalability and composability, so those should NOT be provided (authors can create proximity sensors that enclose the entire world if they want to track the viewpoint wherever it is in the world). Position and orientation at the time the user entered/exited the region might also be useful, but I'm not convinced they're useful enough to add (and besides, you could write a Logic node with internal state that figured these out...).

Sony/Mitra: to track the viewpoint wherever it is in the world, route an event from the _self pseudo-node.

BoxProximitySensor

The BoxProximitySensor node detects when the camera enters and leaves the volume defined by the fields center and size (an object-space axis-aligned box).

position and orientation events giving the position and orientation of the camera in the BoxProximitySensor's coordinate system are generated between the enter and exit times when either the camera or the coordinate system of the sensor moves.

A BoxProximitySensor that surrounds the entire world will have an enter time equal to the time that the world was entered, and can be used to start up animations or behaviors as soon as a world is loaded.

FILE FORMAT/DEFAULTS
     BoxProximitySensor {
          center          0 0 0 # SFVec3f
          size            0 0 0 # SFVec3f
          enabled  TRUE         # SFBool
          # eventIn SFBool setEnabled 
          # eventOut SFTime enter
          # eventOut SFTime exit
          # eventOut SFVec3f position
          # eventOut SFRotation orientation
     }
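
For instance, the world-enclosing usage described above can be combined with a TimeSensor to start a five-second animation as soon as the world is loaded (the size value is illustrative; it just needs to enclose the whole world):

DEF WORLDSENSOR BoxProximitySensor {
    center 0 0 0
    size   1000 1000 1000    # large enough to enclose the entire world
}
DEF ANIMCLOCK TimeSensor {
    cycleInterval 5
    cycleCount    1
}
ROUTE WORLDSENSOR.enter -> ANIMCLOCK.setStartTime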

SphereProximitySensor

The SphereProximitySensor reports when the camera enters and leaves the sphere defined by the fields center and radius.

position and orientation events giving the position and orientation of the camera in the SphereProximitySensor's coordinate system are generated between the enter and exit times when either the camera or the coordinate system of the sensor moves.

FILE FORMAT/DEFAULTS
     SphereProximitySensor {
          center          0 0 0 # SFVec3f
          radius          0     # SFFloat
          enabled  TRUE         # SFBool
          # eventIn SFBool setEnabled 
          # eventOut SFTime enter
          # eventOut SFTime exit
          # eventOut SFVec3f position
          # eventOut SFRotation orientation
     }

Pointing Device Sensors

A PointingDeviceSensor is a node that tracks the pointing device with respect to some geometry. A PointingDeviceSensor can be made active or inactive by sending it setEnabled events. There are two types of PointingDeviceSensors: the ClickSensor and the DragSensors.

Cascading events and Global Event

Pointing device sensors may be nested, with the rule being that the sensors lowest in the scene hierarchy will have first chance to 'grab' user input.

Issue: There are at least two approaches to the complex issue of what happens next. This is particularly complex in the case where ClickSensors may have been added at run-time by different behaviors.

SGI: Simple approach:
Once an event is 'grabbed', sensors higher in the hierarchy will not have a chance to process it.
Mitra: Explicit approach:
Finer control over the event is provided by adding an SFBitMask field to each sensor with values "HANDLED" and "IGNOREHANDLED". If a Sensor is marked "HANDLED", then once an event has been handled by this Sensor, the event is marked as "HANDLED" as it goes up the hierarchy. If a Sensor is marked "IGNOREHANDLED", then an event that has already been marked "HANDLED" will be ignored by it. This allows the semantics of:

Issue: Modality and Global Events. Mitra: If we want to allow for modal behaviors (for example, in an authoring system, an object is clicked and its behavior specifies that the next object clicked is a "destination" to attach the object to), then we need a way to intercept the event before it is passed up the scene hierarchy to some other event handler.

This can be done by adding a queue of events which are checked before the event is passed up the hierarchy. This queue would be called the GlobalEvent queue, and an event would be added to this by:

ROUTE GlobalEvent.picked -> Foo.destinationChosen

Gavin: Doesn't think we should add authoring system functionality into VRML just yet. We can add it later, if necessary.

ClickSensor

The ClickSensor generates events as the pointing device passes over some geometry; while the pointing device is over the geometry, it will also generate press and release events for the button associated with the pointing device. Typically, the pointing device is a mouse and the button is a mouse button.

An enter event is generated when the pointing device passes over any of the shape nodes contained underneath the ClickSensor and contains the time at which the event occurred. Likewise, an exit event is generated when the pointing device is no longer over the ClickSensor's geometry. isOver events are generated when enter/exit events are generated; an isOver event with a TRUE value is generated at the same time as enter events, and an isOver FALSE event is generated with exit events.

Should we say anything about what happens if the cursor stays still but the geometry moves out from underneath it? If we do say something, we should probably be conservative and only require enter/exit events when the pointing device moves.

Issue: enter/exit is for locate-highlighting (changing color or shape when the cursor passes over you to indicate that you may be picked). Is that too much to ask from implementations?

The following text will have to change depending on which approach to pick events is taken; the most complex approach is described here:

If the user presses the button associated with the pointing device while the cursor is located over its geometry, the ClickSensor will grab all further motion events from the pointing device until the button is released (other Click or Drag sensors will not generate events during this time). A press event is generated when the button is pressed over the ClickSensor's geometry, followed by a release event when it is released. isActive TRUE/FALSE events are generated along with the press/release events. Motion of the pointing device while it has been grabbed by a ClickSensor is referred to as a "drag".

As the user drags the cursor over the ClickSensor's geometry, the point on that geometry which lies directly underneath the cursor is determined. When isOver and isActive are TRUE, hitPoint, hitNormal, and hitTexture events are generated whenever the pointing device moves. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the ClickSensor's coordinate system. hitNormal events contain the surface normal at the hitPoint. hitTexture events contain the texture coordinates of that surface at the hitPoint, which can be used to support the 3D equivalent of an image map.

FILE FORMAT/DEFAULTS
     ClickSensor {
          enabled  TRUE  # SFBool
          # eventIn  SFBool    setEnabled
          # eventOut pick stuff:  See below:
     }

Issue: Several possibilities for what the outputs look like:

  1. Sony/Mitra: Single event with API:
        # eventOut PICK pickInfo
    

    Advantages: simplest, easier to write code that deals with PICK events.
    Disadvantages: requires new event type (PICK) with corresponding script API.

  2. SGI: Implicit times, multiple events:
          # eventOut SFBool    isOver
          # eventOut SFBool    isActive
          # eventOut SFVec3f   hitPoint
          # eventOut SFVec3f   hitNormal
          # eventOut SFVec2f   hitTexture
    

    Advantages: uses only standard fields, fairly simple.
    Disadvantages: several events, requires script API to get event timestamp.

  3. SGI: Explicit times, multiple events:
          # eventOut SFBool    isOver
          # eventOut SFBool    isActive
          # eventOut SFTime    enter
          # eventOut SFTime    exit
          # eventOut SFTime    press
          # eventOut SFTime    release
          # eventOut SFVec3f   hitPoint
          # eventOut SFVec3f   hitNormal
          # eventOut SFVec2f   hitTexture
    

    Advantages: uses only standard fields, minimizes script API.
    Disadvantages: lots of events, potentially have lots of ROUTEs.
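
For concreteness, a sketch of a ClickSensor used for locate-highlighting, assuming the option 2 outputs above; the Logic node (HILITE) and its setOver/setPressed eventIns are hypothetical:

DEF HILITE Logic { ... }    # hypothetical Logic node that changes a material
DEF CLICK ClickSensor {
    Cube { ... }            # sensed geometry, contained underneath the sensor
}
ROUTE CLICK.isOver   -> HILITE.setOver
ROUTE CLICK.isActive -> HILITE.setPressed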

DragSensors

A DragSensor tracks pointing and clicking over its geometry just like the ClickSensor; however, DragSensors track dragging in a manner suitable for continuous controllers such as sliders, knobs, and levers. When the pointing device is pressed and dragged over the node's geometry, the pointing device's position is mapped onto idealized 3D geometry.

DragSensors extend the ClickSensor's interface; enabled, enter, exit, isOver, press, release, and isActive are implemented identically. hitPoint, hitNormal, and hitTexture events are only updated upon the initial click down on the DragSensor's geometry. There are five types of DragSensors: LineSensor and PlaneSensor support translation-oriented interfaces, while DiscSensor, CylinderSensor, and SphereSensor establish rotation-oriented interfaces.

Issue: These are all supersets of the ClickSensor. Is there some way to build them on top of ClickSensor? What information would ClickSensor need to provide to these classes to allow them to function? SGI's conclusion: these need to do a bunch of math and need access to a lot of information (the screen-space to object-space matrix and mouse positions in screen space, at least) to be implemented correctly.

A drag sensor does not, on its own, cause the geometry it is sensing to move; however, routing the appropriate event back to a transform on its parent will. For example:

DEF F Frame {
    translation 0 0 0
    DEF L LineSensor { 
        ...
    }
    Cube { ... }
}
ROUTE L.translation -> F.translation

LineSensor

The LineSensor maps dragging motion into a translation in one dimension, along the x axis of its local space. It could be used to implement the 3-dimensional equivalent of a 2D slider or scrollbar.

FILE FORMAT/DEFAULTS
     LineSensor {
          minPosition         0     # SFFloat
          maxPosition         0     # SFFloat
          enabled  TRUE  # SFBool
          # eventIn  SFBool    setEnabled
          # eventOut:  pick stuff, see issue above
          # eventOut SFVec3f   trackPoint
          # eventOut SFVec3f   translation
     }

minPosition and maxPosition may be set to clamp the translation events to a range of values as measured from the origin of the x axis. If minPosition is greater than or equal to maxPosition, translation events are not clamped. trackPoint events provide the unclamped drag position along the x axis.
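
For instance, a slider handle clamped to the range [0, 1] along the local x axis, moving its parent Frame as in the example above:

DEF SLIDER Frame {
    translation 0 0 0
    DEF L LineSensor {
        minPosition 0    # clamp translation to [0, 1]...
        maxPosition 1    # ...along the local x axis
    }
    Cube { ... }         # the slider's handle geometry
}
ROUTE L.translation -> SLIDER.translation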

PlaneSensor

The PlaneSensor maps dragging motion into a translation in two dimensions, in the x-y plane of its local space.

FILE FORMAT/DEFAULTS
     PlaneSensor {
          minPosition       0 0     # SFVec2f
          maxPosition       0 0     # SFVec2f
          enabled  TRUE  # SFBool
          # eventIn  SFBool    setEnabled
          # eventOut:  pick stuff, see issue above
          # eventOut SFVec3f   trackPoint
          # eventOut SFVec3f   translation
     }

minPosition and maxPosition may be set to clamp translation events to a range of values as measured from the origin of the x-y plane. If the x or y component of minPosition is greater than or equal to the corresponding component of maxPosition, translation events are not clamped in that dimension. trackPoint events provide the unclamped drag position in the x-y plane.

DiscSensor

The DiscSensor maps dragging motion into a rotation around the z axis of its local space. The feel of the rotation is as if you were 'scratching' on a record turntable.

FILE FORMAT/DEFAULTS
     DiscSensor {
          minAngle           0       # SFFloat (radians)
          maxAngle           0       # SFFloat (radians)
          enabled  TRUE  # SFBool
          # eventIn  SFBool     setEnabled
          # eventOut:  pick stuff, see issue above
          # eventOut SFVec3f    trackPoint
          # eventOut SFRotation rotation
     }

minAngle and maxAngle may be set to clamp rotation events to a range of values as measured in radians about the z axis. If minAngle is greater than or equal to maxAngle, rotation events are not clamped. trackPoint events provide the unclamped drag position in the x-y plane.
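
Analogously to the LineSensor example, a knob limited to a half turn might look like this (assuming the Frame node also has a rotation field analogous to the translation field used earlier):

DEF KNOB Frame {
    rotation 0 0 1  0
    DEF D DiscSensor {
        minAngle -1.57    # clamp rotation to roughly +/- 90 degrees (radians)
        maxAngle  1.57
    }
    Cylinder { ... }      # the knob geometry
}
ROUTE D.rotation -> KNOB.rotation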

CylinderSensor

The CylinderSensor maps dragging motion into a rotation around the y axis of its local space. The feel of the rotation is as if you were turning a rolling pin.

FILE FORMAT/DEFAULTS
     CylinderSensor {
          minAngle           0       # SFFloat (radians)
          maxAngle           0       # SFFloat (radians)
          enabled  TRUE  # SFBool
          # eventIn  SFBool     setEnabled
          # eventOut:  pick stuff, see issue above
          # eventOut SFVec3f    trackPoint
          # eventOut SFRotation rotation
          # eventOut SFBool     onCylinder
     }

minAngle and maxAngle may be set to clamp rotation events to a range of values as measured in radians about the y axis. If minAngle is greater than or equal to maxAngle, rotation events are not clamped.

Upon the initial click down on the CylinderSensor's geometry, the hitPoint determines the radius of the cylinder used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this cylinder, or in the plane perpendicular to the view vector if the cursor moves off of the cylinder. An onCylinder TRUE event is generated at the initial click down; thereafter, onCylinder FALSE/TRUE events are generated as the pointing device is dragged off of/onto the cylinder.

SphereSensor

The SphereSensor maps dragging motion into a free rotation about its center. The feel of the rotation is as if you were rolling a ball.

FILE FORMAT/DEFAULTS
     SphereSensor {
          enabled  TRUE  # SFBool
          # eventIn  SFBool     setEnabled
          # eventOut:  pick stuff, see issue above
          # eventOut SFVec3f    trackPoint
          # eventOut SFRotation rotation
          # eventOut SFBool     onSphere
     }

The free rotation of the SphereSensor is always unclamped.

Upon the initial click down on the SphereSensor's geometry, the hitPoint determines the radius of the sphere used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this sphere, or in the plane perpendicular to the view vector if the cursor moves off of the sphere. An onSphere TRUE event is generated at the initial click down; thereafter, onSphere FALSE/TRUE events are generated as the pointing device is dragged off of/onto the sphere.

VisibilitySensor

A VisibilitySensor generates visible TRUE/FALSE events, and can be used to make a script take up less CPU time when the things the script is controlling are not visible (for example, don't bother computing an elaborate walk-cycle animation if the character is not visible; instead, just modify its overall position/orientation).

Sony/Mitra: We believe that VisibilitySensors aren't needed; in our model a Script is attached to a node and implicitly (under control of a bit-field) receives events as the object comes in and out of view, in and out of LOD range, and is loaded/unloaded from the world.

Gavin: Thinks that adding a bit-field to every script is uglier than adding VisibilitySensors where needed.

Gavin: There are some issues here about when visibility should be determined; I predict it will be a bit tricky to implement a browser that doesn't suffer from "off-by-one" visibility errors, but we might just leave that as a quality-of-implementation issue and not spec exact behavior (I'm sure we won't require exact visibility computation, anyway...).

FILE FORMAT/DEFAULTS
     VisibilitySensor {
          enabled  TRUE  # SFBool
          # eventIn  SFBool     setEnabled
          # eventOut SFBool     visible
     }
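
As a sketch of the intended usage (the Logic node WALKER and its setRunning eventIn are hypothetical; the sensor is assumed to watch the geometry it is grouped with):

DEF VIS VisibilitySensor { }
DEF WALKER Logic { ... }    # hypothetical node driving an expensive walk-cycle animation
ROUTE VIS.visible -> WALKER.setRunning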

CollisionSensor

The CollisionSensor node reports when the camera collides with any of the geometry it is sensing. It complements the CollideStyle node, which controls at a coarse level whether the user can enter the object being collided with.

When a collision is detected, a collisionTime event is generated that gives the time of the collision, and position and orientation events are generated which provide the position and orientation of the camera in the CollisionSensor's coordinate system at the time of the collision.

FILE FORMAT/DEFAULTS
    CollisionSensor {
          enabled  TRUE         # SFBool
          # eventIn SFBool setEnabled 
          # eventOut SFTime collisionTime
          # eventOut SFVec3f position
          # eventOut SFRotation orientation
     }
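
For example, assuming (as with the pointing device sensors) that the sensed geometry is grouped with the sensor, a collision with a wall might trigger a short TimeSensor-driven animation:

DEF WALL Frame {
    DEF BUMP CollisionSensor { }
    Cube { ... }             # the geometry being sensed
}
DEF SHAKE TimeSensor {
    cycleInterval 0.5
    cycleCount    1
}
ROUTE BUMP.collisionTime -> SHAKE.setStartTime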

Issue: Dynamic Sensors

Mitra/Sony: It would be useful to be able to add Sensors at a later point in the scene graph, attached to geometry defined elsewhere.

In our earlier papers it was possible to write, for example:

DEF Foo Cube { ... }
    ... other stuff ...
ClickSensor {
    eventOn Foo 
}

While this example is trivial, there are situations where it is not, especially if, for example, the Cube is buried inside a Prototype. This does not appear to be possible with ROUTEs.

Gavin/SGI: Mitra/Sony are bringing up two issues:
  1. Putting the ClickSensor somewhere in the VRML file other than with the geometry that it is sensing. While that might be convenient, so might be putting the children of a Separator somewhere other than with the Separator:
    DEF FOO Separator { }
    ... other stuff...
    Cube { childOf FOO }
    

    Unnecessary functionality that just makes VRML more complex.

  2. Not being able to access things inside a Prototype. That's exactly what prototypes are for! If the creator of the prototype wants to allow people to ROUTE to their objects then they must expose the appropriate events in the Prototype interface. If they want to allow Sensors to be added to the objects inside the Prototype, then that must also be exposed (there is an issue of how that is done, related to the Frames proposal).