API Documentation

Camera Exercise Configuration

Configure and control the fullscreen camera exercise using URL parameters for object detection or emotion detection.

The Camera Exercise API provides a fullscreen camera view with multiple detection modes. Configure the detection mode, camera facing, sensitivity, predictions-panel layout, and camera opacity via URL parameters.
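As a rough sketch of how a client might read these parameters, the snippet below parses the documented query string into a config object with the defaults listed in this reference. The function name and object shape are illustrative assumptions, not part of the actual app code.

```javascript
// Hypothetical sketch: parse the documented URL parameters, applying the
// defaults from this reference when a parameter is absent or malformed.
function parseExerciseConfig(search) {
  const params = new URLSearchParams(search);
  // Read a numeric parameter, falling back when missing or not a number.
  const num = (key, fallback) => {
    const raw = params.get(key);
    const value = raw === null ? NaN : Number(raw);
    return Number.isFinite(value) ? value : fallback;
  };
  return {
    gamemode: params.get("gamemode") ?? "detectObject",
    cameraFacing: params.get("cameraFacing") ?? "environment",
    sensitivity: num("sensitivity", 0.6),
    displayHorizontal: params.get("displayHorizontal") ?? "right",
    displayVertical: params.get("displayVertical") ?? "top",
    maxDisplayItems: num("maxDisplayItems", 5),
    cameraOpacity: num("cameraOpacity", 1.0),
  };
}
```

For example, parsing "?gamemode=detectFaceEmotion&sensitivity=0.8" yields an object with those two values set and every other field at its documented default.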

Basic Parameters

Core parameters that define the camera exercise behavior.

gamemode

String

Selects the detection mode. Options: detectObject, detectFaceEmotion.

?gamemode=detectFaceEmotion
Default: detectObject

Examples:

/?gamemode=detectObject

/?gamemode=detectFaceEmotion
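Since gamemode only accepts two values, a client will typically want to fall back to the default when the URL carries something else. A minimal sketch (the helper name is an assumption, not part of the actual API):

```javascript
// Hypothetical helper: accept only the documented gamemode values,
// falling back to the default (detectObject) for anything else.
const GAMEMODES = ["detectObject", "detectFaceEmotion"];

function resolveGamemode(raw) {
  return GAMEMODES.includes(raw) ? raw : "detectObject";
}
```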

cameraFacing

String

Selects which camera to use: user (front-facing) or environment (rear-facing).

?cameraFacing=user
Default: environment

Examples:

/?cameraFacing=user
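The user and environment values match the standard facingMode values in the browser's MediaTrackConstraints, so a plausible (assumed) implementation maps the parameter straight into a getUserMedia constraints object:

```javascript
// Hypothetical sketch: map the cameraFacing parameter onto a getUserMedia
// constraints object. "user" and "environment" are the standard
// MediaTrackConstraints facingMode values; anything else falls back to
// the documented default, "environment".
function facingConstraints(cameraFacing) {
  const facingMode = cameraFacing === "user" ? "user" : "environment";
  return { video: { facingMode }, audio: false };
}
```

In the browser, the result would be passed to navigator.mediaDevices.getUserMedia(...).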

sensitivity

Number (0-1)

Sets the detection sensitivity, from 0 (lowest) to 1 (highest).

?sensitivity=0.8
Default: 0.6

Examples:

/?sensitivity=0.8
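Because sensitivity (like cameraOpacity below) is documented as a 0–1 number, a client would likely clamp out-of-range URL values rather than pass them through. A small sketch, with an assumed helper name:

```javascript
// Hypothetical helper: coerce a 0-1 URL parameter (sensitivity,
// cameraOpacity) into its documented range, using the parameter's
// default when the value is missing or not a number.
function clamp01(raw, fallback) {
  const value = Number(raw);
  if (!Number.isFinite(value)) return fallback;
  return Math.min(1, Math.max(0, value));
}
```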

displayHorizontal

String

Controls the horizontal position of the predictions panel: left, center, or right.

?displayHorizontal=left
Default: right

Examples:

/?displayHorizontal=center

displayVertical

String

Controls the vertical position of the predictions panel: top, middle, or bottom.

?displayVertical=middle
Default: top

Examples:

/?displayVertical=bottom

maxDisplayItems

Number

Maximum number of predictions shown simultaneously.

?maxDisplayItems=8
Default: 5

Examples:

/?maxDisplayItems=3

cameraOpacity

Number (0-1)

Sets the camera feed opacity, from 0.0 (fully transparent) to 1.0 (fully opaque).

?cameraOpacity=0.7
Default: 1.0

Examples:

/?cameraOpacity=0.5

/?cameraOpacity=0.8

Complete Example

A full camera configuration example:

/?gamemode=detectFaceEmotion&cameraFacing=user&sensitivity=0.7&cameraOpacity=1.0
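Building a URL like the one above programmatically can be done with URLSearchParams; the builder function below is an illustrative sketch, not part of the actual API. Note that URLSearchParams percent-encodes values and that numbers like 1.0 serialize as "1".

```javascript
// Hypothetical sketch: assemble a camera exercise URL from a config
// object whose keys match the documented parameter names.
function buildExerciseUrl(base, config) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(config)) {
    params.set(key, String(value));
  }
  return `${base}?${params.toString()}`;
}

const url = buildExerciseUrl("/", {
  gamemode: "detectFaceEmotion",
  cameraFacing: "user",
  sensitivity: 0.7,
  cameraOpacity: 1.0,
});
```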