Main Structures

Communication Robot Acts

CommunicationRobotActs (CRA) describe adjustable robot behaviors. A CRA consists of a sequence of actions. These actions are called BehaviorActions and initially include the following behaviors:

  • Speech: Voice output of a text message.

  • LookAt: Viewing target points in space.

  • Emotion: Expression of basic emotions.

  • Animation: Execution of predefined animations.

  • Wait: Holding the CRA in a waiting state.

  • SocialExpression: Execution of predefined social expressions.

By arranging these actions in series or in parallel, a sequence of actions can be defined for a specific static behavior. To specify such an ordering, an action can wait for the end of a previous action, which creates levels of action execution. The following visualization shows an example of a CommunicationRobotAct with 4 BehaviorActions and 3 levels of execution. The actions on the second level wait for the end of the linked action on the first level. The LookAt and Emotion actions on the second level are executed in parallel, so a multimodal execution of behaviors can also be realized. The Speech action on the third level waits for the end of the Emotion action. The gray dotted line symbolizes only the hierarchical processing of actions, not a direct connection between the actions.

Example of CommunicationRobotAct

Communication Robot Act containing 4 BehaviorActions on 3 Levels of execution.
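As a rough illustration of this structure, the following Python sketch models the CRA from the figure above. The class and field names (BehaviorAction, wait_for, the parameter keys) are assumptions made for this example and do not mirror RISE's actual data model or file format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BehaviorAction:
    kind: str                                    # e.g. "Speech", "LookAt", "Emotion"
    params: dict = field(default_factory=dict)
    wait_for: Optional["BehaviorAction"] = None  # action whose end this one waits for

# Level 1: a Speech action starts immediately.
greet = BehaviorAction("Speech", {"text": "Hello, nice to meet you!"})

# Level 2: LookAt and Emotion both wait for the Speech action above,
# so they start together (in parallel) once it has ended.
look  = BehaviorAction("LookAt",  {"target": [1.0, 0.2, 1.5]}, wait_for=greet)
smile = BehaviorAction("Emotion", {"name": "happy"},           wait_for=greet)

# Level 3: a second Speech action waits for the end of the Emotion action.
followup = BehaviorAction("Speech", {"text": "How are you today?"}, wait_for=smile)

communication_robot_act = [greet, look, smile, followup]
```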

The execution of a CRA is seen as the robot’s output and can be configured, interpreted, executed and monitored by the main application RISE. The BehaviorActions are extensible, and RISE transfers them to robot wrappers via a ROS interface.

Interaction Rules

InteractionRules (IR) are a powerful tool for building reactive and complex behavior. An IR is defined by a state machine graph, where each state contains two processing areas with different functionalities. The first processing area is executed when a state is entered, and it supports several functionalities, including:

  • CommunicationRobotActs: starts a Communication Robot Act (CRA).

  • InteractionRule: starts another IR without stopping the current one.

  • raiseEventTopic: raises an event with a defined message.

  • assignValue: writes data to memory.

These functionalities can also be nested within an if statement, allowing for more complex behavior.
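To make this concrete, here is a hedged sketch of what the first processing area of a single IR state could contain, written as plain Python data. The keys, topic names, and state names are illustrative assumptions for this page, not RISE's actual configuration format.

```python
# Entry-processing area of a hypothetical "greeting" state.
entry_area = [
    {"communicationRobotAct": "greeting_cra"},     # start a CRA
    {"interactionRule": "monitor_attention_ir"},   # start another IR without stopping this one
    {"raiseEventTopic": {"topic": "/rise/events", "message": "greeting_started"}},
    {"assignValue": {"key": "last_state", "value": "greeting"}},
    # Functionalities can also be nested within an if statement:
    {"if": {"condition": "user_name != ''",
            "then": [{"communicationRobotAct": "personalized_greeting_cra"}]}},
]
```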

The second processing area of an IR is where transitions are executed. Transitions define the conditions under which a state will be left and which state will be executed next. Transitions wait for an event topic with a specified message, which can be sent from an external ROS node or another IR.
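Continuing the sketch above, the second processing area of the same hypothetical state could be described as a list of transitions, each waiting for an event topic with a specific message and naming the state to enter next. The topic, message, and state names are again invented for illustration.

```python
# Transitions out of the hypothetical "greeting" state.
transitions = [
    {"eventTopic": "/rise/events", "message": "user_answered", "nextState": "listen"},
    {"eventTopic": "/rise/events", "message": "timeout",       "nextState": "farewell"},
]
```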

Multiple IRs can run simultaneously, and an arriving event topic is used by only one IR. Scheduling is done by priorities, which can be visualized in the RISE application along with all the states and transitions of an IR.
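The scheduling idea can be sketched as follows: an arriving event is offered to the running IRs in order of priority and is consumed by at most one of them. The IR object with its priority attribute and consume method is an assumption made for this illustration, not RISE's actual scheduler API.

```python
def dispatch(event_topic, message, running_irs):
    """Offer an incoming event to running IRs by priority; only one may use it."""
    for ir in sorted(running_irs, key=lambda ir: ir.priority, reverse=True):
        if ir.consume(event_topic, message):  # True if one of the IR's transitions fired
            return ir                         # the event is used by only this IR
    return None                               # no running IR was waiting for this event
```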

Check out this example visualization of an IR’s transitions:

IR Transitions Example

Visualization of an Interaction Rule

With IRs, you can create powerful and flexible reactive behaviors that respond dynamically to the environment.

Working Memory

To allow the robot to use historical context in a dialogue and to access stored information about a scenario or an interaction, the Working Memory provides a structure for storing this kind of information. The Working Memory is structured as a dictionary and allows all applications in the environment to read and write information in a shared place. RISE hosts this memory and schedules the reading and writing processes. This memory is mainly used to personalize actions in relation to different contexts.
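A minimal, dictionary-based sketch of this idea is shown below: a shared key-value store that several components read from and write to, with a lock standing in for RISE's scheduling of the read and write processes. The class name and API are assumptions for illustration only.

```python
import threading

class WorkingMemory:
    """Illustrative shared key-value store; not RISE's actual implementation."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def write(self, key, value):
        with self._lock:
            self._data[key] = value

    def read(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

# Example: a perception component stores the user's name,
# and a later CRA reads it to personalize a Speech action.
memory = WorkingMemory()
memory.write("user_name", "Alex")
greeting_text = f"Hello {memory.read('user_name', 'there')}!"
```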

Working Memory

The Concept of the Working Memory.

The visualization shows the Working Memory as the central place for information. Robot behaviors in the form of CommunicationRobotActs can access information from the memory. The execution of behaviors leads to different states in the environment. Each component in the environment, for example the human, gives input into the environment, which can also be stored as information in the memory. The environment itself can also access information from the memory, for example for decision-making.