📨 Messages

Objects exchange messages with each other. When something happens, an Object sends a message to everyone who is listening. When an Object receives such a message, it interprets the content and adjusts itself if it finds the message relevant.

Three Languages To Write Messages In

Messages are packages of data that can be expressed in one of three languages.

The language of implementation is JSON; both YAML and Speech are parsed and transpiled to JSON during processing. Below is the same set of messages expressed in each of the three languages.

JSON for Backend Implementation

Examples:

{ "flow" : "activate" }
{ 
   "media" : {
      "folder" : "Presentation",
      "page" : 10
   }  
}
{ 
   "position" : {
      "x" : 10,
      "y" : 20,
      "z" : 30,
      "filter" : {
         "filter" : "if",
         "left" : {
            "property" : "name"
         },
         "right" : "Niko",
         "op" : "is"
      }
   }  
}

YAML for Readability

flow: activate
media:
  folder: "Presentation"
  page: 10
position:
  x: 10
  y: 20
  z: 30
  filter:
    filter: if
    left: 
      property: name
    right: "Niko"
    op: "is"

Speech for In-App Voice

flow activate
media folder Presentation page 10
position 10 20 30 filter if property name is Niko
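To make the transpilation concrete, here is a minimal Python sketch of how a simple Speech phrase could be turned into the JSON form. The grammar is a deliberate simplification assumed for illustration (the first token is the message type, remaining tokens are read as key/value pairs); it is not the platform's actual parser.

```python
import json

def parse_speech(phrase):
    """Naive Speech-to-JSON sketch (assumed grammar, not the real parser):
    first token = message type; a single remaining token becomes the
    content; otherwise tokens are read as alternating key/value pairs."""
    def cast(tok):
        # numeric tokens become numbers, everything else stays text
        return int(tok) if tok.lstrip("-").isdigit() else tok

    tokens = phrase.split()
    mtype, rest = tokens[0], tokens[1:]
    if len(rest) == 1:
        return {mtype: cast(rest[0])}
    content = {}
    for key, value in zip(rest[::2], rest[1::2]):
        content[key] = cast(value)
    return {mtype: content}

print(json.dumps(parse_speech("flow activate")))
# {"flow": "activate"}
print(json.dumps(parse_speech("media folder Presentation page 10")))
# {"media": {"folder": "Presentation", "page": 10}}
```

Richer phrases like the position example with a filter would need a real grammar; this sketch only covers the flat cases.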

Message Content

As mentioned above, messages are used by Objects for two purposes:

  • Inform other Objects of how their state changes or of what is happening to them.

  • Control other Objects by changing their state or by making something happen to them.

With this approach, every message describes either a state change or an occurred event.

Each message has 1) a type and 2) content, which is represented as a dictionary of other messages.

Here is a simple example from the languages above.

{ 
   "media" : { 
      "folder" : "Presentation",
      "page" : 10
   }  
}

That is equivalent to:

Message of type media containing:

  • a message of type folder

    • with text content "Presentation"

  • a message of type page

    • with numerical content 10
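This type-plus-content structure maps directly onto nested dictionaries. A small Python sketch of the same message, with the type and content pulled apart:

```python
import json

# The media message from the example above, built as a dictionary of
# sub-messages: each key is a message type, each value its content.
message = {
    "media": {                     # message of type "media"
        "folder": "Presentation",  # sub-message with text content
        "page": 10,                # sub-message with numerical content
    }
}

# A message has exactly one top-level type/content pair.
(mtype, content), = message.items()
print(mtype)                # media
print(json.dumps(content))  # {"folder": "Presentation", "page": 10}
```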

Typed Messages and Interpretation

We introduce message types to help Objects with interpretation. Types relate to the nature of what is happening with the Object, so most types are domain-specific.

Knowing the type of a message, its receiver can decide whether the message is relevant. The receiver can then use domain knowledge to interpret the message and act on it. For example, when a Loudspeaker gets a state on message, the interpretation is to start playing.

However, Objects don’t do any interpretation by themselves. Interpretation is done by Behaviors attached to them.
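The split between receiving and interpreting can be sketched as follows. All names here are illustrative, not the platform API; the point is only that the Object forwards messages while an attached Behavior holds the domain knowledge (the Loudspeaker example above).

```python
class PlaybackBehavior:
    """Interprets "state" messages for a Loudspeaker-like Object.
    (Illustrative sketch; class and method names are assumptions.)"""
    def on_message(self, obj, message):
        state = message.get("state")
        if state == "on":    # domain knowledge: "state on" means play
            obj.playing = True
        elif state == "off":
            obj.playing = False

class Object:
    def __init__(self, behaviors):
        self.behaviors = behaviors
        self.playing = False

    def receive(self, message):
        # The Object itself does no interpretation; it only forwards
        # the message to its attached Behaviors.
        for behavior in self.behaviors:
            behavior.on_message(self, message)

loudspeaker = Object([PlaybackBehavior()])
loudspeaker.receive({"state": "on"})
print(loudspeaker.playing)  # True
```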

Message Types

We provide a set of basic types to create a baseline for standardisation and improve interoperability. However, the developers of both Artifacts and Behaviors can easily extend the vocabulary by sending messages of their own types.

Below is the basic dictionary of message types and how they are used by different Objects.

Message Type             | Expected Parameters                          | Understood by
flow                     | next prev first last play pause              | Slide Screen, Photo Frame, etc.
state                    | on off reset                                 | Microphone, Timer
folder                   | <text>                                       | Text Screen
media                    | <folder> <number> <text>                     | Slide Screen, Photo Frame, etc.
page                     | <number>                                     | Slide Screen, Photo Frame, etc.
time                     | <number> <number> <number> <number>:<number> | Timer
prop                     | <text>                                       | (All Artifacts)
position rotation scale  | <number> <number> <number> (and variants)    | (All Artifacts)
channel                  | <number> <text>                              | (All Artifacts)*
plane                    | <text>                                       | (All Artifacts)*
artifact                 | <text> [<property> ...]                      | Place*

Message Filtering

When an Object issues a message, it can also provide an optional filter to signify that the message is dedicated only to a certain subset of other Objects (from its sets of listeners and of controlled Objects).

This filter is itself expressed as a message of the dedicated type filter. Below are the expected parameters for this message.

self
all
lost
avatars <filter>
<artifact> <filter>
near <position> [<number>]
first <filter> [<number>]
last <filter> [<number>]
not <filter>
<filter> and <filter>
<filter> or <filter>
(<filter>)
in <channel>
on <plane>
if <text> <comparison*> <message>
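To show how a receiver might apply such a filter, here is a sketch that evaluates a small subset of the grammar above (only all and the if ... is ... comparison; names of the helper and the property store are assumptions) against the filter from the earlier position example:

```python
def matches(filter_msg, obj_props):
    """Evaluate a tiny subset of the filter grammar against an object's
    properties. Illustrative sketch only: "all" and "if ... is ..."."""
    kind = filter_msg.get("filter")
    if kind == "all":
        return True
    if kind == "if":
        left = obj_props.get(filter_msg["left"]["property"])
        if filter_msg["op"] == "is":
            return left == filter_msg["right"]
    return False  # unknown or unsupported filters match nothing here

niko = {"name": "Niko"}
anna = {"name": "Anna"}
f = {"filter": "if", "left": {"property": "name"},
     "right": "Niko", "op": "is"}

print(matches(f, niko))  # True
print(matches(f, anna))  # False
```

The combinators (not, and, or, near, first, last, ...) would compose recursively over the same matches function.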

Speech Language

Speech is a domain-specific language designed to be used predominantly by people when they need to control certain Objects. They can do so by sending messages via one of the interfaces provided by the platform.

We see two interfaces for the Speech:

  • In-app Speech-To-Text (STT) interface via special Artifacts that support STT and can pass messages on to the Objects they control or to those that are listening.

  • Place Console text interface exposed in the Web Portal.

Speech Language Design

These interfaces shape the design of Speech:

  • Speech aims to be as close to natural spoken language as possible.

  • Speech doesn’t use special characters like ^, *, {}, etc.

  • Quotes and punctuation are optional.

  • Units (e.g. feet, minutes) and prepositions (e.g. to, from) are optional.

  • Message types (e.g. time, flow) are optional.

As a result, Speech relies on a basic preprocessor that casts types and units when they are needed for disambiguation.
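A sketch of the kind of normalisation such a preprocessor could do, dropping optional units and prepositions and casting numeric tokens. The word lists here are illustrative assumptions, not the actual vocabulary:

```python
# Words the preprocessor may drop: units and prepositions are optional
# in Speech. (Illustrative list, not the platform's real vocabulary.)
OPTIONAL_WORDS = {"to", "from", "feet", "minutes", "seconds"}

def preprocess(phrase):
    """Drop optional words and cast numeric tokens to numbers."""
    out = []
    for tok in phrase.replace(",", " ").split():
        word = tok.strip('"')
        if word.lower() in OPTIONAL_WORDS:
            continue
        out.append(int(word) if word.lstrip("-").isdigit() else word)
    return out

print(preprocess("position to 10 20 30"))  # ['position', 10, 20, 30]
```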

Speech Syntax

Speech syntax is essentially Lisp-like, with brackets made optional as the design dictates.

  • Lisp: (time 20:10)

    • Speech: time 20:10

  • Lisp: (media (folder "Presentation") 10)

    • Speech: media folder "Presentation" 10
