Object Glass interrelates objects in the world with CG widgets. Identifiers fall into five major types:

Visual identifiers: An image of the object is used to identify it. This may involve a third-party image indexing service that assigns Ids to images and provides a search API: image data is sent, and an identification is returned if the image matches a pre-indexed image. The identifier assigned to the image is used as the Glass Id. The figure below demonstrates this idea.
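The interaction with such a service can be sketched as follows. This is a minimal stand-in, not a real API: the class and method names are hypothetical, and the visual similarity search is simulated by hashing the image bytes so the sketch runs end to end.

```python
import hashlib

class ImageIndexService:
    """Hypothetical stand-in for a third-party image indexing service.

    A real service would perform visual similarity search; here we
    simply hash the image bytes so the sketch is runnable.
    """

    def __init__(self):
        self._index = {}

    def index_image(self, image_bytes):
        # Assign a stable identifier to the image and remember it.
        image_id = hashlib.sha1(image_bytes).hexdigest()[:12]
        self._index[image_id] = image_bytes
        return image_id

    def search(self, image_bytes):
        # Return the identifier only if the image was previously indexed.
        image_id = hashlib.sha1(image_bytes).hexdigest()[:12]
        return image_id if image_id in self._index else None

service = ImageIndexService()
glass_id = service.index_image(b"eiffel-tower.jpg bytes")  # used as Glass Id
```

In a real deployment the two calls would hit the indexing service over HTTP, and `search` would tolerate near-duplicate images rather than requiring identical bytes.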

Scannable code: A code that can be scanned, such as a QR code, is attached to the object when creating a widget and is scanned to fetch related widgets.

Numeric or textual code: A numeric code is entered by the user and assigned to the widget. To search for a widget, the user enters the code and a related widget is fetched. The idea is the same as with a scanned code, except that the code is entered manually as a combination of digits and/or letters.

The figure below demonstrates the idea of using a code to store and fetch widgets.
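The store-and-fetch idea can be sketched with an in-memory store keyed by the user-entered code. The normalization (trimming and upper-casing) is an assumption added here for illustration, not something the design prescribes.

```python
# Hypothetical in-memory widget store keyed by a user-entered code.
widget_store = {}

def store_widget(code, widget):
    # The entered code (digits and/or letters) becomes the Glass Id.
    widget_store.setdefault(code.strip().upper(), []).append(widget)

def fetch_widgets(code):
    # Entering the same code later fetches the related widgets.
    return widget_store.get(code.strip().upper(), [])

store_widget("AB12", {"type": "TextBox", "text": "Great spot!"})
print(fetch_widgets("ab12"))  # the lookup here is case-insensitive
```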

 

Lingual: The object is identified by a term, word, or phrase. The text used to identify the object may be written, or may be spoken and transcribed to written form and then used as the code. The figure below demonstrates this idea.

Physical measurement: Any physical measurement that can be formulated as an identifier of some object and can be digitized can also be used as the Glass Id assigned to a stored widget. When the same physical condition is detected, it can be used to fetch related widgets. The figure below demonstrates this idea.

Object Glass Start Conditions

Generally, the start condition in the case of Object Glass is some event or user behavior that triggers the identification process. The result of this identification process is then used as the Glass Id for fetching related widgets from storage. The actual presentation of interrelated widgets depends on the specific application that uses Object Glass.

Taking a picture of some object: The application treats the event of photographing an object as a trigger for finding a matching Id by searching a database of pre-indexed images. This can be handled with various third-party APIs that specialize in this task. If an Id is retrieved, it is set as the Glass Id and fetching interrelated widgets can start.

Scanning a code: The user scans a code attached to an object, and this action starts the fetching process.

Verbal user input: The user speaks, either in response to a program request or spontaneously. The verbal input is transcribed to text. Words or phrases are then checked against an index, and once a match is found an Id is returned and set as the Glass Id so that searching for interrelated widgets can start. This approach may be very useful both for people with disabilities, such as blind users, and when the user's attention should not be distracted from the visual field, such as when driving.

Textual user input: The input is checked against an index of words or phrases, and once a match is found an Id is returned and set as the Glass Id.
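Both the verbal path (after transcription) and the textual path end in the same index lookup, which can be sketched as below. The index contents and Id format are illustrative assumptions.

```python
# Hypothetical word/phrase index mapping recognized text to a Glass Id.
phrase_index = {
    "eiffel tower": "obj-001",
    "mona lisa": "obj-002",
}

def resolve_glass_id(text):
    # Normalize the transcribed or typed input, then look it up.
    key = " ".join(text.lower().split())
    return phrase_index.get(key)  # None means no match was found

print(resolve_glass_id("Eiffel  Tower"))  # matches despite case/spacing
```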

Sound sample: In this case the sound is not translated to human language but is sampled and processed with a Fourier transform to generate a profile of harmonics – the sound spectrum. The spectrum acts like a signature of the sound, and it can be used as the code. For example, the sound of a bus can be used to share information about buses. A user samples the sound of a bus with a recording device and sends the sample to a sound processing API. The API performs sound analysis and generates a spectrum. The spectrum is formulated into a textual string so it can be used as a code, and the code is attached to a widget as its Glass Id. In the detection scenario, the sound of a bus is sampled and processed, a code is generated, and this code is then used to search for widgets that were shared in relation to a bus sound.
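The spectrum-to-code step can be sketched in a simplified form: a naive DFT produces the magnitude spectrum, which is reduced to a few energy bands and quantized into a short string. This is a toy fingerprint under stated assumptions (band count, hex quantization), not a production audio-matching algorithm.

```python
import cmath
import math

def sound_code(samples, bands=8):
    """Turn an audio sample into a coarse spectral code string.

    A naive DFT computes the magnitude spectrum; the spectrum is then
    averaged into a few energy bands and each band is quantized to one
    hex digit, so recordings of a similar sound map to a similar code.
    """
    n = len(samples)
    # Magnitudes of the first n // 2 DFT bins (positive frequencies).
    mags = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    # Average the bins into a fixed number of bands.
    per_band = max(1, len(mags) // bands)
    band_energy = [sum(mags[i:i + per_band]) / per_band
                   for i in range(0, per_band * bands, per_band)]
    top = max(band_energy) or 1.0
    # Quantize each band to one hex digit (0..15) relative to the peak.
    return "".join(format(int(15 * e / top), "x") for e in band_energy)

# A pure tone concentrates its energy in one band, so its code is stable.
tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
code = sound_code(tone)
```

The resulting string plays the role of the Glass Id; a real system would use a robust fingerprinting service so that two noisy recordings of the same sound still resolve to the same code.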

Physical measurement event: This is more general than the sound sample example (which is itself a type of physical measurement). Generally speaking, an event of some physical measurement can be used as the trigger for fetching related widgets. Refer to the surgery critical-events sharing system described in the examples below.

Attaching a new Widget to an object identifier

In the case of Object Glass, it is necessary to explain the process of attaching a widget to an object so that the widget can be stored as interrelated with that object.

Unlike Website Glass, for example, where the system knows in advance which Id a widget relates to (some derivative of the page URL), in the case of Object Glass the user needs to 'tell' the system which object they would like to relate a widget to. In other words, the user needs to perform some action that relates a code with an object.

Attach-to-code function: A widget that supports Object Glass should therefore have a feature that lets the user set a code related to the widget. This code is, of course, related to an object. To handle that, a widget may have, for example, a button that when clicked lets the user select an option for setting the code:

  • Take a picture of a related object: The user takes a picture of the object and sends it to an indexing service via a third-party API. The result is a code assigned to the image by the indexing system, which can then be assigned as the Glass Id code of the object.
  • Scan code: The user scans a code, e.g. a QR code, and the textual content behind the code is attached to the widget.
  • Verbal: A user who selects the verbal option is requested to record a word or phrase that will be used to identify the object.
  • Textual: The user is requested to enter a word or phrase in a text field.
  • Other: Take some physical measurement – for example, record a sound, or use a physical measurement reading (for example blood pressure) as an indication of state.
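The options above all converge on the same outcome: the widget ends up holding a Glass Id. A minimal sketch, with all class and method names illustrative rather than a real API:

```python
# Sketch of the attach-to-code feature: one method per code-setting
# option, each path ending in the same glass_id field on the widget.

class Widget:
    def __init__(self, kind):
        self.kind = kind      # e.g. "TextBox"
        self.glass_id = None  # set by one of the relate_* options

    def relate_to_scanned_code(self, scanned_text):
        # e.g. the textual content behind a QR code.
        self.glass_id = scanned_text

    def relate_to_text(self, word_or_phrase):
        # Textual (or transcribed verbal) input becomes the code.
        self.glass_id = word_or_phrase.strip().lower()

    def relate_to_image_id(self, image_id):
        # An Id returned by a third-party image indexing service.
        self.glass_id = image_id

w = Widget("TextBox")
w.relate_to_text("Eiffel Tower")
print(w.glass_id)  # normalized text used as the Glass Id
```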

 

Examples

Example1 - Touring places CG Sharing:

This hypothetical application is for tourists at tourism sites or museums. The tourist takes a picture of an object, e.g. the Eiffel Tower or a picture in a museum, or says its name to the software. The software checks for recognition based on the user input and presents in response widgets shared by friends who visited the same site. To save a review, the user creates a new widget (e.g. a TextBox widget), clicks the “Relate” button on the widget, selects the 'picture' option or 'name' option, then takes a picture or says the object's name; the remaining process generates an Id, attaches it to the widget, and stores the widget in the database.

 

Example2 - Morning Newspaper Sharing:

This hypothetical mobile application lets friends share their ideas regarding articles in the morning newspapers people pick up in the street (often for free). The application provides a predefined set of newspaper names (for the country or region). The user selects the newspaper name, which gives the initial context for sharing (the Glass). To share a widget about an article in the newspaper, the user scans the article. The scanned article is sent for indexing, or for recognition against pre-scanned article images of the same newspaper. The combination of the returned code and the newspaper name is what composes the state of the object – the Glass Id:

[newspaper name]_[article image index]

A widget can now be set in relation to the article, and a group of friends to share with can be set.

The view of shared widgets can have two flavours: either showing all widgets shared by friends in relation to a specific newspaper, or showing shared widgets per article. In the first case, when the user selects a newspaper from the list, the app fetches all widgets with the relevant Glass Type and a Glass Id that starts with the newspaper name: [newspaper name]_*

In the second case, widgets are fetched only after the shared-with user scans the article and retrieves the article's image index, which composes the full state identifier for locating shared widgets: [newspaper name]_[article image index]
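The two view flavours can be sketched as a prefix match versus an exact match on the composed Glass Id. Newspaper names, indices, and review contents below are illustrative.

```python
# Glass Ids follow the [newspaper name]_[article image index] convention,
# so a per-newspaper view is a prefix match and a per-article view is
# an exact match. All sample data here is hypothetical.
widgets = {
    "DailyNews_img042": ["review A"],
    "DailyNews_img077": ["review B"],
    "MetroTimes_img042": ["review C"],
}

def widgets_for_newspaper(name):
    # First flavour: all widgets whose Glass Id starts with the name.
    return [w for gid, ws in widgets.items()
            if gid.startswith(name + "_") for w in ws]

def widgets_for_article(name, image_index):
    # Second flavour: exact match on the composed Glass Id.
    return widgets.get(f"{name}_{image_index}", [])

print(widgets_for_newspaper("DailyNews"))
print(widgets_for_article("MetroTimes", "img042"))
```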

 

Example3 - Barcode CG Sharing:

This hypothetical application is for sharing friends' reviews of products in the supermarket. Object recognition is handled by scanning the barcode of the product. The scan generates an Id that is then used to fetch widgets with friends' reviews of the product. To save a review, the user creates a new widget (e.g. a TextBox widget), clicks the “Relate” button on the widget, selects the 'scanning' option, then scans the barcode; the remaining process generates an Id, attaches it to the widget, and stores the widget in the database.

Example4 - Sharing for blind people:

This hypothetical application uses Braille code as the input for creating the Glass Id. Braille is used as the key for a sharing system that is accessible to blind people. The application may be implemented in several ways. Here is one example:

In the first step we need to process a portion of the Braille code – for example, a book title – into a digital code. If we have a special tool that can translate Braille to text, we can use it, and the text will be our digital code. If not, we can scan the Braille and create an image of it, then send the image to an image recognition service (for example Moodstock) and get a related code in response. Once we have a digital code that represents part of the Braille code, we can create a widget and mark it as related to the code. The user can create, for example, a RecordableNote widget, record a message, and then attach the widget to the code. The widget can now be shared in response to the same code. A second user – a friend of the first who reads the same book – can perform the same operation of scanning the book title written in Braille, and the recorded note widget will be played.

Now let's assume we use a tool that knows how to translate Braille into text; the text is then our code – the title of the book. This allows a third user to scan the title of an ordinary book and get the same RecordableNote widget left by the first (blind) user.

* Of course, a regular smartphone may not be a device a blind person can use to share widgets, and there would be a need for a special device on which the user can operate CG – for example, click a button and record a message, click another button and scan some Braille code, and use voice recognition to manage peer groups, etc. Yet this example demonstrates nicely the idea of sharing in relation to objects, assuming a way exists to represent the object with a code. In this example the book is the object and the title of the book is used to generate the identifier of the book (the code); if we can translate Braille to text, or vice versa, and provide the special device adapted for blind users, then we can use CG to allow blind users to share with friends in relation to objects.

Example5 - Sound based sharing:

In this application, widgets are shared in relation to sounds. Imagine an app that takes environmental sound as its input and shows widgets related to this sound – for example, the sound of a bus or train, or even the vocal signature of our friends and family members. The application processes audio samples to generate a code based on the spectrum of the sound. The spectrum is formulated into a code, and this code is then used as the Glass Id by which widgets are shared.

Example6 - Surgery Room

This example shows that CG is more than just an entertainment technology. This hypothetical application is intended to inspire and to show what can be done with State Oriented Sharing; it does not pretend to present accurate medical knowledge. The application is for sharing useful information during surgery. The information is shared between teams of doctors who perform similar surgeries, and more specifically it is intended to provide professional ideas during critical situations. The application uses the surgery type as the Glass Type. Some physical measurements, for example heartbeat and blood pressure, are used as the Glass Id; they are formulated into codes that reflect physical conditions. The shared information (widgets) consists of vocal notes that suggest advised operations during critical conditions. Widgets are stored in relation to critical conditions such as a critical heartbeat rate.

In the real-time scenario (surgery), the patient's measurements are monitored, and when a critical condition in some of the physical measurements is detected, the system fetches professional ideas (widgets) on how to handle the situation.
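The monitoring loop can be sketched as mapping raw measurements to condition codes that act as Glass Ids. The thresholds, codes, and note contents below are illustrative placeholders, not medical guidance.

```python
# Hypothetical store of widgets keyed by critical-condition codes.
CRITICAL_WIDGETS = {
    "HR_HIGH": ["voice note: advised procedure for tachycardia"],
    "BP_LOW": ["voice note: advised procedure for hypotension"],
}

def condition_code(heart_rate, systolic_bp):
    # Formulate raw measurements into a condition code (the Glass Id).
    # Thresholds here are placeholders, not real clinical values.
    if heart_rate > 140:
        return "HR_HIGH"
    if systolic_bp < 80:
        return "BP_LOW"
    return None  # no critical condition detected

def on_measurement(heart_rate, systolic_bp):
    # Called on every monitor reading; fetches widgets when critical.
    code = condition_code(heart_rate, systolic_bp)
    return CRITICAL_WIDGETS.get(code, []) if code else []

print(on_measurement(150, 120))  # critical heart rate triggers a fetch
print(on_measurement(75, 120))   # normal readings fetch nothing
```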