Location-Based AR

Our past Augmented Reality projects (such as Gilt City) have made use of the location-based features of Layar. Your phone identifies its location through GPS; we provide the imagery and information about nearby points; and the Layar app displays them overlaid on the real world.

[Image: the location-based Layar process]

The faded items are additional steps in the process: Layar provides the directory listing through which people find your content, and the phone's internal compass is used to determine your orientation.

There have been a number of successful projects, but their designs have always needed to accept a degree of inaccuracy in placing imagery - which varies depending on environmental conditions.
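
For comparison with the image-tracking responses covered later, a location-based layer returns hotspots anchored to coordinates rather than to reference images. The sketch below is indicative rather than definitive: the layer name, title, and coordinates are made up, and the field names reflect my reading of the Layar geolocation response format.

{
    "layer": "example-geo-layer",
    "errorCode": 0,
    "errorString": "ok",
    "hotspots": [
        {
            "id": "1",
            "text": {
                "title": "A nearby point of interest"
            },
            "anchor": {
                "geolocation": {
                    "lat": 52.9217,
                    "lon": -1.4767
                }
            }
        }
    ]
}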

Image Tracking AR

An alternative to location-based AR is, instead of using GPS and orientation sensors, to look for recognisable features in the video stream and use those as reference points. Until relatively recently, limitations in computing power restricted the features that could be recognised to bold, high-contrast images. These "markers" are still among the most recognisable aspects of AR.

[Image: marker-based AR tracking]

http://www.pocket-lint.com/news/38901/georg-klein-interview-ar-ptam

It is now possible to use "marker-less", or "natural-feature", technologies, such as that shown in the image below. Your chosen images are processed in advance and a mathematical description of their features is generated. For augmented reality, each frame (on the order of 20 per second) is individually scanned for features, which are then compared against the prepared library.

[Image: natural-feature tracking (ASSURF demo)]

http://code.google.com/p/in-spirit/wiki/ASSURF

Layar's Natural-Feature Tracking

Layar recently added natural-feature tracking to their mobile app, and it is this technology that is used in the Joseph Wright AR project. The starting point is the combined painting/ceramic image, as it is intended to appear on the mobile device. It is then split: the image of the painting is sent to Layar for feature detection, and the augmentation image is stored on our own servers.

[Image: splitting the combined image into a reference image and an augmentation]

On the mobile device the camera feed is scanned for matching patterns and, if one is detected, the original image is rebuilt using the augmentation fetched from the server.

JSON Response

All uses of Layar involve an exchange of information. The application sends information about the user's location, a code to identify them, the layer they're browsing, and so on (GetPOIs request definition), and in response you have a choice of information to return (JSON response definition). Whatever you choose to return has to be structured in a certain way, following the JavaScript Object Notation (JSON) format.
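
To give a sense of the request side of the exchange, the app calls your service with its details passed as query parameters. The sketch below is illustrative only: the endpoint address is hypothetical, the coordinates and user identifier are made up, and the parameter names are as I understand them from the GetPOIs definition.

    http://example.org/getpois?layerName=jwrightar&userId=6f85d06929d160a7c8a3cc1ab4b54b87&lat=52.9217&lon=-1.4767&radius=1000&version=6.0

Shown below is the basic structure for returning information about an augmented image.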

{
    ...information about the response...
    "hotspots": [
        {
            "anchor": {
                ...reference image...
            },
            "object": {
                ...augmentation image...
            }
        }
    ]
}

Using that structure, the sample below shows the minimum required to pair an augmentation with a reference image. Essentially this is returned to the Layar app in the same way that a webpage is returned to a browser.

{
    "layer": "jwrightar",
    "errorCode": 0,
    "errorString": "ok",
    "hotspots": [
        {
            "id": "1",
            "anchor": {
                "referenceImage": "jw-alchemist-ref"
            },
            "object": {
                "contentType": "image/png",
                "url": "http://johngoto.org.uk/layar-jwright/augments/Alchemist-Aug.png",
                "size": 1.5
            }
        }
    ]
}
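
Because the JSON is delivered over HTTP just like a webpage, the one detail worth checking is that your server sends it with an appropriate content type. A minimal sketch of the response headers (the status line aside, the header shown is the important one):

    HTTP/1.1 200 OK
    Content-Type: application/json

Any server-side language capable of printing those headers followed by the JSON body is enough to power the layer.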

Matthew Leach
January 2012