The first round soliciting early adopters (Glass Explorers) for Project Glass has concluded. Lucky are the engineers and hackers who will be the first to lay their hands on what is undoubtedly the ‘Next Big Thing’ in the augmented reality field. I say so because we have long been playing around with and exploring augmented reality applications, and even though some were very good at showcasing the potential of this field in the near future and its impact on people’s lives, these apps lacked one essential component. Glass.
What mobile phones couldn’t offer, Glass can. The ability to overlay augmented information on top of real-life visual feedback is a crucial element of an immersive augmented reality experience. Through its Glass videos, Google has shown the world some basic features tightly tied to its own products. Now (or very soon) it will be up to developers to explore the new realm of endless possibilities and applications for this technology.
My personal interests revolve around meshing AI (Artificial Intelligence) with AR, which is inevitably the next technological leap. Even though AI is still at a very early stage, algorithms have reached a very satisfying level in data analysis and statistical machine learning, prediction, computer vision and image processing, as well as speech recognition.
Alright, now what?
Now, Imagine The Following Scenarios
Coincidence or not, the Eulerian Video Magnification algorithm has been developed and its open-source implementation has already been shared with the online community. To better understand what this computer vision algorithm does, watch the video below:
[Embedded video: Eulerian Video Magnification demonstration]
The relevant part of the video, and the part discussed in this post, is the ability to amplify frames from a camera’s live stream to reveal things invisible to the naked eye. Merge this new ability with post-rendering analysis, and nothing prevents the implementation of a lie-detection module (currently feasible only with a polygraph) or a body-language interpretation module giving Glass the capacity to suggest behavioral conduct.
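To make the idea concrete, here is a toy, one-dimensional sketch of the core trick behind Eulerian video magnification: band-pass a signal in the temporal frequency domain and amplify just that band. This is not the authors’ implementation (the real algorithm applies this per pixel, across spatial-frequency bands of a video); the frame rate, frequencies, and gain below are made-up illustration values.

```python
import numpy as np

def amplify_band(signal, fps, f_lo, f_hi, gain):
    """Amplify frequency components between f_lo and f_hi (Hz) by `gain`.

    A 1-D analogue of Eulerian video magnification's temporal filtering:
    the real algorithm applies this kind of band-pass amplification to
    every pixel of a video, per spatial-frequency band.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= gain                     # boost only the chosen band
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic "pixel intensity trace": a bright region carrying a faint
# 1.2 Hz pulse (roughly a 72 bpm heartbeat), invisible at this amplitude.
fps = 30
t = np.arange(300) / fps
trace = 128.0 + 0.2 * np.sin(2 * np.pi * 1.2 * t)

# After a 50x boost of the 0.8-2.0 Hz band, the pulse dominates the trace.
boosted = amplify_band(trace, fps, f_lo=0.8, f_hi=2.0, gain=50.0)
print(np.ptp(trace), np.ptp(boosted))
```

The mean brightness (the DC component at 0 Hz) is untouched; only the faint periodic variation is magnified, which is exactly why the technique can make a pulse visible in otherwise static-looking footage.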
Face.com has developed a platform for efficient and accurate facial recognition, which makes mapping a name to a face a simple matter of querying a large database of tagged faces. No one will be anonymous anymore. If you have a profile image of yourself somewhere on the web, chances are mapping that face to a name is not difficult.
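At its core, such a lookup is a nearest-neighbour search over face descriptors. The sketch below assumes a recognition model has already turned each face into an embedding vector (here faked with random vectors and invented names); Face.com’s actual API and representations are not public in this form.

```python
import numpy as np

# Hypothetical gallery: each known person is represented by a face
# "embedding" (the feature vector a recognition model would produce).
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching name by cosine similarity, or None."""
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = emb @ probe / (np.linalg.norm(emb) * np.linalg.norm(probe))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# A slightly noisy view of "bob" should still match his gallery entry.
probe = gallery["bob"] + rng.normal(scale=0.1, size=128)
print(identify(probe, gallery))
```

The threshold is what keeps an unknown face from being force-matched to the closest stranger; tuning it is the trade-off between false matches and missed matches.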
I believe it’s only a matter of half a decade or so before we will be able to implement almost all the features of Sight depicted in the short film below:
[Embedded video: Sight short film]
The technology is mature enough to allow the development of an application capable of detecting a face among many other objects in a scene, mapping a name to the detected face, aggregating all publicly available information related to that name, and finally displaying the relevant results on the HUD.
And it doesn’t require a multi-million-dollar company or a research facility to develop the prototype! Most of the necessary components have already been developed and are well documented; variations of their implementations are even available for free online. Heck, if I had Glass in my hands now, that’s the first thing I would be doing!
Privacy no more
Yep, you guessed it: being anonymous to a group of people will soon be very hard. The only way to be totally anonymous is to be offline, but really, who is? Most people are discussing the matter of privacy at the level of social media sharing of footage and snapshots. That’s really only the first layer of the issue. I can’t predict the impact this will have on daily human interactions, knowing that one’s actions can be analysed in real time by a system that is always watching…
I suppose people will adapt, just as they did to previous life-changing technologies. Many will resist at first, but eventually everyone will be forced to be part of the new culture.
It happened before and it will happen again.