RFID Reading Field Visualizing Probe Design

Introduction

Last week, while I was watching some videos on Vimeo looking for inspiring material, I hit the jackpot: “Immaterials: the ghost in the field”. I was so intrigued by their work that I didn’t even pay attention to the publishing date (2 years ago). While watching the video, ideas started rushing through my head, since I’ve been working with RFID for a while and have always faced problems with the reading field of the antennas. Of course, going to the datasheets and trying to figure out the reading volume’s shape is one possibility, but it’s just not “real” enough. I wanted to know more about the people behind this project and reached nearfield.org. The research was complete, the papers were published, and the website was last updated in 2011, but people were still posting comments.

I started wondering what happened to this amazing technique, and why no one has pursued this line of study. And while I cannot dispute the beauty of this technique’s results, I couldn’t help thinking how inefficient, time-consuming and limited it was. So I decided to design my own tool that could take this research a step further.

RFID Reading Volume 3D Mapping Probe:

The concept is simple; it is visualized in the diagram below:

RFID Reading Field Visualizing Probe High level architecture

The diagram is at a very high level of abstraction, and it’s not worth going into its details at the moment as some components may vary depending on the implementation. However, I’m gonna describe what is illustrated above:

  1. The Probe is made up of 5 modules:
    a. RFID Tag
    b. Coordinates Recording module
    c. Accelerometer
    d. LED
    e. Controller
  2. Once the Probe, specifically the Tag (1a), is within reading distance of the RFID antenna, the reader dispatches a signal to the processing software layer, which in turn triggers the recording algorithm.
  3. The recording algorithm asks the Controller to grab data from the two modules (1b and 1c) and asks the LED (1d) to blink.
  4. The data is then gathered, analyzed and stored, and the coordinates combined with the accelerometer data are used to draw a 3D point cloud of the reading volume (a rough code sketch of this loop follows below).
It is worth noting that this Probe can be adapted to different technologies; in fact, any technology that offers an instant response.
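To make the flow above more concrete, here’s a rough sketch of what the recording algorithm on the processing layer might look like. It’s written in JavaScript purely for illustration; the probe interface (readCoordinates, readAccelerometer, blinkLed) and the point format are assumptions that will change with the actual implementation.

```javascript
// Rough sketch of the recording algorithm (steps 2-4 above).
// The probe interface (readCoordinates, readAccelerometer, blinkLed)
// is an assumed API for illustration only; the real calls will depend
// on the implementation.

const pointCloud = []; // accumulated 3D samples of the reading volume

// Called whenever the reader detects the probe's tag (step 2).
async function onTagRead(probe) {
  // Step 3: grab data from the coordinates module and the accelerometer,
  // and blink the LED so the operator knows a sample was taken.
  const [position, orientation] = await Promise.all([
    probe.readCoordinates(),   // e.g. { x, y, z } of the probe tip
    probe.readAccelerometer(), // e.g. { ax, ay, az }, used to derive tilt
  ]);
  probe.blinkLed();

  // Step 4: store the sample; the full set is later rendered as a point cloud.
  pointCloud.push({ ...position, ...orientation, time: Date.now() });
}
```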

To be more specific, I drew a simple annotated sketch of what the probe might look like:

RFID Reading Field Visualizing Probe Sketch

This design is currently pending a prototype. I’m gonna start working on it next week, and I’ll update this post accordingly.

I will choose one of these 2 paths:

  1. Develop a mobile application, embed the missing modules into a smartphone, and have the application do all the logic.
  2. Implement the probe using an open source controller (Arduino and the like). I’m sure I will not need much processing power at the Probe level since all the work will be done by the controlling PC (see the sketch after this list).
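If I end up going with the second path, the PC side could read the Probe’s samples from the controller over a serial link. Here’s a hypothetical sketch in JavaScript (Node.js with the serialport package); the port path, baud rate and wire format are all assumptions at this point.

```javascript
// Hypothetical PC-side listener for path 2: the controller streams
// samples over serial and the PC does all the heavy lifting.
// Uses the Node.js "serialport" package; exact constructor signatures
// differ between package versions.
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');

const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 });
const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

// Assumed wire format: one CSV line per sample, "x,y,z,ax,ay,az".
parser.on('data', (line) => {
  const [x, y, z, ax, ay, az] = line.split(',').map(Number);
  console.log('sample', { x, y, z, ax, ay, az }); // feed the point cloud here
});
```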

Potential Value:

Since this is a side project, I will set aside its business value and focus on the personal educational benefit; maybe a student or researcher will find value in this work as well. I have not yet done my homework with regard to looking for off-the-shelf solutions, but I am going to work on this either way, even if some argue that I’ll be re-inventing the wheel.

Three.js 3D JavaScript Engine using HTML5’s canvas

Three.js: Particles example

Introduction

With the first public working draft of HTML5 released, experimentation with the new canvas and audio syntax additions began. The <canvas> tag is pretty stable and close to completion, hence the multitude of projects being built on top of it. One of these projects is the experimental 3D JavaScript engine developed by Ricardo Cabello, also known as Mr.Doob. This engine is called three.js, and below you can find a brief description of it.

What is three.js?

The three.js engine’s functionality is similar to a framework’s in certain ways. Typically, an engine provides developers with a platform that manages common tasks, reducing the time spent handling processes, physics, AI, rendering, memory management, etc. In this case, however, three.js provides you with a renderer (<canvas>, <svg> or WebGL) and control over the camera and viewport; the physics and animation work is up to the developer to figure out. Nevertheless, it’s a great start and a new concept in the web industry.
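To give you an idea, here’s what a minimal setup looks like: a scene, a camera, a renderer and one mesh, with the animation loop left entirely to the developer. Keep in mind the API changes between releases, so the constructor names here may not match the version you grab.

```javascript
// Minimal three.js setup: scene, camera, renderer, one mesh.
// Constructor names vary across releases (e.g. older versions used
// CubeGeometry instead of BoxGeometry), so treat this as illustrative.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,                                      // vertical field of view, degrees
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near clipping plane
  1000                                     // far clipping plane
);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer(); // CanvasRenderer / SVGRenderer also exist
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

// three.js only renders; driving the animation is up to you.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```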

Mr.doob – Voxels (HTML5) Three.js

Great examples have been created by Mr.Doob, and the code is released as documentation. The code is very basic and easily understood even by average JavaScript developers. If you’re planning on experimenting with it, you should keep in mind that future releases might break backward compatibility; Mr.Doob warns that the API might change from one release to the next, so don’t plan anything beyond experimentation.

Google Chrome is the recommended browser for such development scenarios; it is HTML5 compatible to a certain extent and much better at it than Firefox or Opera, and definitely better than IE. In addition, Chrome’s JS engine is very fast and smooth.

Three.js – 3D Portrait Rendering
Mr.Doob – Three.js – Audio Processing

A small note regarding the Audio Processing image above: the concept is great, but it’s a shame the browser doesn’t process the audio and expose the amplitude as a property of the audio object. The amplitude data is hard-coded; in other words, if the audio track is changed, the visualization stops working. Still, it’s a good implementation of audio visualization.
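For what it’s worth, here’s a sketch of how one could compute the amplitude at runtime instead of hard-coding it, assuming a browser that supports the Web Audio API’s AnalyserNode; the <audio> element id is an assumption.

```javascript
// Sketch: deriving audio amplitude at runtime with the Web Audio API,
// instead of hard-coding it. Assumes an <audio id="track"> element.
const audioCtx = new AudioContext();
const audioEl = document.getElementById('track');
const source = audioCtx.createMediaElementSource(audioEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;

source.connect(analyser);
analyser.connect(audioCtx.destination); // keep the track audible

const samples = new Uint8Array(analyser.fftSize);

// Returns the current peak amplitude, normalized to 0..1.
function currentAmplitude() {
  analyser.getByteTimeDomainData(samples); // 128 = silence
  let peak = 0;
  for (const s of samples) {
    peak = Math.max(peak, Math.abs(s - 128) / 128);
  }
  return peak;
}
```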


Details

You can find more details about Mr.Doob and his experimentation projects on his blog. Follow him on Twitter for interesting releases, and hopefully you’ll be inspired to contribute to such a promising project.

http://mrdoob.com/blog

http://mrdoob.com/

http://twitter.com/mrdoob

Last but not least, the code for three.js can be found on GitHub; simply follow the link below.

https://github.com/mrdoob/three.js