After The Touch Screen, Voice Assistant and Motion Sensor—What Is Next?


There was a time when "push-button" features were a gadget's major selling point. Back then, calling a gadget "push-button" was an indirect way of saying it was high-tech. Nowadays, though, people seem to have gotten tired of pushing buttons, and calling someone by dialing a number may soon become unimaginable. It started with feather-touch buttons, followed by the touch screen. Now we have voice assistants and motion sensors. These technologies are behind our smartphones, smart TVs, digital cameras and more, letting us control them with our voices and hand gestures. The pace of communication technology development is being accelerated by companies like Google, Microsoft and Apple. And as service providers like Verizon, Skype and RingCentral continue working on services that complement this technology, I can't help but wonder what will be next.


Brainwave Sensors

The Muse and the Melon headbands are examples of devices that track brain waves. The Muse headband connects to smartphones through Bluetooth, and one of its features is letting users control smartphone apps just by thinking. How about that? Meanwhile, Samsung has recently demonstrated a brain-controlled tablet using an EEG cap, with the hope of one day enabling people with mobility impairments to interact with their devices.
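To make the "control an app by thinking" idea a little more concrete, here is a rough Python sketch of how such a loop might look. It is not the Muse or Melon SDK; the `read_band_power` function, the threshold values and the triggered action are all assumptions invented for illustration, standing in for whatever the headband actually streams over Bluetooth.

```python
# Hypothetical sketch: a headband streams EEG "band power" values over
# Bluetooth, and a sustained spike in the concentration level triggers an
# app action. All names and thresholds are invented for illustration;
# real headbands ship their own SDKs.
import random
import time

FOCUS_THRESHOLD = 0.7   # assumed normalized band-power level (0.0 - 1.0)
HOLD_SECONDS = 2.0      # how long the level must be sustained to count

def read_band_power():
    """Stand-in for a Bluetooth read from the headband; returns 0.0 - 1.0."""
    return random.random()

def launch_app_action():
    print("Focus detected - opening the music app (placeholder action).")

def main():
    focus_started = None
    while True:
        level = read_band_power()
        if level >= FOCUS_THRESHOLD:
            focus_started = focus_started or time.time()
            if time.time() - focus_started >= HOLD_SECONDS:
                launch_app_action()
                focus_started = None   # reset after triggering
        else:
            focus_started = None       # focus dropped; start over
        time.sleep(0.1)                # sample at roughly 10 Hz

if __name__ == "__main__":
    main()
```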

Google Glass

Based on first impressions and the endless speculation and articles written about it, Google Glass simply looks like a miniature computer attached to an eyeglass frame. Google's FAQ clarifies that Glass serves as an extension of the user's smartphone screen and can perform a set of simple tasks like taking videos and photos. This is where the privacy concerns come from: people wearing Glass could take photos or videos and upload them in real time. Google says it comes down to the responsibility of the people using it. Technically, the specs of Glass and the quality of the photos and videos it can take will not satisfy anyone who intends to spy on other people; the camera is five megapixels and can't even zoom. As for the exciting question of how we control the gadget, the user looks up at the Glass module to make the screen light up so it can receive verbal commands, and navigation is done through deliberate voice and manual controls. Optimists have already eyed Google Glass as a device to help people with disabilities. Maybe in the near future we won't be able to distinguish Google Glass from ordinary eyeglasses.

Hologram Interface

I can only imagine that the combination of voice assistants, motion sensors and brainwave-tracking devices will result in computers with hologram interfaces. It is not a new idea; similar interfaces have been depicted in movies like Star Wars and Iron Man. If we pretend that all these technologies have already matured, a person who suddenly remembers to call somebody would have his or her brainwave headband automatically prompt a hologram assistant. The hologram assistant would then say, "You were just thinking of calling your father. Shall I dial his number now?" The user could reply in many ways: nod, say "yes", think "yes", or maybe roll his or her eyes. Wow, that would really change the way we communicate personally and in our business transactions.
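Purely as a thought experiment, here is a small hypothetical Python sketch of the multi-modal "yes" described above: the assistant accepts agreement from whichever channel answers, whether voice, a nod picked up by a motion sensor, or a brainwave signal. Every event name and data structure here is made up for illustration.

```python
# Hypothetical sketch of a multi-modal confirmation: the hologram assistant
# asks a question and treats a spoken "yes", a detected nod, or a "think yes"
# brainwave event as agreement. Event names and sources are invented.
from dataclasses import dataclass

@dataclass
class InputEvent:
    channel: str   # "voice", "gesture", or "brainwave"
    value: str     # e.g. "yes", "nod", "think_yes", "eye_roll"

YES_SIGNALS = {
    ("voice", "yes"),
    ("gesture", "nod"),
    ("brainwave", "think_yes"),
}

def confirm(prompt: str, events: list[InputEvent]) -> bool:
    """Return True if any input channel delivers an affirmative signal."""
    print(f"Assistant: {prompt}")
    for event in events:
        if (event.channel, event.value) in YES_SIGNALS:
            print(f"Confirmed via {event.channel}.")
            return True
    print("No confirmation received.")
    return False

# Example: the user nods instead of speaking.
if __name__ == "__main__":
    incoming = [InputEvent("gesture", "nod")]
    question = "You were just thinking of calling your father. Shall I dial his number now?"
    if confirm(question, incoming):
        print("Dialing... (placeholder)")
```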

 
