Kinesthetics is awareness of the movement of your limbs, or learning based on physical movement. It is also coming to mean gestures beyond the ones you make by touching and swiping across the screen of your mobile phone.
It’s no coincidence that one of the first kinesthetic gesture devices you used, the Microsoft Xbox Kinect, is spelled the way it is. Yeah, the Wii does this too, but it senses movement in a different way.
This is one of those technologies that is a bit like voice; it’s been the next big thing for a decade or two. Except that it’s also been sneaking up on us. Sensors have, for years now, been waking up your phone when you pick it up, or locking the touchscreen when it thinks you are putting it up to your head to talk.
But now, these sorts of gestures are becoming a bit more mainstream and general. You can consciously use them on, or with, several new devices.
Leap Motion
If you didn’t know, the Leap is one of those popular but somewhat delayed pre-order products. I got mine within the first few days they were shipping, and have been evaluating it for a couple of weeks now.
It’s a tiny box that plugs into your computer via USB and then watches for movement. You set the box in front of your monitor somewhere, and when you wave your hands in front of the computer it sees them, in some detail, at least when you run the demos, though real-world responsiveness is just okay.
Not that I use it much. Other reviews have complained about the lack of a consistent gesture library, for example, but that didn’t stop touchscreen phones when they came out; it was a gripe, but one you could get over pretty easily.
When I ordered the Leap, and even when I saw other demos, I had visions of how it would work for me. Quite specific ones. I wanted to keep my right hand on my pen tablet (which has been my primary input method since 1993) and be able to just put my left hand in front of the screen to manipulate the drawing area: open palm to scoot the screen area around, say, or pinch to zoom.
It doesn’t do that. At all. Not that it couldn’t, I suppose, but they have gone far, far too much into the app store model. Software is piled on software and linked to websites. You have to install zillions of little apps and plugins. Many are paid. Almost all are entirely freestanding. Essentially nothing lets you control an existing application with the Leap directly.
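For what it’s worth, the raw hand data to build that kind of bridge is there; it’s the glue to existing applications that’s missing. Here is a minimal sketch of what I wanted, assuming the Leap Python SDK of the era plus pyautogui to inject scroll and zoom into whatever app has focus. The attribute names and thresholds are from memory and are my assumptions, not anything the shipped software provides for you.

```python
# Rough sketch: bridge Leap hand data to pan/zoom in whichever app has focus.
# Assumes the v1 Leap Python SDK ("import Leap") and pyautogui for input
# injection; attribute names and thresholds are illustrative, not verified.
import sys
import Leap
import pyautogui

OPEN_PALM_RADIUS = 80.0  # mm; a flat hand fits a large sphere (assumed threshold)

class BridgeListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        if frame.hands.is_empty:
            return
        hand = frame.hands[0]

        if hand.sphere_radius > OPEN_PALM_RADIUS:
            # Open palm: scoot the view around by scrolling with palm height.
            pyautogui.scroll(int((hand.palm_position.y - 200) / 10))
        else:
            # Curled or pinched hand: treat as zoom in (Ctrl-+ in most apps).
            pyautogui.hotkey('ctrl', '+')

if __name__ == '__main__':
    listener = BridgeListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    print('Bridging Leap to the focused app; press Enter to quit.')
    sys.stdin.readline()
    controller.remove_listener(listener)
```

Even something that crude, running quietly in the background, would have let me keep drawing with my right hand and steer the canvas with my left. Nothing in the box does it for you, though.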
I don’t intend to change my whole way of working just so the one computer with a Leap attached can run special Leap software. So 90% of its use has been me getting the kids out of my hair, playing games and exploring very cool-looking educational things. It’s very cool, and the hardware is promising, but the integration fails me entirely.
Samsung Galaxy S4
Yes, the contender for today’s purposes is a completely different device: not a dongle to compete with the Leap on the desktop, but a single, free-standing smartphone.
For a few generations now, Samsung has been adding human-facing sensors and doing interesting things with them. Many of these have been somewhat secret. Not evil, just not very obvious, with the end goal of not being annoying. Their devices are a bit better at detecting when you are looking at them during a call, for example. Yes, others (notably Apple) have a decent assortment of sensors as well, but Samsung has really embraced this.
I've been trying out the Verizon version of their newest flagship, the Galaxy S4, for a few weeks as well. It has a few more sensors than the S3, I think, but most of all it uses them more directly and makes them quite obvious to the end user now.
There is a whole series of settings that turn on various UI features based on you waving at the phone, tilting it, or even just looking at it.
When I say they are being obvious, I mean that these features are front and center in their TV advertising.
I should mention that while the sensing isn’t better than the Leap’s, and sometimes isn’t really very good at all, it is quite good at telling you when it sees you. My favorite is the eyeball scanner. It shows a little icon (oddly, in the middle of the screen) that indicates it can see you, and where it thinks you are looking.
This feedback means you can adjust yourself. I think that’s one reason the Leap fails: in demo mode you are looking at your hands directly and it’s amazing, but when you use it for what we have to consider real work, you get no feedback at all.
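If I were writing Leap software, that sort of indicator is the first thing I’d add: a small always-on-top marker showing where the device thinks your hand is, doing the same job as the S4’s eyeball icon. Here is a rough sketch of the idea, with a made-up get_hand_position() standing in for whatever tracking data you actually have:

```python
# Rough sketch: an always-on-top tracking marker, the kind of feedback the
# S4's eye icon gives and the Leap software doesn't. get_hand_position() is a
# made-up stand-in for real tracking data.
try:
    import tkinter as tk      # Python 3
except ImportError:
    import Tkinter as tk      # Python 2
import random

def get_hand_position():
    # Hypothetical tracker; replace with real screen coordinates from your sensor.
    return random.randint(0, 800), random.randint(0, 600)

root = tk.Tk()
root.overrideredirect(True)           # no window chrome, just a dot
root.attributes('-topmost', True)     # float above whatever app you are using
tk.Canvas(root, width=16, height=16, bg='red', highlightthickness=0).pack()

def follow():
    x, y = get_hand_position()
    root.geometry('+%d+%d' % (x, y))  # move the dot to the tracked position
    root.after(50, follow)            # update about 20 times a second

follow()
root.mainloop()
```

Even a crude marker like that tells you the sensor has lost you before you waste a gesture on nothing.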
Building an Environment
Aside from niggling details of the UI and interaction, the biggest difference between the two, by far, is the way they work.
The Leap Motion has brilliant technology, and the concept of bringing kinesthetic gesture to the desktop is great, but it is such an add-on that it is of essentially no value to me. And actually, the kids have even gotten bored with it. They want to do much the same things I do: finger-paint in my professional drawing tools, or use it to navigate the computer. Which it doesn’t really do.
Whereas the Galaxy S4 totally does this. The technology is not as flashy, and is actually maybe less reliable than the Leap. The eye tracker doesn’t work with glasses, for example. But what works, works in most every app. And pretty seamlessly. If you leave on the gesture scroll, then as soon as you aren’t confused by it, the page you are viewing just scrolls as you naturally want it to.
And this bears out in actual use of the devices. As I said, no one much uses the Leap Motion, but it’s hard to keep the kids away from the Galaxy S4. It’s not just the big, shiny, new phone in my collection, but also the one whose features automatically make it easy to work with.
I like to talk about how it’s our job to use the sensors and connections of the phone to create ecosystems, or environments, that support the way the user works. This is another extension of that. And you don’t even have to spend the usual time getting used to it and setting it up your way. Turn everything on and just let it try to automatically create an information environment around you, and for you.