Thursday, April 2, 2015

Smart TVs

TVs have gone from box-shaped sets to flat screens, then to HD, and now to flexible smart TVs with 3D. One example is the LG OLED TV. With deeper and richer colors, stunning contrast and an ingeniously curved screen, it is unlike any display technology we've seen before, and it delivers a picture that exceeds your wildest imagination.

Wednesday, April 1, 2015

Google Glasses


Google Glass is designed to be easy to use and easy to see through. The device has the physical form of typical glasses, but a small camera is built into the side, and when a person wears the glasses the display is projected onto the lens. The camera can take pictures and record video, showing the results on the lens and capturing life's moments more easily.


Similar to phones, Google Glass can record the user's voice and send it as a message to another person, saving time on typing. It is a hands-free device, which adds to its simplicity. A really good feature is the translator: the glasses can translate a word or phrase and show the user how to pronounce it and how to write it out, helping to break down culture and communication barriers. Unlike average glasses, these are strong yet light in weight, and several designs and colours are available to the public.

Tuesday, March 31, 2015


Leap Motion


Researchers have proposed sensors that can track a user's hands. For instance, the Leap Motion can interactively track both hands of a user by identifying the positions of the fingertips and the palm center, and then computing finger joints using an inverse kinematics solver. Some car makers are already proposing a hand-tracking-based alternative interaction modality in lieu of traditional touch screens devoted to managing infotainment functions. Similarly, some smart TVs let users control their choices with a set of gestures, thus replacing the traditional remote control.
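To make the tracking loop concrete, here is a minimal sketch using the Leap Motion v2 Python SDK. The attribute names (frame.hands, palm_position, tip_position) are my recollection of that API and should be treated as assumptions, not verified signatures.

    import Leap  # Leap Motion v2 Python SDK

    controller = Leap.Controller()  # connects to the Leap service in the background

    def report_hands():
        frame = controller.frame()  # most recent tracking frame (may be empty right after startup)
        for hand in frame.hands:
            side = "left" if hand.is_left else "right"
            palm = hand.palm_position  # a Leap.Vector, in millimetres
            print("%s palm at (%.0f, %.0f, %.0f)" % (side, palm.x, palm.y, palm.z))
            for finger in hand.fingers:
                tip = finger.tip_position
                print("  fingertip at (%.0f, %.0f, %.0f)" % (tip.x, tip.y, tip.z))

From palm and fingertip positions like these, the SDK's inverse kinematics solver reconstructs the in-between finger joints.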

https://www.youtube.com/watch?v=gby6hGZb3ww

https://www.leapmotion.com/solutions

References:
https://www.leapmotion.com/product
http://www.computer.org/

Microsoft Kinect


Kinect for Windows v2 brings you the latest in human computing technologies. Sensors such as the Microsoft Kinect are a further step toward the implementation of fully natural interfaces in which the human body becomes the controller. The device lets users provide commands to the machine via gestures and body poses as embedded hardware performs real-time processing of raw data from a depth camera, thus obtaining a schematic of a human skeleton comprising a set of bones and joints. Recognizing the position and orientation of bones lets the hardware identify poses and gestures, which can be mapped to commands for the machine.
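The official Kinect for Windows SDK is consumed from C# or C++, so as a language-neutral illustration here is a small Python sketch of the final step described above: mapping tracked joint positions to a pose, and the pose to a command. The joint names, coordinate convention and 10 cm threshold are my own assumptions, not values from the SDK.

    # Illustrative only: detect a "right hand raised above head" pose from
    # per-frame skeleton data and map it to a machine command.

    def detect_pose(joints):
        # joints: dict mapping joint name -> (x, y, z) in metres, y pointing up
        head_y = joints["head"][1]
        hand_y = joints["hand_right"][1]
        if hand_y > head_y + 0.10:      # hand at least 10 cm above the head
            return "RAISE_RIGHT_HAND"
        return None

    COMMANDS = {"RAISE_RIGHT_HAND": "pause_playback"}

    def on_frame(joints):
        pose = detect_pose(joints)
        if pose in COMMANDS:
            print("executing:", COMMANDS[pose])

    on_frame({"head": (0.0, 1.6, 2.0), "hand_right": (0.2, 1.75, 2.0)})
    # -> executing: pause_playback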

According to Microsoft,

"The sensor is a physical device with depth sensing technology, a built-in color camera, an infrared (IR) emitter, and a microphone array, enabling it to sense the location and movements of people as well as their voices. With up to 3 times higher depth fidelity, the latest sensor provides significant improvement in visualizing small objects and all objects more clearly. The latest sensor and the SDK 2.0 take natural user interactions with computers to the next level, offering greater overall precision, responsiveness, and intuitive capabilities to accelerate the development of applications that respond to movement, gesture, and voice."

You will be able to develop applications for new and better scenarios in fitness, wellness, education and training, entertainment, gaming, movies, and communication.


References:
www.microsoft.com

Friday, March 27, 2015

Article 3: Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration
by Heather Desurvire and Charlotte Wiberg

The authors of “Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration” describe their research findings on how game developers apply human-computer interaction (HCI) principles in design. They adapted a set of heuristics for productivity software to games and presented the result, Heuristics to Evaluate Playability (HEP), at CHI 2004. Their follow-up study focused on a refined list, Heuristics of Playability (PLAY), which can be applied earlier in game development and can also aid developers between formal usability/playability studies during the development cycle.

Drawing on work from the game research community, they gathered a set of heuristics. They categorize the HEP heuristics into four areas: Game Play, Game Usability, Game Mechanics and Game Story. They created three sets of questionnaires, one for each of three game genres (Action Adventure, FPS and RTS), and also included questionnaires for high-ranked and low-ranked games. From their analysis they were able to identify a number of principles that helped differentiate between good and bad games. The resulting PLAY heuristics are listed below, followed by a toy scoring sketch.

I. Category 1: Game Play
A. Heuristic: Enduring Play
B. Heuristic: Challenge, Strategy and Pace
C. Heuristic: Consistency in Game World
D. Heuristic: Goals
E. Heuristic: Variety of Players and Game Styles
F. Heuristic: Player's Perception of Control

II. Category 2: Coolness/Entertainment/Humor/Emotional Immersion
A. Heuristic: Emotional Connection
B. Heuristic: Coolness/Entertainment
C. Heuristic: Humor
D. Heuristic: Immersion

III. Category 3: Usability & Game Mechanics
A. Heuristic: Documentation/Tutorial
B. Heuristic: Status and Score
C. Heuristic: Game Provides Feedback
D. Heuristic: Terminology
E. Heuristic: Burden On Player
F. Heuristic: Screen Layout
G. Heuristic: Navigation
H. Heuristic: Error Prevention
I. Heuristic: Game Story Immersion
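As a way to make the taxonomy concrete, here is a toy sketch (my own illustration, not from the paper) that represents the PLAY categories as a checklist and averages evaluator ratings per category; the 1-5 scale and function names are assumptions.

    PLAY_HEURISTICS = {
        "Game Play": [
            "Enduring Play", "Challenge, Strategy and Pace",
            "Consistency in Game World", "Goals",
            "Variety of Players and Game Styles", "Player's Perception of Control",
        ],
        "Coolness/Entertainment/Humor/Emotional Immersion": [
            "Emotional Connection", "Coolness/Entertainment", "Humor", "Immersion",
        ],
        "Usability & Game Mechanics": [
            "Documentation/Tutorial", "Status and Score", "Game Provides Feedback",
            "Terminology", "Burden On Player", "Screen Layout", "Navigation",
            "Error Prevention", "Game Story Immersion",
        ],
    }

    def category_scores(ratings):
        # ratings: dict mapping heuristic name -> evaluator score (1-5)
        scores = {}
        for category, heuristics in PLAY_HEURISTICS.items():
            rated = [ratings[h] for h in heuristics if h in ratings]
            scores[category] = sum(rated) / len(rated) if rated else None
        return scores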

Monday, February 16, 2015

Peripheral Interaction: Embedding HCI in Everyday Life

The growing presence of computing technology in everyday life creates opportunities as well as challenges for interaction design. One of these challenges is the seamless integration of technology into our everyday routines. Peripheral interaction is based on the observation that in everyday life many actions occur outside the focus of attention.

A Context Server to Allow Peripheral Interaction for people with disabilities

There is an increasing variety of services provided by local machines, such as ATMs, vending machines, etc. These services are frequently inaccessible for people with disabilities because they are equipped with rigid user interfaces. The application of Ubiquitous Computing techniques allows access to intelligent machines through wireless networks by means of mobile devices. Smartphones can provide an excellent way to interact with ubiquitous services that would otherwise be inaccessible. People with disabilities can benefit from this type of interaction if they are provided with accessible mobile devices that are well adapted to their characteristics and needs.

The INREDIS project was created by B. G., L. G. and J. A. at the Laboratory of HCI for Special Needs. Within this project, the laboratory developed EGOKI, an automatic interface generator for disabled users that creates adapted, accessible user interfaces, which are downloaded to the user's device when he or she wants to access a ubiquitous service. Peripheral interaction includes all the implicit activities that are conducted to interact with an application.

A context server can contribute to peripheral interaction by providing applications with valuable information that would otherwise have to be explicitly requested from the user. The context provider can help developers make use of context in a simpler way. For instance, the context server lets an application select the most appropriate modality for interacting with a user whose communication is restricted, whether by disability or by a situational impairment: if the microphone detects that the local noise level is too high, the application can avoid voice commands and prioritize text or images; or, if the inertial sensors detect that the user is walking, driving or riding a bicycle, touch input can be switched to voice input.
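As a rough illustration of that rule, here is a minimal sketch of context-driven modality selection; the function, sensor values and thresholds are hypothetical, not taken from the EGOKI implementation.

    def choose_modality(noise_db, activity, user_profile=()):
        # Pick (input, output) modalities from context-server readings.
        if activity in ("walking", "driving", "cycling"):
            return ("voice", "audio")   # hands and eyes are busy
        if noise_db > 70:               # illustrative threshold: speech unreliable in noise
            return ("touch", "text")
        if "low_vision" in user_profile:
            return ("voice", "audio")
        return ("touch", "text")        # quiet, stationary default

    print(choose_modality(noise_db=45, activity="driving"))   # -> ('voice', 'audio')
    print(choose_modality(noise_db=80, activity="sitting"))   # -> ('touch', 'text')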

Examples of applications that use the context server:
  • Affective Interaction
  • Smart Wheelchair
  • Smart Traffic Lights
  • Peripheral Interaction with EGOKI - EGOKI is a UI generator for ubiquitous services. The user’s abilities, device characteristics and service functionalities are taken into account to create an accessible UI

References:

http://peripheralinteraction.id.tue.nl/
http://peripheralinteraction.id.tue.nl/interact/paper/Proceedings_PeripheralInteractionWorkshop2013.pdf

"A Context Server to Allow Peripheral Interaction",  B. G., L. G. and J. A. E., Laboratory of HCI for Special Needs, University of the Basque Country (UPV/EHU), Donostia, Spain, 2013


Thursday, February 12, 2015

Apple Watch

Apple Watch is the latest innovative product from Apple, and it is unlike any device they have ever made. According to the product release, interacting with it is as easy and intuitive as using an iPhone or working on a Mac. Apple invented all-new ways to select, navigate and input: they reimagined the watch's crown as the Digital Crown, a versatile tool that answers the fundamental challenge of how to magnify content on a small display where pinching to zoom is impractical. Reviews describe the navigation as fluid and responsive, the apps as simply and neatly arranged, and the Watch's new typeface as designed to maximize legibility.

A Retina display is the primary surface for every interaction with Apple Watch. The incredibly high pixel density makes numbers and text easy to read at a glance, even while you're moving. Images and graphics render with remarkable sharpness and contrast, including finely detailed ones like the rotation of a hair-thin second hand on a watch face. The display is also sensitive enough to tell a tap from a press: the flexible Retina display can distinguish between a light tap and a deep press.
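Apple has not published how this classification works, but the idea can be sketched with a simple threshold on a normalized force reading; the names and the cut-off value here are purely illustrative, not Apple's.

    def classify_touch(force):
        # force: normalized 0.0 (no pressure) .. 1.0 (sensor maximum)
        TAP_MAX_FORCE = 0.3             # illustrative cut-off, not Apple's value
        return "deep press" if force > TAP_MAX_FORCE else "light tap"

    print(classify_touch(0.1))  # light tap
    print(classify_touch(0.6))  # deep press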

This is a result of the latest HCI work. The watch will be available soon (April 24, 2015).

References:
https://www.apple.com/watch/

Wednesday, February 11, 2015

Article 2: What is Interaction? Are There Different Types?
By Hugh Dubberly, Paul Pangaro & Usman Haque


The authors of “What is Interaction? Are There Different Types?” discuss interaction and whether it comes in different types. They describe interaction as a “way of framing the relationship between people and objects designed for them - and thus a way of framing the activity of design.”
In their research on HCI, they evaluate how a static object differs from a dynamic system by using canonical models based on an archetypal structure: the feedback loop.

Diagram 1

Don Norman has proposed a “gulf model” of interaction. A “gulf of execution” and a “gulf of evaluation” separate a user and a physical system. 


Diagram 2

In the feedback-loop model of interaction, a person is closely coupled with a dynamic system. The nature of the system is unspecified.

A Systems-Theory View

They distinguish static systems, which cannot act and thus have little or no meaningful effect on their environment, from dynamic systems, which can. Dynamic systems may simply react, as linear systems do, or interact, as closed-loop systems do. Closed-loop systems have a novel property: they can be self-regulating; the natural water cycle is an example. A self-regulating system has a goal, and the goal defines a relationship between the system and its environment. A learning system nests a first self-regulating system inside a second self-regulating system. They outline these levels of systems; the toy simulation below illustrates the self-regulating case.
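A household thermostat is a handy way to see the self-regulating loop in code. This toy simulation is my own example, not from the article, and all values are arbitrary.

    def thermostat(temp, setpoint=21.0, heat_step=0.5, drift=-0.2, ticks=10):
        # Closed loop: measure, compare with the goal (setpoint), act, repeat.
        for _ in range(ticks):
            if setpoint - temp > 0:     # below the goal?
                temp += heat_step       # act on the environment: heat
            temp += drift               # the environment cools the room each tick
            print("temp = %.1f" % temp)

    thermostat(temp=18.0)               # climbs toward and then hovers near 21.0

The goal (the setpoint) is exactly what defines the system's relationship to its environment: remove it and the loop has nothing to regulate against.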


Diagram 3

They describe the varied forms interaction can take:
  • reacting to another system
  • regulating a simple process
  • learning how actions affect the environment
  • balancing competing systems
  • managing automatic systems
  • entertaining (maintaining the engagement of a learning system)
  • conversing


References:
Course reading, week 1 article: “What is Interaction? Are There Different Types?” by Hugh Dubberly, Paul Pangaro & Usman Haque

Wednesday, January 28, 2015

Article 1 : Interaction beyond the keyboard by Schmidt, A. & Churchill, E.

The authors of “Interaction beyond the keyboard” describe the latest technologies and research on computer input devices. Even though the keyboard and mouse are still dominant for interacting with computers, other forms of input, such as touch-based interaction, are becoming the primary way users interact with digital media, for example on smartphones and tablets. A wide variety of touch-enabled surfaces, such as public displays, tabletops and walls, are now available thanks to lower manufacturing costs. The gaming industry uses this technology as well, in the Wii Remote, PlayStation Move and Microsoft Kinect. The article also explains how mobile phones and other devices incorporate a variety of sensors, such as accelerometers, light detectors and proximity detectors; these sensors are used to interact with data and services, and can reliably detect people's body movements and brain state.

Researchers Jan van Erp, Fabien Lotte, and Michael Tangermann identify several nonmedical applications for brain-computer interfaces, which measure and process the brain's activity and use these signals to control devices and detect context. MIT researcher Hiroshi Ishii worked on introducing tangible (graspable) user interfaces; early examples of tangible user interfaces were created by Durrell Bishop at the Royal College of Art in London in 1992. In 1995, George Fitzmaurice, Hiroshi Ishii, and William Buxton explored this further in their ActiveDesk project, in which users could control virtual objects through physical bricks placed on the display surface.

Alongside those technologies, voice recognition is improving and is currently used in interactive software for information search, navigation and device control. According to current research by Hans Gellersen and Florian Block on novel interactions on the keyboard, it is possible to enhance and extend the traditional keyboard's form by combining it with a touch-screen overlay, creating a physical keyboard with touch sensing. It is amazing to see how far input devices have come and how the technology keeps changing and improving day by day.

Reference:
“Interaction beyond the keyboard” by Schmidt, A. & Churchill, E. (2012), IEEE Press.