Blogpost #7 Continued… – Kenton O’Hara and others’ “Touchless Interaction in Surgery” Review

**This is a review post of Kenton O’Hara and others’ “Touchless Interaction in Surgery.” It contains both the summary and the critique/commentary.

Summary

Kenton O’Hara and others’ “Touchless Interaction in Surgery” focuses on a tool that gives surgeons direct control over image manipulation without compromising sterility.

1. Introduction

In medical fields, visual displays, such as MRI and CT, are essential as they support diagnosis and planning. They also provide a virtual “line of sight” into the body during surgery. The problem is that these visual displays are constrained by typical interaction mechanisms, such as the keyboard and mouse. In surgery, the most important matter is sterility, yet the typical interaction mechanism is not risk free because it relies on direct hands-on control: the surgeon has to touch the gown, the mouse, and the keyboard to manipulate the images, and these unsterile objects can compromise the surgery. Therefore, the authors discuss how to give surgeons direct control over image manipulation and navigation while maintaining sterility during surgery. Although some workarounds exist, such as barrier-based solutions or avoiding contact with input devices, these still involve certain risks.

2. Main Technologies of the System

The authors focus in particular on a tool that provides direct image manipulation to surgeons without risking sterility. The main technologies used to build this system are the Kinect sensor and its software development kit. The Kinect sensor reads the surgeon’s gestures, enabling the surgeon to manipulate images without touching anything.
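The paper itself contains no code, but a minimal sketch can illustrate the core idea of mapping tracked hand motion to image commands. Everything below is hypothetical (the function name, coordinates, and thresholds are mine, not the Kinect SDK’s): it classifies a short sequence of hand positions as a horizontal swipe, which could then page through scans touchlessly.

```python
# Hypothetical sketch of touchless gesture classification.
# A real system would get hand positions from the Kinect SDK; here we fake them.

def classify_swipe(positions, min_travel=0.3):
    """Classify a sequence of (x, y) hand positions as a horizontal swipe.

    positions: list of (x, y) tuples in normalized [0, 1] screen coordinates.
    min_travel: minimum horizontal distance required to count as a swipe.
    """
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    # Require mostly-horizontal motion of sufficient length.
    if abs(dx) >= min_travel and abs(dx) > 2 * abs(dy):
        return "next_image" if dx > 0 else "previous_image"
    return None

# Example: a rightward hand motion maps to "next_image", letting the
# surgeon page through scans without touching mouse or keyboard.
track = [(0.2, 0.5), (0.35, 0.52), (0.5, 0.51), (0.65, 0.5)]
print(classify_swipe(track))  # -> next_image
```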

3. Socio-Technical Concerns

There are issues that need to be considered and worked on. These challenges range from the gesture vocabulary to the appropriate combination of input modalities and specific sensing mechanisms. There are also concerns about how the system could affect the practices that surgeons perform. Since touchless interaction in medical fields, especially surgery, is at an early stage of development, more careful research, development, and evaluation are required. This work will not only change the practice of surgery but also open up radically new ways to think about the entire design and layout of operating theatres in the future.

 

Critique/Commentary

By developing this new tool that provides direct image manipulation to surgeons without risking sterility, I believe that there will be great development in the medical field. It is very interesting how humans are trying to improve the current condition and state of the medical fields. I had never considered that traditional interaction mechanisms in the medical field could be problematic.

If this new tool is fully developed, I am curious what attitude doctors and medical students will have toward it. Currently, I believe that doctors and medical students prepare precisely and meticulously for any surgical practice. However, if they start relying on such virtual tools, I question whether they would still prepare as meticulously as they do now. If they start relying on these tools, I believe their preparation for surgical practice will become less meticulous and more effortless.

 

**Source:

  • O’Hara, K., et al. (2014) “Touchless Interaction in Surgery,” Communications of the ACM, 57 (1), pp. 70-77.

Blogpost #7 – Connie Golsteijn and others’ “VoxBox: A Tangible Machine that Gathers Opinions from the Public at Events” Review

**This is a review post of Connie Golsteijn and others’ “VoxBox: A Tangible Machine that Gathers Opinions from the Public at Events.” It contains both the summary and the critique/commentary.

Summary

Connie Golsteijn and others’ “VoxBox: A Tangible Machine that Gathers Opinions from the Public at Events” introduces VoxBox, a tangible machine that gathers opinions from the public at events without disrupting positive experiences. The authors suggest that this will encourage interaction and increase participants’ response rates.

1. Introduction/Background

Traditional survey methods involve approaching people directly, which causes their limitations: people feel uncomfortable and biased against these surveys, the approach can disturb people who are having a pleasant experience, and people are hesitant to answer. This results in low response rates. VoxBox was invented to overcome this situation. This tangible machine gathers opinions from the public at events without disturbing people, instead attracting participants with its interesting visual appearance.

2. Design Principles

Design principles (encouraging participation, grouping similar questions, encouraging completion and showing progress, gathering answers to closed and open questions, and connecting answers and results) were studied through observation of people using VoxBox. The authors argue that the observations they conducted demonstrated the potential of VoxBox as a novel prototype that is effective for gathering public opinions. Furthermore, the study focuses on how to get people to answer all questions thoughtfully while providing an enjoyable experience and interaction.

3. Discussion

Based on the observations made from the implementation and initial deployment of VoxBox, the authors analyzed and discussed the five design principles further:

  1. Encouraging participation: VoxBox met the goal of encouraging participation, as its appearance is very attractive and arouses curiosity. Although there were some usability issues, such as people not noticing some signals, VoxBox was a very effective system for encouraging people to give their opinions.
  2. Grouping similar questions: Unanticipated problems arose as some users were confused by having to follow a sequence. The researchers therefore decided that VoxBox does not need to incorporate a fixed sequence of interaction and began considering other ways in which such more sophisticated functions could be integrated into the prototype.
  3. Encouraging completion and showing progress: The aim of encouraging completion and showing progress did not work as well as planned at first, because participants failed to notice the ball: they noticed it only after they had earned it, not while it was moving. The researchers therefore suggested moving the ball tube forward to make it more visible to users. The ball, though it does not seem to be a strong incentive, brought joy to people.
  4. Gathering answers to closed and open questions: Gathering answers to open questions using the telephone yielded elaborate verbal responses while amusing participants when the phone rang.
  5. Connecting answers and results: This was the part that needed the most improvement. The visualization did not work as strongly as hoped, so the researchers suggested considering other ways to link the data input and visualizations more strongly.

Critique/Commentary

In almost every class at Yonsei University, I have taken part in activities that involve gathering public opinions, and I have always had a hard time with them. It was difficult to attract people’s participation and attention, and expecting people to provide detailed responses to open-ended questions was almost impossible. I think VoxBox could be a crucial instrument for encouraging people to give their opinions easily with very low-cost, simple incentives. I hope this instrument can be developed further and help people like me gather public opinions more easily.

Looking at the pictures of VoxBox in the paper, the invention seems to occupy a lot of space because of its size. For VoxBox to become a tool that everyone can use when they need to gather public opinions, I believe its size should become smaller. If possible, I think the VoxBox team should consider making it into an application, or into a smaller device. However, this may take a long time, since VoxBox is still in development.

**Source:

  • Golsteijn, C., et al. (2015) “VoxBox: A Tangible Machine that Gathers Opinions from the Public at Events,” Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 201-208.

Blogpost #6 Continued… – M. De Choudhury and others’ “Moon Phrases: A Social Media Facilitated Tool for Emotional Reflection and Wellness” Review

**This is a review post of M. De Choudhury and others’ “Moon Phrases: A Social Media Facilitated Tool for Emotional Reflection and Wellness.” It contains both the summary and the critique/commentary.

Summary

De Choudhury and others’ “Moon Phrases: A Social Media Facilitated Tool for Emotional Reflection and Wellness” introduces “Moon Phrases,” a web-based tool that presents a lunar-phase-styled interface based on an individual’s social media activity to promote emotional wellness.

1. Introduction

The importance of emotional wellness is underestimated in terms of overall health, and it is difficult to track people’s emotional states for various reasons. To overcome this problem, the authors introduce “Moon Phrases,” which tracks people’s emotional changes on social media platforms so that the tool can “serve as an unobtrusive mechanism to facilitate emotional wellness” (De Choudhury 41).

2. Background Information/Literature

From previous research, the authors conclude that individuals’ social activity is useful information for comprehending one’s emotional wellness, and that the use of language illustrates human intentions, moods, and disorders. Previous research has pointed out the potential of social media as a means to improve emotional wellness. Finally, various HCI tools for emotional wellness have shown that social media, where individuals reveal their emotions, can support health-related awareness and self-reflection.

3. Design Process/Interface and Interaction Design

The challenge of the design process was to “identify what are the best social media cues to be visualized, based on an end user’s activity that can promote emotional wellness” (De Choudhury 42). The design process started with Twitter; however, the team found that their tool could easily be adapted to other social media platforms as well. The project team began by creating low-fidelity prototypes of Moon Phrases and gathered feedback from six social media users. Using this feedback, the authors concluded that Moon Phrases is a tool for showing people’s social activity trends over time and the linguistic styles that relate to their psychological environment. Linguistic styles and expressions on social media reflect individuals’ thoughts and feelings and act as psychological markers, since they convey information about individuals’ social surroundings.
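To make the idea of a linguistic cue concrete, here is a minimal sketch, assuming made-up word lists and sample posts (the paper’s actual lexicon and pipeline are not reproduced here): it computes the share of positive and negative words per day, the kind of trend Moon Phrases visualizes.

```python
from collections import defaultdict

# Hypothetical word lists; the real tool's lexicon is not reproduced here.
POSITIVE = {"happy", "great", "love", "fun"}
NEGATIVE = {"sad", "tired", "lonely", "awful"}

def daily_affect(posts):
    """posts: list of (date, text) pairs. Returns {date: (pos_ratio, neg_ratio)}."""
    counts = defaultdict(lambda: [0, 0, 0])  # [positive, negative, total]
    for date, text in posts:
        for word in text.lower().split():
            counts[date][2] += 1
            if word in POSITIVE:
                counts[date][0] += 1
            elif word in NEGATIVE:
                counts[date][1] += 1
    return {d: (p / t, n / t) for d, (p, n, t) in counts.items() if t}

posts = [
    ("2013-05-01", "great day so happy"),
    ("2013-05-02", "tired and lonely tonight"),
]
print(daily_affect(posts))  # one (positive, negative) ratio pair per day
```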

4. Implications

Although there are doubts about how accurately social media can truly reflect one’s behavior and emotional state, the authors conclude that Moon Phrases will support people’s emotional wellness while tracking their social media usage. For further research, longitudinal trends in users’ affect and behavior could enable additional diagnosis of affective disorders. They could also help with diagnosis or early intervention by providing a deeper understanding of users’ emotional states.

 

Critique/Commentary

Looking at the screenshot of Moon Phrases for a Twitter user, I found the interface quite hard to learn intuitively. The section on positive words, negative words, and writing style is easy to learn. The phase chart, however, is confusing to see and interpret at first, even though it becomes easy to read once you learn the directions for interpreting it. Therefore, I believe a short explanation of how to interpret the phase chart should be included alongside it, for example in bullet points. Including such an explanation would help users interpret the chart faster and more easily.

Also, based on my experience, I believe that people do not post things that truly reflect their lives. To me, SNS seems to be a place where people show off how amazing their lifestyles are. Instead of writing posts truthfully, people tend to post things that contradict, or even oppose, their real feelings and lives. Therefore, I believe it is difficult for this tool to support people’s emotional wellness correctly and truthfully. Moreover, I am not sure how it could help people improve their mental state, since SNS is, in my experience, one of the reasons people go through feelings of social isolation and loneliness. Looking at the posts, users seem to compare their lifestyles with other people’s, which appear to show miraculous lives that may not be real.

 

**Source:

  • De Choudhury, M., et al. (2013) “Moon Phrases: a social media facilitated tool for emotional reflection and wellness,” Proceedings of the 7th International Conference on Pervasive Computing Technologies for Healthcare, pp. 41-44.

Blogpost #6 – Steve Yohanan and Karon E. MacLean’s “The Haptic Creature Project: Social Human-Robot Interaction through Affective Touch” Review

**This is a review post of Steve Yohanan and Karon E. MacLean’s “The Haptic Creature Project: Social Human-Robot Interaction through Affective Touch.” It contains both the summary and the critique/commentary.

Summary

Steve Yohanan and Karon E. MacLean’s “The Haptic Creature Project: Social Human-Robot Interaction through Affective Touch” introduces “The Haptic Creature,” a robotic device that mimics small lap animals, such as cats and dogs. It interacts through the sense of touch and regulates its emotional state based on this interaction.

1. Goal

The goal of this project is to investigate the use of affective touch in social interaction between human and robot, especially in the display, recognition, and influence of touch. These areas are explained through the design considerations, the architecture, and the user studies.

2. Key Terms/Concepts

There are some key terms that need to be clarified before learning about the Haptic Creature Project:

  1. Social interactive robotics is a subfield of human-robot interaction studies for which social interaction plays a key role. The Haptic Creature Project is an example of social interactive robotics.
  2. Affect display is the external indication or expression of an internal emotional state. This term is an important concept in social interaction because affect display helps add significance to, and regulate, the interaction. Through affect display, we can know how other people are feeling and communicate our own emotions to them. An example of visual affect display is a facial expression.
  3. Affective touch is touch that communicates or evokes emotion. There are few studies of affective touch because interpersonal touch can lead to discomfort. Therefore, the Haptic Creature Project chose to research and model interaction between humans and animals, since it feels less uncomfortable.

3. How is the Haptic Creature Project different from others?

Unlike Sony’s dog Aibo, Shibata’s baby seal Paro, and Stiehl’s teddy bear the Huggable, the Haptic Creature Project has a strong concentration on the modality of touch for affect display. Other robots focus less on touch for affect display originating from the robot itself. They rely more on visual and auditory expression. Also, unlike others, the Haptic Creature Project would have a more amorphous appearance and be recognizable as animal-like. Other robots have more clearly defined features and overall shape.

4. Design considerations/Display

There are three design considerations:

  1. Interaction centers on the modality of touch, which means it is concerned with affect display through touch. Therefore, all communication of emotional state, and all sensing, to and from the creature is haptic and touch-based.
  2. The Haptic Creature should provide an organic interaction in which sensing and affect display seem a coordinated whole; that is, sensing and affect display harmonize with each other. The project team wants to avoid a random and unrelated set of actuations.
  3. The project team wants a high level of zoomorphism in the Haptic Creature. The creature’s form will be intentionally minimalistic, using simple elements. Also, its form should not be limited to that of a single species, since the creature is modeled on lap animals in general, including dogs and cats, which are different species.

5. Development Phases

There are three development phases:

  1. Wizard of Oz Prototype: This initial phase of development has already been completed. In this prototype, all interactions are controlled by a human operator, and the effectors are driven by pressurizing and exhausting air. This phase allowed the team to quickly explore ideas of affect display through touch within human-animal interaction.
  2. Automated Prototype: This is the current stage of development. In this phase, the team is furthering the concepts explored earlier and removing the need for a human operator. This prototype will sense touch across its entire body, and its effectors will be manipulated via servos and motors. It is currently being tested and enhanced through successive iterations.
  3. Final Creature: In this phase, the final creature will be constructed; the majority of the software architecture is expected to be reused while more robust hardware elements are introduced.

6. Architecture/Recognition

[Figure 1: the Haptic Creature’s system architecture]

The human interacts with the Haptic Creature solely through touch. This input passes through the various components of the creature, eventually resulting in an appropriate haptic response to the human.

First, low-level sensing handles the aspects of the platform that deal with sensing information from the real world. Then, the gesture recognizer takes information from the low-level sensing component and constructs an initial model of the physical data, managing the sensor information to provide a cohesive view. Next, the emoter, which represents the underlying emotional state of the platform, is affected externally through information from the gesture recognizer or by means of its own internal mechanisms, such as temporal considerations. The physical renderer listens for changes in the emoter component, then translates the results into an orchestrated manipulation of the effectors. Finally, low-level actuation handles adjusting normalized data appropriately for individual hardware devices and is charged with directly interfacing with the platform’s effectors.
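To make the flow through these components concrete, here is a minimal sketch of the described pipeline. The class and method names are my own invention, not the project’s actual code; the point is only the sense, recognize, emote, render, actuate chain.

```python
# Hypothetical sketch of the sensing-to-actuation pipeline described above.

class GestureRecognizer:
    def recognize(self, pressures):
        # Build a crude model of the physical data: gentle stroke vs. firm poke.
        return "stroke" if max(pressures) < 0.5 else "poke"

class Emoter:
    def __init__(self):
        self.arousal = 0.0  # toy stand-in for the underlying emotional state

    def update(self, gesture):
        # Gentle strokes calm the creature; pokes agitate it.
        self.arousal += -0.1 if gesture == "stroke" else 0.3
        self.arousal = max(0.0, min(1.0, self.arousal))

class PhysicalRenderer:
    def render(self, arousal):
        # Translate emotional state into effector commands,
        # e.g. breathing rate and ear stiffness.
        return {"breath_rate": 0.5 + arousal, "ear_stiffness": arousal}

# One pass through the loop: touch in, haptic response out.
recognizer, emoter, renderer = GestureRecognizer(), Emoter(), PhysicalRenderer()
gesture = recognizer.recognize([0.2, 0.3, 0.25])  # data from low-level sensing
emoter.update(gesture)
print(renderer.render(emoter.arousal))  # low-level actuation would apply this
```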

7. User Studies/Influence

The main goal of the Haptic Creature Project, again, is to investigate the use of affective touch in social interactive robotics, especially in the display, recognition, and influence of touch. To this end, the team is developing a suite of studies that examine the interaction between humans and the Haptic Creature.

  1. Preliminary investigation: This has already been completed using the Wizard of Oz prototype. Participants physically interacted with the Haptic Creature, performing specific haptic interactions to check how the creature renders a corresponding emotional response. The participants were asked to identify each emotional response from the creature and state any positive or negative shift in their own emotional state from each interaction. As a result, the team concluded that emotions can be communicated through primarily haptic means, and that this communication affects the participants.
  2. Interaction Decomposition: During the current development of the automated prototype, various studies will be conducted that concentrate on the direct interaction between human and creature. The interaction is divided into its component parts. First, the researchers isolate a specific cell and study the variety of gestures a human uses in the display of affective touch (cell 1). Then, they examine the interaction across two cells, such as the output from the human and the ability of the creature to recognize it (cell 1 –> 2). The goal here is to characterize low-level aspects of the interaction, then use these to construct higher-order models, eventually ending with an understanding of the entire interaction cycle.
  3. Companionship: Once the final creature is made, a final study will be conducted to gain a deeper understanding of the role affective touch plays in companionship. Unlike the other user studies, this one will concentrate more on the effects and the emotional result. It will take the form of longitudinal studies, in which participants interact with the Haptic Creature for an extended period.

 

Critique/Commentary

Since the Haptic Creature Project is still in development, I hope the creature can be completed as soon as possible to soothe the loneliness of people in places where lap animals are not allowed. For me, dogs and cats are companions that support you whenever you are in trouble by letting you touch them. When the Haptic Creature appears in public, I would like to buy one to help me feel supported in places where dogs and cats are not allowed, such as dorms. The authors also mention their hope that some insights may be applicable to interpersonal interactions; like the authors, I hope this work will help enhance interpersonal interactions.

Before we read the paper, the professor showed us an example related to this project: Qoobo, a therapeutic robot in the form of a cushion with a tail, which is currently being developed in Japan. After reading the paper, I thought that the design of the robot should have a strong level of zoomorphism, with the overall appearance of a lap animal, instead of a round appearance like Qoobo’s. For example, the team could include the outline of a dog-like or cat-like nose on the part of the robot that represents the face. Dogs and cats react differently and show different emotions depending on where humans touch them. Because of these differing reactions, I think the robot should hint to users where they are touching it by possessing the overall shape of a lap animal. Even though the team wishes to make the appearance amorphous, I think they should reconsider their approach to the design.

While reading about the third user study the project team is planning, on companionship, I asked myself, “Can robotic pets like the Haptic Creature provide users with outcomes similar to biological pets in terms of social companionship or improved quality of life?” In other words, I wanted to predict the outcome of the third user study, and I wanted to discuss this as a class. My own thought is that it really depends on the delicacy of the Haptic Creature’s technology. If the technology is delicate enough to distinguish even small differences in emotional responses, then I think the Haptic Creature will provide users with similar outcomes in terms of social companionship. On the other hand, robotic pets may not match biological pets, since biological pets give emotional responses through all the senses, while the Haptic Creature responds only to haptic interactions with its users.

 

**Source:

  • Steve Yohanan and Karon E. MacLean (2008) “The Haptic Creature Project: Social Human-Robot Interaction through Affective Touch,” In Proceedings of the AISB 2008 Symposium on the Reign of Catz & Dogz: The Second AISB Symposium on the Role of Virtual Creatures in a Computerised Society, volume 1, pp. 7-11, Aberdeen, Scotland, UK, April 1-4 2008. (Best Paper Nominee).

 

Here is the PowerPoint file regarding this topic:

Interaction Design Presentation

Blogpost #5 Continued… – Steptoe’s “Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality” Review

**This is a review post of Steptoe’s “Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality.” It contains both the summary and the critique/commentary.

Summary

Steptoe’s “Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality” introduces an experiment on the Augmented Reality (AR) experience through the AR-Rift. The experiment was conducted to see how people differ in discernability and presence across the three AR rendering modes they are exposed to: conventional (unprocessed video and graphics), stylized (edge-enhancement), and virtualized (edge-enhancement and color extraction).

1. Core Research Ideas:

AR systems augment the real environment with virtual content in real time. The goal of AR is to integrate virtual and real imagery so that they are visually indistinguishable. The easiest approach to this is to change the view of the real world to make it appear closer to the computer-generated graphics. This can be achieved using non-photorealistic rendering (NPR), which applies artistic filters to both the real-world imagery and the graphical/virtual content.

However, NPR makes the task of discerning whether objects are real or virtual more difficult, since it disguises the visual artifacts and inconsistencies associated with real-time computer graphics by applying image filters and non-photorealistic effects. The filters make both the real and the virtual content transform similarly. Therefore, the issue of discernability becomes critical in immersive head-mounted AR.
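As a rough illustration of the stylized idea (this is not Steptoe’s actual renderer), the sketch below applies one edge-detection filter to a whole frame, treating real and virtual pixels uniformly the way NPR does. It uses OpenCV’s Canny detector; the frame is synthetic so the snippet is self-contained.

```python
import cv2
import numpy as np

# Synthetic "frame" standing in for the composited video + graphics image.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.rectangle(frame, (40, 40), (120, 120), (0, 180, 0), -1)  # "virtual" object
cv2.circle(frame, (150, 150), 30, (180, 180, 180), -1)       # "real" object

def stylize(image, low=100, high=200):
    """Overlay silhouette edges on the image, as in an edge-enhancement NPR mode."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    out = image.copy()
    out[edges > 0] = (255, 255, 255)  # draw edges uniformly over everything
    return out

stylized = stylize(frame)
# Because the same filter hits real and virtual pixels alike, both kinds
# of object receive identical silhouette treatment, which is exactly what
# makes them harder to tell apart.
```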

2. Research Question:

How do people differ in discernability and presence across three AR rendering modes, conventional, stylized, and virtualized, in immersive head-mounted video see-through AR (the AR-Rift)?

 

3. Key terms:

#1: Discernability: Can be defined as “a user’s ability to correctly distinguish between objects that are real and objects that are virtual” (Steptoe 213).

#2: Presence: Can be defined as “a user’s psychological response to patterns of sensory stimuli, resulting in the psychological sensation of ‘being there’ in the computer-generated space” (Steptoe 214).

#3: Conventional mode of AR rendering: “The conventional mode does not post-process the image, showing unaltered video feeds and uses standard real-time graphics algorithms” (Steptoe 213).

#4: Stylized mode of NPR: “The stylized mode applies an edge-detection filter to silhouette edges of objects within the full image frame including both video and graphics” (Steptoe 213).

#5: Virtualized mode of NPR: “The virtualized mode presents an extreme stylization by both silhouetting edges and removing color information” (Steptoe 213).

 

4. Methods:

There are three tasks in the experiment:

#1: The first task relates to discernability: participants are required to judge each of ten objects in the experiment as real or virtual. First, the participants are blindfolded while the real objects are set up. Once the blindfolds are removed, they have to distinguish which of the ten objects on the table are real and which are virtual.

#2: The second task relates to the user’s sense of presence, measuring behavior related to the extent to which the mixed-reality environment is treated as the salient physical reality. Participants are given the task of walking from their current seated position to sit on a chair. Between the starting position and the chair there are a number of cardboard boxes on the floor. The boxes and the target chair are virtual; however, the users do not know this.

#3: Participants are then to complete the questionnaire relating to the experience in terms of visual quality, presence and embodiment, and system usability.

 

5. Results & Findings: 

#1: Discernability

The overall mean accuracy was 73% for the conventional mode, 56% for the stylized mode, and 38% for the virtualized mode. This shows that participants in the stylized condition were unable to discriminate real from virtual objects. It also suggests that in the conventional mode, the visual difference between real and virtual objects provided enough information to make more accurate judgements, while the visual characteristics of the virtualized condition were inadequate or misleading in the judgement process. The near-chance judgement accuracy in the stylized mode suggests that NPR can be effective in unifying the appearance of an environment in immersive AR despite the range of perceptual sensorimotor cues afforded by the system.

#2: Presence

Participants in the conventional mode believed the objects were virtual, whereas participants in the virtualized mode believed the objects to be real. These findings support the authors’ definition of presence in immersive AR as the perceptual state of non-mediation arising from technologically facilitated immersion and observed environmental consistency, which gives rise to behavioral realism.

Movements in the virtualized condition were also the most careful among the three modes. This shows that the characteristics of ambulation vary with photorealism. The tendency toward more careful movement in the virtualized condition is likely due to the diminished visual realism of the physical environment, in which features and shadows are difficult to distinguish.

 

Critique/Commentary

It was interesting to learn more about AR and how the experiment encourages future low-cost immersive AR systems. I believe this experiment is the beginning of support for such systems and for the development of AR in general. Furthermore, this study demonstrated NPR’s effectiveness in creating a perceptual illusion. I believe this study can be a useful source for our team project, since we are focusing on AR technology and are planning further research to understand the interaction related to it.

 

**Source:

  • Steptoe, W., et al. (2014) “Presence and discernability in conventional and non-photorealistic immersive augmented reality,” In Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 213-218.

Blogpost #5 – Jorda, S.’s “The Reactable: Tangible and Tabletop Music Performance” Review

**This is a review post of Jorda, S.’s “The Reactable: Tangible and Tabletop Music Performance.” It contains both the summary and the critique/commentary.

Summary

Jorda, S.’s “The Reactable: Tangible and Tabletop Music Performance” introduces the Reactable, which is a new electronic musical instrument. This musical instrument has a simple and intuitive tabletop interface that turns music into a tangible and visual experience. Furthermore, it allows musicians to experiment with sound, change its structure, control its parameters, and be creative.

The Reactable is built upon a round tabletop interface. In order to control this musical instrument, performers can manipulate tangible acrylic pucks on its surface.

Before the creation of the Reactable, there were many interfaces for music controllers, such as MIDI controllers and the instruments presented at NIME. However, these interfaces do not pursue a “multithreaded and shared control” approach. With the Reactable, performers no longer need to directly and permanently control every aspect of the sound production: they can perform control strategies instead of performing data.

The Reactable is an example of TUI, or Tangible User Interfaces, that combine control and representation within a physical artifact. The digital information can become graspable with the direct manipulation of simple objects.

The hardware of the Reactable consists of a round interactive surface, a tracking system, and a projector inside the table. The Reactable uses reacTIVision as its software. Musicians can rotate and connect the pucks to combine different elements, such as synthesizers and effects. Pucks also interact with neighboring pucks according to their positions and proximity. While being controlled, the Reactable provides instant visual feedback, which “communicates” the states and behaviors of the musical processes, turning music into something visible and tangible. This visual feedback addresses the perception difficulties and the audience’s lack of understanding at these types of concerts. Furthermore, the instrument allows multi-user collaboration.
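The proximity-based patching is easy to picture in code. Below is a minimal sketch with toy puck objects and a simple nearest-neighbor distance threshold; the real Reactable’s connection rules, which also consider puck types and geometry, are more sophisticated.

```python
import math

# Toy pucks: (name, kind, x, y) on a normalized tabletop surface.
pucks = [
    ("osc1", "generator", 0.30, 0.40),
    ("lpf1", "filter", 0.38, 0.45),
    ("out", "output", 0.50, 0.50),
]

def connections(pucks, max_dist=0.15):
    """Connect each puck to its nearest neighbor within range,
    roughly the way the Reactable patches nearby objects together."""
    links = []
    for name, kind, x, y in pucks:
        best = None
        for name2, kind2, x2, y2 in pucks:
            if name2 == name:
                continue
            d = math.hypot(x2 - x, y2 - y)
            if d <= max_dist and (best is None or d < best[1]):
                best = (name2, d)
        if best:
            links.append((name, best[0]))
    return links

# Moving a puck on the table changes the distances, and with them the patch.
print(connections(pucks))  # [('osc1', 'lpf1'), ('lpf1', 'osc1'), ('out', 'lpf1')]
```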

The integration of visual feedback and physical control brings a more direct, intuitive, and rich interaction between instruments and instrumentalists in the new live computer music performance paradigm.

**Here is a video of the Reactable performance to check out the beauty of this musical instrument.

Critique/Commentary

This is an interesting and convenient invention for musicians who want to create computer music in a creative way. However, observing the pictures and videos of the instrument, I believe that musicians learning to use it for the first time will have a difficult time. Not only is the invention very new, but the instrument is also hard to operate because the symbols on each puck cannot be recognized at first sight from the perspective of beginner musicians, myself included. The symbols may be easily recognized by professional musicians, but for beginners it would be hard to operate the instrument intuitively.

The Reactable only provides visual feedback while being controlled. However, I think it would be easier to operate the instrument if there were haptic feedback as well. For example, there can be different ways of rotating the controllers; if haptic feedback let musicians know they are rotating the pucks in the right way, the Reactable would be much more intuitive to use.

 

**Source:

  • Jordà, S. (2010) “The Reactable: Tangible and Tabletop Music Performance,” In Proc. Human Factors in Computing Systems (CHI ‘10). ACM, pp. 2989-2994.

Blogpost #4 Continued… – William W. Gaver’s “Technology Affordances” Review

**This is a review post of William W. Gaver’s “Technology Affordances.” It contains both the summary and the critique/commentary.

Summary

William W. Gaver’s “Technology Affordances” develops the concept of “affordances” as “properties of the environment relevant for action systems, consider how they might be perceived,” and explains the effects of culture on their perception (Gaver 2).

1. What are Affordances?

According to Gaver, “Affordances” indicate the complementarity of the acting and the acted-upon environment. For example, cursors (arrows, hands, brushes) offer various affordances for interaction; they afford different “actions.”

Affordances are independent of perception: they exist whether or not they are perceived. Therefore, distinguishing affordances from perceptual information is useful in understanding ease of use. By separating affordances from perceptual information, Gaver categorizes four different kinds: “false affordance,” “perceptible affordance,” “hidden affordance,” and “correct rejection” (Gaver 2). A “perceptible affordance,” in which perceptual information is available for an existing affordance, is inter-referential: objects related to actions are available for perception. The actual perception of affordances is determined by one’s culture, social setting, experience, and intentions.
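Gaver’s separation of affordances from perceptual information is essentially a two-by-two matrix, which a few lines of code can spell out. The function below is just my restatement of his categorization, not code from the paper.

```python
def gaver_category(affordance_exists, information_available):
    """Gaver's four cases from crossing affordance with perceptual information."""
    if affordance_exists and information_available:
        return "perceptible affordance"
    if affordance_exists:
        return "hidden affordance"
    if information_available:
        return "false affordance"
    return "correct rejection"

# A door handle you can see and pull: perceptible affordance.
print(gaver_category(True, True))   # perceptible affordance
# A drawn-on "button" that does nothing: false affordance.
print(gaver_category(False, True))  # false affordance
```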

2. Affordances for Complex Actions

Gaver applies the concept for complex actions and defines concepts, such as “sequential affordances” and “nested affordances.”

The concept of “sequential affordances” refers to cases in which acting on a perceptible affordance leads to information indicating new affordances; they are revealed over time.

“Nested affordances” are affordances that are grouped in space; they are grouped by the continuity of information about the activities they reveal. In both cases, the affordances of complex objects are often grouped, whether over time or in space.

3. Modes, Media, and Affordances

Affordances can be perceived through various senses, such as sound and tactile information. Understanding affordances conveyed by media such as vision and sound, beyond standard graphics, can help in designing transparent systems.

 

Critique/Commentary

After reading this article, I was able to gather more knowledge about the concept of “affordance” than from reading Norman’s article. I thought that comparing these two articles, which are both about the concept of “affordance,” would help me understand different thoughts toward the concept of “affordance” in interaction design.

I have found Gaver’s presentation more analytic than Norman’s writing of “Affordances, Conventions, and Designs.” Norman focuses more on distinguishing the concepts of “real and perceived affordances” and “constraints” whereas Gaver concentrates on explaining what the concept of “affordance” is analytically. From Gaver’s point of view, the most important thing in affordances is that they link action and perception together and push us to analyze and design technologies and media in terms of actions.

I believe the class discussion about “affordance” will help me gather more varied viewpoints and thoughts on the concept. I am looking forward to having a thorough discussion about this in class.

 

**Source:

  • Norman, D. A. (1999) Affordances, Conventions, and Design. In Interactions, 6 (3) pp. 38-41.
  • Gaver, W. (1991) Technology affordances. In Proc. Human Factors in Computing Systems (CHI ’91). ACM, pp. 79-84.

Blogpost #4 – Donald A. Norman’s “Affordances, Conventions, and Design” Review

**This is a review post of Donald A. Norman’s “Affordances, Conventions, and Design.” It contains both the summary and the critique/commentary.

Summary

Donald A. Norman’s “Affordances, Conventions, and Design” explains common misunderstandings of concepts such as “affordance,” “perceived affordance,” and “convention.”

To understand how we manage to use the different objects in our world, we need to perceive the clues that objects’ appearances give us about how to use them. Therefore, in order to understand how to operate such things, we need to understand three major dimensions: the “conceptual model,” “affordances,” and “constraints.”

1. Conceptual model

A conceptual model is a representation that helps users understand how something works. The most important and most difficult part of making a successful design is formulating the conceptual model appropriately and assuring the consistency that users will experience when using the object.

2. Real Affordance vs. Perceived Affordance

Affordances are “actionable properties between the world and an actor” (Norman 39). In other words, affordances are the qualities of an object that show the users how the object should be operated.

There is confusion between real and perceived affordances when these terms are used. The distinction plays out differently for physical objects and for screen-based products. In physical objects, both real and perceived affordances exist; in screen-based interfaces, however, the designer can only control perceived affordances, since the computer system already comes with its real affordances.

Sometimes designers describe adding targets to the traditional computer screen, such as icons and cursors, as adding “affordances” to the computer system. However, this misuses the word “affordance.” Affordances exist independently of the visual feedback or displays on the screen; such visual feedback and displays are perceived affordances.

Affordances, feedback, and perceived affordances are all different and independent design concepts. Perceived affordances are useful even when the system lacks the real affordance. Real affordances do not have to be visible. And the feedback that designers add can affect the usability and understandability of a system while being independent of affordances.

3. Constraints and Conventions

Actions such as clicking icons on screens are not affordances but conventions and feedback. The shapes of cursors are likewise a learned convention and visual information. And knowing that you cannot click unless the cursor has the proper form is the same as following a cultural constraint.

Conventions are constraints “in that they [it] prohibit[s] some activities and encourage[s] others” and are also “cultural constraint[s] […] that has evolved over time” (Norman 41). Conventions are not arbitrary, and they are slow to be adopted and slow to fade from people’s minds.

Constraints are not affordances but are “examples of the use of a shared and visible conceptual model, appropriate feedback, and shared, cultural conventions” (Norman 41).

In POET (The Psychology of Everyday Things), the author “introduced the distinctions among three kinds of behavior constraints: physical, logical, and cultural” (Norman 40). It is important to learn where each kind of behavioral constraint is being used.

  • Physical constraint: Closely related to real affordance [e.g. “it is not possible to move the cursor outside the screen, […] locking the mouse button when clicking is not desired, […] restricting the cursor to exist only in screen locations where its position is meaningful” (Norman 40).]
  • Logical constraint: Use reasoning to determine alternatives [e.g. “how the user knows to scroll down and see the rest of the page, […] how users know when they have finished a task” (Norman 40).]
  • Cultural constraint: Conventions shared by a cultural group [e.g. “the fact that the graphic on the right-hand side of a display is a ‘scroll bar,’ […] that one should move the cursor to it, hold down a mouse button, and ‘drag’ it downward in order to see objects located below the current visible set” (Norman 41).]

 

Critique/Commentary

After reading this article, not only did I learn important interaction design terms with which I was unfamiliar, but I was also able to clarify the distinctions between them. Moreover, I had a chance to think about examples of both real and perceived affordances in everyday life and HCI.

Example #1) Coffee mug (REAL)

[Photo: a coffee mug]

First, I tried finding affordances in my room. After observing objects in my room, I was able to discover “real” affordance in my coffee mug.

The handle of the coffee mug is shaped for users to grasp easily. From looking at this handle, we can see that the object “affords” being picked up. It also looks as if it can be drunk out of, because the mug has a large opening at the top with an empty well inside. Because of its shape, the coffee mug also looks as if it can hold things, such as pencils and pens.

Example #2) Default icons on the iPhone dock (metaphors in HCI / PERCEIVED)

[Screenshot: the default icons on the iPhone dock]

Understanding the underlying meanings of metaphors in HCI is very important in communication. While looking through my computer and iPhone to look for real and perceived affordances, I was able to discover the “perceived” affordances in default icons on my iPhone dock.

Take the case of the “Music” icon. Whether there is a label below it or no label at all, as in the image above, I can understand that the icon represents listening to music. This is because I know what music notes look like and what they are, and I can connect my knowledge of music notes with this icon. However, some people may not understand the meaning of this icon because they do not know about music notes.

Take another case: the “Safari” icon. Even with a label of “Safari” below the icon, users may not understand what it means. Based on my experience, when I first used iOS, I could not perceive what the Safari icon meant even though there was a label; I expected the icon (and the label, since it was there when I first used iOS) to belong to some kind of mapping application. Therefore, this perceived affordance has the potential to backfire and set up the wrong expectations.

After reading this article and applying the concept of “affordance” to different objects from everyday life and HCI (the Web, the Internet, mobiles, etc.), I am confident that I can use these terms properly from now on. Thank you, Donald A. Norman!

 

**Source:

  • Norman, D. A. (1999) Affordances, Conventions, and Design. In Interactions, 6 (3) pp. 38-41.

Three Images related to Me (Portfolio)

These are three images that I have edited to show who I am and what I love. The pictures include several quotes from myself and other people related to each subject.

I have chosen to share these photos as my portfolio because these photos show who I am:

#1: Sarang – I possess ardent love towards dogs, especially my pet Sarang.
#2: Dual nationality of Korea and the USA – I hold dual nationality of Korea and the USA. My parents are Korean, but I was born in America, and living in both countries resulted in my developing an international mindset.
#3: Fictional American TV dramas – Through Netflix, I love watching fictional American TV dramas, such as “Grimm,” “Timeless,” and “The Vampire Diaries.” From them, I collect various ideas for my design works and relieve stress.