Sunday, September 30, 2012
Book Response #4: Design of Everyday Things - Chapters 5, 6, 7 + Overview
The Design of Everyday Things
By: Donald A. Norman
Response to Chapter 5:
To Err is Human
Slips
Types of Slips
Capture Errors
Description Errors
Data-Driven Errors
Associative Activation Errors
Loss-of-Activation Errors
Mode Errors
Detecting Slips
Design Lessons from the Study of Slips
Mistakes as Errors of Thought
Some Models of Human Thought
Connectionist Approach
The Structure of Tasks
Wide and Deep Structures
Shallow Structures
Narrow Structures
Nature of Everyday Tasks
Conscious and Subconscious Behavior
Explaining Away Errors
Social Pressure and Mistakes
Designing for Error
How to Deal with Error - and How NOT To
Forcing Functions
A Design Philosophy
Response to Chapter 6:
The Design Challenge
The Natural Evolution of Design
Forces that Work Against Evolutionary Design
The Typewriter: A Case History in the Evolution of Design
Why Designers Go Astray
Putting Aesthetics First
Designers are Not Typical Users
The Designer's Clients May Not Be Users
The Complexity of the Design Process
Designing for Special People
Selective Attention: The Problem of Focus
The Faucet: A Case History of Design Difficulties
Two Deadly Temptations for the Designer
Creeping Featurism
The Worshipping of False Images
The Foibles of Computer Systems
How to do Things Wrong
It's Not Too Late to Do Things Right
Computer as Chameleon
Explorable Systems: Inviting Experimentation
Two Modes of Computer Usage
The Invisible Computer of the Future
Response to Chapter 7:
User-Centered Design
Seven Principles for Transforming Difficult Tasks into Simple Ones
Use Both Knowledge in the World and Knowledge in the Head
Three Conceptual Models
The Role of Manuals
Simplify the Structure of Tasks
Keep the Task much the Same, but Provide Mental Aids
Use Technology to make Visible what would otherwise be Invisible, thus Improving Feedback and the Ability to Keep Control
Automate, but keep the Task much the Same
Change the Nature of the Task
Don't Take Away Control
Make Things Visible: Bridge the Gulfs of Execution and Evaluation
Get the Mappings Right
Exploit the Power of Constraints, both Natural and Artificial
Design for Error
When All Else Fails, Standardize
Standardization and Technology
The Timing of Standardization
Deliberately Making Things Difficult
Designing a Dungeons and Dragons Game
Easy Looking is Not Necessarily Easy to Use
Design and Society
How Writing Method Affects Style
From Quill and Ink to Keyboard and Microphone
Outline Processors and Hypertext
Home of the Future: A Place of Comfort or a New Source of Frustration
Response to the Book in General:
Monday, September 17, 2012
Book Response #3: Design of Everyday Things - Chapters 2, 3, 4
The Design of Everyday Things
By: Donald A. Norman
Response to Chapter 2:
In this chapter, Norman first discusses how people typically blame themselves when making errors, which results in a repeating cycle of inability to avoid error. These errors often arise from misinterpreting actions throughout everyday life, either as a result of learned assumptions or through incorrect conceptual models built on observations of a poor system image. He goes on to mention that humans always have to justify their actions, and that this usually leads to blaming something other than ourselves for the error. This reinforces the feeling of helplessness in users who are unable to correct their misconstrued mental model, leading to further failure. I really enjoyed the way he depicted the slippery slope of helplessness, because oftentimes I feel desperately helpless after making the same stupid mistake over and over. He goes on to break down exactly how people analyze their actions, noting seven precise steps, though I agree with him that steps are typically skipped when they shouldn't be. Unfortunately, he gives the impression that it is quite trivial to span the gulfs of execution and evaluation, when I personally believe that these can also be attributed to user incompetence. Why does a user need a light to know whether a tape has been inserted into the VCR when they can just lift the flap and check? This borders on added complexity with little benefit to the general user.
Response to Chapter 3:
What I gathered from this chapter is that the precision of human actions does not depend solely on the knowledge stored in the head of the person doing the action. Typically, for routine tasks, I do them without even thinking about them, but I always believed that the knowledge was in my subconscious. I rationalized that even though I wasn't actively thinking about what I was doing while I was doing it, there was always some little spot hidden away in my brain that told me "I've seen this happen like that before, so doing this should lead to that." But I felt it was more a reference book on how to do things I've done before (like looking up a word in a dictionary), and I didn't view it as though my brain had a list of guidelines gathered over the years that let me reason and deduce new actions (like creating a grammatically correct sentence vs. just stringing words together from the dictionary). Norman obviously spends a lot of time doing introspection, and over the past year or two I have thought considerably more about how my thoughts are constructed. I find the four reasons that precise knowledge is not needed very important for designers to consider. In addition, I really liked the way Norman broke down the ways memories are kept: as arbitrary things (rote memorization), meaningful relationships (grouping), and explanations (derived). I find that I typically try to fully understand something while trying to memorize it, so that when I have to recall the fact I can explain the reasoning behind it. I do this because my ability to memorize random things is very poor!
Response to Chapter 4:
I really enjoyed reading this chapter because it brought to my attention the reasoning behind actions, especially in social situations. Although most people wouldn't see the correlation immediately, I feel like I try to approach social situations in the same manner I would approach a piece of machinery. Also, when trying to learn how new things are supposed to work, I find myself highly interested in the constraints, and I have often told friends that I prefer to learn things by 'shading in the grey areas'. By this I mean that I try to identify what the object in question can and cannot be used for in a general sense. This is probably why in class I typically ask questions that progress the discussion instead of having the professor repeat himself. If the professor's reply, a new constraint, goes against what I previously understood, I try to clarify instead of just accepting the reply as fact. Once, in 7th grade, I accidentally made my math teacher leave the room from embarrassment because she kept saying a negative number times a negative number results in a negative number, which just isn't true. I kept arguing that the result is positive, oblivious to her visible feedback that she wanted to move on to the next question. In regard to doors and switches, I often pull instead of push or vice versa, but I don't even think about the mistake and move on; I rarely have issues with switches, though, because I explore them in a very systematic manner.
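For the record, the sign rule from that 7th-grade argument is easy to check; a trivial sketch (the number pairs are arbitrary):

```python
# Check the sign rules for multiplication: the product of two
# negative numbers is positive, and mixed signs give a negative.
pairs = [(-2, -3), (-5, -4), (2, -3), (-2, 3)]
for a, b in pairs:
    print(f"{a} * {b} = {a * b}")
# (-2) * (-3) = 6, so a negative times a negative is indeed positive.
```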
Tuesday, September 11, 2012
Book Response #2: Chinese Room Thought Experiment
Minds, Brains, and Programs
By: John R. Searle
Response to Published Article:
Searle argues that instantiating a program (running one to accomplish a specific task) does not produce a computer that 'understands' the information it is processing. He uses the specific example of a Turing Test in which the 'being' inside a room answers questions about a Chinese story. He states that an English-speaking man in the room who used a set of rules to transcribe Chinese characters received as input into appropriate Chinese characters as output would not actually understand Chinese. This goes directly against the views of functionalism and computationalism, which state that the mind is an information-processing system operating on formal symbols. Searle approaches this argument by clarifying that processing information does not actually mean one understands the information. To demonstrate this, I would point to the fact that I have had classes in the past where it is easy to deduce the answer to a question based on another question of the same format, but occasionally I find myself struggling to determine the cause, which is to say I don't truly understand the material.
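The man-with-a-rule-book can be caricatured as a lookup table: syntactically correct output with no representation of meaning anywhere. This is my own toy illustration, not code from Searle's article, and the question/answer pairs are made up:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# by rote rule-following; nothing in it models the meaning of the story.
rule_book = {
    "Who went to the market?": "The merchant went to the market.",
    "What did he buy?": "He bought rice.",
}

def room(question: str) -> str:
    """Return the scripted answer for a known question, else a stock reply."""
    return rule_book.get(question, "Please ask again.")

print(room("Who went to the market?"))  # fluent output, zero understanding
```

The point survives the caricature: adding more rules makes the room more fluent, not more comprehending.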
This brings up another point by Searle, which is that simulation shouldn't be considered the same as duplication. Behaviorism and operationalism classify objects by how they appear or act, but Searle points out that you wouldn't confuse a human and a dog just because they both eat food. He argues that creating a strong AI needs to be viewed as creating some sort of meta-program that happens to function like a mind in the framework of a brain. Since strong AI implies understanding and intentionality, strong AI cannot form from the simulation of just one instantiation of understanding, but would rather form from the creation of another instantiation of the mind, but not in the construct of the brain.
So this leads back to the Chinese Room example, where Searle tries to boil it down to the fact that if there is an English speaker in the room who actually does understand the Chinese story the way another Chinese speaker would, then the original English speaker must also be a Chinese speaker. Searle chose Chinese and English as the examples because the languages are so dramatically different, but for the argument at hand, I'd prefer to call it the Language Room. This means that if the original machine in the room is seen as understanding a language not native to itself, then it must have been able to learn that language. It also means that we will never get a strong AI just by trying to mimic understanding of something in particular, but that a strong AI can only be developed by creating something that understands in general and can therefore be instantiated to understand something in particular.
Book Response #1: Design of Everyday Things - Chapter 1
The Design of Everyday Things
By: Donald A. Norman
Response to Chapter 1:
In the book The Design of Everyday Things, Norman discusses the psychological aspects of designing everyday objects properly to ease user interaction. After reading the preface to the book, I felt that Norman was going to talk more about the design process, but after reading the first chapter I get the impression that his intention is to go over the reasons people attempt to interact with objects and how the design process can be tailored to suit such elementary interaction. He raises quite a few great points regarding the principles of great design, which are listed below. One thing I noticed throughout reading his multiple examples of poor design, though, was that I am typically less burdened by poor design. I rarely push doors that should be pulled, and I have never had much trouble with any telephone system, even though he makes a few examples out to be horrendously contrived.
- Visibility
- Using natural signals to convey the mapping between intended actions and actual operations.
- Mapping
- Shows the relationships between actions and results, between the controls and their effects, and between the system state and what is visible.
- Affordance
The perceived and actual fundamental properties of a device that determine how to operate it properly.
- Feedback
- Full and continuous feedback regarding the current state as a result of a particular action.
- Conceptual Models - Formed largely by interpreting the device's perceived actions and its visible structure.
- Design Model - The designer's conceptual model of how the user should perceive the system.
- User's Model - Mental model developed through interaction with the system.
- System Image
- The actual visible part of the device to the user.
Paper Reading #6: ShutEye - Encouraging Awareness of Healthy Sleep Recommendations with a Mobile, Peripheral Display
Intro -
Title:
ShutEye: Encouraging Awareness of Healthy Sleep Recommendations
with a Mobile, Peripheral Display
Reference Information:
CHI '12, May 5-10, 2012, Austin, Texas, USA
Author Bios:
Jared S. Bauer - jaredsb@uw.edu
Jonathan Schooler - jschools@uw.edu
Eric Wu - ericwu@uw.edu
Nathaniel F. Watson - nwatson@uw.edu
Julie A. Kientz - jkientz@uw.edu
University of Washington
Seattle, WA USA
Sunny Consolvo - sunny@consolvo.org
Benjamin Greenstein - ben@bengreenstein.org
Intel Labs Seattle
Seattle, WA USA
Paper Reading #5: ZeroN - Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic Levitation
Intro -
Title:
ZeroN: Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic Levitation
Reference Information:
UIST '11, October 16-19, 2011, Santa Barbara, California, USA
Author Bios:
Jinha Lee - jinhalee@media.mit.edu
Hiroshi Ishii - ishii@media.mit.edu
MIT Media Laboratory
75 Amherst St.
Cambridge, MA, 02139
Rehmi Post - rehmi.post@cba.mit.edu
MIT Center for Bits and Atoms
20 Ames St.
Cambridge, MA, 02139
Summary -
This paper presents a 'novel' approach to creating a physical representation of a 3D virtual coordinate system through which users can interact with a tangible interface element in order to "see, feel, and control computation." In all the demonstrations, a magnetic metal sphere was used as the interface element, and it was controlled within a predefined 3D volume by a magnetic control system. The control system could keep the magnet centered at a specific spot, so stepper motors were used to control the actuation (up/down movement) and the x/y coordinates. An optical tracking and display system was combined with the ability to precisely control the position of the element in order to project images onto it. A built-in physics simulator enables physical simulations such as planetary motion, shadows, and camera-angle virtualization.
"ZeroN, a new tangible interface element that can be levitated and moved freely by a computer in a three dimensional space."
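Holding a magnet steady at a setpoint is a classic feedback-control problem. As a purely illustrative sketch (the actual ZeroN controller is not described here and is certainly more sophisticated), a proportional loop nudges the coil power toward whatever holds the sphere at the target:

```python
# Hypothetical proportional (P) control sketch: compare sensed position
# to the setpoint and adjust coil power by a fraction of the error.
def step(position: float, target: float, power: float, kp: float = 0.5) -> float:
    """Return the new coil power after one proportional correction."""
    error = target - position
    return power + kp * error

power, position = 0.0, 0.0
for _ in range(50):
    power = step(position, target=1.0, power=power)
    position = 0.8 * position + 0.2 * power  # toy stand-in for the physics
print(round(position, 2))  # settles near the 1.0 setpoint
```

The toy "plant" line is a made-up first-order response; the point is only that closed-loop correction is what keeps the sphere centered while the motors move the whole trap in x/y.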
Paper Reading #4: Not Doing But Thinking - The Role of Challenge in the Gaming Experience
Intro -
Title:
Not Doing But Thinking: The Role of Challenge in the Gaming Experience
Reference Information:
CHI '12, May 5-10, 2012, Austin, Texas, USA
Author Bios:
Dr. Anna L Cox - UCL Interaction Centre - University College London - anna.cox@ucl.ac.uk
Pari Shah - Psychology & Language Sciences - University College London - zcjtbb4@ucl.ac.uk
Dr. Paul Cairns - Dept of Comp Sci - University of York - paul.cairns@york.ac.uk
Michael Carrol - Dept of Comp Sci - University of York - mjpc@cs.york.ac.uk
Summary -
This paper presents studies performed to further research on the role of challenge in producing a good gaming experience (GX). They collected both qualitative and quantitative data, gathering objective and subjective feedback from the users tested in the study. They tried to determine whether altering the level of challenge of the gaming experience increased the user's feeling of immersion. After performing three studies, they were able to deduce that raising the challenge by increasing the interaction level did not increase the user's flow, but that decreasing the time limit for the user actually did effectively increase his/her level of immersion.
"The level of challenge experienced is an interaction between the level of expertise of the gamer and the cognitive challenge encompassed within the game."
Tuesday, September 4, 2012
Paper Reading #3: PaperSketch - A Paper-Digital Collaborative Remote Sketching Tool
Intro -
Title:
PaperSketch: A Paper-Digital Collaborative Remote Sketching Tool
Reference Information:
IUI '11 Proceedings of the 16th International Conference on Intelligent User Interfaces
Author Bios:
Dr. Nadir Weibel, Ph.D.
Postdoctoral Researcher
Department of Cognitive Science
University of California, San Diego
Distributed Cognition and Human-Computer Interaction Lab
Ubiquitous Computing and Social Dynamics Research Group
Beat Signer
Professor of Computer Science at the Vrije Universiteit Brussel (VUB) in Belgium
Co-director of the Web and Information System Engineering (WISE) Laboratory
Investigating interactive paper solutions, multimodal and multi-touch interaction.
Moira C. Norrie
Professor at Swiss Federal Institute of Technology Zurich
Use of Object-Oriented and Web Technologies for Next Generation Information Systems
One of few Leading Research Groups on Technologies for Interactive Paper
Hermann Hofstetter
Could not find any Information regarding this Author.
Hans-Christian Jetter
PhD Researcher at Information Systems University of Konstanz
Interests in Cognitive Foundations of “Natural” User Interfaces
Harald Reiterer
Professor at Information Systems University of Konstanz
Department of Computer and Information Science
Summary -
Sketching with paper and pencil has long been a method for "rapid capture of visual information to be shared in the simplest possible way." This paper documents the research and development of a collaborative sketching tool, PaperSketch, which aims to enable synchronous editing of a diagram or sketch. Since no methods currently exist that can capture and print to paper while the paper is still being used, the paper pad in this program is actually a virtual whiteboard, and the data is shared via an underlying communication layer based on Skype.
Related Work (not referenced in paper) -
- Paper Augmented Digital Documents - http://dl.acm.org/citation.cfm?id=964702
- Paper Windows: Interaction Techniques for Digital Paper - http://dl.acm.org/citation.cfm?id=1055054
- Bridging the Paper and Electronic Worlds: The Paper User Interface - http://dl.acm.org/citation.cfm?id=164986
- SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces - http://dl.acm.org/citation.cfm?id=503397
- Capturing the Capture Concepts: A Case Study in the Design of Computer-Supported Meeting Environments - http://dl.acm.org/citation.cfm?id=62287
- Beyond the Chalkboard: Computer Support for Collaboration and Problem Solving in Meetings - http://dl.acm.org/citation.cfm?id=7887
- Use of Drawing Surfaces in Different Collaborative Settings - http://dl.acm.org/citation.cfm?id=62286
- HandJive: A Device for Interpersonal Haptic Entertainment - http://dl.acm.org/citation.cfm?id=274653
- Managing a Trois: A Study of a Multi-User Drawing Tool in Distributed Design Work - http://dl.acm.org/citation.cfm?id=108893
- Shared Workspaces: How Do They Work and When are they Useful? - http://dl.acm.org/citation.cfm?id=182800
The paper sufficiently documented studies done in collaborative sketching, but I found additional papers that would add value to the PaperSketch project, such as ones on haptic feedback. Also, although not noted in the paper, a lot of research has been done on digitizing paper in order to produce sketches on it while someone is still sketching, but nothing has been quite as successful as a digital collaborative whiteboard.
Evaluation -
This paper was designed as a research topic to determine what design professionals desire out of collaborative environments. All the feedback obtained was qualitative and wholly subjective. The authors developed a collaborative environment, but used digital pens to capture movement as sketches were drawn. Once someone was done sketching, other updates were also uploaded to their view. Roughly 90% of participants in the study said they would use a similar tool for remote sketching based on a pen-and-paper interface. The paper does not explicitly say a Likert scale was used, but from my understanding they asked open-ended questions to help mold the development of the workspace.
Discussion -
Although a variety of research has been done in the area of collaborative digital environments, it is seemingly difficult to translate that to the physical world. This study helped refine the development of a tool to enable sharing of sketches on physical media. Unfortunately, the shared GUI is only viewed and isn't actually transmitted back to the users until after they finish a portion of their sketch. The users in the study enjoyed the idea of this shared GUI, but since it would block other users from updating it while one user was doing so, it was not synchronous but rather a sequentially built collaborative sketch. I could see this being used to jot down rough design concepts, but this is far from a tool I would actually see being used by professional design artists.
Paper Reading #2: The User as a Sensor - Navigating Users with Visual Impairments in Indoor Spaces using Tactile Landmarks
Intro -
Title:
The User as a Sensor: Navigating Users with Visual Impairments
in Indoor Spaces using Tactile Landmarks
Reference Information:
CHI 2012, May 5-10, 2012, Austin, Texas, USA
Author Bios:
Ilias Apostolopoulos and Navid Fallah have been working in the PRACSYS (Physics-aware Research for Autonomous Computational SYStems) Group of the Robotics Research Lab at the University of Nevada, Reno, toward their PhDs for a year. They study under the guidance and direction of Associate Professors Kostas Bekris and Eelke Folmer, who share the title of Director at the UNR Robotics Research Lab.
Summary -
This paper evaluates a system called Navatar that visually impaired users can use to help them locate and navigate around an indoor environment using tactile landmarks. Previous indoor systems required expensive alterations to the environment or sensing and computing equipment which has prevented large scale implementation. Navatar uses the accelerometers in a user's smartphone coupled with an annotated virtual representation of the indoor environment in order to guide the visually impaired user through a sequence of tactile landmarks.
"A user study with six visually impaired users evaluated the accuracy of Navatar and found that users could successfully complete 85% o."
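The guidance loop the summary describes (spoken direction, tactile landmark, tap to confirm) can be sketched as follows; the route, landmark names, and function names here are made up for illustration, not taken from the paper:

```python
# Hypothetical sketch of landmark-by-landmark guidance: the app issues a
# direction, the user confirms reaching each tactile landmark (e.g. by
# tapping the screen), and only then does the route advance.
route = [
    ("Follow the wall on your right", "hallway intersection"),
    ("Turn left and walk forward", "door frame"),
    ("Walk ahead about ten steps", "staircase railing"),
]

def navigate(route, confirm):
    """Speak each instruction, advancing only when the user confirms."""
    for instruction, landmark in route:
        print(f"{instruction} until you reach the {landmark}.")
        while not confirm():  # block until the tap arrives
            pass

navigate(route, confirm=lambda: True)  # auto-confirm for the sketch
```

The design point this captures is that the user acts as the sensor: confirming a landmark re-anchors the dead-reckoned position, so cheap accelerometer data suffices between landmarks.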
Related Work -
- Mobility in Individuals with Moderate Visual Impairments - http://psycnet.apa.org/psycinfo/1990-23089-001
- Personal Guidance System for People with Visual Impairment: A Comparison of Spatial Displays for Route Guidance - http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2801896/
- Drishti: An Integrated Navigation System for Visually Impaired and Disabled - http://www.harris.cise.ufl.edu/projects/publications/wearableConf.pdf
- Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments - http://informahealthcare.com/doi/abs/10.1080/17483100701500173
- An Integrated Wireless Indoor Navigation System for Visually Impaired - http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5929098
- RFID in Robot-assisted Indoor Navigation for the Visually Impaired - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1389688
- Comparing Methods for Introducing Blind and Visually Impaired People to Unfamiliar Urban Environments - http://cogprints.org/1509/
- The Development of the Navigation System for Visually Impaired Persons - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1020485
- A Model-Based, Open Architecture for Mobile, Spatially Aware Applications - http://dl.acm.org/citation.cfm?id=719105&CFID=109411254&CFTOKEN=12089765
- Where did that Sound come from? Comparing the Ability to Localise Using Audification and Audition - http://informahealthcare.com/doi/abs/10.3109/17483107.2011.602172
The previous works I found that were similar in topic, as well as the ones listed in the paper's references, all show that the specific way they are attempting to navigate a blind user, through an annotated virtual representation of the user's local indoor area using accelerometers and other sensors, is truly 'novel'. They presented one previous paper as a case study to determine whether this method could work for visually impaired (blindfolded) users, but are now presenting a new paper that has shown effectiveness for actual blind users.
Evaluation -
This second paper on Navatar presents studies conducted to determine the effectiveness of the system for indoor navigation by blind users. They conducted the study with 6 participants and received both quantitative and qualitative feedback. There were 11 paths tested, and the system successfully navigated users through the physical landmarks 85% of the time. The users also answered open-ended questions after the study to provide qualitative data, and they completed a 5-point Likert scale, which showed an average rating of 4.66 out of 5 for each category.
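A per-category Likert average like the reported 4.66/5 is just the mean of the participants' ratings; the six scores below are made up for illustration:

```python
# Hypothetical Likert ratings from six participants for one category,
# averaged the way the reported 4.66/5 figure would be computed.
ratings = [5, 5, 4, 5, 4, 5]  # made-up 5-point scores
average = sum(ratings) / len(ratings)
print(round(average, 2))  # 4.67 for these made-up numbers
```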
Discussion -
Although Navatar did make navigation more efficient and reduce the overall effort required of the user, several areas were noted that could be significantly improved. Since an annotated virtual representation of the indoor space is needed to mark tactile landmarks, robotic mapping could be used to gather this information in real time to aid a user in an unfamiliar setting. Another improvement was simply using a headset to keep the hands free while navigating the terrain without sight.
"The application provides directions through text to speech using the smartphone's speaker and the user confirms executing each direction by tapping the screen."
Thursday, August 30, 2012
Paper Reading #1: PolyZoom - Multiscale and Multifocus Exploration in 2D Visual Spaces
Intro -
Title:
PolyZoom: Multiscale and Multifocus Exploration in 2D Visual Spaces
Reference Information:
CHI 2012, May 5-10, 2012, Austin, Texas, USA
Author Bios:
Waqas Javed and Sohaib Ghani have both been PhD students in the School of Electrical and Computer Engineering at Purdue University since 2008. They both work in the PIVOT Lab under the direction of Niklas Elmqvist doing research in information visualization and visual analytics.
Niklas Elmqvist currently serves as an assistant professor in Purdue University's School of Electrical and Computer Engineering, where he advises and aids graduate students in pursuit of their PhDs. Some areas of research he has most notably been involved with are human-computer interaction, information visualization, and visual analytics.
Summary -
This paper presents studies Javed and Ghani performed to determine whether their 'novel' approach to 2D visual-space representation (called PolyZoom) made human users more efficient at exploring that visual space. Although it improves spatial awareness, this layout method causes a reduced viewing size for each region as well as underutilized areas of the viewing screen, but it proved more effective for visual-space exploration (which is similar to, but not the same as, navigation).
"PolyZoom was designed to provide spatial awareness simultaneously at multiple different levels of scale."
Related Work -
- A Comparison of Navigation Techniques Across Different Types of Off-Screen Navigation Task - http://www.springerlink.com/content/yv00750u37023831/
- Workspace Awareness in Real-Time Distributed Groupware - http://hci.usask.ca/publications/1997/gutwin-phd.pdf
- Using Distortion-Oriented Displays to Support Workspace Awareness - http://hci.usask.ca/publications/1996/distortion-final.pdf
- An Experimental Investigation of Magnification Lens Offset and Its Impact on Imagery Analysis - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1382916&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1382916
- Design and Evaluation of Navigation Techniques for Multiscale Virtual Environments - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1667642&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1667642
- Guidelines for Using Multiple Views in Information Visualization - http://dl.acm.org/citation.cfm?id=345271
- Browsing Zoomable Treemaps: Structure-Aware Multi-Scale Navigation Techniques - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4376147&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4376147
- View Size and Pointing Difficulty in Multi-scale Navigation - http://dl.acm.org/citation.cfm?id=989881
- ZoneZoom: Map Navigation for Smartphones with Recursive View Segmentation - http://dl.acm.org/citation.cfm?id=989901
- Hyper Mochi Sheet: a predictive focusing interface for navigating and editing nested networks through a multi-focus distortion-oriented view - http://dl.acm.org/citation.cfm?id=302979.303145
All the previous research I found, including work referenced by the authors, indicates that they drew inspiration from multiple different sources to piece together a completely unique way of representing visual spaces. Previous attempts to maintain spatial awareness while zooming in either used a single bird's-eye overview map, with graphics indicating which part of the overview the magnified view came from, or distorted the view to provide extra information, as with a fish-eye lens or a magnifying glass.
Evaluation -
They performed two studies: one tested how quickly volunteers could zoom into a particular spot on the map by following a visual cue, and the other tested how quickly they could determine which of four candidate areas matched the current view. In their research, this new method of information visualization for visual spaces like Google Maps produced a 6.5% speed increase in correctly zooming into a particular part of the map. Most of this speedup came from how easy it is to backtrack from, or fix, a slightly off view, since all the other views stay onscreen. The second study showed an 11% improvement in the time it took users to find the two matching regions out of a predetermined selection. Both studies compared the new display format against a simple view that only allowed panning and zooming.
Discussion -
This research project led to a new way to display different focus levels of a 2D map (arranged in a tree hierarchy) and to use the viewer's screen effectively by scaling up whichever focus levels the user considers most important.
One of the most impressive features, in my opinion, is that they keep all the aspect ratios the same to prevent distortion. It is also a novel idea to store visual space representations, such as map views, in a tree so that the user can switch between zoom levels quickly and precisely.
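To make those two ideas concrete, here is a minimal sketch (my own illustration, not the authors' code) of a PolyZoom-style zoom hierarchy: each node holds a viewport, the parent views stay around for context, and every child is created with the same aspect ratio as its parent to avoid distortion. The class and method names are hypothetical.

```python
# Sketch of a zoom-level tree in the spirit of PolyZoom (not the paper's
# actual implementation): parents remain onscreen, children preserve
# the parent's aspect ratio.
from dataclasses import dataclass, field

@dataclass
class Viewport:
    x: float       # left edge in world coordinates
    y: float       # top edge in world coordinates
    width: float
    height: float

@dataclass
class ZoomNode:
    view: Viewport
    children: list = field(default_factory=list)

    def zoom_into(self, cx, cy, factor):
        """Create a child view centered on (cx, cy), magnified by
        `factor`. Width and height shrink by the same factor, so the
        aspect ratio is preserved and the child is undistorted."""
        w = self.view.width / factor
        h = self.view.height / factor
        child = ZoomNode(Viewport(cx - w / 2, cy - h / 2, w, h))
        self.children.append(child)   # parent stays in the tree for context
        return child

# Example: a world-map view with two nested zoom levels.
root = ZoomNode(Viewport(0, 0, 360, 180))
region = root.zoom_into(100, 50, 4)
detail = region.zoom_into(100, 50, 4)
print(detail.view.width / detail.view.height)  # 2.0, same ratio as the root
```

Because every view lives in this tree, backtracking to a coarser zoom level is just walking up to an ancestor node that never left the screen, which matches the speedup the studies attribute to easy backtracking.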
In addition, I really enjoyed this article because it is written so clearly and concisely. It takes the reader from prior research and current methods of exploring visual spaces, through the benefits and drawbacks of those methods, to how the proposed layout algorithms maintain certain design constraints, and finally to the evaluation of the new layout.
Wednesday, August 29, 2012
About Me
This is my first blog post for CSCE 436 - Computer Human Interaction.
The first assignment asks for personal information, so I'm listing it here:
Photo of Yourself (real photo):
Look above ^^^.
E-mail Address:
cbodolus<at>tamu<dot>edu
<or>
cbodolus12<at>gmail<dot>com
Class Standing:
5th Year Senior
Why are you taking this class?
I am taking this class because I first met Prof. Hammond as a guest lecturer in another class and have heard great things from previous students. Unfortunately, after hearing on the first day that this class focuses heavily on ethics, I realize that it may not be the right class for me. If I have to present my article tomorrow, I will probably drop the class in favor of taking my Computer Engineering area electives in the Communications and Networks track.
What experience do you bring to this class?
Well, I've always been able to do everything I tried to, so there is that.
What are your professional life goals?
Most of my life goals are not professional, but some that could be considered professional are:
Controlling (or Owning) an Empire (or Business)
Investing in Promising Students
Accelerating Brilliant Ideas
Making History from the Present
Writing Software a "Couple Mill" Lines Long
What are your personal life goals?
Audio Engineering for some Major Record Companies.
Enhancing my perception of my own mind, body, and soul.
I really want to go Skydiving!
Short term I'm focused on getting a new motorcycle.
What do you want to do after you graduate?
All sorts of things, especially travel,
but I have to work instead for 40 years before I do that!
What do you expect to be doing in 10 years?
Surfing and Bar-b-queuing every Sunday in California.
I'll have a wife and three kids (twin boys and a younger daughter).
CTO of a currently unfounded Neo-Technological Company
Hopefully I'll be able to teach one class a semester at a local college.
What do you think will be the next biggest technological advancement in computer science?
The development of analog components (like transistors) to replace their digital binary counterparts.
If you could travel back in time, who would you like to meet and why?
John Browning
He is credited with 128 gun patents.
That's quite the Engineer.
Describe your favorite shoes and why they are your favorite?
Slippers are LEGIT. Especially the soft, comfy ones.
Excluding pink ones*
If you could be fluent in any foreign language that you're not already fluent in, which one would it be and why?
Chinese (Mandarin) because it is completely different from English and Spanish.
Give some interesting fact/story about yourself.
I would have become a Doctor (Reconstructive Surgeon helping to bridge Artificial Limbs with Brain Function for Veterans) but Dr. Bodolus just doesn't have a good ring to it.