HUMAN COMPUTER INTERACTION

by Krishna Mohan
Type: Note | Institute: Krishna University | Course: MCA | Specialization: Master of Computer Applications

UNIT - II
Expert Reviews, Usability Testing, Surveys, and Continuing Assessment

4.1 Introduction
• Designers can become so entranced with their creations that they may fail to evaluate them adequately.
• Experienced designers have attained the wisdom and humility to know that extensive testing is a necessity.
• The determinants of the evaluation plan include:
  o stage of design (early, middle, late)
  o novelty of the project (well defined vs. exploratory)
  o number of expected users
  o criticality of the interface (a life-critical medical system vs. museum-exhibit support)
  o costs of the product and finances allocated for testing
  o time available
  o experience of the design and evaluation team
• Evaluation plans range from an ambitious two-year test down to a test lasting a few days.
• Costs range from 10% of a project's budget down to 1%.

4.2: Expert Reviews
• While informal demos to colleagues or customers can provide some useful feedback, more formal expert reviews have proven to be effective.
• Expert reviews entail one-half day to one week of effort, although a lengthy training period may sometimes be required to explain the task domain or operational procedures.
• There are a variety of expert-review methods to choose from:
  o Heuristic evaluation
  o Guidelines review
  o Consistency inspection
  o Cognitive walkthrough
  o Formal usability inspection
• Expert reviews can be scheduled at several points in the development process, when experts are available and when the design team is ready for feedback.
• Different experts tend to find different problems in an interface, so 3-5 expert reviewers can be highly productive, as can complementary usability testing.
• The danger with expert reviews is that the experts may not have an adequate understanding of the task domain or user communities.
• To improve the chances of a successful expert review, it helps to choose knowledgeable experts who are familiar with the project situation and who have a longer-term relationship with the organization.
• Moreover, even experienced expert reviewers have great difficulty knowing how typical users, especially first-time users, will really behave.

4.3: Usability Testing and Laboratories
• The emergence of usability testing and laboratories since the early 1980s is an indicator of the profound shift in attention to user needs.
• The remarkable surprise was that usability testing not only sped up many projects but also produced dramatic cost savings.
• The movement towards usability testing stimulated the construction of usability laboratories.
• A typical modest usability lab has two 10-by-10-foot areas: one for the participants to do their work and another, separated by a half-silvered mirror, for the testers and observers (designers, managers, and customers).
• Participants should be chosen to represent the intended user communities, with attention to background in computing, experience with the task, motivation, education, and ability with the natural language used in the interface.
• Participation should always be voluntary, and informed consent should be obtained. Professional practice is to ask all subjects to read and sign a statement like this one:
  o I have freely volunteered to participate in this experiment.
  o I have been informed in advance what my task(s) will be and what procedures will be followed.
  o I have been given the opportunity to ask questions, and have had my questions answered to my satisfaction.
  o I am aware that I have the right to withdraw consent and to discontinue participation at any time, without prejudice to my future treatment.
  o My signature below may be taken as affirmation of all the above statements; it was given prior to my participation in this study.
• Videotaping participants performing tasks is often valuable for later review and for showing designers or managers the problems that users encounter.
• Field tests attempt to put new interfaces to work in realistic environments for a fixed trial period.
• Field tests can be made more fruitful if logging software is used to capture error, command, and help frequencies plus productivity measures.
• Game designers pioneered the "can-you-break-this" approach to usability testing by providing energetic teenagers with the challenge of trying to beat new games.
• This destructive-testing approach, in which users try to find fatal flaws in the system or otherwise to destroy it, has been used in other projects and should be considered seriously.
• For all its success, usability testing does have at least two serious limitations: it emphasizes first-time usage and has limited coverage of interface features. These and other concerns have led design teams to supplement usability testing with the varied forms of expert reviews.

4.4: Surveys
• Written user surveys are a familiar, inexpensive, and generally acceptable companion for usability tests and expert reviews.
• The keys to successful surveys are clear goals in advance and then development of focused items that help attain those goals.
• Survey goals can be tied to the components of the Objects and Actions Interface model of interface design.
• Users could be asked for their subjective impressions about specific aspects of the interface, such as the representation of:
  o task-domain objects and actions
  o syntax of inputs and design of displays.
• Other goals would be to ascertain:
  o users' background (age, gender, origins, education, income)
  o experience with computers (specific applications or software packages, length of time, depth of knowledge)
  o job responsibilities (decision-making influence, managerial roles, motivation)
  o personality style (introvert vs. extrovert, risk-taking vs. risk-averse, early vs. late adopter, systematic vs. opportunistic)
  o reasons for not using an interface (inadequate services, too complex, too slow)
  o familiarity with features (printing, macros, shortcuts, tutorials)
  o their feeling state after using an interface (confused vs. clear, frustrated vs. in control, bored vs. excited).
• Online surveys avoid the cost of printing and the extra effort needed for distribution and collection of paper forms.
• Many people prefer to answer a brief survey displayed on a screen instead of filling in and returning a printed form, although there is a potential bias in the sample.
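The feeling-state items above (confused vs. clear, frustrated vs. in control, bored vs. excited) are typically scored on a rating scale and averaged. As a minimal sketch, assuming a 1-5 scale with 5 as the positive pole (the scale and the `score_response`/`summarize` helpers are illustrative, not part of the notes):

```python
# Sketch: scoring a post-test satisfaction survey on the notes' feeling-state
# items. The 1-5 scale is an assumption; real instruments define their own
# items and scoring rules.

from statistics import mean

# Semantic-differential items (1 = negative pole, 5 = positive pole)
ITEMS = [
    "confused (1) ... clear (5)",
    "frustrated (1) ... in control (5)",
    "bored (1) ... excited (5)",
]

def score_response(ratings):
    """Average one participant's ratings into a single satisfaction score."""
    if len(ratings) != len(ITEMS):
        raise ValueError("one rating per item required")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return mean(ratings)

def summarize(responses):
    """Per-item means across participants, to spot weak interface areas."""
    return {item: mean(r[i] for r in responses)
            for i, item in enumerate(ITEMS)}

responses = [(4, 3, 5), (5, 4, 4), (3, 2, 4)]  # three participants
print(score_response(responses[0]))
print(summarize(responses))
```

Per-item means matter more than the overall average here: a low mean on one item (say, "frustrated vs. in control") points reviewers at a specific interface weakness.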
4.5: Acceptance Tests
• For large implementation projects, the customer or manager usually sets objective and measurable goals for hardware and software performance.
• If the completed product fails to meet these acceptance criteria, the system must be reworked until success is demonstrated.
• Rather than the vague and misleading criterion of "user friendly," measurable criteria for the user interface can be established for the following:
  o Time to learn specific functions
  o Speed of task performance
  o Rate of errors by users
  o Human retention of commands over time
  o Subjective user satisfaction
• In a large system, there may be eight or ten such tests to carry out on different components of the interface and with different user communities.
• Once acceptance testing has been successful, there may be a period of field testing before national or international distribution.
• The goal of early expert reviews, usability testing, surveys, acceptance testing, and field testing is to force as much of the evolutionary development as possible into the prerelease phase, when change is relatively easy and inexpensive to accomplish.

4.6: Evaluation During Active Use
• A carefully designed and thoroughly tested system is a wonderful asset, but successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff.
• Perfection is not attainable, but percentage improvements are possible and are worth pursuing.
• Interviews and focus-group discussions:
  o Interviews with individual users can be productive because the interviewer can pursue specific issues of concern.
  o After a series of individual discussions, group discussions are valuable to ascertain the universality of comments.
• Continuous user-performance data logging:
  o The software architecture should make it easy for system managers to collect data about the patterns of system usage, speed of user performance, rate of errors, or frequency of requests for online assistance.
  o A major benefit of usage-frequency data is the guidance it provides to system maintainers in optimizing performance and reducing costs for all participants.
• Online or telephone consultants:
  o Online or telephone consultants are an extremely effective and personal way to provide assistance to users who are experiencing difficulties.
  o Many users feel reassured if they know there is a human being to whom they can turn when problems arise.
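The continuous data-logging idea above can be sketched as a small in-process event counter. This is a minimal illustration, not a production architecture; the `UsageLog` class and the `command:`/`error:`/`help:` event names are assumptions introduced for the example:

```python
# Sketch of continuous user-performance data logging: count command, error,
# and help-request events and time task performance, as the notes suggest
# for guiding system maintainers. Event names are illustrative.

import time
from collections import Counter

class UsageLog:
    def __init__(self):
        self.counts = Counter()   # frequency of each event type and event
        self.task_times = []      # elapsed seconds per completed task

    def record(self, event):
        """event looks like 'command:save', 'error:bad-filename', 'help:printing'."""
        self.counts[event.split(":")[0]] += 1   # tally the category
        self.counts[event] += 1                 # tally the specific event

    def time_task(self, func, *args):
        """Run one user task and log its duration (speed of performance)."""
        start = time.perf_counter()
        result = func(*args)
        self.task_times.append(time.perf_counter() - start)
        return result

    def report(self):
        """Summary for maintainers: error rate, help demand, hot spots."""
        total = sum(self.counts[k] for k in ("command", "error", "help"))
        return {
            "error_rate": self.counts["error"] / total if total else 0.0,
            "help_requests": self.counts["help"],
            "most_common": self.counts.most_common(3),
        }

log = UsageLog()
for e in ["command:save", "command:save", "error:bad-filename", "help:printing"]:
    log.record(e)
print(log.report())
```

A high count for a particular `help:` topic or `error:` type is exactly the usage-frequency guidance the notes describe: it tells maintainers which feature to redesign or document first.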
