Thursday, April 7, 2011

Paper Reading #20: Automatically Identifying Targets Users Interact with During Real World Tasks

Comments: Cindy Skatch, Patrick Frith.
Reference Information:
Title: Automatically Identifying Targets Users Interact with During Real World Tasks
Authors: Amy Hurst, Scott E. Hudson, Jennifer Mankoff.
Venue: IUI’10, February 7–10, 2010, Hong Kong, China.

Summary: In their paper, the researchers attempted to improve computer accessibility APIs. The existing Microsoft Active Accessibility (MSAA) API has roughly a 75% success rate at identifying interface targets, but it is rigid and inflexible. They also examined previous approaches to accessibility that rely on purely visual analysis of a GUI.

Their approach is a mixture of the two. Building on the existing MSAA API, they analyzed various additional visual cues to create a system that is more robust than either technique alone. In particular, their system took advantage of mouse-over visual effects to detect clickable buttons. The researchers found that their system was 89% accurate.
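To make the hover-effect idea concrete, here is a minimal sketch of how such a visual cue might be detected: compare the pixels of a region before and after the cursor moves over it, and flag the region as clickable if enough pixels change. This is only an illustration under simplified assumptions (tiny pixel grids stand in for real screen captures, and the MSAA side is omitted); it is not the authors' actual implementation, and `changed_fraction`, `looks_clickable`, and the `threshold` value are all hypothetical.

```python
# Sketch of hover-diff detection: if pixels under the cursor change when
# the mouse moves over a region, treat it as a likely clickable target.
# The "screenshots" here are small lists of pixel rows standing in for
# real screen captures; frame grabbing and accessibility-API calls are
# omitted for brevity.

def changed_fraction(before, after):
    """Return the fraction of pixels that differ between two frames."""
    total = diffs = 0
    for row_before, row_after in zip(before, after):
        for px_before, px_after in zip(row_before, row_after):
            total += 1
            if px_before != px_after:
                diffs += 1
    return diffs / total if total else 0.0

def looks_clickable(before, after, threshold=0.1):
    """Flag a region as clickable if hovering changed enough pixels."""
    return changed_fraction(before, after) >= threshold

# A button that highlights on hover: many pixels change.
idle = [[0, 0, 0], [0, 0, 0]]
hover = [[1, 1, 0], [1, 1, 0]]
print(looks_clickable(idle, hover))  # True
# Static text: nothing changes on hover.
print(looks_clickable(idle, idle))   # False
```

In the paper's setting, a signal like this would be combined with what the accessibility API reports, so that each source can compensate for the other's blind spots.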

Discussion: Buzzwords! Jargon! Buzzword buzzword! I read the first page of this article and honestly had no idea what they were talking about. They kept saying "real world" in the introduction, so I was visualizing some sort of smartphone-based object detection system. If it weren't for the screenshots they included, I would probably still be lost.

2 comments:

  1. I too found this a little jargon-filled. The screenshots they had of their computer vision were pretty awesome though.

    ReplyDelete
  2. Accessibility systems still have a way to go, I feel. I'm glad to see that current accessibility APIs are being evaluated though.

    ReplyDelete