peterme.com   Thoughts, links, and essays from Peter Merholz
A point from my last post. Posted on 08/23/2001.

So, that last post was pretty addled, and had a bunch of stuff in it. One of the key points I don't think I represented well was exploiting the computer's facility for representing data and information in any number of fashions. Yes, people often talk about separating "content" from "presentation," but usually that goes no deeper than allowing the same words to be seen on different displays.
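To make that concrete, here's a toy sketch in Python, with invented data, of what "deeper" separation might look like: one pool of data feeding two displays that don't resemble each other at all.

```python
# One pool of data, two unrelated displays. All names and data invented.

quotes = [
    {"symbol": "IBM",  "price": 104.5, "change": -1.2},
    {"symbol": "MSFT", "price": 62.3,  "change": +0.8},
    {"symbol": "SUNW", "price": 16.1,  "change": -0.4},
]

def render_as_table(data):
    """The obvious display: rows that mirror how the data was entered."""
    for q in data:
        print(f'{q["symbol"]:6} {q["price"]:8.2f} {q["change"]:+6.2f}')

def render_as_headline(data):
    """A different display derived from the same data: one sentence."""
    up = sum(1 for q in data if q["change"] > 0)
    print(f"{up} of {len(data)} stocks are up right now.")

render_as_table(quotes)     # good for scanning prices
render_as_headline(quotes)  # good for answering "how's the market?"
```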

I'm interested in fundamentally different displays derived from the same data. Too often, data is displayed in a form little different from how it was entered, and there's no reason for that. This is what I find exciting about LineDrive--it intelligently visualizes cartographic data to meet the needs of a particular task. It's what makes SmartMoney's MarketMap so compelling--it's not just a listing of numbers. I guess this is the promise of all data and information visualization, but one we so rarely see fulfilled well. I think it goes unfulfilled because people aren't equipped to think of data in such fluid ways--we seem to have a bias that a thing is a thing is a thing, and can't be something else.
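MarketMap is a treemap: rectangles sized by market capitalization, colored by price change. A crude, one-level version of that layout idea fits in a few lines. This is only a sketch of the technique with made-up numbers, nothing like SmartMoney's actual code:

```python
# The same numbers that could be a table become rectangles sized by
# market cap and colored by change. Toy data, toy layout.

stocks = [
    {"symbol": "GE",   "cap": 400, "change": +0.5},
    {"symbol": "CSCO", "cap": 150, "change": -2.1},
    {"symbol": "AMZN", "cap": 5,   "change": +4.0},
]

def slice_layout(items, x, y, width, height):
    """Split a width-by-height rectangle into vertical slices by cap."""
    total = sum(item["cap"] for item in items)
    rects = []
    for item in items:
        w = width * item["cap"] / total
        color = "green" if item["change"] >= 0 else "red"
        rects.append((item["symbol"], x, y, w, height, color))
        x += w
    return rects

for sym, x, y, w, h, color in slice_layout(stocks, 0, 0, 800, 600):
    print(f"{sym}: rect at ({x:.0f}, {y:.0f}), {w:.0f}x{h:.0f}, {color}")
```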

This was one of the things that excited me about Epinions' potential, and that excites me about Amazon. There's an overwhelming pool of information that provides the basis of the system. But through personalization technologies, the system can present fairly idiosyncratic views into that information--my home page does not look like yours (unless we've exhibited similar behavior). True, the visual aspects of the interface are pretty much the same, but the content is different.
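A crude sketch of that point, with invented data: one shared catalog, and rankings that differ per user because their past behavior differs. (The real machinery at Amazon is, of course, far more elaborate.)

```python
# One shared catalog, per-user rankings derived from past behavior.

catalog = {
    "tufte":   {"design", "dataviz"},
    "norman":  {"design", "cogsci"},
    "pinker":  {"cogsci", "language"},
    "bertin":  {"dataviz"},
}

def home_page(liked):
    """Rank unseen items by topic overlap with what this user liked."""
    topics = set().union(*(catalog[item] for item in liked))
    candidates = [i for i in catalog if i not in liked]
    return sorted(candidates,
                  key=lambda i: len(catalog[i] & topics), reverse=True)

print(home_page({"tufte"}))   # a dataviz/design reader's page
print(home_page({"pinker"}))  # a cogsci reader's page -- different order
```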

Which, I guess, addresses how I wanted to see LineDrive-like manipulations of data used in the presentation of content around specific tasks. Library information retrieval systems exploit metadata to help people find stuff. But they don't know what metadata is interesting to you in your task--so they present a wide range of stuff, much of it not useful to the task at hand. What if a library knew that I had a task of "getting healthier," and that was why I was looking up books on exercise and nutrition? What kinds of stuff could fall away, and what kinds could be promoted, in such an instance?
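Here's a hypothetical version of the library example: the catalog stays fixed, but a declared task carries weights over subject metadata, promoting some books and letting others fall away. The task profile, weights, and books are all made up.

```python
# The catalog is fixed; a declared task reweights the subject metadata.

books = [
    {"title": "Eat Right",       "subjects": {"nutrition"}},
    {"title": "Couch to 5K",     "subjects": {"exercise", "running"}},
    {"title": "History of Salt", "subjects": {"nutrition", "history"}},
]

TASK_WEIGHTS = {  # hypothetical task profile
    "getting healthier": {"nutrition": 2.0, "exercise": 2.0, "history": -3.0},
}

def retrieve(task):
    """Promote what serves the task; let everything else fall away."""
    weights = TASK_WEIGHTS[task]
    scored = [(sum(weights.get(s, 0.0) for s in b["subjects"]), b["title"])
              for b in books]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(retrieve("getting healthier"))
# ['Eat Right', 'Couch to 5K'] -- 'History of Salt' falls away
```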

Fundamentally, the issue at hand is to encourage people to rethink how they approach data and its visualizations within the computer. I've always been frustrated by interaction design that treats the elements of the screen as fixed, static objects. I think things like ToolTips are fucking great and amazing, exploiting the changeable, pixel-by-pixel nature of displays to provide useful information when needed. I want people to think about how they can take advantage of this near-infinitely adaptable display and do exciting things with it.
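For what it's worth, the ToolTip trick itself is only a few lines in, say, Python's Tkinter: borrow some pixels near the pointer, show the extra information, give the pixels back. A bare-bones sketch:

```python
# A minimal ToolTip: pixels near the pointer are repurposed to show
# extra information only while it's wanted.

import tkinter as tk

def add_tooltip(widget, text):
    tip = None

    def show(event):
        nonlocal tip
        tip = tk.Toplevel(widget)
        tip.wm_overrideredirect(True)  # no window borders, just pixels
        tip.wm_geometry(f"+{event.x_root + 12}+{event.y_root + 12}")
        tk.Label(tip, text=text, background="lightyellow",
                 relief="solid", borderwidth=1).pack()

    def hide(event):
        nonlocal tip
        if tip:
            tip.destroy()
            tip = None

    widget.bind("<Enter>", show)
    widget.bind("<Leave>", hide)

root = tk.Tk()
button = tk.Button(root, text="Hover me")
button.pack(padx=40, pady=40)
add_tooltip(button, "Extra info, on demand, then gone.")
root.mainloop()
```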

2 comments so far.

Previous entry: "Tying together some threads in my head"
Next entry: "The State of Web Surfing."

Comments:

COMMENT #1
Actually, the promise of visualization technologies goes unfulfilled more because the software/hardware can't realistically keep up with the fluidity of human thought rather than the other way around. It has more to do with the data that needs to be displayed than the display of the data itself. There's a holy grail in the world of Decision Support/Analytics of "information at the speed of thought". Current software and hardware simply can't sift through information in the way you describe--that is, at the speed at which we think. The traditional response to this has, indeed, been to offer "everything and the kitchen sink" and let humans do the sifting (hey, use the right tools for the right job). Only recently has computing power become inexpensive enough, and data management techniques (dimensional modeling, data mining, etc.) robust enough, to allow us to even begin to approach "speed of thought" computing. While the display may be infinitely adaptable, unfortunately, current mechanisms for feeding the display are not (but we're working on it!).

I'm pretty sure you've referenced Inxight before, but you might also want to check out Visual Insights (while Inxight grew out of Xerox/PARC in the west, VI grew out of Bell Labs/Lucent in the east). IMO, though, AVS is doing the most interesting and diverse stuff right now.
Posted by Dick Chase @ 08/24/2001 07:29 AM PST [link to this comment]
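To make the trade-off Dick describes concrete: one OLAP-style answer is to precompute every roll-up once, so the display layer answers questions with cheap lookups instead of scans. A toy sketch, with invented facts and dimensions:

```python
# Precompute aggregates up front; answering then becomes a dict lookup.

from collections import defaultdict
from itertools import product as cross

sales = [  # raw fact rows: (region, product, amount)
    ("west", "widgets", 120), ("west", "gadgets", 80),
    ("east", "widgets", 200), ("east", "gadgets", 50),
]

def scan_total(region=None, product=None):
    """Slow path: scan every row per question (fine at 4 rows, not 4 billion)."""
    return sum(amount for r, p, amount in sales
               if region in (None, r) and product in (None, p))

# Fast path: roll up every (region, product) combination, Nones included.
cube = defaultdict(int)
for r, p, amount in sales:
    for key in cross((r, None), (p, None)):
        cube[key] += amount

print(scan_total(region="west"))  # 200, by scanning
print(cube[("west", None)])       # 200, by lookup
```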


COMMENT #2
You're reminding me somewhat of my feelings re XML vs. RDBs. For every cool thing I think one could do with XML, it seems like an RDB implementation would be easier, more practical, and often just as good. Similarly, in the last post you wondered whether "having a task-based foundation for an information architecture makes such spaces more usable than the more standard metadata-based structures from information retrieval."

But the Amazon goodness is all about standard metadata manipulation (AFAIK): the data being the [item] properties and the metadata being the ways in which that item has stood in relation to other items vis-a-vis the behavior of the system's users. The Amazon IA seems (again, from the outside) to be just one level of abstraction above the more mundane business as usual. (Rather than, say, "here is a fixed box which contains an ad for item A" it is "here is a box of variable length which contains an ad for things which fall into the matrix of implicit preferences we infer this user's behavior to indicate".)

Ultimately, I doubt whether designers are capable of anticipating the tasks and building a good task-based IA -- instead, the system would have to be "trained up" (analogously to a neural net) to learn the rules to use for constructing task-based displays. The designer's task becomes coming up with a system ontology and a whatever-we'd-call-it analogous to the back-propagation algorithm.

Also, FYI, the August issue of Communications is all about visualization.
Posted by Stewart @ 08/24/2001 12:01 PM PST [link to this comment]
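The item-to-item relation Stewart describes can be sketched with nothing fancier than co-occurrence counting over purchase histories. Toy data, and only a loose approximation of what Amazon actually does:

```python
# "People who bought X also bought Y", derived purely from behavior.

from collections import Counter
from itertools import permutations

baskets = [  # each user's purchase history
    {"tufte", "norman"},
    {"tufte", "norman", "mccloud"},
    {"norman", "pinker"},
]

also_bought = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        also_bought[(a, b)] += 1

def recommend(item, n=2):
    """Fill a variable-length box with whatever co-occurs most with item."""
    pairs = [(count, b) for (a, b), count in also_bought.items() if a == item]
    return [b for count, b in sorted(pairs, reverse=True)[:n]]

print(recommend("tufte"))  # ['norman', 'mccloud']
```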

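And his "trained up" idea, in its crudest possible form: learn weights over metadata facets from click feedback instead of hand-building the task-based IA. This is a simple additive update, a stand-in for the back-propagation analogy, with invented names throughout:

```python
# Learn facet weights from clicks instead of hand-authoring the IA.

facets = ["nutrition", "exercise", "history"]
weights = {f: 0.0 for f in facets}

def train(doc_facets, clicked, lr=0.5):
    """Nudge the facets of a shown document toward the user's response."""
    delta = lr if clicked else -lr
    for f in doc_facets:
        weights[f] += delta

# A simulated session for someone pursuing "getting healthier":
train({"nutrition"}, clicked=True)
train({"history"},   clicked=False)
train({"exercise"},  clicked=True)

print(weights)
# {'nutrition': 0.5, 'exercise': 0.5, 'history': -0.5} -- learned, not designed
```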

All contents of peterme.com are © 1998 - 2002 Peter Merholz.