peterme.com

 
petermeme Archives

April 27, 2001

Sniff. Sniff. Among the best papers at CHI2001 were the ones that folks from Xerox PARC presented on information scent. And the best of those papers was "Information Scent as a Driver of Web Behavior Graphs: Results of a Protocol Analysis Method for Web Usability" [PDF]. From its introduction:

The development of predictive scientific and engineering models of users’ cognition and interaction with the World Wide Web (WWW) poses some tough and interesting problems. Cognitive engineering models, such as GOMS, fit user interaction with application software (e.g., word processors) when error rates are small, tasks are well-structured, exploration is virtually nonexistent, and content is not a major determinant of behavior. Typical interactions with the WWW, on the other hand, are very likely to involve many impasses, ill-structured goals and tasks, navigation and exploration, and substantial influences from the content that is encountered. In this paper we present an approach to the analysis of protocols of WWW use that is aimed at capturing these phenomena and that is aimed towards the development of predictive models that are of use in science and design.

The purpose of this paper is to introduce a replicable WWW protocol analysis methodology illustrated by application to data collected in the laboratory. To support replicability and reuse, we are developing a bank of ecologically valid WWW tasks and a WWW Protocol Coding Guide, which will be available on the WWW...

April 26, 2001

peterme, Task Master.
So, all this discussion of multi-tasking and attention focus has set me on a couple of paths, one a thoughtwander and one a research flurry.

Thoughtwander
Humans can attend to only one thing at a time. This doesn't mean that they can't receive multiple channels of stimulus--we simultaneously see, smell, touch, hear. Hell, we can even receive multiple types of information from our eyes--that which is the focus of our vision, and the periphery. And we make decisions of action based on these multiple inputs. But we're not truly multi-tasking. We can only pay attention to a single thing... And if something in my periphery triggers my brain, I then attend to that, and lose focus on whatever I was dealing with.

So, we can have displays with multiple readouts and successfully attend to them, because, by and large, the information on them doesn't change. And when something aberrant happens, we are drawn to it because it DOES change. The problem would be if two aberrances (is that a word?) occur simultaneously, because we wouldn't be able to deal with both.

Now, while we can't pay attention to more than one thing at a time, we can crudely multi-task by switching our attention around a number of different things at once. When cooking, you've usually got a couple of pots on the stove and something in the oven and you're chopping something on the board, and you're bopping around, dealing with all of this. Or, on the computer, you're reading email while surfing the Web while chatting in AIM.

So, for all reasonable intents and purposes, you ARE multi-tasking, because most tasks don't require continual focused attention. To somebody observing you from the outside, you seem to be perfectly handling a number of parallel tasks with little trouble (of course, until one of those tasks requires inordinate attention, like an email needing immediate thoughtful response, or someone in AIM telling you they broke up with their boyfriend, etc... at which point you forget about all the other tasks.)
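
If you want the point in miniature, here's a toy sketch of my own (not from any of the papers below): attention as a single serial resource, with "multi-tasking" as round-robin switching that looks parallel from the outside until one task demands sustained focus. The task names and numbers are made up for illustration.

```python
# Toy model of serial attention (my own illustration, not from the research
# cited below): one work unit is done per tick, only for the task that
# currently holds attention; "multi-tasking" is just rapid switching.
from collections import deque

def run(tasks, demanding=None, extra=5):
    """tasks: dict of task name -> work units left. A 'demanding' task grabs
    `extra` ticks of sustained focus each time it comes up, stalling the rest."""
    queue, tick = deque(tasks.items()), 0
    while queue:
        name, left = queue.popleft()
        tick, left = tick + 1, left - 1
        if name == demanding:
            tick += extra   # e.g. the email that needs a thoughtful reply
        if left > 0:
            queue.append((name, left))
        else:
            print(f"tick {tick}: {name!r} done")

run({"email": 3, "web": 3, "AIM": 3})                     # looks parallel
run({"email": 3, "web": 3, "AIM": 3}, demanding="email")  # everything slips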

Research Flurry
Kicking the search query "multitask cognition" into Google led to some delightful results (and a bunch of less interesting ones).

David Weinberger, whose comment started this whole thread over on Ev's site, wrote about this over three years ago in JOHO, in a piece titled "The Price of Multi-Tasking: Your Soul."

I then found the home page of Harold Pashler, a professor at UCSD (an institution that's a leader in cognitive science research). He has a page of manuscripts for downloading, including a soon-to-be-published article on "Task Switching and Multi-Task Performance" (PDF), which discusses actual scientific 'n shit research in this field, is too drily academic, and has a big chunk (starting on page 13) on Dual Task Performance, which starts:

Dual-Task Performance
We turn now to the limitations that arise when people attempt to perform two different tasks at the same time. While there is a large literature on relatively complex and continuous dual-task performance, the focus here will be on discrete tasks. The reason for this is that with more continuous tasks interference and switching are easily disguised for reasons that will emerge clearly below. Not surprisingly, limitations on simultaneous mental operations evidently arise at various different functional loci. Perceptual analysis of multiple stimuli can often take place in parallel, but when perceptual demands exceed a certain threshold, capacity limitations can become evident (Pashler, 1997) although non-perceptual factors (such as statistical noise in search designs) often masquerade as capacity limitations (Palmer, 1995). These limitations appear largely, but probably not entirely, modality-specific (Treisman & Davies, 1973; Duncan, Mertens & Ward, 1997). Similarly, response conflicts arise when responses must be produced close together in time. These perceptual limitations are often most acute when similar or linked effectors are used, such as the two hands (Heuer, 1985).
The most intriguing, and for the present topic, the most relevant limitations are those that arise in central stages of decision, memory retrieval and response selection. Intuitively, most laymen assume that the cognitive aspects of two tasks can be performed simultaneously unless one or both is intellectually demanding. This seems not to be the case, however. This is most clearly seen when people try to carry out two speeded but relatively simple tasks, each requiring a response to a separate individual stimulus. As Telford (1931) first observed, people almost invariably respond to the second stimulus more slowly when the interval between the two stimuli is reduced...

This guy's got tons more like that, but... I... just... don't.... have.... the... time...

Somewhat of a tangent, but related, are two Malcolm Gladwell stories from The New Yorker:

  • "The Art of Failure: Why Some People Choke and Others Panic."
  • "The Physical Genius"

April 24, 2001

    Pay attention! Ev wonders, "Can humans multi-task?" The simple answer is, "No." The more complex answer is this:

    Humans can *pay attention* to only one thing at a time. Folks who claim to "multi-task" are usually just rapidly switching their focus of attention among a group of things. This is what the Air Warfare Coordinator that Ev quotes is doing.

    Now, what about walking and chewing gum? Or being able to carry on a conversation while driving a car? Well, humans habituate actions, such that they don't require attention. In fact, humans can't NOT habituate. Among other things, this leads to mis-use of tools (be they physical or "virtual") when interfaces aren't standardized.

    To learn more about this, read Jef Raskin's The Humane Interface.

    April 23, 2001

    More to read:
    The Storytelling Symposium
    "Humans have told stories since the cave, and there is a resurgence of interest in the art among today’s business leaders. What is new is the purposeful use of narrative to achieve a practical outcome. In this seminar, four leading thinkers on knowledge management explain why storytelling will become a key ingredient in managing communications, education, training and innovation in the 21st Century."

    George Brett's notes on the symposium.

    Transforming Information Retrieval on the Web: a new direction
    Social networks, auto-taxonomies, web-as-brain

    There Must Be A Better Way...
David Gelernter, new UI metaphors, coping with information overload

    April 22, 2001

    To read:
    History of the Global Brain
    GlobalBrain-L FAQs (benefits | danger | future)
    A Curriculum for Cybernetics and Systems Theory

    Leader of the Packie. I received many refutations of the packie --> Paki claim, some calling me racist. I'm aware that "Paki" is a derogatory term for a Pakistani, but, well, if it were the appropriate etymology, I wouldn't let PC get in the way of discussion. As it turns out, enough people convincingly presented the "package store" lineage that it's most likely the root. Though, isn't it interesting (if perhaps a bit upsetting) that some think it comes from "Paki"?

Dot-to-dot. "Connected: Life in the Wireless Age" is James Gleick's take on the present-and-future state of wireless networking and its implications for our lives. Runs the gamut from GPS in cars to personal digital assistants to Bluetooth-vs-802.11b and more. Most compelling to me is the discussion of social knowledge:

    [Bernardo Huberman's] research consistently finds informal communities making better decisions than any of their members—knowing more and thinking better than experts. “We now know that society can work better than any individual,” he says. “There is this notion of a collective mind, a social mind, and today the Internet allows us to tap that.” We are distributing intelligence. We are creating social organisms that carry out continuous computation.

This is all reminiscent of Global Brain ideas, such as the Principia Cybernetica Web. The PC folks have a GlobalBrain mailing list that requires group acceptance to join--however, the archives are posted on the Web for all to see (and, not surprisingly, contain a pointer to the Gleick article!).

    Some research into Huberman turned up these links:
    Xerox PARC Internet Ecologies (where he used to work)
    A paper on cooperative problem solving
    An overview page on cooperative problem solving

    Unfortunately, Huberman doesn't seem to have a page himself--links to one on Xerox PARC turn up 404, and I can't find anything about "Sand Hill Labs" at HP.com.

    April 20, 2001

    News from all over. This post is all about email I've received on the various topics discussed over the last week or so.

    First, Regionalisms:
I love that my readers dig regionalisms as much as I do. It's spurred more email than any topic I've written about in months. Two folks wrote in to say that liquor/convenience/corner stores are called "Offies" in the UK -- short for "off-license", meaning licensed to take alcohol off the premises. In Ohio and Connecticut they're called "carry-outs." I received word that Massachusetts stores might be called "Packies" not because it's short for "package store," but because it's short for "Pakistani," the supposed ethnicity of many proprietors of such stores. And Michael Boyle wrote in with this:

    Just reading your little entry about regionalisms on peterme.com and I thought you might be interested to hear that in Quebec everyone, English and French, uses the Quebecois (it's not used in France) term for corner store (where you can buy beer and wine, but not liquor) "depanneur" which we shorten to "dep". So if I'm going over to a friend's house on a Friday night I'll ask, "hey, should I stop by the dep on my way?"

    The really funny thing about this is that the word in French (again, it's mostly used in Quebec) is odd even in its native tongue - it's derived from the phrase "en panne" which means "broken down". If you're driving in France and you say, "my car is en panne" it means you're on the side of the road with the hood up waiting for help. So, literally, a depanneur is the place where you de-panne - or fix yourself up.

    Which makes sense considering that's where you buy your beer.

    Not as interesting, but for completeness' sake, we buy liquor (and better quality wine) at the SAQ, which is just the acronym for Societe des Alcools de Quebec - or the liquor commission. It's a government-run retail outlet. It's pronounced, in English, "sack".

    Then, "What We Do"...
    Matt added to the discussion of "What We Do", my meandering post from April 15:

    I think the other side to your argument is that the Web *attracted* a lot of people who were generalists, 'connectors' etc., in the first instance, but the commercial structures of practice that were built have not encouraged the growth of more people like that - a lot of kids who have been laid off don't know a hell of a lot more than how to use flash5.

    Matt also posted his thought-provoking presentation called Survivability in Networks, that addresses parallels in network architecture and design team formation. Matt is big on generalists. I prefer to think of us as "professional dilettantes."

    Jason Rothstein also contributed to the "What We Do" dialogue:

I read your blog-blurb on "what we do" with great interest. I've actually been thinking about this topic a lot lately, especially with regard to what companies like the one I work for can be described as doing.

It's a mess. Sometimes I wish that English had the capacity of Spanish or French to simply add a suffix onto a noun and have a business description. Perhaps next to the lavanderia or patisserie we could simply exist as a weberia, or an interneterie.

    Obviously, that's a little silly, but it begs the question: Why hasn't a single word yet evolved to describe what web design/programming/implementation shops do?

    Unsurprisingly, I blame the marketers. And it relates very closely to the "turf war" you describe in your piece. It wasn't useful to distinguish such companies by name (i.e. Harvey's) or location (North Avenue), or, for lamentable reasons, reputation. So a series of increasingly convoluted words were implemented to describe what should be simple processes in a complicated way.

    Imagine, if you will, that the word "bakery" did not exist in English. I might discuss with my friend where I like to get my Bread. Perhaps I favor D'Amatos, because they are a Risen Dough Solutions Provider, whereas my friend thinks that the offerings at Red Rooster are better, because they are a Flour, Yeast, and Water Integrator.

Again, a little silly, but I think it illustrates the fundamental absurdity of the way we, as a profession, attempt to describe ourselves.

    I wish I had an insightful conclusion to offer, or the great idea for the one word that would allow us to get around all of this nonsense, but I do not. I fear we will all be working under this linguistic obfuscation for some time.

    And finally, Group Forming Networks...
    Mitsu wrote in about Group Forming Networks, discussed April 12:

    I've been marvelling at the power of this phenomenon also --- something reproduced in my weblog experience in a way that I haven't seen before --- weblogs are a much more effective way to form these networks than bulletin boards or conferencing systems were in the past --- I think because of the architecture of weblogs, which involves linking (forming new associations) but doesn't encourage flame wars or as much noise (you only read the weblogs you find interesting, and you kind of ignore the rest)...

    This rule of 150 is interesting... I have also noticed a "rule of 10" and a "rule of 30", however. The rule of 30 has to do with the number of people I think you can really keep track of at a time in a social group. For example, I lived in a coop in college with 35 people, and everybody knew everybody else, and we all felt pretty close to each other. My friends who lived in coops at other colleges of 60 people or more, however, said that those communities tended to break into factions, ironically much smaller than the maximum, usually only around 5 to 10 people. In the dorms, some people only knew their immediate roommates. So part of the reason I lived in the coop I did was so I could know and hang out with more, not fewer, people.

    The rule of 10 seems to be about the optimal size for a tightly focused development team, however. 30 is too large. If your project needs more than 10 it seems to me you need to break it into pieces that can be handled by teams of 10 or less.

    Which brings me to thinking about organizational group size. It seems that if all of these rules hold, organizational networks ought to form naturally into these sorts of subgroupings. Collaborative projects ought to have subgroupings or teams of 10 or less (including, if there is a "management" team, 10 or less there) --- maybe people can belong to more than one team at a time (a team lead could belong to a management team as well as the team they lead). Then there might be informal "pools" of 30 or less people, maybe people that they regularly collaborate with. Then larger groupings of five or more of these things into networks of 150. Scaling beyond that I think requires carefully building interfaces to avoid too much scattering of attention.
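
(An aside from me: Mitsu's nested sizes are easy to play with. The numbers 10, 30, and 150 are his; the back-of-the-envelope partitioning below is just my illustration, not anything he proposed.)

```python
# Rough arithmetic on Mitsu's rule-of-thumb group sizes (10 / 30 / 150).
# The partitioning scheme is illustrative only.
import math

def nest(n, team=10, pool=30, network=150):
    """How many teams, pools, and networks n people would break into."""
    return (math.ceil(n / team), math.ceil(n / pool), math.ceil(n / network))

print(nest(150))   # (15, 5, 1): 15 teams, 5 pools, 1 network
print(nest(450))   # (45, 15, 3): past one network, interfaces are needed
```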

    The more you folks write, the less I have to think!

    April 17, 2001

    I <heart> regionalisms. I found out last night that what most people call a 'liquor store' or a 'convenience store,' Michiganders call a 'party store.' Like, 7-11 is a 'party store.'

    This just in: Meg informs me that in Massachusetts, liquor stores are called "packies" short for "package store."

    April 15, 2001

    Where we're headed with "experience design," "user experience," "information architecture," whatever. Over on Elegant Hack, Christina's been writing about the relationship between IA and usability. Coincidentally, I've been mulling the "What I Do" question actively for the last couple weeks, pretty much since CHI2001. I've written lots of notes, and talked to lots of people. I'm having trouble developing cogent thoughts about it all, but I figure there's no harm in sharing the muddle in my head.

    Let me start by saying that this problem, this trying to come to grips with digital product design and all its ins and outs, the various parties necessary to make it work, the infinite viewpoints for getting it done--it's big. The more I think about it, the harder time I have getting my arms around it.

    In the past year, I've attended three professional conferences: AIGA's Advance for Design, ASIS&T's Information Architecture Summit, and CHI 2001. At each of them, I've heard the same question asked--"What is it that we do?" At each conference, the participants have the same "Them" -- marketing. And at each event, I've witnessed the same head-scratching over turf wars in the design process -- for instance, at CHI, someone said that it sounded like "information architecture" was the same stuff that the folks at CHI had been talking about for 20 years, and calling it "interaction design."

    So, I've been thinking about this.

Here's what I believe happened. For the last 100 or so years, a series of design disciplines have developed into fairly well-defined professions. You've got industrial designers, architects, graphic designers, interaction designers, information scientists, etc. etc. They've all been toiling away at their various problems, understandably ignorant of the work of other designers, because the products these designers made were all so distinct. An automobile dashboard is not a building is not a subway map is not a software interface is not a search-and-retrieval system for massive amounts of data.

    And then the Web happened.
(Cue chorus of angels, and light streaming down from above.)

And all these different design professions ran headlong toward this new medium, because they saw the value they could bring to it. And they butted heads with a resounding THUMP, and, staggering back, wondered who these *other* designers were, and why they were all jockeying to do similar work.

And it became apparent that all these seemingly different designers were actually more alike than they'd realized. If you abstracted up a level, they all did pretty much the same thing: attempt to solve complex problems through a process of design.

The gut reaction on seeing these foreign specimens claiming similar ground was to engage in a turf war. To claim their way was the best way, and who are these others trying to take my work away from me?

    This is an unfortunate response because what's clear to me, more than anything else, is that we need to draw from backgrounds as varied as can be. The Web has created a space for a uniquely synthetic type of design, and a need for processes to address this.

    It's also calling for generalists to lead project teams. The analogy that's most obvious is the film director. Directors typically come from a particular specialty--writing, editing, cinematography, acting, etc.--but must be able to work with a vast variety of craftspeople, from set designers to sound effects specialists to costume designers, etc. etc. The role of the director, more than anything else, is to provide a guiding coherent vision for the film being made. This does not (necessarily) mean being a micromanaging ogre, telling people how to do their work. The best directors know to give their team autonomy in their individual areas.

From what I understand, most architects work in this fashion, too. They have a vision for a building, but the actual work is done by draftspeople, structural engineers, plumbing consultants, interior designers, etc. etc. The architect's job is to hold it all together.

So, if nothing else, there seems to be a role worth distinguishing in Web design, an Experience Architect or Experience Director, an acknowledged creative lead for web projects, a single person responsible for holding the vision of the final product. This person should probably not be the Project/Product Manager (in the same way that a film's director is often not its producer) -- dealing with the administrivia of a project and dealing with its vision are two very separate things.

Um. This thought train has run out of steam. Good night.

    Yeah. That's what I meant. Nick, The Smartest Person On The Web (Oh, wait, maybe that's Steve. Anyway...), wrote in about the Semantic Web:

    Your point, sirrah, is modern linguistics: semantics is all very well, but when is someone going to come up with an adequate take on the Pragmatic Web?

    http://www.linguistlist.org/issues/10/10-1596.html

    Otherwise, it's SGML repeated as farce.

    (langue and parole, langue *and* parole.)

    Nick

    I don't know what languid parolees have to do with the Web, but it might be worth looking into.

    April 12, 2001

Beyond the individual. I've recently been reading about the power of Group-Forming Networks. "Networks that support the construction of communicating groups create value that scales exponentially with network size," says David Reed, creator of the model, in his essay "That Sneaky Exponential--Beyond Metcalfe's Law to the Power of Community Building." This stuff has interesting relationships to the Rule of 150, which Malcolm Gladwell discusses in The Tipping Point -- up to 150, there is a "community memory" that works such that either you know how to find something out/do something, or you know the person who does. Beyond 150, you no longer know whom to turn to, and need to go through intermediaries, and a communication breakdown occurs. Research in cognition supports this, primarily Robin Dunbar's work written up in Grooming, Gossip, and The Evolution of Language.
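
To see why Reed calls the exponential "sneaky," compare the three scaling laws he contrasts: broadcast value grows with audience size (Sarnoff), pairwise-connection value with the square of the membership (Metcalfe), and group-forming value with the number of possible subgroups, 2^N. A quick sketch of mine, with the Dunbar-ish numbers thrown in just for scale:

```python
# Compare the three network-value scaling laws Reed discusses.
# The absolute numbers are unitless; only the growth rates matter.
def sarnoff(n):  return n                # broadcast: value ~ audience size
def metcalfe(n): return n * n            # pairwise links: value ~ n^2
def reed(n):     return 2 ** n - n - 1   # nontrivial subgroups of n members

for n in (10, 30, 150):
    print(f"n={n:>3}  sarnoff={sarnoff(n)}  metcalfe={metcalfe(n)}  reed={reed(n)}")

# At n=150 the group-forming term is on the order of 10^45 -- which is
# Reed's point: once groups can form, that term swamps everything else.
```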

    Dave has a point. Winer called me out on dissing the Semantic Web, particularly dismissing Engelbart and TBL. And he's right... Whatever their intentions, those guys invented products that have enormously benefitted the world. I don't know exactly what *my* point is, except to say that I guess I just hope that pointy-headed academics would spend more time addressing the real needs and concerns of people, and less time hand-waving about how Things Should Be.

    Speaking of Daves... I had a delightful lunch with David "JOHO" Weinberger, perhaps best known as a co-author of The Cluetrain Manifesto. We talked about all manner of things, from personalization, to The Semantic Web, to riding the Clue-Gravy-Train for all it's worth, to how he and I can travel and, thanks to the Web/email, find folks we "know" in almost any city we're in, to how communities subvert designers' intent (he'd never heard of the riotous Family Circus reviews on Amazon), etc.

    April 11, 2001

So many conferences, so little time. Via the Red Rock Eater I've learned of "Social Dimensions of Engineering Design," the Mudd Design Workshop III, taking place May 17-19 at Harvey Mudd College in Claremont, CA. With topics like "Social Issues and Themes in Design," "Collaboration in Design," and "Design In and For A Complex World," it all sounds very interesting.

    The Semantic Web--Who Cares? The May issue of Scientific American features an article outlining the Semantic Web, Tim Berners-Lee's latest drum to beat (see the W3C papers on it here.) The article explains:

    The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users.

    This is one of those Extremely Noble and massively complex endeavors wherein academics, removed from the real world, attempt to solve a problem nobody has. (I fear MIT's Oxygen project will suffer a similar fate.) The only reason the Semantic Web gets any press is that Tim Berners-Lee, the "Father of the Web," is working on it.

History has shown us that technology inventors often haven't the faintest clue about their inventions' actual use. Did folks line up to hear what Philo T. Farnsworth had to say about television?

    Hypertext creators tend to have Extremely Noble intents for their technology. Douglas Engelbart was obsessed with "augmenting" intellect, and the first development of the WWW was definitely along those lines (for academics to compare notes). And the Semantic Web is no different. From the SciAm article:

    If properly designed, the Semantic Web can assist the evolution of human knowledge as a whole.

    Problem being, no one, apart from some self-appointed Bringers Of Fire, wants their intellect augmented, nor really cares about the "evolution of human knowledge." The Web, this extremely exciting hypertext platform, serves other human needs and desires--primarily to communicate, also for sexual release (porn!), and for finding information of personal relevance (what's the weather where I'm traveling? how can I do my job better? where's my favorite band playing?).
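
(For the curious: the "structure" the SciAm piece describes boils down to machine-readable statements about resources that agents can query and chain together. Here's a crude sketch of the idea -- my own toy stand-in with invented URLs and vocabulary, not actual RDF syntax or anyone's real API.)

```python
# Hypothetical, much-simplified stand-in for Semantic Web-style metadata:
# (subject, predicate, object) statements that a software agent could chain.
triples = [
    ("http://example.org/peterme", "worksIn",    "information architecture"),
    ("http://example.org/peterme", "livesIn",    "San Francisco"),
    ("http://example.org/CHI2001", "aboutTopic", "information architecture"),
    ("http://example.org/CHI2001", "heldIn",     "Seattle"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "Find events about whatever peterme works in" -- the kind of chained lookup
# a roaming agent is supposed to carry out for you automatically.
topic = query(subject="http://example.org/peterme", predicate="worksIn")[0][2]
print(query(predicate="aboutTopic", obj=topic))
```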

    April 10, 2001

    My face will freeze like that. This pic captures how I feel much of the time. The loverly lady on the right is the inestimable Jennifer Kilian, with whom, after this shot was snapped, I proceeded to get into a delightful shouting argument about the various roles of designers.

    April 8, 2001

    "Blue Boy with Blue Dog." And other great works can be found at the Museum of Depressionist Art. Worth many chuckles.

    Greetings from... Providence! So now I'm in Providence, RI, home of Brown University, RISD, and, most importantly, Jen and Jeff. I'm staying here while attending and speaking at the Seybold Seminars in Boston.

    Research in... comics? Yes, some kooky kids at CMU experimented on 'the best way to put comic books in electronic form' and wrote about it in the paper "Comic Books: A Case Study for Redesigning Traditional Media and Assessing Entertainment Value." (PDF) It's actually an information-rich paper on measuring responses that aren't easily quantifiable.

    April 6, 2001

    If it's Friday... It must be SF. But for less than 24 hours. At which point I get on another plane and head to Boston for Seybold. If you live in Boston and want to play, let me know.

    Some Quick Notes on CHI 2001. This website began nearly three years ago with my notes from CHI 98. I'll probably write up something more fully, but in the meantime, stuff to chew on.

    Wireless social navigation. A bit back I mused on the notion of leaving notes via wireless devices for others to pick up. Well, someone's done it.

    GeoNotes allows users to mass-annotate physical locations with virtual (multimodal) 'notes', which are then pushed to or accessed by other users when they come into the vicinity of the location. It is based on location positioning technology. GeoNotes employs a number of social filtering techniques, which all rely on logging of usage rather than content.

    What's great is that they're using social navigation techniques to help users sort through the inevitable clutter of such a system.
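
Reading between the lines of that description, the core of such a system is pretty small: notes pinned to coordinates, a proximity query, and a popularity signal drawn from usage logs rather than note content. Here's my guess at the shape of it -- not GeoNotes' actual code or API:

```python
# Hypothetical sketch of a GeoNotes-style store -- not the project's real code.
# Notes are pinned to coordinates; nearby notes are returned to a user and
# ranked by how often others have read them (logging usage, not content).
import math

notes = []  # each note: {"text", "lat", "lon", "reads"}

def leave_note(text, lat, lon):
    notes.append({"text": text, "lat": lat, "lon": lon, "reads": 0})

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters -- fine at street scale."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)

def nearby(lat, lon, radius_m=50):
    """Notes within radius, most-read first -- the crude 'social filter'."""
    hits = [n for n in notes
            if distance_m(lat, lon, n["lat"], n["lon"]) <= radius_m]
    for n in hits:
        n["reads"] += 1
    return sorted(hits, key=lambda n: n["reads"], reverse=True)

leave_note("Best burrito in the Mission is two doors down.", 37.7599, -122.4148)
print(nearby(37.7601, -122.4150))
```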

Info Viz. Some kids at Microsoft are using the massive data repository that is USENET to inform the design of information visualization systems that help visitors understand community dynamics. It takes some clicking around to get, but there are many good ideas here. For the thoughts behind the tool, download "Visualization Components for Persistent Conversations," the paper they submitted to the conference. Folks interested in online communities would do well to read Marc Smith's home page.

    Go with the VisualFlow. Sony's VisualFlow was demo'ed. It's a super-nifty media browsing tool that uses Zooming User Interface models to allow people to quickly sift through their media collections (typically photos).

    April 4, 2001

    You know, it's hard to keep up with ~200 email messages a day while you're travelling. I'm at CHI 2001. I simply don't have time to write. Lord knows when I will. Lots of ideas, tho. Lots.

    April 1, 2001

    Snf.

    Memorable. Enjoyed Memento last night. A flick very much worth seeing. A treat for the mind, but not so cerebral that it's distancing. The narrative conceit (revealing the story back-to-front) works amazingly well--it puts you in Leonard's head. Great for post-viewing discussion. Though, I found myself quite disoriented for a spell following the show--when you spend 90 minutes (or so) consciously reversing the order of witnessed events, it takes a bit to get your mind out of reverse and back into drive.

    Nice work. XPLANE redesigned their site, to great success. Alongside their inestimable xBlog they've added the bBlog, focusing on business issues (like I need more links to follow). Niftiest is the full site map they include on the bottom of every page. I first saw this kind of thing at Peter van Dijck's Move To Colombia site. I think it's a great device. Peter was good enough to write about the map's success. I'd love to hear what kinda clicks it gets for the XPLANE folks.