
A key challenge in delivering great user experiences

In my last post, I suggested that most of us work in markets where products and services have matured out of “technology” and “features” and into “experience,” and so design should be driving the conversation, because delivering on experience is what design does best. Instead, we find design hamstrung by organizational models that are still “features”-driven.

The more I’ve thought about it, the more I’ve realized that there is a potentially intractable issue “experience design” faces. When you study how people behave, and propose design interventions to support better experiences, you’re engaging in a holistic and continuous activity. Human experience is continuous. It flows seamlessly.

However, in order to deliver products and services to people, we must break up this continuous experience into discrete pieces that are achievable by teams. So, to use the example from the last post, the Shopping Experience becomes a series of features (Search, Browse, Product Page, Checkout, Gifting, etc.) Working in producible chunks inevitably means losing the holism that defines human experience, and the thing I struggle with is figuring out how to manage this liminal shift so that what we deliver doesn’t become defined by the features (and the teams dedicated to the features), and it maintains its more subtle, nuanced, integrated qualities.

How could we/should we reorganize development teams and processes to achieve this experiential holism?

  1. We have similar challenges at my software company.

    We have several flavors of digital agents that live out on an IT network doing various things to protect a company’s data and IT services: checking the network for secure configurations and hardening insecure ones, checking for shape-shifting vulnerabilities, watching for anomalies and suspicious changes, and so on. Each agent collects a constant stream of state information and radiates it out to the humans involved (some agents also act on automated or human-initiated triggers). Historically, we’ve “reported” those states in fragmented fashion, each flavor in isolation with no awareness of the others, and each agent lived in its own product silo, with new features considered only in the context of that single facet of information security. But a company’s information security team needs to understand these facets working together as a system with evolving, emergent behaviors.

    Our teams have always been set up for silos (including PM), but now that we’re moving toward visualizing whole-system states, and toward one flavor of agent responding to state information gathered by other flavors, we’re trying to solve exactly the problem you describe for ecommerce silos. How do we thread systems thinking across each siloed team?

    I don’t have answers, but some small things are helping.

    One is to make sure the teams share the same sense of meaning, the same conceptual domain ontology. There are industry taxonomies and even formal ontologies out there for information security, but I’m finding it incredibly useful to facilitate the various disciplines within our company in co-creating our collective domain ontology from the ground up. We are building a physical model on the wall to represent the key concepts, or “objects,” within our domain, the core attributes of each object, and the important relationships among the objects (capturing preferred terms and definitions along the way). What this physical domain-ontology map helps us do is break out of talking about product silos and feature sets, and talk instead about facilitating and instrumenting the key relationships we’ve collectively identified. The relationships *are* the flow. It’s also helping me make a case for letting our agents formally *recognize* key attributes of some of the objects in our domain, so that the agents know what they mean and can automate tasks around them, and so that our ops tools and visualizations are more meaningful for humans to act with. These fundamental objects, their core attributes, and their relationships are becoming something of a set of first principles of design to map back to, in order to see how once-isolated agents and decision-support visualizations fit into the overall flow of our domain. We haven’t restructured teams around it, but we are at least pointing to it and talking about it conceptually.
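    To make the idea concrete, here is a minimal sketch (in Python) of what such a domain-ontology map might look like as data. The object names, attributes, and relationships below are invented for illustration, not the commenter’s actual model; the point is just that once objects and relationships are explicit, “the relationships *are* the flow” becomes something you can query.

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch of a lightweight domain-ontology map:
    # objects with core attributes, plus named relationships between them.
    # All names here are illustrative placeholders.

    @dataclass(frozen=True)
    class DomainObject:
        name: str
        attributes: tuple  # core attributes agents should formally recognize

    @dataclass(frozen=True)
    class Relationship:
        subject: str
        predicate: str
        obj: str

    ontology = {
        "objects": [
            DomainObject("Asset", ("owner", "criticality")),
            DomainObject("Agent", ("flavor", "last_heartbeat")),
            DomainObject("Vulnerability", ("cve_id", "severity")),
        ],
        "relationships": [
            Relationship("Agent", "monitors", "Asset"),
            Relationship("Vulnerability", "affects", "Asset"),
            Relationship("Agent", "detects", "Vulnerability"),
        ],
    }

    def relationships_for(obj_name, ontology):
        """List every relationship touching an object: the 'flow' around it."""
        return [r for r in ontology["relationships"]
                if obj_name in (r.subject, r.obj)]
    ```

    A team debating a feature for one agent flavor could then ask which relationships that feature instruments, rather than which silo it belongs to.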

    I also believe that if we take seriously that what we are doing, regardless of industry, is designing elements of dynamical systems (pervasive, ambiguous, cross-channel in the Resmini/Rosati sense, involving context in the Andrew Hinton sense, and dynamical in the embodied-cognition sense), there is an important role for *simulations* in order to sample and study the resulting dynamics (another way to think about flow). I picture a team (IA/UX folks along with data-infrastructure folks) specifying the key elements and state-change rules for the system (informed by the ontology’s objects and relationships); then individual design teams can, via simulation, understand how their isolated “feature set” impacts the system’s behavior and evolution.

    Such system-simulation tools may be rudimentary to start, but I believe simulation should (and will) become a vital tool in the IA/UX toolbox for facilitating and instrumenting flow in the systems sense.
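    Even a toy version of this idea is useful. The sketch below, assuming invented state names and an arbitrary drift probability, shows the shape of such a simulation: state-change rules are specified once, one agent flavor (a “hardener”) reacts to state gathered by another (a “scanner”), and teams can run the whole system to watch emergent behavior.

    ```python
    import random

    # Toy simulation sketch. States, rules, and the 10% drift rate are
    # all illustrative assumptions, not anyone's real security model.

    def step(state, rng):
        """One tick: the scanner flags misconfigured hosts, the hardener
        fixes flagged hosts, and hardened hosts occasionally drift back."""
        new_state = dict(state)
        for host, status in state.items():
            if status == "misconfigured":
                new_state[host] = "flagged"       # scanner detects drift
            elif status == "flagged":
                new_state[host] = "hardened"      # hardener reacts to scanner
            elif status == "hardened" and rng.random() < 0.1:
                new_state[host] = "misconfigured" # configuration drift
        return new_state

    def simulate(n_hosts=20, ticks=50, seed=0):
        """Run the system forward and return the final per-host states."""
        rng = random.Random(seed)
        state = {f"host{i}": "misconfigured" for i in range(n_hosts)}
        for _ in range(ticks):
            state = step(state, rng)
        return state
    ```

    A design team could tweak one rule (their “feature”) and rerun the simulation to see how the change propagates through the system, which is exactly the cross-silo feedback the comment is arguing for.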

  2. Oh you know, my usual comment: a mix of praise and critique 🙂

    I agree with your conclusions, but not necessarily your reasoning. I think you’re spot on when you point out that the industry has remained in the feature-driven state of technology-centric design, and this is a less than desirable position now that we’re (ostensibly) designing experiences.

    The appeal to holism, however, on the grounds that “Human experience is continuous. It flows seamlessly.” seems a little shaky to me. I’m not so sure experience is continuous or seamless. I think our experience is guided by focal points and areas of perceptual/cognitive concentration throughout the day, which constantly shift according to attention. There are definitely seams where continuity is broken: attention is lost, things break, objects become obtrusive (cue the whole phenomenology bit). Even outside the negative language of things breaking, becoming lost, and obtruding, there is a foreground and background to perceptual/cognitive attention: certain things are in focus at certain times.

    So while it might be a bit nit-picky, I would say there are seams in experience, but humans are just really good at adapting to those seams. It takes a complete rupture in experience for us to really remember it; otherwise, we simply adapt and move on.

    Back to the feature/tech-centric trap: I think (and this is admittedly half-baked) I would support seam-centricity over holism. That is, focus on the seams and design for adaptability and coping. Instead of features that only support an optimized, idealistic version of user/business goal attainment, what if we supported creative misuse and adaptive coping? I’m actually working on a new talk that gets at some of this stuff, so looking forward to seeing what others think 🙂

  3. I’m a latecomer, but I resonate with what Marsha and Thomas are saying up there. Experience isn’t so much seamless as integrated. Good seams are just invariant structures that “fit” and make a kind of embodied sense, so perceivers aren’t so much conscious of them explicitly as much as perceiving and acting on them tacitly. Of course, for people doing the design work, these structures have to be considered explicitly, but with the aim of making them feel “natural” (i.e. tacitly perceived & understood) for end-users.
    To me, a big part of what IA does is establish such invariants with language — which is a significant part of the human environment. Do it well, and people don’t have to think about it much, but the structures are definitely there.
    I bring up this take on IA because I’ve used this approach for establishing IA strategies where the architecture is the map everyone is working “in” even before anything is fully made. If you have a half dozen agile teams cranking on the atomized stories, what context are they working in, and do they understand it? Are stories being prioritized based on the needs of the team in the moment, or based on when and how they’re needed for the whole environment?
    So, an effective way to do this is to create environmental structures that everyone is working in together, so the context of the atomized bits isn’t lost. Some of that is physical — keeping people in cross-disciplinary, co-located places, and creating lots of opportunities for cross-team face-to-face discussion and sharing.
    But the real kicker is making sure the conceptual model of the whole is clearly articulated — even at a very high, simple level — and represented where people are always seeing it and being reminded of it. Not a convoluted diagram that’s hard to grasp, but simple models that reinforce the to-be state of the whole environment, plus supplementary models that teach principles behind what is being made. The details can change, but those over-arching principles and structures should persist.
    Also, ideally, the people making the stuff have at least one person on the team who was part of the generative research, observing users (ethnographically, hopefully) and getting under their skin, so they have a full experiential feel for the behaviors and needs of those users.
