All Posts in UX

March 08, 2014

You Don’t Have To Speak To Critique

This UX and design critique technique uses silence and post-its to make traditional design reviews more efficient and honest. By not speaking at all, everyone gets heard. So it’s basically like magic.

The inspiration for this critique method came from Leah Buley’s book The User Experience Team of One. She describes a “Black Hat Session,” in which a designer gets a bunch of stakeholders together, sets aside niceties for a moment, and has them write down everything they feel is wrong with a set of designs or flows. Since everything they do is pretty much pure gold, Google Ventures also uses a version of this technique in their design sprints.

Our product design team at Fullscreen has a variant of these exercises that adds a little bit of positivity to the “Black Hat” approach, timeboxes everything and leaves us with a visual record of what’s working and what isn’t with a set of designs. It’s a super-efficient way to distill feedback from a range of different voices into simple, actionable next steps.


Such silent. Much contemplative. So honesty.


  1. Round up your team: designers and product folks, engineers, and any other interested parties. Start by hanging work up on a wall. Sketches, printed mockups, whatever you’ve got. The more the better, especially if you’re working on distilling a bunch of concepts into one or two directions.

  2. The first ten minutes is a silent critique, so no one should be asking questions and designers shouldn’t be trying to explain their work (yet). Give everyone some post-its or colored stickers. One color for positive feedback, and another for critical feedback.

  3. Take five minutes for everyone to mark elements of the designs that they think work well. You might ask participants to write a little note about what they like on a post-it, or you might just have participants stick the positive color to elements without a note. (Just using stickers is quicker and gets more feedback, but writing a quick note gets a little more depth and also ensures that everyone’s thoughts get brought up.)

  4. Then take the next five minutes and have the participants mark up everything they don’t like about the designs.

  5. Now that you’ve got a visual record of what resonates and what doesn’t, it’s time to get the team talking. Start with each designer giving a quick overview and some context for each sketch or comp, and then go through the feedback. Every participant should have a chance to speak about what they marked up, both positive and critical.
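For teams that want something to keep the timeboxes honest, the rounds above can be sketched as a trivial countdown script. This is purely illustrative, a phone timer works just as well; the agenda labels and the 15-minute discussion slot are placeholders of mine, not part of the process as described:

```python
import time

# Agenda mirrors the rounds above. The 15-minute discussion slot is a
# placeholder, since the talking portion isn't strictly timeboxed.
AGENDA = [
    ("Silent round: mark what works", 5),
    ("Silent round: mark what doesn't", 5),
    ("Discussion: context from designers, then everyone speaks", 15),
]

def run_critique(agenda, seconds_per_minute=60):
    """Announce each round, wait out its timebox, then move on."""
    for label, minutes in agenda:
        print(f"{label} ({minutes} min)")
        time.sleep(minutes * seconds_per_minute)
        print("Time!")

# run_critique(AGENDA)  # a full 25-minute session
```

Passing a smaller `seconds_per_minute` gives a quick dry run before the real session.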


The session’s done, but you can still see what worked and what didn’t.


  1. Starting out with a silent critique levels the playing field. Not having participants speak at all until the final step normalizes the differences between loud and quiet voices, so everyone gets heard.

  2. Giving participants a non-verbal means of identifying weak spots helps overcome the tendency to be too damn nice in critiques. It’s easier to be honest when you’re writing a critical thought down or using a sticker to represent it.

  3. It’s a very economical way to get feedback from a group. Timeboxing the negative and positive feedback rounds keeps things efficient, and you end up with a punch list of points to review in the final step, which keeps everyone on track.

  4. The final outcome visually surfaces the things that work and the things that don’t. Once everyone’s marked up the work with stickers and post-its, you’ve got a heat map of the aspects of each design that resonated with the group.

February 03, 2014

Building A User Testing Lab, Part 3

I wrote a couple of posts about preparing our discount usability testing and user research lab at Fullscreen with software, hardware and a physical space. In this third and final post, we get to put it through its paces for the first time.

Faking It (since we don’t want to make it yet)

The team used Keynote to prototype two different takes on a feature we’re building into the Creator Platform. Keynote is basically awesome for early-stage interactive prototypes: not only is it super-quick to learn and use (especially if you start with something like Keynotopia’s UI stencils), but its relatively simple design features force a lower-fidelity approach than using Photoshop to create screens for a prototype. Bonus: it’s free!

It has limitations, though: for example, you can’t scroll within a screen. Since one of the concepts we wanted to test was heavily reliant on vertical scrolling, we had to (really inelegantly) fake it. This particular concept tested unanimously worse than the other one, which didn’t happen to require such overt trickery. And while I don’t attribute that to the scrolling fakery alone, I’m sure it didn’t help make the experience any easier to understand.


Using big blue arrows to “scroll” up and down a page: not even minimally real.

More faking: we used a static screenshot of browser chrome to frame the content of each prototype and make it seem minimally real. It worked so well that it only took a couple of minutes for someone to try the static browser “back” button while clicking through a prototype. A quick adjustment fixed it: a transparent hyperlink placed over the button, linking to the previous slide. It was a reminder that anything we present users with during testing is fair game, and the more we can anticipate, the more realistic the interactions we’ll be able to observe.

Capturing It

Since we were testing prototypes of a web app, there was no need for much of the laundry list of equipment that we’ll need to properly capture mobile user testing. Just a MacBook running Silverback to record a screencast and the user’s reactions. It worked pretty well for capturing the 45-minute sessions, although we sometimes had a tough time making out what the subjects were saying when we played them back. Next time I’ll definitely try an external mic, which should take care of that.

Once the sessions were wrapped up, we got the team together to watch them and talk through the results. This almost didn’t happen because I didn’t realize that you actually need to export recordings from Silverback, which can take a good amount of time for these 45-minute sessions. Watching low-res previews within Silverback worked in a pinch, but next time I’ll make sure to allow for a little bit of time to get those exports done before getting everyone in a room.

All in all, testing web app prototypes was pretty seamless. I’m looking forward to getting some users in to do some mobile testing next time.

January 17, 2014

Building A User Testing Lab, Part 2

In this follow-up to my first post about building a discount usability testing and user research lab at Fullscreen, I’ll talk about setting up a space for testing and getting the right equipment on a budget.

The Space

Though it’d be nice, a permanent room for our user lab just isn’t in the cards, so I needed to find a space to set up shop in and conduct interviews and sessions for a few hours at a time. It was easy to narrow down the candidate pool, since there are really only two private and distraction-free rooms in our building that aren’t also occupied by people who, you know, need to work in them.

We’ve used our conference room for user testing sessions in the past. It’s worked out fine, but it feels overwhelmingly large when there are only two or three people in it. We also have a little loungey game room with a couch and two comfy chairs, which is what we’re going to use. Since that room is more like a place where people would actually use the products we’ll be testing, on some level it’ll be easier for them to engage in authentic behaviors under admittedly artificial circumstances.


Even if you don’t have an Arduino-controlled kegerator at your house, it’s a pretty easy guess which room you’d feel more at home in.

The Equipment

OK, that was easy. I’ve basically got a living room to work with the users in, and the Silverback and Reflector apps installed on my laptop. Now it’s time to go shopping, keeping in mind a budget of $1000 max including devices.

Webcam: Silverback uses the built-in iSight by default, but to test mobile prototypes I’ll need a webcam to capture the user’s reactions. We had a 1080p Logitech C615 ($49) sitting around, but if I had to buy one I’d just go for this Microsoft LifeCam ($19), which is still 720p and more than adequate.

Mouse: Good to have one handy for users who aren’t used to trackpads. Since non-Mac users won’t be familiar with the $70 Magic Mouse anyway, a standard-issue Logitech USB mouse ($17) should do it.

Microphone: I’ve had a Blue Snowball ($59) for a couple of years. Could probably get away with using the laptop’s built-in microphone, but I don’t want to miss a thing.

Mobile devices: Here’s where things get a little expensive. I went for an entry-level iPod Touch ($229) to test iOS apps and demo mobile prototypes (using InVision, an app that makes it simple to string screens together into passably-real prototypes and get them onto devices). Way cheaper than a $600 unlocked iPhone, and no need to pay another $10 for a blank SIM card just so you can open mobile Safari.

The total cost for all this equipment, as well as the software to record sessions and stream from devices, would be just $410. Since I was way under budget, I also took a protip to heart from Pocket’s experience setting up a mobile testing lab. They note that the iPad mini is a perfect size for demoing iPhone prototypes built in Keynote, which is another amazingly quick and effective way to get early validation of rough concepts. So I got hold of one of those as well. All in, the total cost for this multi-device testing setup, including software, camera, microphone, mouse, iPod Touch and iPad mini, is $659.

I’m excited to put it all to use (tomorrow!) and report back on what went wrong and hopefully a lot of what went right.

January 10, 2014

Building A User Testing Lab, Part 1

This is the first of three posts about my experience building a discount usability testing and user research lab with the team at Fullscreen. I’ll talk about why we’re doing this and touch on the software that we’ll use to capture sessions.

Google Ventures-style product design sprints seem to be popping up all over the place these days. Design sprints have become an important piece of our process toolkit at Fullscreen, not just because I love me a good trendy methodology but really because they’ve helped us immensely with quick parallel concept development and rapid validation.

In a product design sprint, you bring ideas from concept to low-fi prototype in less than a week and then get them in front of people—preferably, real users of your product—to see how they hold up. So you need a place to work with the users during these sessions as well as a way to capture the sessions so the whole team can review and “score” them.

Now our relatively new office, despite its many lovely features including a human-size terrarium, doesn’t have one of those fancy interview chambers with a one-way mirror and HD simulcast capabilities to the adjacent observation room’s three flatscreen displays. What we do have, though, is our first design sprint of 2014 starting next week. And we need a way to conduct and capture those user sessions by Friday.

So we’re building a user testing lab that makes use of our flexible space and can be set up and taken down with extremely minimal effort, that can handle both desktop and mobile prototypes, and that costs less than $1000 including devices. I’m going to document this exercise for posterity starting with the software we’ve chosen to capture the sessions, then the hardware and the space itself and finally what will hopefully be a triumphant recounting of the first user sessions we run in it.

The Software

Our discount usability lab needs to record two things: the user’s interaction with the prototypes onscreen, and their face while they’re interacting. We want both captured in a single video file so it’s easy to review and discuss later.

A few years back I used an app called Silverback to record some user testing done while building a group messaging app called Volly. Silverback doesn’t seem to have changed altogether too much, but that’s a good thing. It’s designed for doing exactly what we need, which is recording two streams of video—a screencast and the user’s face so we can get the full brunt of their reactions and thought process. It’ll use your laptop’s built-in iSight by default but getting it to recognize a USB webcam is as easy as plugging it in. It even records hotspots whenever the user clicks so you can tell exactly what’s going on.


Note the awesome little hotspot where I clicked. Also the depth of that v-neck

I’ve also heard that ScreenFlow gets the job done despite having an old-timey website, and it also has built-in editing features which could be handy if you also wanted to use it for something else, like recording product demos or tutorials. ScreenFlow’s extra features have a $99 price tag, though; since Silverback is only $70, that’s what I went with.

We’re sorted for capturing desktop user sessions. But what about testing mobile apps and prototypes? My first thought was to try awkwardly clamping a USB document camera to the mobile device. But this post by one of the creators of Silverback points out how ridiculously simple it is to mirror any iOS device using the $13 Reflector app.


Silverback x Reflector. Since you’re wondering, the app is called Tuxedo Kittie

With Reflector, you can mirror whatever’s happening on any iOS device that supports AirPlay mirroring (devices with just plain old AirPlay need not apply; my old iPhone 4 was a no-go). Open Reflector, and your computer will appear as an AirPlay receiver to your phone. From there, it literally just works.

That is, for iOS. Looming large is the day when we find ourselves wanting to capture user testing on Android devices. There’s no doubt in my mind that this will be nowhere near as painless as Reflector makes it for iOS. But as someone wiser than myself once said, any testing is better than no testing at all.

September 21, 2013

Don’t Get Interviewed By Your Interviewees

Saw Steve Portigal speak at the LA UX Meetup this week. Lots of insights from the guy who literally wrote the book on user interviewing techniques as a critical part of the product development process. One thing that he spent a little bit of time on was how important it is to subtly but firmly maintain control as an interviewer when you’re doing customer development or user experience research.

You’re asking questions of customers or prospective users, and they naturally have questions of their own. But as soon as you answer “How do I see how many people tweeted about that link?” or “When will .pdf export be available?” with a straightforwardly helpful answer, your altruistic spirit risks turning a valuable opportunity to garner validated learnings into a really, really expensive technical support session or pitch meeting.

Instead, Steve suggested responding to questions like “When can you give me Feature X?” with questions of your own. Responses like “Where would you want to see Feature X?” and “Why is Feature X important to you?” keep the interview’s momentum in your hands and allow you to learn from the responses.

I’m definitely guilty of letting customer development interviews turn into impromptu roadmap presentations. Not only am I deep down inside just a tender-hearted people-pleaser who wants to help, but it’s hard to ignore an opportunity to show a customer how forward-thinking your team is. But if I’m doing customer development or user research, I can’t forget that my goal is to learn from the subject and build a better solution for the pain points that matter to them. Not to teach them how to use my product, or convince them that we’re building something great and that my ideas are the right ideas.