UX Writing

I’m a seasoned producer and creative writer with a deep-rooted passion for storytelling and immersive experiences.

From crafting audio dramas to exploring spatial journalism in XR, and from designing medical simulations to working in film, theatre, comic books, and video games, I’ve brought narrative structure to projects for a wide range of clients.

I’m committed to using these skills to not only meet but exceed the storytelling expectations of our clients.

Conversation Design
See independent project below: Attic Escape

Interactive Spatial Storytelling
History All Around Us

Narrative Design
Shadow Health

Video Game Writing
Publizenego

Attic Escape

Welcome to my independent conversational design project! The Attic Escape bot is an entertainment bot. I built an escape room in my attic and turned it into a 360 tour and Twine game. The conversational design element comes in with the implementation of a voice assistant that guides the user as they play. I used Voiceflow to design a prototype for two separate users, then combined those two user experiences into one prototype for my final design. Please follow my design process below.

Password needed to play.

Reach out if you’d like to play the game.

Once you have the magic word, enter here.

About the Product

Attic Escape is a virtual escape room that you can play with a bot as your guide. If you have trouble navigating the room, you can ask your guide for help and it will answer your question without giving away any answers. If you want a clue, you can ask your voice assistant for help in solving a puzzle. The voice assistant will respond to your request based on the command you give it and the amount of help you need, and it will joke with you and encourage you as you play.

 

1. Empathize: Recorded observations of two users

2. Define: Reviewed user research and defined problem space

3. Ideate: Make a choice-oriented voice bot?

4. Prototype: Mid-fidelity wireframes and Voiceflow prototype link

5. Test: Observe same two participants with voice bot and listen to feedback

6. Define: Users have trouble accessing information that was mapped out in a different act of the game than their current position

7. Ideate: Users need to be able to state their intent. The bot can’t be choice-based only.

8. Prototype: High-fidelity prototype with listening capabilities

9. Test: Observe same users with new design



USER A

Wireframe

 


USER B

Wireframe

 

 

Design Process

Observations, Prototypes, & Tests

  1. After I built the physical escape room environment, I created the 360 tour and Twine game as a prototype environment for this product. I began the voice assistant design by empathizing with users who would navigate the Twine game in order to play in this 360 virtual escape room. I did this by observing two users play through the game. I observed each user separately as they played through the full game, but varied the observation method during each session by splitting the observation into time with and without my presence. For the first 5 minutes I left the user alone to play without my help and recorded their screen and audio to pick up their comments. They were instructed to think out loud. For the next 5 minutes I was present and answered any questions that the user had as they played through the game. This was also recorded. For the last 5 minutes of the recording, I interjected with my own directions to help them along. After that 15-minute recording, I asked the user if they wanted to continue playing the game. Both participants wanted to continue. For the rest of the observation I took notes and answered their questions until they completed the game.

  2. With this user research in hand, I moved into the next phase of my design process and defined the problem space. Each user could benefit from a guide that could encourage them, give them clues for puzzles, and help with navigation issues as they played the game.

  3. In the ideation phase I thought about creating an Alexa skill that could play the game with the user as a guide, or a bot built into the game to help the user. I considered different systems such as Google Assistant and Alexa, different interface options, and whether voice or typing was the best fit for this game and the user. After reviewing the user research and the tools available through Voiceflow, I decided to first prototype a text bot based on CHOICE blocks.

  4. I used Voiceflow to prototype a bot that would respond based on user choices. This was a mid-fidelity prototype that was quick and easy to adjust in the Voiceflow system in order to test and retest throughout the mapping process. I used the 15-minute recordings and my notes from user research to customize a bot for each user that I observed. The conversation flow was mapped to follow the thought process of each user based on the recordings and notes from their first time playing the game.

  5. After each map was designed and the prototypes were ready to test, I recruited the same users to play again, but this time they would play with the voice assistant prototype link from Voiceflow. I recorded the first 15 minutes of their gameplay session in the same way that I had originally observed them: leaving them alone for 5 minutes, coming back to answer their questions for the next 5 minutes, and directing them for the last 5 minutes of the recording.

  6. From the test, I found that neither participant used the voice assistant as much as I expected. When I came back to the room, they asked me questions more than they asked their voice assistant. I took feedback from each user on the voice assistant and learned what made the assistant difficult for them to use.

  7. With this feedback, I went back to the ideate stage to come up with solutions. Users followed a path the second time playing that was similar to, but not the same as, their first. Though the voice assistant helped them remember how to navigate the environment at the beginning of the game, I had only mapped out navigation instructions toward the beginning of the game, so when a user needed a reminder of how to navigate later in the session they were unable to choose a command for what they wanted to do. I saw that mapping out choices was helpful for guiding the user through the story, but it was not helpful when the user wanted to access foundational information that I thought they would not need later in the game. So, the new problem was defined: I needed to add a way for the user to input a command that was not listed as a choice.

  8. With this new problem defined, I adjusted the prototype by adding intents that stand alone without direct connections, so that the bot would listen for utterances and connect the user to helpful information (see the sketch after this list).

  9. I showed the final design to the users for feedback. 
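
To make steps 4 and 8 concrete, here is a minimal sketch in plain Python (not Voiceflow, which models all of this visually) of the difference between a choice-only bot and one with standalone intents that listen for free-form utterances. The choice texts, keywords, and responses are hypothetical stand-ins, not the actual copy from my prototypes.

```python
# Hypothetical illustration only: Voiceflow builds these flows visually, so the
# choice texts, keywords, and responses below are invented stand-ins for clarity.

CHOICES = {
    # Mid-fi prototype (step 4): the bot can only answer options it explicitly offers.
    "zoom in": "Click the image to zoom in, then drag to look around.",
    "i don't see a box": "Use the arrows to move back and forth through the room.",
}

INTENTS = {
    # Revised prototype (step 8): standalone intents listen for free-form utterances.
    "navigation": (("move", "arrow", "stuck", "lost"),
                   "Use the arrows to move, and look down to find the pink arrow."),
    "clue": (("clue", "hint", "letter", "puzzle"),
             "Zoom in on the shelves to find the next letter of the code."),
}


def choice_only_bot(selection: str) -> str:
    """Choice-based behaviour: anything outside the listed choices goes unanswered."""
    return CHOICES.get(selection.lower(), "Please pick one of the listed choices.")


def intent_bot(utterance: str) -> str:
    """Intent-based behaviour: keyword matching connects any utterance to help."""
    text = utterance.lower()
    for _, (keywords, response) in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return response
    return choice_only_bot(utterance)  # fall back to the mapped choices


if __name__ == "__main__":
    print(choice_only_bot("How do I move around?"))  # not a listed choice, so no real help
    print(intent_bot("How do I move around?"))       # matched by the navigation intent
```

The point of the contrast is the fallback order: the tailored choices still exist, but the user is no longer limited to them.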

The problem.

During the empathizing stage, I interacted with each user after the first 5 minutes, during which the user played the escape game by themselves. I recorded our interaction for 10 minutes. In total, I had 15 minutes of recorded interaction between the user, the escape game, and myself for each of the two users. In the ideation stage, I listened to and watched each recording. Each user was told to think out loud, so their thoughts and actions were both recorded through screen capture software.

Sample Dialogue

I prototyped a sample dialogue as I listened to the recordings. Some of the sample dialogue was directly copied from my responses to the user when they asked me questions, and some dialogue was created based on the user’s comments as they thought out loud. I tailored each dialogue map specifically to its user.

Tailored Mid-fi Prototypes

This sample dialogue was put to the test when I asked the same users to play again, this time with the Voiceflow prototype in place for each user to access when they wanted help with the game. Several problems surfaced during this test. My observations are below.

 
Screenshot from user A’s recorded user test.


User A.


Observations taken from User A.

  • The user remembers what he learned from the last session and relies on that memory without accessing the voice assistant for the first minute and a half

  • When the user gets stuck at around a minute and a half, he accesses the voice assistant for the first time, and the assistant opens with a welcome and tailored navigation hints based on this user’s first session

  • The user audibly says “thank you” to the bot, then clicks the bot’s utterance choice “zoom in.” The bot speaks with instructions for how to zoom in. The user follows the assistant’s directions and correctly zooms in.  This is a huge improvement over the user’s first session, where he struggled to remember how to zoom in throughout the game.

  • When the user is in a good flow with the game, he forgets that he is zoomed in.  When he is zoomed in, he misses out on the Twine story at the top of the page.

  • At 10 minutes and 50 seconds, the user gets stuck and remembers that he can exit full-screen to see the Twine game which will give him an idea of how to continue

  • At 11 minutes the user starts to click through the Twine story text

  • The user remembers the solution to the puzzle from his last session, but still wants to find all the letters himself. He is missing an “R” and an “I” and zooms in to continue to search

  • At 12 minutes, I commented to the user that he does not need to find all of the letters if he already knows the solution to the code and that he can solve the puzzle.

  • At about 12 and a half minutes, the user enters the solution to the puzzle into the Twine game

  • At 13 minutes, the user has entered the second level of the game

  • The user misses a step in the game and continues to click through the Twine game story without playing the game in the 360 tour as he was doing in level 1.  At this point I know the user is lost and I remind him that he can use the voice assistant to help him when he’s stuck.

  • At 14:47 the user clicks on the utterance choice “I don’t see a box,” then verbalizes that he doesn’t see a box.  The bot speaks to remind him to use arrows to move back and forth.  In the last session, the user had trouble finding the box in level 1, and this response would have been helpful if he were in the same place in the game.  However, at this moment the user is at the beginning of level two, where there aren’t any arrows.  Instead, he is meant to click on a disc to resume finding clues in level two. Once he clicks on that disc, he will see arrows once again.

  • The user searches for arrows that are not there because the assistant gave information that was irrelevant to his current location in the game.  To repair the conversation, I interjected at around 14 minutes and 57 seconds to let the user know that he needed to click on the disc first.  Then he would see the arrows he was looking for.

Feedback from User A.

  • Details are hard to see in photos

  • Hidden puzzle pieces are harder to see in a picture than they would be in person.  Some hidden puzzle pieces are impossible to make out as different from the surface they are set on.  In person this hidden puzzle piece works, but without tactile interaction this hidden piece does not work where it is placed.

 
Screenshot from user B’s recorded user test.


User B.


Observations taken from User B.

  • The user starts out by reading the Twine story.

  • At 40 seconds in, the user verbalizes that she doesn’t see a box.  Then she accesses the voice assistant. The assistant speaks to give her an introduction with navigation directions

  • After the assistant stops speaking, the user clicks on highlighted text in the Twine game to progress the narrative of the game

  • At 1:37 the user interacts with the 360 game for the first time by moving her gaze down toward the floor. This reveals a pink arrow, and the user verbally responds to the AHA! moment 

  • She finds her “first clue,” as she states, around 2:07.  She clicks on the green clue, but nothing happens. This is an error in the game; knowing it was there, I had mapped out a response in the sample dialogue that says something about the game not being perfect and what to do if she finds an error.  But the user does not access the voice assistant for help at this time.  Instead she assumes that she is experiencing a user error.

  • At 2:45 the user is still trying to figure out why she can’t open “the first clue.”  The user thinks that this is “the first clue” because she followed the first arrow she saw, and that arrow pointed her in a direction that led her to seeing this specific green clue.  

  • At 3:14 the user defines “the first clue” as “a problem.”  She states that she’s going to try and open another clue. 

  • After finding and opening a different clue, the user clicks the highlighted text in the Twine game to move forward in the narrative.  The way this 360 tour is built, clicking through the narrative of the Twine game resets you back to where the tour begins until you reach the next level of the game.  All levels are coded as separate tours, so you reset to the beginning of each level whenever you click through the Twine game.  The user comments about the game taking her “back to where she was before.”

  • The user follows the same steps she did when she started the game which leads her back to the clue that doesn’t open.  At 4:26 the user says, “I don’t understand what’s going on here.”  At that time, I reminded her to use the voice assistant if she needs help.

  • The user responds that she did use the assistant.  I point out the utterance choices, which she had not seen.  The user then states that she should have selected the utterance “zoom in.” I step in to show her how to scroll to see all the utterances and direct her to the utterance “how do I start?”, which was specifically tailored for this user.

  • I continue to demonstrate to the user how to select utterances.  I select “pink arrow” after the bot completes directions for “how do I start.” 

  • At 5:45 the user now understands how the Voiceflow prototype can be used throughout the game.  The user states that she was relying on what she learned from her first session.

  • At 6:41 the user accesses the voice assistant and chooses the utterance “clue.” The voice assistant asks her if she wants a clue about the letter puzzle. The user says “yes,” then comments “how do I say ‘yes’? Do I type in ‘yes’,” then the user types “yes” into the Voiceflow prototype. 

  • After the user types in “yes” the prototype breaks and shows the message “reset test.”  I step in for conversation repair. 

  • At 7 minutes and 39 seconds I explain to the user that she must select choices instead of typing in her responses. I note her feedback that the bot works off of choices only.

  • At 7:58 the assistant responds to the user clicking the “yes…” choice and gives her a clue.

  • At 8:49 the user expresses frustration that she is “taken back to the same place” when she advances the pages of the Twine story

  • At 9:15 the user wants to be reminded how to zoom in, but at this point in the sample dialogue the choice for “zoom in” is no longer there. So even though the user tried to use the voice assistant to remind her how to zoom in, that choice was not available. I stepped in for conversation repair and told her how to zoom in

  • At 12:35 the user offers feedback that the Twine narrative with clickable text and the 360 game underneath are confusing together.  She doesn’t know if she is supposed to be progressing through the narrative of the Twine game, or moving through the 360 tour.

  • At about 13 minutes, I step in and click on utterances that I tailored for this user to access dialogue that will help her navigate the game.  I click on “I’m lost,” and the voice assistant addresses her question. The user also expresses that she doesn’t want to know the answer to the puzzle yet, but she sees that “answer” is a choice utterance she could click on.  Knowing that I coded a joke behind that utterance, I click on “answer.” The voice assistant teases that she won’t tell. The user laughs.  Even though the responses were appropriately mapped out and tailored for this user, the utterances were a problem: the user did not connect with the utterances offered as choices.

  • The user reads errors, unclear pictures, and any mistakes in the game as story or puzzle elements

  • At around 14:59 the user accesses the voice assistant for help, and the bot’s response helps her

Feedback from User B.

  • Some clues in the game have errors and will not open

  • The 360 tour should pick up where you leave off when you advance to the next page in the Twine game narrative

  • The assistant should be able to hear utterances and respond appropriately, and/or respond appropriately to utterances that are typed in

  • The user is confused about whether she should be clicking the Twine narrative, or staying in the 360 game

 

Taking a Step Back

How I created the mid-fi prototypes used in the test above.

First, I needed a sample dialogue. The 15-minute recordings of the users’ first sessions guided my creation of a sample dialogue map. During the first session of recordings I gave participants limited instructions on the game, but explained in detail why I was performing this user test. I explained the 15-minute recording process and that I needed them to think out loud so that the recording would pick up their comments. I started the recording, then left them to play the game alone for 5 minutes. After 5 minutes I returned to answer their questions while we were still recording and stayed with them for the next 10 minutes as they played through the game. Both participants wanted to continue playing the game past the 15-minute testing period, and both played until they completed the game.

User A’s wireframe for mid-fi prototype.


Creating mid-fi prototype for User A.


  • The first prototype that I created was for user A.  When watching the recording, I made a line of dialogue for each part of the video where user A could use guidance.  

  • I tested the flow sheet periodically as I created the dialogue map.  The bot would break sometimes when utterances were not clear.  I needed to change utterances to make sure that they were unique.  

  • The Voiceflow platform didn’t understand my speech and would misunderstand words, so I decided to only design for the user to type or choose utterances. 

  • When I tried creating intents that the bot would need to listen for utterances to access, the test would fail, so I also decided to design around choices only

  • When the full prototype was ready, I created a link for the user to access as they played through the game in their second session

 
User B’s wireframe for mid-fi prototype.


Creating mid-fi prototype for User B.


  • The second prototype I created was for user B.  I used user A’s design as a template, and added and deleted paths while watching user B’s 15 minute recording to tailor this prototype for user B.

  • Added block and path in intro 

  • Connected ‘else’ to the new block, connected new speech to another command that directs the user to find the next clue. 

  • Added 2nd path about going back.  

  • Connected “Go left” block to “go back 2” block, then connected “go back 2” block to starting block. 

  • User B looked for letters before looking for the pink locked box, so I changed the flow so that blocks followed this new linear order. I took away the utterance “I don’t see a box” because she did not look for a box.  I disconnected flows that she didn’t follow, and reconnected them in the order in which she played. 

 

Feedback

Both users could benefit from the voice assistant more if they were able to speak to the bot or type their question to the bot.  Even though each dialogue was tailored to each user, that tailored information was often hidden because the utterance choices got in the way.  Using choices only made it difficult to access the information they needed at the time they needed it.  They generally needed the same information that they needed in the first session, but in the second session they did not need it at the same point in time.  There was also feedback about the build of the game: pictures were hard to make out, some clues did not open when clicked, and switching between the Twine narrative and the 360 tour caused frustration when advancing the Twine page reset the tour.

The solution.

 
Conversation flow breakdown of user A’s wireframe.


High-level structure redesign for user A.


Conversation flow for User A.


User stories

  • Helping user zoom in

  • Helping user find box and 3 digit code

  • Helping user solve letter puzzle

Key words

  • Move

  • Clue

  • Puzzle

I went back to the drawing board and referenced User A’s first wireframe, which I had created by writing the copy for each CHOICE block in Voiceflow. To address issues from user feedback, I backtracked by categorizing each block of the wireframe conversation into one-word descriptors for the type of information given to the user. In this high-level structure redesign I identified the user stories present in the first wireframe and used post-it notes to structure a conversation flow for each user story. I also added a main menu to help with navigation.

 
Conversation flow breakdown of user B’s wireframe.


High-level structure redesign for user B.


Conversation flow for User B.


User stories

  • Helping user start

  • Helping user find relevant clues

  • Helping user solve letter puzzle

Key words

  • Start

  • Relevant

  • Puzzle

Key words would be added as INTENT blocks instead of CHOICE blocks in Voiceflow, so users can say or type key concepts at any time to find what they are looking for.  They can also say “menu” at any time to return to the main menu, where the user stories are listed for the user to click into that conversation.
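
As a rough illustration of that structure (outside of Voiceflow), the sketch below routes a typed or spoken utterance to a user-story flow by key word and treats “menu” as a globally available intent. The key words match the list above, but the response wording is a placeholder assumption, not the prototype’s actual copy.

```python
# Illustrative only: Voiceflow models this visually with INTENT blocks; the
# response text below is a placeholder assumption, not the real bot copy.

USER_STORIES = {
    "start":    "Helping the user start the game",
    "relevant": "Helping the user find relevant clues",
    "puzzle":   "Helping the user solve the letter puzzle",
}


def route(utterance: str) -> str:
    """Match an utterance to a user-story flow; 'menu' is reachable at any time."""
    text = utterance.lower()
    if "menu" in text:
        # Global intent: list the user stories so the user can pick one.
        return "Main menu. I can help with: " + "; ".join(USER_STORIES.values()) + "."
    for keyword, story in USER_STORIES.items():
        if keyword in text:
            return f"Okay, {story.lower()}."
    return "I didn't catch that. Say 'menu' to hear what I can help with."


print(route("I can't solve this letter puzzle"))  # routes to the puzzle flow
print(route("menu"))                              # global intent back to the main menu
```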


 
 

Combining user stories into one prototype.

With AI, the bot will be able to learn which user story to follow as the user plays.  I combined user A’s and user B’s user stories into one design. Below is a video prototype of how the escape room bot will work.  You can also click through a Voiceflow prototype by clicking the picture below.

 
 

A new wireframe is depicted to the left for the Attic Escape bot prototype. Read my design justification for an explanation of the redesign.

Watch the prototype video!

This prototype video gives you an idea of how the Attic Escape bot will help the user throughout playing the game.

 

Design Justification

After user testing and feedback, I redesigned the conversation flow by creating a high-level structure that did not contain the actual things the bot would say, but rather keywords describing what the bot was talking about.  First, I backtracked to make this map with post-it notes from the original mid-fidelity wireframe.  Doing that showed me the type of information the user needed while following a user story.  When creating the redesign, I started by creating separate user flows for each user story.  Then I added a main menu.  All of these new starting points would be put into Voiceflow as INTENT blocks instead of CHOICE blocks so that the user can find each user story by typing or saying key words.  The biggest problem from testing was that, even though each conversation flow was tailored to each user, the users did not follow the same path the second time they played the game.  Therefore, they found it difficult to access information at the appropriate time during the game.  Making INTENT blocks with keywords that lead to user story flows solves this problem, and being able to say or type “menu” at any time to return to a main menu with the user stories listed solves it as well.

After creating a low-fi representation, I worked on the UI in Voiceflow, writing the actual words and sentences the bot will say. I also combined user A’s and user B’s user stories into one design. With the redesign structured around INTENT blocks instead of CHOICE blocks, users are able to follow a non-linear pattern that adapts to the different paths they take.

In conversational design, prototyping is a mindset. Human-computer interaction is a relationship that requires constant evaluation to work.