How many of us have watched a quiz show on TV and thought that the questions were ridiculously easy to write? I thought that all one had to do was add how, what, when or why in front of a fact to have a trivia question ready to go. So, when I was hired by the CRI GameLab to write 300 questions for a game designed to celebrate the rich diversity of Indian states and union territories, I assumed the task would be a piece of cake. The game is called Pin My State, and in it players answer sets of questions in different categories about the different States and Union Territories of India. Now, having finished writing the questions, I have newfound respect for game show writers and quiz masters.
India is a large country that boasts a multitude of cultures and customs. Each state has its own unique history, which shapes its modern-day population. To write credibly about all of them was a challenging albeit interesting task. I started writing the questions by myself and realized that I was being insular: I wrote more about the things I knew and the places I had lived in. Academia has taught me to google efficiently, but I wanted something more than just a review of different Indian states. I wanted to capture the experiences people had while travelling through the country and the impressions that different cultures left on them. With these lofty and clichéd ideas, I emailed a few friends and relatives saying that I would be interested to know more about their trips and vacations in different parts of India. I had expected a few responses, some vague ideas, and a lot of long-winded mails about life-changing experiences they had on said trips. I was pleasantly surprised over the next few weeks by the positive response and the amazing details about food and culture that my friends gave me. Sure, I had some mails asking me if I had changed careers, and the occasional load of vacation pictures sent my way, but among all that were amazing ideas and facts about places I had never been to. The fun part is that this mail has snowballed and spread through my social contacts. I still get calls from people who heard about the quiz from a friend, saying that they have an amazing question for me.
Even with great ideas, I had a few roadblocks ahead. A quiz is engaging and informative only if it is relevant. Esoteric facts and dry dates from historic events are fun only for a select few. The questions must be prefaced but mustn’t be long, the options should be coherent but not too easy, and the topics should be popular without being stereotyped. I can only hope to have hit the mark on balancing difficulty with the level of interest the questions generate in players.
Next came a chance to take the game for a test drive. Since I was travelling to India, I asked the team if I could test some of the questions. I had already taken to accosting unsuspecting people at parties with trivia questions, but felt a more organized approach to testing was warranted. The Bangalore-based branch of the company Progress allowed me to test the questions on a Friday afternoon. The twenty players were wonderfully diverse state-wise, and the following hour and a half was lively, bordering on combative. The questions that garnered the most discussion were about food, a fascination that reflects the passion we Indians share for our amazing cuisine.
I had an amazing time being a part of this game and I thank Jesse, Radhika, the GameLab and Includo for giving me this amazing opportunity. For me this game has started a journey of discovering amazing things about my country and I plan to keep that going.
This blog post announces the conclusion of the game Pirate Partage. Like the other games we have developed in the IncLudo project, it has grown significantly from its birth at a game jam at the Mozilla office in Paris more than a year ago, without straying too far from the spirit and fun of the original design.
In the first version, pirates with different handicaps had to share treasure (coins) strewn over the table by putting them into one of four treasure chests. They did this by following instructions on cards, but the instructions were always for someone else. So one player had to tell another player to put 2 coins in the red chest, for example. What made this hard was that each pirate had a handicap: one pirate can’t see, another can’t hear, still another can’t talk, and the last can’t use their fingers. To spice things up, once in a while a player would turn over a bonus card that would make players either “toast” (by touching their fists together) or swap handicaps with the player across the table from them. You can read more about the initial idea, and how we came to it, in a previous blog post.
Already at the end of the jam, we found that the game concept worked really well. Players enjoyed it, and they eventually found ways to communicate with each other despite the handicaps. But there were a few glaring problems. First, the “blind” player (later named Captain Skulleyes) didn’t have any way to look at their cards without cheating a bit by peeking under their eye mask. Second, it was hard to measure how well the players were doing: they could easily drop coins in the wrong treasure chest, and there was no feedback to tell them it was wrong. And finally, none of our prototype props looked anything like pirates. We knew there was a lot to do on that front.
The rest of this blog post describes how we approached those three problems.
When ZMQ tested the first version of the game in India, they got quite positive feedback. But they found that players really wanted a scoring system. At first, this didn’t make a lot of sense to us. Pirate Partage is a cooperative game, so how could the players compete? ZMQ explained that teams of players wanted to compete against other teams, to see how well they did.
This seemed like a good idea, but in practice it wasn’t obvious how to make it happen. You could set a goal of having players go through all their cards, and then time how long it took them, but there was no way to know if players actually did what they were supposed to. A few times in playtesting we saw players miscommunicate and not realize it, or even tip over a whole treasure chest and then just turn it back up again. This is a difficult problem, because we can’t know which cards are played when, nor what actions were done when they were.
After struggling with this problem quite a bit, we decided to recast the design as a problem-solving exercise. If we know how many pieces of treasure the players start out with, and how many they end with, then we can figure out whether they played their pieces correctly. To make this work, we went from having a bunch of different treasure chests in the middle of the table to each player having their own. Instead of taking treasure off the table and putting it into the chests, players had to swap pieces from their chest to another’s. This was actually a bit harder, since sometimes you have to dig around in your chest a bit to get the right piece. Then we made the cards involve more than one player and piece, which opened up the possibility of making two other players exchange pieces rather than exchanging them yourself.
In order to know what the solution is (the number of pieces in each chest), we need to know which cards have been played. To do this, we flipped the problem around and told the players which cards to play. This involved numbering all the cards and telling the players which ones to take each game. This sounded like a job for a computer program (though it was later suggested that we could have put it into a booklet). The final step involved making sure there were no “deadlocks”, that is, situations where no player could advance because they needed treasure from another player who was similarly blocked. We got around this by increasing the number of pieces that each player starts out with, making sure that even if a player had to play all their cards in a row, they would never need to give away more pieces than they started with. It’s a conservative approach, but at least we don’t have to worry about deadlocks.
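To make the conservative rule concrete, here is a minimal sketch of how a card-dealing program could check it. The names and data layout are my own illustration, not the project’s actual code: a card `(giver, receiver, count)` asks the giver to hand over `count` pieces, and a deal is considered safe if no player is ever asked to give away more pieces than they start the game with, even before receiving anything.

```python
from collections import defaultdict
import random

def is_deadlock_free(cards, starting_pieces):
    # Total up the pieces each player must give away across the whole deal.
    outgoing = defaultdict(int)
    for giver, _receiver, count in cards:
        outgoing[giver] += count
    # Safe if no player's outgoing total exceeds their starting pieces.
    return all(total <= starting_pieces for total in outgoing.values())

def deal_cards(deck, num_cards, starting_pieces, rng=random):
    # Redraw until the hand satisfies the conservative rule.
    while True:
        hand = rng.sample(deck, num_cards)
        if is_deadlock_free(hand, starting_pieces):
            return hand
```

This is stricter than necessary (a player might well receive pieces before having to give them away), which is exactly the trade-off described above: it over-provisions starting pieces in exchange for never having to reason about play order.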
Perhaps the biggest change in comparison to the original prototype is with the props. We created custom treasure chests, cards, masks, and sticks, and selected pieces such as bottles and gems that fit our pirate treasure theme.
We knew that we wanted cards that the “blind” player could understand through touch. Our first idea was to use something like Braille, with bumps coming out of the cards. But making something that we could read required very large bumps, and these large bumps made stacking the cards difficult. We had more success cutting holes into the cards. At first we did this with a hole-puncher, but then we discovered the laser-cutter, the unsung hero of the fablab. This machine is incredible to watch. Check it out below.
Once we figured out what the laser cutter could do, we designed much of the remaining props with it in mind. The treasure chests were made using both the engraving and cutting features.
Radhika designed the masks using multiple layers of cut wood stacked on top of each other.
Coming back to the cards, we still had to generate all the designs. We did this using Adobe InDesign’s data-merge feature to lay out the vector shapes used by the laser-cutter. InDesign loads a custom CSV file that we generated via a Python script.
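To give an idea of the pipeline, here is a sketch of the kind of generator script we mean: Python writes a CSV, and InDesign’s data merge turns each row into one card layout. The column names and card fields here are illustrative assumptions, not the actual schema we used.

```python
import csv

# Hypothetical card data; in practice this would be generated
# from the full numbered deck.
cards = [
    {"number": 1, "category": "A", "pieces": 2, "chest": "red"},
    {"number": 2, "category": "B", "pieces": 1, "chest": "blue"},
]

# InDesign's data merge reads the first row as field names and
# produces one merged layout per subsequent row.
with open("cards.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["number", "category", "pieces", "chest"])
    writer.writeheader()
    writer.writerows(cards)
```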
When we playtested the game, we found that it took too long to hand out the right cards to the right players each game. So we reduced the number of cards and separated them into 4 categories, a bit like the “suits” in a standard 52-card deck. Each category has a letter and is painted in a different color, which really helps to distinguish between them.
It surprised us how difficult it is to feel out what a card means. We took pains to lay out the cards so that the same symbols are in the same positions each time, and the holes are always in the same configurations. But it appears that people in general do not get the same sense of space and distance through touch as they do through sight. We found that people feel in a more exploratory way, and have a really hard time doing relatively simple things such as counting the number of circles, or knowing which shapes are above which others. I will be very interested to see how sight-impaired people would deal with the cards; are they simply much better at remembering the spatial relationships between shapes?
Another element that was quite fun to design was the chopsticks. It was hard to find something that made the objects difficult to manipulate, but not impossible. In the original prototype, we tinkered with things we found around the Mozilla kitchen, eventually settling on an oven mitt with a wooden spoon taped to it. But this was not a very durable item. We also tried having players use just their wrists instead of their hands, but that was too easy.
We found that chopsticks worked quite well, at least with the audience we were playing with. They found them difficult enough to use that they posed a challenge, but not so hard as to be discouraging. The trick was that we wanted to use them with our mobile app, and regular wooden chopsticks aren’t recognized as fingers by a touchscreen. At this point I remembered the amazing work that Volumique did with pawns on a tablet. They coated the pawns with a material that conducts electricity, so the device sees the pawn as a finger (or set of fingers) as long as the player touches it.
With this inspiration, I tried metal chopsticks. They worked sometimes, but the contact with the screen was not large enough to be reliably recognized as a finger. In addition, the metal was hard enough to scratch or break the screen if it hit too hard. At the suggestion of Kevin Lhoste at the CRI Maker Lab, I took a cheap touchscreen stylus, fit it over one end of the chopstick, and attached it with glue. That worked brilliantly.
The last major change from the original prototype involved creating an application for tablets and smartphones. We really wanted this for 3 main reasons: for playing our own custom soundtrack, for keeping time, and for a scoring system.
We gave the musical tasks to our intern Liburn Jupolli. The challenge was to make something pirate-themed that was loud and distracting enough to block the sounds of other players talking. He made us a soundtrack that you can hear below:
We decided to use the Unity game engine so that we could easily export to Android and iOS. I began by making a very ugly but functional prototype that included all the elements. The most complex part of the design was the “firing scene”. Its goal was to distract the players from exchanging items and force them to coordinate in a different way; it replaces the “toast” cards from the original design. In the firing scene, each player has a button to launch a cannon, and some of them are asked to press their buttons at the same time. It’s a bit like a rhythm game for multiple players on the same device, though at a much slower pace. Since the blinded player can’t know when they should press, the other players have to tell them.
Radhika worked hard on creating a pirate-cave feel for the game. I’m especially happy with the animations in the firing scene, which was a difficult UI design challenge.
I am quite proud of how far we’ve taken this game, strengthening the theme and including cards that can be read through touch.
Since the game requires so many different props and elements, it makes for an elaborate setup that interests people and makes them want to play. On the flip side, it takes a while to set up, and it is a bit expensive to produce. It’s well suited to a workshop environment, but taking it into a home environment would doubtless require adapting the design to be cheaper to make and quicker to unpack.
In the IncLudo project, we are making open source games to promote diversity in the workplace. After a year of building and testing game prototypes, we want to share what we’ve learned about bias, empathy, icebreakers, taboos, and board games while pursuing this important but challenging goal.
The speaker is Jesse Himmelstein, coordinator of the project for CRI Paris and director of the CRI Game Lab.
This talk was recorded at the FOSDEM conference in 2017 as part of the Open Game Development track.
We’re looking to hire interns and a full-time graphic artist who have lived in India to work in our Paris offices, developing games that promote diversity and inclusivity.
IncLudo (http://includo.in) is a joint EU-India development project with the goal of creating games that promote diversity and inclusivity in the workplace. Within the project, CRI and ZMQ have developed a number of open source game prototypes of different forms, from mobile games to board games to interactive fiction, and tested them within organizations of different kinds in India.
In the next year, we will be developing our most promising prototypes into polished games. We are looking for help from a small number of interns (in graphic arts, game design, and programming), and a full-time graphic artist, who are familiar with Indian culture, history, and work environments.
Ability to discuss and research game design questions in English is a must. Previous experience in game creation (at work or in personal projects) is a big plus.
If interested, please contact jesse AT cri-paris DOT org with your CV, portfolio, and short letter of motivation.
What if we could meet people in reverse? That is, learn personal and intimate details about their experiences, thoughts, and day-to-day before we see the color of their skin, their gender, or their age?
Social psychology has shown that group stereotypes shape the way we think and feel about other people, even if we don’t consciously “want” it to be the case. When we see someone unfamiliar, we immediately categorize them by superficial criteria such as race, gender, looks, and age. And each of these categories corresponds to expectations about the other person’s behavior and state of mind.
In the workplace, these biases are most apparent in decisions on hiring and promotions, and they are difficult to surmount. One way is to codify our social interactions, such as how Google HR now eschews free-form interviews. But what if we could re-wire our categorizations by simply meeting more diverse people and confronting our stereotypes with their realities?
This prototype, called Same Day Different Lives, takes that second approach. It is an online mobile game in which the player is assigned a pseudonym and then randomly paired up with someone else. At that point, the two contribute to a shared “journal” that only they can see. To get the “conversation” started, they respond to a number of questions by taking pictures (like “Take a picture of something you’re throwing away today”) or recording their voice (like “Tell us about a recent dream you had”).
After a number of days of such questions, they are then given a “quiz” in which they try to guess basic demographic information about the other person, such as their age and level of education. The intention here is to learn this superficial information only after first knowing them on a more personal level. On the final day, the two players are given a chance to chat freely, perhaps to ask each other about some of the experiences they shared that surprised them.
This was also a chance to team up with Gwen Ruelle, who has worked on oral history as part of the Red String Project and Oral History Productions. She and I designed and tested the prototype together, and she contributed most of the questions for the journal section.
In terms of development, this project was done completely in Clojure and ClojureScript, letting me develop my skills with those languages. I still can’t claim to be an expert, but I really appreciated their features for taming asynchronous operations.
When doing research about inclusivity and analyzing testimonies, I realized how difficult it was to make a safe professional environment for everybody. I often took the game “Parable of Polygons” as an example.
Indeed, everyone is led by numerous different biases. As a result, it is very difficult to change mentalities.
We can’t change people with a snap of the fingers; it is hard work that we must do on ourselves. My goal is not to change the players’ way of thinking, but to engage their awareness.
This awareness is the first step to inclusivity.
I began to look for the mediator role in a company. In every company, the human resources director embodies this role. For me, it was obvious that the player should take this position and then make some moral decisions. These decisions would have an impact on the well-being of the person concerned, on all the employees of the company, and on the company itself.
Taking account of all these elements makes the player’s decisions more difficult: on the one hand there is the person who feels bad in the company, and on the other hand there are all the other employees.
The game I developed offers different choices, as well as different endings. The player will never have the feeling of losing the game, nor of winning it; the endings adapt to the player’s choices.
Weather has a great impact on human beings. The impact is so profound that we have terms like winter blues and even psychological disorders like Seasonal Affective Disorder or SAD for short (the acronym really speaks for itself :P) that are influenced by weather.
I am from the south of India, which implies that my city is blessed with ample sunshine throughout the year. The Parisian winter and even spring had left me longing for a nice warm sunny day. That is when I thought…’Hey! Weather is a nice and obvious representation of how people feel!’ That was the beginning of Weather Check!
The initial idea was to use the ambience to provide feedback about how employees in a team feel, especially about diversity and inclusion. Feeling excluded in the workplace happens more often than we think. This is one of the topics that do not surface during discussions. A person who points out that they feel excluded is often seen as weak. So, fitting in is something everyone tries to do but no one talks about it. The idea was to have team members anonymously fill in a short survey about how they feel within the team and then use this data to generate the weather. If a significant portion of the team claims that they feel left out, the overall weather turns out to be stormy or rainy. The expectation was that, upon seeing this, the team would make an effort to be more inclusive overall. As the team members interact more they would alter the input to the weather system, which would eventually reflect in the weather that changes from stormy to sunny.
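As a thought experiment, the survey-to-weather mapping could be as simple as the following sketch. The 1-to-5 scale, the field meaning, and the thresholds are all assumptions I am making for illustration, not the actual rules of Weather Check.

```python
def weather_for(responses):
    """responses: anonymous inclusion scores, 1 (feels excluded) to 5 (feels included)."""
    if not responses:
        return "cloudy"  # no data yet
    excluded_share = sum(1 for r in responses if r <= 2) / len(responses)
    average = sum(responses) / len(responses)
    if excluded_share > 0.3:  # a significant portion of the team feels left out
        return "stormy"
    if average < 3.5:         # lukewarm overall sentiment
        return "rainy"
    return "sunny"
```

The key design property is that no individual answer is visible; only the aggregate weather is, which preserves the anonymity that makes people willing to answer honestly.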
The way I initially imagined it, the weather would be projected on the ceiling, making it an integral part of the workspace, like in the image (all the better images are copyright-protected, but if you are looking for one, google sky ceiling tiles/wallpaper). To put this in context, imagine walking into the coffee room and seeing the sky above you nice and sunny. It is a subtle reinforcement that the team is getting along well. That brings us to the question: what if you walk into the coffee room and are greeted by a stormy ceiling? In this situation, our concept seems counterproductive!
This led us to the realization that it was not just enough to provide feedback. It was also necessary for us to offer a solution if the weather turned out to be bad. Inspired by another concept that Jesse has been working on, we started discussing the possibility of a tool that is linked to the weather check. This would typically be an app that groups people together (in pairs or small groups) and offers activities and fun team building exercises that happen over a period of time. At regular intervals, the users would also be asked to fill out the weather check survey form. Ideally, the activities and exercises would reflect in a positive change in the weather.
Once Jesse and I discussed this idea, we started working on a prototype (well, Jesse did :P). What we needed initially was a way to validate whether users could relate to the feedback. To test this, Jesse built a nice minimalist version of the first half of Weather Check: a simple form that takes input and displays the weather.
After the first version with simple hand drawn clouds, Jesse rightly pointed out that the feedback was incomplete. Users were being told that there was a problem, but it was not clear what the problem was. So in the next version, it was decided that the pain points mentioned by the users would be visible on the clouds. At this stage, there is not going to be a fancy ceiling but a simple projection of the results on the wall.
As the next step, we have been testing this version with some users from the GLASS summer school. We hope this will help us substantiate our theory that this kind of representation can spark a healthy discussion within the team. If that works, the next step will be to develop a version with the part that generates activities and team-building exercises.
In the hopes of making the talk more interesting, we took a chance on adapting one of our prototypes to make it playable during the presentation. We really should have tested it out more, though, or chosen volunteers ahead of time. We ended up rushing the volunteers, so that the audience didn’t get a good chance to understand how the game was supposed to work.
We had great success making contacts at the festival.
Gayathri had an interesting conversation with Marie Gillespie, a Professor of Sociology at the Open University, that widened her perspective. She urged us to think from the perspective of the user and raised the question: are people willing to let games change the way they think? We think this is a very important question to ask, because a lot of our game design depended on the assumption that people would be willing to play such a game. Maybe, for the games to be effective, the message that we are trying to convey must be more subtle…
This thought was later reinforced by an insightful conversation with TJ Matthews, a PhD student in pro-social gaming. He pointed out some very good research and games from Tiltfactor. In particular, he cited research based on the games ZombiePox and Buffalo showing that players are more receptive to a message of social change when the game does not advocate the message openly or obviously.
Later in the conference, Tiltfactor was mentioned again, this time by Prof. Scot Osterweil, the Creative Director of MIT’s Education Arcade and the Learning Games Network. He introduced us to another Tiltfactor game, titled Awkward Moments at Work.
The last half-day of the conference was devoted to workshops. Each of us went to a different one.
Jesse: I participated in the “The Brain Architecture Game” session run by Marientina Gotsis from USC. The workshop was based on a game that she designed in cooperation with neuroresearchers. Each team builds a “brain architecture” out of pipe cleaners and drinking straws. Pipe cleaners are flexible and tend to bend easily, which makes them hard to build with. But by putting one inside a drinking straw, it can hold much more weight. At the beginning of the game, you roll the dice to see what kind of situation you are endowed with. Some teams got much luckier than us in terms of their genetics and early childhood. Our team was punished by malnutrition and negligence. These early handicaps proved to have compounding effects that weakened our structure. Lucky for us, we had rolled high on “social support” – essentially friends or extended family that could take care of us. In the game, this translates to a few extra straws that we used as soon as necessary to combat the random bad events that occurred. Overall, I found there were a number of simple but well-chosen rules that made the game succeed both in terms of fun and in teaching the realities of brain health through metaphor.
In conclusion, it was an excellent event, and we’re looking forward to doing more with Games4Change Europe in the future!
Diversity means different things to different people. There are several dimensions of diversity – age, gender, religion, disability, etc. In a short survey of employees from two companies in India (Jubilant and CYC), it was found that “Recruitment” was considered to be the top priority in terms of diversity and inclusion initiatives, irrespective of the dimension of diversity. Thus the immense burden of ensuring diversity across several dimensions falls on the recruitment officials of the company. According to research by TheLadders, recruiters spend an average of 6 seconds before they make a decision. This implies that they often rely on instinct, or ‘gut feeling’, when choosing a candidate. While instinct is a useful decision-making tool, it brings with it the dangers of unconscious bias.
Unconscious bias is a psychological phenomenon whereby our brain’s perception of certain people is skewed by our past knowledge and experiences. It’s not that we are good or bad people in the way we judge others; it’s just that our brain has to process so much information that it has evolved mechanisms to simplify that processing. The problem is that these shortcuts may not always help us make the right decisions. If you are curious, feel free to take a test that helps identify unconscious biases at https://implicit.harvard.edu/implicit/takeatest.html . I tried the test on skin-tone bias and was surprised to see that I was slightly biased in favor of lighter skin tones. Despite being Indian (skin tone: brown), my brain seems to have leaned in favor of a lighter skin tone. I have noooo idea why! And I promise to work on fixing this bias! My unconscious bias probably has no significant impact on humanity, but imagine the compound effect of such biases among recruiters and employees across the globe. When Google released their first diversity report in 2014, it was a wake-up call for companies to get to the root of the diversity gap. Google soon realized that unconscious bias was a major hurdle on their diversity journey. Ever since, Google and several other companies have been trying to educate individuals about their unconscious biases. Take a look at Google’s unconscious bias training workshop here: https://rework.withgoogle.com/guides/unbiasing-raise-awareness/steps/watch-unconscious-bias-at-work/
‘Hired!’ is a card game that addresses biases in recruitment. The players represent the hiring committee of a company and must try to balance their own interests with the company’s overall interest. Each player is assigned a particular bias at the beginning of the game that affects his/her perception of the candidates. The game puts the players in a safe environment where calling out each other’s biases is acceptable and rewarding, and gives them an opportunity to understand how a biased decision can affect the company as a whole. The following sections are an account of the evolution of Hired!
Hired! was proposed by Jesse at one of our brainstorming sessions. The idea was to build a game where each player had a particular bias and the others had to watch out for this biased behavior. The link with recruitment was made almost instantly, as it seemed like the situation where a person’s bias can have the biggest impact. To get a more concrete understanding of how it would work, we decided to try out a paper version of the game. At this point we had just biases and candidates. We realized even before we started that there should be a mechanism that determines the actual worth of the candidates. And thus we had a deck of cards (sheets of paper, of course!) that would represent how good each candidate actually was. Each card had a number from 1 to 4, 4 being the highest. Each player would pick up a scoring card for each candidate (not revealed to the other players), and the sum of all scores was the actual worth of the candidate. The decision on whether a candidate should be hired was based on a discussion followed by a simple majority-wins rule.
At this stage, the score was computed as the sum of individual scores of all candidates who were hired.
The Loopholes & Fixes
After the first round of gameplay, the most important thing we noticed was that the arguments were poorly structured and players often ended up contradicting each other. Thus, we realized that there should be more solid information on which players could base their arguments. This information had to be specific and binary. By binary, we mean that a characteristic had to be good or bad, as gray areas often led to confusion and deadlocks. So for the next version of the game, we added specific information to the scoring card, and the scores were either 1 or 4. These numbers were later changed to 0 and 1 to make the math easier 😛
Despite the clearly defined information, we noticed that some arguments led to a stalemate because the players were not clear about what role they were hiring for. For example: if a candidate had a visual impairment, one player might argue that it is impossible for this candidate to serve as a designer, while another might argue that the candidate is adept at sales or public speaking. This led to the addition of a new category of cards: the job description cards. These cards define what role the candidate is being hired for.
So far, the simple voting system had helped in reaching quick decisions. However, there were two problems we could foresee. One, if the number of players was even, it could lead to a deadlock. Two, a player on the minority side did not have a fighting chance, making the game a little unfair. To fix this, we came up with a betting system. Each player would start out with a given number of coins and could bet in favor of a candidate. Although the betting worked in theory, it didn’t fit the spirit of the game. After briefly considering a shared-resource model, we settled on a voting token system. Each player has a certain number of voting tokens and can use them to skew the decision in their favor. The voting system has stayed since then.
Now that the voting system was clear, there were new problems! One that immediately needed fixing was that players needed more incentive to guess others’ biases. Without this incentive, players were rarely tempted to guess. This problem was easily fixed by introducing a voting-token incentive for guessing: if A guesses B’s bias and is right, A gets one of B’s tokens; if A is wrong, A has to give B one of their own tokens. This system encouraged people to guess others’ biases while ensuring that no player is put at an unfair disadvantage.
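The token-transfer rule is simple enough to express as a few lines of code. This is only an illustrative sketch under my own naming, but the rule itself is as described above:

```python
def resolve_guess(tokens, guesser, target, guessed_bias, actual_bias):
    """Apply the guessing rule. `tokens` maps each player to their voting tokens."""
    if guessed_bias == actual_bias:
        # Right guess: the guesser takes one of the target's tokens.
        tokens[target] -= 1
        tokens[guesser] += 1
    else:
        # Wrong guess: the guesser pays one of their tokens to the target.
        tokens[guesser] -= 1
        tokens[target] += 1
    return tokens
```

Note that the rule is zero-sum: a guess moves exactly one token between the two players involved, so the total voting power in the game never changes.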
After a lot of internal testing, it was time to test the game with players other than the designers! We made some nice and fancy cards to play with and were good to go.
The game was playtested 4 times with gamelabers who were not part of the game design. It was amazing to see interesting strategies evolve: one player decided to play in favor of a candidate he was biased against to ensure that his bias wasn’t guessed; another used his minority position to mislead the others. The feedback from the playtests helped us fine-tune the voting system.
The scoring was another area that underwent changes. The initial idea was to use the sum of all hired candidates’ scores as the company score. However, when multiple teams competed simultaneously, we needed a system that was not affected by chance, so we decided on a rank-based system: the objective is simply to see whether the company hired the best candidates available. Hiring a candidate that a player was biased against also had to have a bigger impact on that individual. So in the final version, the individual score is calculated as the number of tokens a player holds at the end, minus twice the number of hired candidates against whom the player was biased.
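The final individual score can be written as a one-line formula (the function and parameter names here are my own, for illustration):

```python
def individual_score(tokens_remaining, biased_hires):
    """Tokens held at the end, minus twice the number of hired
    candidates the player was biased against."""
    return tokens_remaining - 2 * biased_hires

# For example, a player ending with 5 tokens who saw one candidate
# they were biased against get hired scores 5 - 2*1 = 3.
```

The factor of 2 makes letting a biased hire through cost more than the single token a wrong guess does, so staying blind to your own bias is the most expensive mistake in the game.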