Spanish Translation of the Microsoft Product Reaction Cards

By V. D. Hinkle

Summary. Three Spanish translations of the Microsoft Product Reaction Cards (MPRC) tool were evaluated to determine whether the feedback elicited from Hispanic participants differed based on the type of Spanish translation provided (direct translation, translator-validated, and user-validated). The direct translation used, verbatim, the first Spanish word returned by Google Translate for each English word. The translator-validated translation was created by one translator who revised the direct translation. The user-validated translation was created from an online study with 52 participants bilingual in English and Spanish. Results showed that translation quality ratings were not significantly different across the translation conditions, perhaps because of the participants’ bilingualism or the minimal differences among the three Spanish versions.

Note: This is an excerpt of the article “Is User-Validation Necessary for a Spanish Translation of the Microsoft Product Reaction Cards Tool?” in the Proceedings of the Human Factors and Ergonomics Society Annual Meeting, San Diego, CA, 2013.

INTRODUCTION

The Microsoft Product Reaction Cards (MPRC) tool, developed by Microsoft researchers Joe Benedek and Trish Miner (Benedek & Miner, 2002), is typically used by usability and user experience researchers and practitioners to gauge the perceived desirability of products or services. The MPRC tool has been previously validated and has been used extensively with English-speaking consumers by user experience and usability researchers (Williams, Kelly, & Anderson, 2004; Rohrer, 2009; Barnum & Palmer, 2010). Since the MPRC tool had never been used in Spanish, there was a need to determine whether the tool could enhance user feedback from Hispanic consumers. The purpose of this research was to assess whether the translation quality of a Spanish version of the MPRC tool affects the feedback collected from Hispanic users. Preliminary research was needed to translate the original tool and then create Spanish versions with varying levels of translation fidelity.

CREATING THE TRANSLATION CONDITIONS

Three translations were created: direct, translator-validated, and user-validated. The direct translation of the MPRC was created by taking the first Spanish option that Google Translate provided for each English word; that exact wording was used as the direct translation. The translator-validated translation was created by one translator who revised the direct translation, replacing words awkwardly translated by Google Translate with better options in Spanish. The user-validated translation was created based on an online study with 52 participants bilingual in English and Spanish. For each English word from the MPRC tool, participants chose from the provided Spanish options or suggested their own; the Spanish options were based on the direct and translator-validated translations. The results of that study were used to create the user-validated translation.
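As an illustration of how the user-validation results might have been aggregated into a single list, the Python sketch below picks, for each English MPRC word, the Spanish option chosen most often by participants. The vote tallies and word pairs are hypothetical placeholders, not the study data, and the aggregation rule is an assumption rather than the procedure reported in the full article.

    from collections import Counter

    # Hypothetical vote tallies: for each English MPRC word, the Spanish options
    # offered (or written in by participants) and how many participants chose each.
    votes = {
        "Novel":     {"Novela": 4, "Novedoso": 41, "Original": 7},
        "Appealing": {"Atractivo": 38, "Llamativo": 14},
        "Confusing": {"Confuso": 45, "Desconcertante": 7},
    }

    def user_validated_list(vote_tallies):
        """Pick the most frequently chosen Spanish option for each English word."""
        return {english: Counter(options).most_common(1)[0][0]
                for english, options in vote_tallies.items()}

    print(user_validated_list(votes))
    # {'Novel': 'Novedoso', 'Appealing': 'Atractivo', 'Confusing': 'Confuso'}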

METHOD

Participants

Sixty participants of Mexican descent living in the United States, age 18 years or older, volunteered to participate in the study and were compensated with a $15 electronic Amazon gift card. Participants completed an acculturation scale, the revised Acculturation Rating Scale for Mexican Americans (ARSMA-II), and answered a background questionnaire that included questions about demographics, Internet experience, and language preferences. All participants felt comfortable using computers and the Internet. With the exception of two participants, all had at least 1 to 2 years of college education.

Materials

Online MPRC Tool. The tool included a section for answering the ARSMA-II acculturation questions, which was used to create an acculturation score for each participant, and a section for the exercises with the MPRC words. In the MPRC section, participants were asked to choose as many words as they wanted that best reflected their general impression of a website’s homepage. Once they were done choosing words, the MPRC tool prompted them to explain why they chose those words. Participants were randomly assigned to one of the three Spanish translation groups and also assigned to a counterbalanced presentation of the homepages.
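A minimal sketch of the random assignment and counterbalancing described above, assuming balanced group sizes and alternating homepage orders (the original tool’s implementation is not described in the article, so the details below are illustrative):

    import random
    from itertools import permutations

    TRANSLATIONS = ["direct", "translator-validated", "user-validated"]
    HOMEPAGES = ["miallstate.com", "amfamlatino.com"]
    ORDERS = list(permutations(HOMEPAGES))  # the two counterbalanced presentation orders

    def assign(participant_ids, seed=42):
        """Assign translation conditions in shuffled, balanced blocks and alternate homepage order."""
        rng = random.Random(seed)
        conditions = (TRANSLATIONS * ((len(participant_ids) // len(TRANSLATIONS)) + 1))[:len(participant_ids)]
        rng.shuffle(conditions)
        return {pid: {"translation": conditions[i],
                      "homepage_order": ORDERS[i % len(ORDERS)]}
                for i, pid in enumerate(participant_ids)}

    print(assign(["P01", "P02", "P03", "P04", "P05", "P06"]))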

The three translation types had the same number of words, 120 words, but varied in their Spanish translation type. There were 54 words (45%) that were the same across all lists. Sixty-five words (54%) were the same for two out of the three lists. Ten words (8%) were different in each of the three translation lists (Table 1). See Appendix A for a complete list of the MPRC words in the user-validated version.

Table 1. Unique Words by Translation Condition

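The word-overlap counts reported above can be derived mechanically by comparing the three lists. The Python sketch below shows one way to do so; the three small dictionaries stand in for the full 120-word lists, which are not reproduced here.

    def overlap_counts(direct, translator, user):
        """Count how many English words have identical, partially identical, or fully distinct Spanish translations."""
        same_in_all = same_in_two = different_in_all = 0
        for word in direct:  # all three lists cover the same English words
            variants = {direct[word], translator[word], user[word]}
            if len(variants) == 1:
                same_in_all += 1
            elif len(variants) == 2:
                same_in_two += 1
            else:
                different_in_all += 1
        return same_in_all, same_in_two, different_in_all

    # Toy example (placeholder word pairs, not the study lists):
    direct     = {"Novel": "Novela",   "Fast": "Rápido", "Clean": "Limpio"}
    translator = {"Novel": "Novedoso", "Fast": "Rápido", "Clean": "Limpio"}
    user       = {"Novel": "Novedoso", "Fast": "Veloz",  "Clean": "Limpio"}
    print(overlap_counts(direct, translator, user))  # (1, 2, 0)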

Homepages of Websites Selected. The websites selected for this study were chosen because both provide a Spanish version of the site for prospective and current Hispanic consumers. Additionally, the sites were chosen because they provide content and imagery specific to Hispanics in different ways. For example, one of the sites (amfamlatino.com) has been identified as a successful web customization for Hispanic consumers by several experts specializing in Hispanic marketing (Korzenny & Korzenny, 2011; Kutchera, 2011). The sites selected were Allstate (miallstate.com) and American Family Insurance (amfamlatino.com). Screenshots of the homepages were captured on April 30, 2012. Figure 1 shows the Spanish homepages for Allstate and American Family Insurance.

Figure 1. Selected Spanish homepages: (a) Allstate (miallstate.com); (b) American Family Insurance (amfamlatino.com).

Procedure

Participants were asked to read an informed consent form prior to committing to participate in the study via the online MPRC tool. At the beginning of the session, participants answered the acculturation questions and were then asked to rate their general impressions of the Spanish homepages of two websites. Before starting the study exercises, participants completed a practice exercise to become familiar with the application. They were instructed to look at the image of a Spanish homepage and choose as many words as they wanted from the list of 120 MPRC words to describe their impression of that homepage. Participants had an opportunity to review all of the words before committing to a final selection. The MPRC tool automatically kept track of the words selected. Once participants finished choosing words to describe their general impression, they were prompted to explain why they chose those specific words, writing their explanations in text boxes provided next to their chosen words. Participants repeated the same steps for the second homepage. After explaining their word choices, participants rated how satisfied they were with the words provided to describe their general impressions on a 7-point Likert scale (1 = Very Satisfied; 7 = Not At All Satisfied). Participants also rated the translation quality of the words in the list on an adjective scale.


RESULTS

A Kruskal-Wallis test was conducted to determine whether the translation quality ratings differed significantly across translation conditions. The results showed that the translation quality ratings were not significantly different across conditions: χ²(2) = 3.019, p = .221 (see Table 2). Overall, participants did not perceive their assigned translated list as a bad translation, despite the obviously poor quality of some of the words in the direct translation; for example, “Novel” was translated as “Novela,” which means a novel (the type of book) or a soap opera.

Table 2. Frequency Count for Translation Quality Rating by Translation Condition

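For readers who want to run the same kind of analysis, a minimal sketch of a Kruskal-Wallis test using SciPy follows; the rating arrays are fabricated placeholders, not the study data.

    from scipy import stats

    # Ordinal translation quality ratings per condition (placeholder values).
    direct_ratings     = [3, 4, 2, 3, 5, 4, 3, 2, 4, 3]
    translator_ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5]
    user_ratings       = [4, 4, 5, 3, 5, 4, 4, 5, 3, 4]

    h_stat, p_value = stats.kruskal(direct_ratings, translator_ratings, user_ratings)
    print(f"H(2) = {h_stat:.3f}, p = {p_value:.3f}")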

A one-way ANOVA with translation type as the between-subjects variable and acculturation score as the dependent variable was conducted to determine whether there were any significant differences in participants’ acculturation scores across the translation conditions. The mean acculturation score was M = -0.15 (SD = .48) for the direct translation condition, M = -0.06 (SD = .55) for the translator-validated condition, and M = -0.08 (SD = .71) for the user-validated condition. The one-way ANOVA showed that the mean acculturation scores were not significantly different across conditions, F(2, 57) = 2.62, p = .886 (see Figure 2). Table 3 provides the cutoffs for determining acculturation levels based on acculturation scores.


Figure 2. Acculturation ratings by translation condition.
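Similarly, the one-way ANOVA on acculturation scores could be run with SciPy as sketched below. The scores are simulated around the reported condition means and standard deviations and are not the actual data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated acculturation scores, 20 participants per condition (placeholders).
    direct_scores     = rng.normal(-0.15, 0.48, 20)
    translator_scores = rng.normal(-0.06, 0.55, 20)
    user_scores       = rng.normal(-0.08, 0.71, 20)

    f_stat, p_value = stats.f_oneway(direct_scores, translator_scores, user_scores)
    print(f"F(2, 57) = {f_stat:.3f}, p = {p_value:.3f}")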

Table 3. Cutoffs for Acculturation Levels Based on Acculturation Scores

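As context for Table 3, the ARSMA-II acculturation score is typically computed as the mean of the Anglo Orientation Subscale (AOS) items minus the mean of the Mexican Orientation Subscale (MOS) items, and the score is then mapped to an acculturation level using published cutoffs. The sketch below illustrates that calculation; the item responses are placeholders and the cutoff values are illustrative stand-ins for those in Table 3, which is not reproduced in this excerpt.

    def arsma_ii_score(aos_items, mos_items):
        """ARSMA-II score: mean AOS rating minus mean MOS rating (items on a 1-5 scale)."""
        return sum(aos_items) / len(aos_items) - sum(mos_items) / len(mos_items)

    def acculturation_level(score, cutoffs=(-1.33, -0.07, 1.19, 2.45)):
        """Map a score to Levels I-V; cutoffs here are illustrative (see Table 3 for the values used)."""
        for level, cutoff in enumerate(cutoffs, start=1):
            if score < cutoff:
                return level
        return len(cutoffs) + 1

    # Placeholder responses: 13 AOS items and 17 MOS items rated 1-5.
    aos = [4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3]
    mos = [4, 4, 5, 3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 3, 4, 4]
    score = arsma_ii_score(aos, mos)
    print(round(score, 2), acculturation_level(score))  # 0.0 3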

DISCUSSION

Translation quality ratings were not significantly different across translation conditions. The ARSMA-II scores indicated that all participants were bicultural and bilingual in English and Spanish. Perhaps bilingualism compensates for poor translation quality, and this was reflected in the translation quality ratings; recent research from the Pew Hispanic Center shows that Hispanics appear to be reading news in English more than in previous years (Hugo Lopez & Gonzalez-Barrera, 2013). Since the majority of the participants were college educated and computer and Internet savvy, which is uncommon among Hispanics with low acculturation scores, this could also have influenced the translation quality ratings. The lack of statistically significant differences in translation quality ratings could thus have resulted from the bicultural background of the participants. It may also be due to the fact that only 10 words varied across all three translation types. Given the minimal number of unique words between the translator-validated and user-validated lists, it appears that a translator can adequately perform the job without the need to recruit participants to validate the translation of each MPRC word.

Because participants were bicultural, with similar degrees of biculturalism, more research needs to be conducted with less acculturated individuals who have different levels of education, English language proficiency, and computer knowledge. It is entirely possible that the results would differ if the study were conducted with less acculturated Hispanic participants.

This study had several limitations. Because the study was conducted online, it is entirely possible that the results would vary if it were conducted in person. Typically, user experience and usability sessions conducted in person tend to yield richer results than those conducted via online tools. In terms of generalizability, this study was limited to American Hispanics of Mexican descent. While the words were translated with the goal of keeping a universal Spanish translation, it is necessary to validate that the words are the most common translation across the different ethnic groups within the Hispanic population.

CONCLUSION

The type of translation used in this study did not matter to Hispanic participants of Mexican descent. They seemed resilient to variations in translation quality, which is not surprising given the great disparity in translation quality they typically encounter online. While their translation quality and satisfaction ratings were not significantly different across translation conditions, it is possible that the lack of significance is rooted in their biculturalism, English proficiency, and expectations shaped by previous experience with Spanish versions of websites in the United States. User validation of the translated MPRC words also appears unnecessary, since only a small percentage of the MPRC words had unique translations across the translation conditions. To fully understand the impact of Spanish translation quality on the user experience of Hispanic participants, an acculturation measure is needed to gauge how much difference translation quality makes to Hispanic users. Bicultural, and therefore bilingual, Hispanics did not perceive differences in the translation quality of the Spanish versions tested in this study, but these findings may be very different for Hispanics who are less acculturated and have low proficiency in English.

REFERENCES

Barnum, C., & Palmer, L. (2010). More than a feeling: Understanding the desirability factor in user experience. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI). Atlanta, GA.

Benedek, J., & Miner, T. (2002). Measuring desirability: New methods for evaluating desirability in a usability lab setting. Proceedings of the Usability Professionals' Association (UPA) Conference. Orlando, FL.

Hugo Lopez, M., & Gonzalez-Barrera, A. (2013). A growing share of Latinos get their news in English. Retrieved from http://www.pewhispanic.org/2013/07/23/a-growing-share-of-latinos-get-their-news-in-english/

Korzenny, F., & Korzenny, B. (2011). Hispanic Marketing: Connecting with the new Latino consumer (2nd Ed.). Burlington, MA: Elsevier Butterworth-Heinemann.

Kutchera, J. (2011). Latino link: Building brands online with Hispanic communities and content. New York, NY: Paramount Market Publishing, Inc.

Rohrer, C. (2009). Desirability studies. Retrieved April 27, 2012, from http://www.xdstrategy.com/blog

Williams, D., Kelly, G., & Anderson, L. (2004). MSN 9: New user-centered desirability methods produce compelling visual design. CHI '04 Extended Abstracts on Human Factors in Computing Systems, 959-974. ACM.

APPENDIX A

User-validated translation into Spanish of the Microsoft Product Reaction Cards Tool

