User-centricity, or putting users at the center of the decisions an organization makes, has increasingly gained currency in recent years [1]. It is a product both of the spread of agile methodologies and of the need to navigate a clearer path in increasingly uncertain times [2]. Indeed, a number of macro trends reinforce the need to transform organizational culture towards user-centricity, including increasingly well-informed customers and accelerating advances in technology itself [3]. Jeff Bezos, founder of e-commerce giant Amazon, famously said: “The most important single thing is to focus obsessively on the customer” [4]. Yet the academic literature is filled with reasons why organizations face barriers to becoming more user-centric [5, 6], including product-centricity and organizational silos. In previous blogs, Arm Education have outlined the launch of our Massive Open Online Courses (MOOCs) on edX [7, 8]. One of our key goals from the outset was to create user-centric MOOCs. So how did we bring users into the heart of the MOOC production process to create a great digital experience for our students and the technical communities of practice learning on Arm? Which barriers did we face? In this blog I go behind the scenes to explore how we put users at the center. This includes hearing the user perspective from Ashley Bailey, one of our regular course beta testers. Ashley is also at the center of this blog.
For us, the main barriers to user-centricity lay less with the product itself and more in making the right decisions about exactly how to implement it. Working out how to make our MOOCs user-centric was daunting. There were many choices to be made about the sample of users we should talk to and the methodology to use. Indeed, who were our key customers? As Fader argues in his book Customer Centricity [9], being customer-centric means thinking through who your most important and valuable users are at a strategic level. From an educational perspective, this meant understanding not only where our main markets are situated but where we would like them to be, both geographically and socially. Diversity, equity, and inclusion had a strong part to play in the choices we made around our sample of testers and the methodology we employed.
In terms of our panel of testers, we were conscious of recruiting with diversity in mind. Social axes of differentiation were important, particularly representing gender, ethnicity, and age diversity. Arm Fellow Teresa McLaurin has previously written on the entrenched case of gender imbalance, noting that only 27% of STEM roles are filled by women, according to the US Census Bureau [10]. It was important not to reinforce any barriers or create imbalanced views about who the course was for: you need to see it to be it. Our sample of testers therefore had gender diversity and included people at varied career stages – students, early- and later-career professionals, and academics – cross-cutting different ages. It was important to understand perceptions of the course from various positionalities: people from different backgrounds were likely to spot different issues, because they had skin in the game and saw the course from their own point of view. In terms of geographies, we knew from previous market research that our main markets were in the US, UK, and India. But we were keen to expand from these core areas to emerging economies in South America and Africa, and in doing so to broaden the diversity of our sample in terms of ethnicity. We also wanted to understand any barriers to entry created by geographical differences, from internet speeds and how they might affect the way our course simulators run, to the availability of the hardware on our bills of materials for lab work. Perceptions of the courses from inside Arm as well as outside were also important, so we included early-career Arm employees too.
Typically, 15-30 user testers across these diverse axes of identity and geography would test every course in a closed beta test before it was released. The notion of a closed beta test might suggest a straightforward testing of courses [11] with a new set of beta testers every time, at the final stage of production where issues are fixed before a more general release. But our beta testing became more involved than that. While user testers certainly spotted issues, they also brought insight from two areas: their positionalities of identity and geography, and their ongoing relationship with us at Arm Education. Tuominen et al. [12] (2022: 3) argue: “focusing solely on identifying and satisfying customer needs may not be enough; instead, firms should also pay attention to the culture, values, and activities that enable them to build long-lasting customer relationships.” These user testers did not review one course but stayed with us for multiple courses – building a long-lasting relationship with us. This had wider, positive implications.
In terms of our methodology, a call would go out to about 30-40 user testers to support beta testing. The time required for testing was not inconsiderable, often 15-20 hours per course. Invariably some people would not be able to help at any one time due to work, study, or family commitments. But some testers regularly committed to beta testing every course, and others would come back to our user research when time allowed and a new course was released. I started to call them the ‘long-term user testing panel.’ As a seasoned social scientist, I had not quite anticipated this – I had often been involved in projects where engagements were a one-off, but this had become more like a longitudinal study, just with different courses each time. In terms of beta testing MOOCs, what did it mean to have this long-term relationship? Traditional beta testing would typically go out to a brand-new cohort each time, often because different researchers were undertaking the research. At a practical level, the long-term relationship meant we could often rely on certain user testers to provide a particular perspective or have a certain focus. Some specialized in checking our labs in minute detail; others centered on reviewing our videos for issues with language or animation; others focused on accessibility and navigation – were there barriers to the course for those with disabilities or who are neurodivergent? It also tended to mean that user testers ‘invested’ more in their reviews over time. They could see that we took their comments seriously and made changes to the MOOCs, observe those modifications in the live courses, and so become stakeholders in the process. In other words, we closed the feedback loop [13]. They began to build up a view of what could be expected from our courses and would compare and contrast between them. They would tell us how the courses differed, what their USP was, or which additional aspects we might build on in the future. In short, they went from being beta testers to co-creating our MOOCs with us.
The research instrument that we used was a traditional issue log. It captured the type of each issue and its importance, as well as a timestamp so we could go back and look at it in detail (see figure 1). This was a very practical way of picking up issues. The logs were handed over to a colleague project-managing content production who had no direct engagement with the user testers; distance from the testers was important so that everyone’s comments were treated equally. The issues were then evaluated using the MoSCoW prioritization technique (must have, should have, could have, will not have), famously developed by Dai Clegg for product releases while he was working at Oracle [14].
Figure 1: MOOC issue log: Issue type and details identified by user testers
The 'Musts' and 'Shoulds' were fixed in the course, while the 'Coulds' were considered if there was time or, if not, noted for the next course update where they involved wide-reaching changes. Figure 2 shows the level of detail picked up in the MoSCoW prioritization exercise from testers, and co-creation in practice.
Figure 2: MoSCoW prioritization: Issue logs amalgamated together into a prioritized list
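To make the triage step concrete, the short sketch below shows, purely as an illustration in Python, how entries from individual issue logs can be amalgamated and ordered by MoSCoW priority. The field names, tester IDs, and example issues are hypothetical; our actual logs and prioritized lists are the ones shown in figures 1 and 2.

```python
from dataclasses import dataclass
from enum import IntEnum


# MoSCoW priorities, ordered so that sorting ascending puts 'Must' items first.
class Priority(IntEnum):
    MUST = 1    # must have: fixed before release
    SHOULD = 2  # should have: fixed before release where possible
    COULD = 3   # could have: fixed if time allows, else noted for the next update
    WONT = 4    # will not have (this time)


@dataclass
class Issue:
    tester: str       # anonymized tester ID (hypothetical)
    location: str     # e.g. module, lab, or video timestamp
    issue_type: str   # e.g. "typo", "lab", "accessibility", "navigation"
    description: str
    priority: Priority


def prioritize(issues):
    """Amalgamate individual issue logs into one list ordered by MoSCoW priority."""
    return sorted(issues, key=lambda issue: issue.priority)


# Example: entries from two testers' logs merged and triaged.
logs = [
    Issue("tester_07", "Module 1, lab 2", "lab",
          "Board not detected on slower USB hubs", Priority.MUST),
    Issue("tester_12", "Module 3, video 1 @ 02:15", "typo",
          "'registor' should read 'register'", Priority.SHOULD),
    Issue("tester_07", "Module 4 quiz", "enhancement",
          "Add a stretch exercise on interrupts", Priority.COULD),
]

for issue in prioritize(logs):
    print(f"[{issue.priority.name}] {issue.location}: {issue.description}")
```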
The second research tool we used was a more straightforward survey (see figure 3), aimed much more at capturing high-level thoughts, sentiment, and experiences of the course, including barriers. If we saw issues repeating across a number of issue logs, we could begin to see how they may have affected course perceptions across the testers.
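As a purely illustrative companion to the sketch above, again with hypothetical tester IDs and issue types, a simple tally is enough to surface issue types that recur across several testers' logs and are therefore worth cross-checking against the survey responses.

```python
from collections import Counter

# Hypothetical per-tester issue logs, reduced to the issue types recorded.
issue_logs = {
    "tester_03": ["typo", "lab", "navigation"],
    "tester_07": ["lab", "lab", "typo"],
    "tester_12": ["accessibility", "lab"],
}

# Tally issue types across all logs: anything recurring across several testers
# is a candidate for comparison with the high-level survey responses.
counts = Counter(t for log in issue_logs.values() for t in log)
for issue_type, n in counts.most_common():
    print(f"{issue_type}: reported {n} time(s)")
```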
Testers were sent hardware a few days before the test started to support user testing of the lab exercises. User testing was typically conducted over a couple of weeks, and user testers stayed in touch by email with any queries or sticking points.
Figure 3: High-level survey on the course experience
After the testing was complete, external user testers were given a voucher as a gesture of thanks for their time. Testers from inside Arm were given feedback and digital badges in recognition of their efforts. But it was also clear that both external user testers and those inside Arm were gaining something more from the experience: vouchers and feedback were not their prime motivation for continuing to participate as long-term testers.
One of our most dedicated user testers came back time and again to test our courses – testing not only the course content itself but often the labs in depth using hardware. This tester was Ashley Bailey, and his work epitomizes how we put users at the center of our MOOC production. Ashley not only spotted issues in the courses but had ideas on what would improve the learning experience. The long-lasting relationship we had with Ashley as a user tester meant we closed the feedback loop. We could also recognize Ashley’s efficacy as a user tester and validate his knowledge in turn: Ashley regularly recorded 80-90% of all the issues spotted by the other user testers in the long-term testing panel across our courses. We now put Ashley at the center of this blog to give his perspective on the user testing process for our MOOCs, his motivations, and his experience in co-creation. Indeed, one of the key barriers to implementing user-centricity is often keeping user testers motivated. To do this, we constructed user testing itself as a learning process. Ashley took this to the next level by using testing as part of his CPD activities and CEng application.
When considering my experience of user testing Arm Education’s online courses, three main topics come to mind: 1) my motivation for being a user tester, 2) the practicalities of user testing, and 3) (perhaps selfishly) what I gained from the experience.
Having done edX and similar MOOCs before, the inquisitive engineer in me was interested to see part of the process of how MOOCs are developed. This, along with the course topic being an area of interest, was my main motivation for offering to be a user tester for the first time.
After user testing my first course, my initial motivations remained, but other factors also contributed to my decision to user test additional courses. The request to help with subsequent courses was itself a motivation since – as Becky discusses above – the repeated invite indicated my review comments were valued and must have been useful if they were being sought again. Additionally, as the topics align with my technical skills, user testing the courses formed part of that year’s CPD (continuing professional development) commitment. This came from studying the material in the courses, taking time to reflect on it, and considering what others would gain from the course.
The practicalities of user testing are perhaps less exciting but key to my experiences of it. Communication was by email, with any hardware – where needed for labs – arriving in the post a few days before the review started (see figure 4). From my perspective, this was easy and added little overhead to the logistics of user testing.
Figure 4: User testing with hardware
I would work through the course being reviewed with my issue log open, recording any typos I spotted and descriptions that could be improved. Where my technical skills allowed, I added any technical comments I thought might be useful. Generally, comments would all be sent back at the end of the user testing, though I would flag some issues early if they made reviewing the rest of the material difficult for all the user testers. I would pass these comments on and the course team would get back with a fix or clarification, which allowed me to review the rest of the material fully. I would do the user testing in evenings and at weekends – sometimes while balancing a computer during a room redecoration, around moving boxes during a house move, or between anything else going on at the time. This is probably how actual users take the courses themselves, so I hoped it added value through the perspective it brings.
From my experience of being a user tester, what have I gained from it so far? The motivations I discussed have been realized, so I have gained an understanding of user testing and MOOC development. I have gained an understanding of others' technical skills by looking at material people are using to develop their skills in this area. As a CPD activity I included user testing as part of my professional registration / CEng application. Many of the courses I have user tested have covered microcontrollers, a topic I am familiar with but use less now than I have previously as my career has developed. My participation in user testing courses in this topic has helped refresh my understanding here and slow 'skills-fade' on the topic. And, finally, knowing I have helped improve courses others are using in real-world settings – and perhaps globally – is itself a reward.
Our user testers took us by surprise. Rather than having one-off engagements with Arm Education, many of our research participants went on to build ongoing relationships with us. They did not just beta test our MOOCs but co-created them with us – they were developers. The answer to how we created a great digital experience for our students and the technical communities of practice learning on Arm involved getting to know our audience better and letting them get to know us better. This was a long-term, two-way relationship that continued to develop. Indeed, rather than the long-term testing panel staying static, testers also began to recommend other people in their social networks with an interest in Arm’s domain and ecosystem, in a snowball approach [15]. The network widened. And as testers began to ask for more resources and information, the network began to deepen. This culminated not only in educational enablement but also in research enablement, as one of our user testers introduced his university to Arm Academic Access.
The most daunting aspect of implementing user-centricity in the creation of our MOOCs – the how of implementation – was largely resolved by the users themselves and their long-term engagement with Arm Education. One of the key messages of this blog is to let your users lead. We continue to watch and explore how we create materials together and how user testers continue their own learning development through the process. Rather than focusing purely on user needs as part of user-centricity, it is increasingly important to think beyond them – towards user relationships. Our next chapter is moving beyond user testing of MOOCs towards seeding communities of practice around our educational materials, with the help of user testers. We look forward to our next co-creation and the serendipity it brings.
1. Hemel, C. van den, & Rademakers, M. F. (2016). ‘Building Customer-centric Organizations: Shaping Factors and Barriers.’ Journal of Creating Value, 2(2), 211–230.
2. Anon (2020). ‘Lockdown Unlocked: understanding consumer habits in a time of crisis.’ Available from: https://www.paconsulting.com/insights/lockdown-unlocked-understanding-consumer-habits-covid-19/
3. Shah, D., Rust, R. T., Parasuraman, A., Staelin, R., & Day, G. S. (2006). ‘The Path to Customer Centricity.’ Journal of Service Research, 9(2), 113–124.
4. Hyken, S. (2018). ‘Amazon: The Most Convenient Store On The Planet.’ Available from: https://www.forbes.com/sites/shephyken/2018/07/22/amazon-the-most-convenient-store-on-the-planet/?sh=7242fee01e98
5. Hemel, C. van den, & Rademakers, M. F. (2016). ‘Building Customer-centric Organizations: Shaping Factors and Barriers.’ Journal of Creating Value, 2(2), 211–230.
6. Shah, D., Rust, R. T., Parasuraman, A., Staelin, R., & Day, G. S. (2006). ‘The Path to Customer Centricity.’ Journal of Service Research, 9(2), 113–124.
7. Iannello, R. (2020). ‘Our first course on edX - Embedded Systems Essentials with Arm: Getting started.’ Arm Community. Available from: https://community.arm.com/education-hub/b/robert-iannello/posts/arm-edu-edx-embedded-systems-essentials
8. Iannello, R. (2022). ‘Announcing our online course on edX: Business Models for Technology Innovators.’ Arm Community. Available from: https://community.arm.com/education-hub/b/robert-iannello/posts/edx-business-models-for-technology-innovators
9. Fader, P. (2011). Customer Centricity: Focus on the Right Customers for Strategic Advantage. University of Pennsylvania Press.
10. McLaurin, T. (2022). ‘Tipping Tech’s Gender Imbalance.’ Arm Blueprint. Available from: https://www.arm.com/blogs/blueprint/tech-gender-imbalance
11. Anon (n.d.). ‘Beta Test: What is Beta Testing?’ Available from: https://www.productplan.com/glossary/beta-test/
12. Tuominen, S., Reijonen, H., Nagy, G., Buratti, A., & Laukkanen, T. (2022). ‘Customer-centric strategy driving innovativeness and business growth in international markets.’ International Marketing Review, ahead-of-print. https://doi.org/10.1108/IMR-09-2020-0215
13. Anon (n.d.). ‘Validating Your Beta Testers by Closing the Feedback Loop.’ Available from: https://www.codemag.com/Article/1604002/Validating-Your-Beta-Testers-by-Closing-the-Feedback-Loop
14. Anon (n.d.). ‘MoSCoW Prioritization.’ Available from: https://www.productplan.com/glossary/moscow-prioritization/
15. Parker, C., Scott, S., & Geddes, A. (2019). ‘Snowball Sampling.’ In P. Atkinson, S. Delamont, A. Cernat, J. W. Sakshaug, & R. A. Williams (Eds.), SAGE Research Methods Foundations. Sage Publications.