On becoming the next new smartphone: The life and times of educational innovations
We all tend to recognize innovation when we see it. It’s not just the development of new products and services. It’s the sort of thing that crosses long-standing boundaries and takes the field beyond where it has previously been. It introduces new ways of thinking about old problems, and may even bring to light issues never before considered. But as keynote speakers Sue Allen and Joan Ferrini-Mundy emphasized the first night of this year’s DR K-12 PI meeting, even this newness contributes to only part of the success of an innovation—the other part is sustainability. The sustainability of innovation underscored the theme of their presentation, Crossing Abysses. By abysses, Allen and Ferrini-Mundy meant the gap between funders, researchers, developers, and their audiences. The gap is not, as the recent PCAST report suggested, due to our emphasis on research at the expense of development. In fact, we do attend to issues of processes, models, and tools in development. Rather, as Allen and Ferrini-Mundy insisted, the problem is that we lack a language to describe how our development efforts get taken up at a wider level—how our impacts move beyond singular research publications and are sustained in the communities they intend to serve.
What may hinder us is that we consider ourselves foremost as researchers and educators. We value the more abstract impacts our products have on learning, on policy, and on infrastructure, and so we struggle to represent ourselves to a funding agency that instead views our ideas as financial investments. And when support for our ideas depends upon the promise and evidence of their commercial success, we come to face all the same challenges as commercial product developers. Suddenly, such terms as “marketability,” “product uptake,” and “customer loyalty” become relevant and, suddenly, we must reconcile these vastly different self-representations, a tension that, left unresolved, keeps our efforts from being recognized by those who would fund them and prevents us from actualizing their value.
Abyss number one: Communicating the value and success of our innovations to stakeholders.
To begin, Allen and Ferrini-Mundy suggested we must decide what innovation looks like in a program such as DR K-12, if it is not the same as commercial innovation. How do we demonstrate progress compared with our commercial relatives? And what should count as evidence of the success of a DR K-12 product, if it is different from that of a commercial product? As Allen and Ferrini-Mundy argued, we need a new language to describe our efforts, because terms such as promotion, uptake, and adoption are not appropriate for what we do. But I will return to this issue later.
Before Allen and Ferrini-Mundy spoke that evening, I found myself sharing a dinner table and conversation with two evaluators of technical support. If, as you read this, you share the same blank expression at this term as I had at the time, you can hardly be blamed. No doubt, all would agree that evaluation is to the success of a research and development project what assessment is to the success of student learning. But where researchers and developers tend to position themselves at the forefront of a project, evaluators typically work behind the scenes. They check that decisions are justified and that actions have the impact intended; and they suggest ways to improve on the next round, packaging it all in a neat report that regrettably few people beyond the project read. And where aspiring researchers have access to many public role models, evaluation is a profession one tends to fall into after a meandering string of pursuits. In other words, a career in evaluation is not typically sought with planned purpose but rather is discovered by chance.
Along the route to becoming an evaluator, perhaps there is an experience of frustration over a lack of accountability in the course of research and development projects. This might be followed by realizing the necessity of taking stock of progress and of delivering a clear story of lessons learned for posterity. Eventually, it dawns upon the would-be evaluator that substantial impacts on educational innovations can be made through evaluation, by overseeing their progress, ensuring that expensive decisions are cross-checked and justified, and disseminating reports so that future efforts can pick up where others left off. In fact, as my dinner companions confided, they wished more people were aware of what evaluators did and of how their skills could help improve research and development projects.
As it so happened, one of my two dinner companions was a new PI of a project evaluating the effectiveness of interactive whiteboards in mathematics classrooms in the UK. His major finding so far? The whiteboards were not being used. Short of actually causing damage, neglect is surely the worst fate an innovative idea can suffer.
This conversation, and the meeting’s theme, Crossing Abysses, set the tone of the next few days. In the evenings, I wandered between sessions and through the ballrooms, decked out to showcase promises of innovation—of social games that would teach a generation of more politically conscious consumers; of programs that would place the power of large datasets into the hands of middle school students; of hand-held data-capturing devices that would turn the world into a laboratory and encourage children to peer through scientists’ eyes. Here were the projects the NSF deemed worthy financial investments. And, certainly, the air around them was tangibly charged with hope and expectation, as veterans and newbies alike excitedly exchanged their visions for righting the wrongs in the world. But an investment is always a risk, and I could not help but also feel a tinge of uncertainty about the lifecycle of these innovative ideas. What, in two to five years’ time, would be their fates?
But back to my dinner companion. Perhaps, I suggested, the interactive whiteboards were not being used because the appropriate needs analysis was not done beforehand—those crucial extra steps taken to identify the end users’ needs and to determine how they might best be addressed. Perhaps the whole idea of interactive whiteboards was dangerously pushed by seductive new technologies rather than driven by any genuine need of the audience. In other situations, at least, this would explain the mushrooming of useless gimmicks rather than useful tools.
A needs analysis is one thing, the evaluator said, but another is that the benefits of the technology were not properly communicated to the teachers. Purchases were made by the techies of their departments, eager to try the next new thing. But teachers’ priorities are far more practical and their practices deeply ingrained in what’s worked in the past. Overwhelmed as they are by the everyday pressures of classroom teaching, they sometimes need help thinking outside the box. In teachers’ hands, these new toys were nothing more than expensive chalkboards, often not used at all.
This leads to abyss number two: Communicating innovation to end users.
Here, the difficulty for us researchers and developers is in balancing our place ever at the forefront of innovation, even as we respond to the everyday needs of our audiences. On this note, Allen and Ferrini-Mundy proposed gathering a group of teachers to serve as initial test beds for new products as they are developed. This way, we can be sure to align our ideas with the needs of our audience before too much effort is invested in trying to be the next new thing.
But if we’ve learned anything from the rapid spread of modern technology, it’s not simply that we don’t know what we need until we see it; it’s that sometimes, we must be taught to need it. Consider the current ubiquity of smartphones, microwave ovens, credit cards, and dishwashers. No one, protest now as they may, really needed these products. In fact, it’s not inconceivable that the initial reaction of some was resistance to accommodating these products into their habits. What we did need, however, was for marketers to tell us we needed them, and why. And now that we have them, it’s hard to deny that these products do make our lives better. Likewise, interactive whiteboards may very well be the next smartphone or dishwasher, as may be educational social games and hand-held data-collection devices designed for middle schoolers. The problem may be that teachers just don’t know it yet. Moreover, researchers and developers haven’t yet found a way to convince them.
This brings us back to abyss number one: The challenge of reconciling our dual identities as educational researchers and as product developers worthy of financial investment. Again, the solution proposed by Allen and Ferrini-Mundy was to find new terminology to distinguish what we do from commercial product developers. Indeed, appropriate language is important if our funding agencies are to better recognize the value of our innovations and the successes of our efforts. But while we like to believe that we have nobler priorities in supporting learning, we also face the same challenges as commercial product developers. Regardless of the terms we use, we must still market our products to consumers; we must persuade stakeholders of their worth; and we must promote their adoption among wider audiences. And so, even as we seek to distinguish ourselves as developers of educational rather than of commercial products, we may do well to also count the ways in which our efforts are similar. It may be that to succeed as commercial businesses do, we must think of ourselves in their terms.
Consider the potential: What if we took a page from companies such as Apple, and alongside theories of learning and instruction, we allowed our designs to be guided by principles of aesthetics and user experience? What if we promoted them according to product attachment theory? Would doing so really compromise the educational integrity of our work? Would an effort to market our products in the manner of commercial developers really amount to selling out? And if ultimately, these strategies result in such innovative ideas as interactive whiteboards becoming usefully integrated into teachers’ instructional practices, as well as in their sustained uptake among broader communities of learning, would these concerns really matter?
Even in considering these possibilities, it becomes clear that the skills of evaluators, such as those who were my dinner companions that first evening, are shamefully overlooked in our efforts. Evaluators may well be the ones to bridge the abysses between researchers, developers, stakeholders, and audiences. But perhaps we may all do well to come to the same realization as evaluators did when they first fell into their profession: to become aware of the broader reaches of our projects within and across communities and over time, as well as of our own places and roles within them. Not only might this help us find better ways to communicate our skills and contributions; it may also help us take advantage of the skills of others, learn from those embarked on similar ventures, and ensure the sustainability of our innovations.