We all tend to recognize innovation when we see it. It’s not just the development of new products and services. It’s the sort of thing that crosses long-standing boundaries and takes the field beyond where it has been before. It introduces new ways of thinking about old problems, and may even bring to light issues never before considered. But as keynote speakers Sue Allen and Joan Ferrini-Mundy emphasized the first night of this year’s DR K-12 PI meeting, even this newness accounts for only part of the success of an innovation—the other part is sustainability. The sustainability of innovation was the thread running through their presentation, Crossing Abysses. By abysses, Allen and Ferrini-Mundy meant the gaps between funders, researchers, developers, and their audiences. These gaps are not, as the recent PCAST report suggested, due to our emphasis on research at the expense of development. In fact, we do attend to issues of processes, models, and tools in development. Rather, as Allen and Ferrini-Mundy insisted, the problem is that we lack a language to describe how our development efforts get taken up at a wider level—how our impacts move beyond individual research publications and are sustained in the communities they are intended to serve.
What may hinder us is that we consider ourselves foremost as researchers and educators. We value the more abstract impacts our products have on learning, on policy, and on infrastructure, and so we struggle to represent ourselves to a funding agency that instead views our ideas as financial investments. And when support for our ideas depends upon the promise and evidence of their commercial success, we come to face all the same challenges as commercial product developers. Suddenly, such terms as “marketability,” “product uptake,” and “customer loyalty” become relevant, and suddenly we must reconcile these vastly different self-representations. Failing to reconcile them keeps our efforts from being recognized by those who would fund them and prevents us from realizing their value.
Abyss number one: Communicating the value and success of our innovations to stakeholders.
To begin, Allen and Ferrini-Mundy suggested we must decide what innovation looks like in a program such as DR K-12, if it is not the same as commercial innovation. How do we demonstrate progress compared with our commercial relatives? And what should count as evidence of the success of a DR K-12 product, if it differs from that of a commercial product? As Allen and Ferrini-Mundy argued, we need a new language to describe our efforts, because terms such as promotion, uptake, and adoption are not appropriate for what we do. But I will return to this issue later.
Before Allen and Ferrini-Mundy spoke that evening, I found myself sharing a dinner table and conversation with two evaluators of technical support. If, as you read this, that term draws the same blank expression from you as it did from me at the time, you can hardly be blamed. No doubt, all would agree that evaluation is to the success of a research and development project what assessment is to the success of student learning. But where researchers and developers tend to position themselves at the forefront of a project, evaluators typically work behind the scenes. They check that decisions are justified and that actions have the intended impact, and they suggest ways to improve on the next round, packaging it all in a neat report that regrettably few people beyond the project read. And where aspiring researchers have access to many public role models, evaluation is a profession someone tends to fall into after a meandering string of pursuits. In other words, a career in evaluation is not typically sought with planned purpose but rather is discovered by chance.
Along the route to becoming an evaluator, there is perhaps an experience of frustration over a lack of accountability in the course of research and development projects. This might be followed by a realization of the necessity of taking stock of progress and of delivering a clear story of lessons learned for posterity. Eventually, it dawns on the would-be evaluator that substantial impacts on educational innovations can be made through evaluation: by overseeing their progress, by ensuring that expensive decisions are cross-checked and justified, and by disseminating reports so that future efforts can pick up where others left off. In fact, as my dinner companions confided, they wished more people were aware of what evaluators do and of how their skills could help improve research and development projects.
As it so happened, one of my two dinner companions was a new PI of a project evaluating the effectiveness of interactive whiteboards in mathematics classrooms in the UK. His major finding so far? The whiteboards were not being used. Short of actually causing damage, neglect is surely the worst fate an innovative idea can suffer.
This conversation, and the meeting’s theme, Crossing Abysses, set the tone for the next few days. In the evenings, I wandered between sessions and through the ballrooms, decked out to showcase promises of innovation—of social games that would raise a generation of more politically conscious consumers; of programs that would place the power of large datasets into the hands of middle school students; of hand-held data-capturing devices that would turn the world into a laboratory and encourage children to peer through scientists’ eyes. Here were the projects the NSF deemed worthy financial investments. And, certainly, the air around them was tangibly charged with hope and expectation, as veterans and newbies alike excitedly exchanged their visions for righting the wrongs in the world. But an investment is always a risk, and I could not help but also feel a tinge of uncertainty about the lifecycle of these innovative ideas. What, in two to five years’ time, would be their fates?
But back to my dinner companion. Perhaps, I suggested, the interactive whiteboards were not being used because the appropriate needs analysis was not done beforehand—those crucial extra steps taken to identify the end users’ needs and to determine how they might best be addressed. Perhaps the whole idea of interactive whiteboards was pushed by seductive new technologies rather than driven by any genuine need of the audience. In other situations, at least, this would explain the mushrooming of useless gimmicks rather than useful tools.
A needs analysis is one thing, the evaluator said, but another is that the benefits of the technology were not properly communicated to the teachers. Purchases were made by the techies of their departments, eager to try the next big thing. But teachers’ priorities are far more practical, and their practices are deeply rooted in what has worked in the past. Overwhelmed as they are by the everyday pressures of classroom teaching, they sometimes need help thinking outside the box. In teachers’ hands, these new toys were nothing more than expensive chalkboards, often not used at all.
This leads to abyss number two: Communicating innovation to end users.
Here, the difficulty for us researchers and developers is in balancing our place at the forefront of innovation with the everyday needs of our audiences. On this note, Allen and Ferrini-Mundy proposed gathering a group of teachers to serve as initial test beds for new products as they are developed. This way, we can be sure to align our ideas with the needs of our audience before too much effort is invested in trying to be the next new thing.
But if we’ve learned anything from the rapid spread of modern technology, it’s not simply that we don’t know what we need until we see it; it’s that sometimes, we must be taught to need it. Consider the current ubiquity of smartphones, microwave ovens, credit cards, and dishwashers. No one, protest now as they may, really needed these products. In fact, it’s not inconceivable that the initial reaction of some was resistance to accommodating these products into their habits. What we did need, however, was for marketers to tell us we needed them, and why. And now that we have them, it’s hard to deny that these products do make our lives better. Likewise, interactive whiteboards may very well be the next smartphone or dishwasher, as may be educational social games and hand-held data-collection devices designed for middle schoolers. The problem may be that teachers just don’t know it yet. Moreover, researchers and developers haven’t yet found a way to convince them.
This brings us back to abyss number one: The challenge of reconciling our dual identities as educational researchers and as product developers worthy of financial investment. Again, the solution proposed by Allen and Ferrini-Mundy was to find new terminology to distinguish what we do from what commercial product developers do. Indeed, appropriate language is important if our funding agencies are to better recognize the value of our innovations and the successes of our efforts. But while we like to believe that we have nobler priorities in supporting learning, we also face the same challenges as commercial product developers. Regardless of the terms we use, we must still market our products to consumers; we must persuade stakeholders of their worth; and we must promote their adoption among wider audiences. And so, even as we seek to distinguish ourselves as developers of educational rather than commercial products, we may do well to also count the ways in which our efforts are similar. It may be that succeeding as commercial businesses do requires that we think of ourselves in their terms.
Consider the potential: What if we took a page from companies such as Apple, and alongside theories of learning and instruction, we allowed our designs to be guided by principles of aesthetics and user experience? What if we promoted them according to product attachment theory? Would doing so really compromise the educational integrity of our work? Would an effort to market our products in the manner of commercial developers really amount to selling out? And if ultimately, these strategies result in such innovative ideas as interactive whiteboards becoming usefully integrated into teachers’ instructional practices, as well as in their sustained uptake among broader communities of learning, would these concerns really matter?
Even in considering these possibilities, it becomes clear that the skills of evaluators, such as those who were my dinner companions that first evening, are shamefully overlooked in our efforts. Evaluators may well be the ones to bridge the abysses between researchers, developers, stakeholders, and audiences. But perhaps we would all do well to come to the same realizations evaluators did when they first fell into their profession—that is, to become aware of the broader reach of our projects within and across communities and over time, as well as of our own places and roles within them. Not only might this help us find better ways to communicate our skills and contributions; it may also help us take advantage of the skills of others, learn from those embarked on similar ventures, and ensure the sustainability of our innovations.
Participating in the 2010 DR K-12 PI meeting was an informative, inspiring, and unforgettable experience. I have never been so actively involved in an academic conference. I had numerous opportunities to interact, in meaningful ways, with professionals from diverse research backgrounds in STEM education. What I heard and saw at the meeting has pushed me to think hard about the meaning and role of knowledge, language, and the learner in STEM education research projects.
First, I wonder how we should view and treat the existing knowledge generated by our research. What is our most current knowledge about STEM education? How many kinds of knowledge have been explored? To what extent does this body of knowledge have a positive influence on real teaching and learning? How should researchers and policymakers alike bring this knowledge to bear on challenges that arise within particular social, cultural, and historical contexts? How should teacher preparation and education programs utilize this knowledge?
Second, I wonder what would be the fairest way to view and treat children who are not yet proficient in English in STEM research studies. How we view these children can significantly affect why we include them in our research and how the findings will be interpreted and implemented. Throughout the history of education, a variety of terms have been used to describe or characterize these students, such as limited English proficient (LEP), language-minority, culturally and linguistically diverse (CLD), English as a second language (ESL), second language learners (SLLs), heritage language speakers, bilinguals, and emerging bilinguals. In NSF-funded projects, they are usually referred to as English language learners (ELLs). However, the term ELLs carries negative connotations. It consists of “English” and “learners,” which tends to lead people to think of ELLs as “one-dimensional on the basis of their limited English proficiency” (Short & Echevarria, 2004/2005, p. 9) and simultaneously to ignore the fact that these children are usually emerging bilinguals, representing myriad national, cultural, and linguistic backgrounds with a huge range of abilities and needs.
Recently, I attended the DR K-12 PI meeting in Washington, D.C., and was surprised to hear so much controversial discussion surrounding the development and use of learning progressions. In my work at Michigan State University, I have been immersed in learning about what learning progressions are, how they can be used by teachers and teacher educators, and their potential to positively impact students’ learning—all in science education. For the most part, the conversations I have been a part of (up until this point) have always cast learning progressions in a positive light. However, I have started to wonder about their potential drawbacks and limitations.
To be upfront, I am much enamored of the research that shows how learning progressions can be used by teachers to pinpoint students’ understanding of various science concepts and to assess their own teaching strategies and behaviors. The development of learning progressions for various science topics and concepts seems to be a promising way to help teachers, especially those with little science background or who teach multiple science topics, learn about how students think and to point toward productive directions in which to move students (the same can be said of teacher learning progressions). However, I heard many conversations in which participants asked whether there is even such a “thing” as a learning progression—that is, are there really patterns and stages in how students think about particular scientific ideas (or in how teachers learn to enact specific teaching strategies or methods)? Does the notion of learning progressions even make sense, or is there something else that would more accurately represent students’ thinking and teachers’ practice? If so, what might that look like?
I look forward to continued conversations on this matter and hope to hear from others who have been pondering similar questions.
A primary goal of education research is to inform and benefit the everyday work of practitioners; yet it is often hard to establish and maintain partnerships between the two groups. What is so difficult about building relationships to bridge this divide?
Teachers and administrators have demanding schedules and responsibilities, so it is hard for schools to devote the time and energy needed to work with researchers. However, by establishing partnerships, schools can gain invaluable resources that can ultimately help them do their jobs more effectively.
Researchers want to be able to control situations as much as possible. However, by working completely in context, and thus relinquishing control, researchers gain the opportunity to situate their problems and come to a realistic understanding of possible solutions. Great difficulty in partnerships arises when the balance between the parties is lost, or when balance never had a chance to exist in the first place. What elements are essential to keep in mind before, during, and after a practitioner/researcher partnership?
Before: Take steps to ensure that the relationship and project are feasible and realistic. When writing a proposal, you should essentially already be doing the work. Consider whether the participants are ready for change, and whether there is infrastructure that has the potential to support and sustain the work. Examine the relationship structures that are established at various levels of the project to determine how durable they are and the extent to which the project can move forward if structures change.
During: Although most partnership research work physically takes place in schools, the onus is not on the schools to drive the project; researchers must take responsibility for adapting to the existing atmosphere. Furthermore, it is the obligation of the researcher to demystify the research process: to be clear about what the plan is and what the possible outcomes are. Consequently, practitioners should be able to benefit from their participation rather than feel as though they are being used by the researcher. Although it is hard work, by being explicit in their communication, by keeping organized documentation of progress, and by avoiding assumptions, researchers and practitioners can move forward together productively.
After: As the research project comes to a close, the partnership does not dissolve with it. Practitioners are looking both for closure through reflection and for a sense of what the next steps are. Researchers should be clear in communicating what follow-up they will provide and whether there are future projects for the partnership. Sustainability, scale, and dissemination of the resulting knowledge or product of the project are to be considered at every stage of the research process.
Partnerships must be developed over time, at a local level, through trust in each other. What have you learned from your experiences in partnerships with schools? Are there components that are necessary in particular conditions? By purposefully examining the relationships situated within our work, meaningful partnerships can be developed to build the bridge between researchers and practitioners.
This response was informed by the presentations and discussion at the session on “Fostering Knowledge Use in STEM Education through R&D Partnerships with Schools and School Districts” at the DR K-12 PI Meeting on December 2, 2010.