Composition Forum 35, Spring 2017
http://compositionforum.com/issue/35/
Down the Rabbit Hole: Challenges and Methodological Recommendations in Researching Writing-Related Student Dispositions
Abstract: Researching writing-related dispositions is of critical concern for understanding writing transfer and writing development. However, as a field we need better tools and methods for identifying, tracking, and analyzing dispositions. This article describes a failed attempt to code for five key dispositions (attribution, self-efficacy, persistence, value, and self-regulation) in a longitudinal, mixed methods, multi-institutional study that otherwise successfully coded for other writing transfer factors. We present a “study of a study” that examines our coders’ attempts to identify and code dispositions and describes broader understandings from those findings. Our findings suggest that each disposition presents a distinct challenge for coding and that dispositions, as a group, involve not only conceptual complexity but also cultural, psychological, and temporal complexity. For example, academic literacy learning and dispositions intersect with systems of socio-economic, political, and cultural inequity and exploitation; this entwining presents substantial problems for coders. We explore methodological considerations for understanding the complexity of codes, coding dispositions effectively and accurately, attending to these four complexities, and understanding the interplay between the individual and the social. We describe how concepts from literacy studies scholarship may help shape writing transfer scholarship concerning dispositions and transfer research more broadly.
I. Introduction
The question of what mechanisms facilitate writing development and students’ ability to apply, adapt, remix, or otherwise transfer writing skills, experiences, and knowledge is critical. The role of student dispositions requires further study, as previous scholarship suggests that dispositions strongly influence learning and subsequent transfer (Driscoll and Wells; Wardle; Slomp). Despite dispositions’ potential importance, writing studies scholars have not yet developed valid, reliable methods for measuring them and their role in writing development, due to their complexity, measurement challenges, and newness as an inquiry area. As our study reveals, the nature of this complexity raises methodological challenges that require careful integration of key theoretical constructs into a reflective methodological approach that highlights the potential and limitations of empirical research on dispositions’ role in writing development. Our results suggest that dispositions offer not only conceptual complexity, but cultural, psychological, and temporal complexity as well, complexities that fundamentally alter how we define, measure, code, and ultimately understand dispositions.
This article is based on our repeated attempts—and failures—over two years to code and identify dispositions in a large, multi-institutional, longitudinal dataset. These repeated failures led us “down the rabbit hole” in conducting a systematic study of what went wrong so we could better understand methodological challenges and develop strategies for studying dispositions. Following others reporting on “failed studies” (Conard-Salvo and Spartz; Kjesrud and Wislocki), we recognize that such failure supports deep learning and replicable, aggregable, data-supported research (Haswell). We resist the narrative that good research is always successful research—rather, good research informs future studies, and in this regard, our article has much to contribute. Therefore, we address the following questions (three informed by our data, one addressed in our discussion):
- Can we operationalize sufficiently precise definitions of dispositions to enable researchers to identify them accurately and consistently?
- Can we define dispositions with enough precision that coders not deeply versed in dispositions research can apply them with acceptable reliability?
- What do coders’ error and accuracy patterns teach us about how to operationalize and code dispositions more effectively?
- Of the dispositions constructs imported from other fields, which are useful for writing studies and which should be revised or replaced?
By analyzing the challenges we encountered, we develop recommendations for scholars investigating how student dispositions shape both initial learning and transfer of writing knowledge, and we provide insights into the complexity of dispositions themselves.
II. Dispositions, Learning, and Transfer
The terms used to define and describe dispositions, or internally held qualities that impact a student’s learning, are diverse—habits of mind, intrapersonal factors, individual features, and intelligent behaviors. We chose the term “dispositions” because previous writing transfer scholarship within and outside the field uses it. For example, Perkins et al. describe dispositions as “not only what people can do but how they tend to invest their capabilities—what they are disposed to do, hence the term dispositions” and suggest that dispositions are one form of intelligence (270-271). Bronfenbrenner and Morris see dispositions as central to human development, arguing that dispositions are the “precursors and producers” of later growth (810). In discussing how best to promote critical thinking and transfer, Halpern separates the disposition, or willingness, from the ability to engage (452). Even if one possesses the ability, if s/he is unwilling to engage, transfer will not occur; this (un)willingness is inherently dispositional.
Dispositions also shape knowledge transfer and lifelong learning. Several models of transfer stress that dispositions, not just knowledge, facilitate transfer. Bransford and Schwartz propose a “preparation for future learning” approach to transfer, which identifies student dispositions (e.g., toward asking generative questions and toward investigating relevant resources) that support adapting current knowledge to undertake new learning tasks in unfamiliar contexts. Perkins and Salomon similarly suggest that students must detect, elect, and select relevant knowledge to transfer. These studies suggest that to understand writing transfer, we must also understand students’ dispositions.
III. Scholarship on Dispositions and Writing
Writing researchers have demonstrated dispositions’ importance in transfer, although early transfer research often used other terms. As Driscoll and Wells note, early seminal studies (Wardle; Bergmann and Zepernik; Beaufort) often describe phenomena we recognize as writing-related dispositions that shape transfer, or lack thereof. Wardle, for example, discusses how students “did not perceive a need [and so weren’t disposed] to adapt” writing skills from first-year writing (FYW) to other contexts. Other seminal studies, when viewed through a dispositional lens, illuminate dispositional perceptions or behaviors that shape writing transfer.
Further, broad interest in dispositions is growing. For example, the Framework for Success in Post-Secondary Writing, published in 2011, emphasizes eight habits of mind that support writerly development. These habits parallel dispositions described by Perkins et al. and include curiosity, openness, engagement, creativity, persistence, responsibility, flexibility, and metacognition. Writing after the Framework was released, Slomp suggests that “failure to consider the role that intrapersonal factors [like dispositions] play in the transfer process can cloud our ability to assess underlying barriers to transfer” (84). Composition Forum’s 2012 special issue on writing transfer also explored dispositions. Wardle argues that “problem-solving dispositions” encourage “creative repurposing” of existing knowledge in new settings, while “answer-getting dispositions” discourage transfer. Driscoll and Wells operationalize definitions of four student dispositions (attribution, self-efficacy, self-regulation, and value) that impact writing transfer. Further, recent writing transfer scholarship acknowledges the fundamental importance of dispositions; for example, Wardle and Adler-Kassner’s Naming What We Know includes several concepts directly linked to students’ identities, prior knowledge, and metacognitive self-understanding (50-75).
As writing transfer research matures, we’ve recognized that studying writers’ texts doesn’t fully explain why students produce these texts, how students engage with learning situations, or how they use (or fail to use) prior writing knowledge. Given dispositions’ role in shaping writing development, we must investigate them to understand what motivates writers. A fuller picture of writing transfer may include the context (curriculum, writing program); the courses (instructor, interactions in course, classroom community); the texts (genre, rhetorical situation, affordances and constraints); and the writers (dispositions, experiences, prior knowledge, external influences). Dispositions, then, form a single but important piece of the complex puzzle that depicts the mechanisms behind writing development and transfer. Thus despite the inherent challenges in operationalizing definitions precise enough to enable quantitative investigation of how particular dispositions correlate with writing-related behaviors, producing such definitions is key to learning how these dispositions influence writing development.
IV. Five Key Dispositions Addressed in this Study
To lay the groundwork for the more nuanced discussion of dispositions below, here we present the five dispositions we sought to study, including abbreviated definitions from our coding glossary and brief explanations of their importance in writing development. We drew codes and concepts primarily from Driscoll and Wells, who explored attribution, self-efficacy, self-regulation, and value; to this list, we added persistence (from the Framework). We also explore two qualities of dispositions: generative and disruptive. As Driscoll and Wells note, psychologists studying dispositions define disruptive dispositions as those that inhibit learning success and generative dispositions as those that facilitate such success. In the examples below, some dispositions disrupt learning and writing processes while others facilitate such processes.
Attribution, or locus of control, refers to how a student attributes the “cause” of events or outcomes (like a grade) to herself or to external factors (Weiner). A student with external locus of control attributes success or struggle to an external factor, like a teacher or tutor. A student with internal locus of control attributes success or struggle to herself. A student who earned a poor grade and blamed the teacher for the grade (external) contrasts with one who recognized the role of his procrastination (internal). Appropriate attribution of responsibility for outcomes is essential to writing growth. Students who typically blame others for failures cannot identify and address behavioral changes that could increase success. Conversely, students who view failures beyond their control as resulting from their own inadequacies may underestimate their potential to succeed, rather than recognizing that they could perform effectively in more favorable circumstances.
Persistence is the ability to continue despite adversity; in student writing, it appears as an articulation of either overcoming a difficulty or hardship or succumbing to the challenge and giving up (Framework). For example, a student having difficulty finding the right sources chooses to go to the library and meet with a librarian (persisting through the difficulty) rather than giving up and relying on Google sources that do not meet the assignment requirements. Persistence supports writing development by enabling writers who face unfamiliar tasks or struggle with aspects of writing to seek help, to make repeated efforts to address difficulty areas, and to pursue multiple avenues if needed. Without persistence, students may try to avoid difficult areas, thus foregoing the writing growth that results from addressing such areas.
Self-efficacy is the relationship between a student’s beliefs about his capability and the likelihood that he will take the steps needed to achieve a goal (Bandura). For example, if a student believes she is a “bad writer” and lacks the ability to complete her 10-page paper, she is undermined before even beginning. Self-efficacy fosters writing growth because belief in one’s capacity to perform a task—or to overcome challenges associated with a task—is a necessary foundation for persistence, for assuming responsibility through internal locus of control, and for effective self-regulation (defined below). Without self-efficacy, students tend to avoid difficulties, to blame others for failures, and to eschew learning-related behaviors that promote writing growth.
Self-regulation is the ability to monitor, revise, and improve one’s writing-related behaviors and strategies (Zimmerman). For instance, recognizing that she cannot write well in a noisy setting, a student moves to the quiet library. Self-regulation facilitates writing growth by enabling students to identify and implement behavioral changes designed to improve their chances for success, e.g., by increasing focus, re-examining key material or concepts, or using available resources. Students unable to effectively self-regulate are less likely to grow as writers. For instance, a student with strong cognitive capacities who does not allow sufficient time to read and consider multiple texts when drafting her first synthesis paper is unlikely to develop the new abilities requisite to synthesis.
Value is how much positive or negative meaning is attributed to specific learning experiences or activities (Wigfield and Eccles). If a student drafting an end-of-term reflection sees it as “pointless busywork,” the reflection’s value dissipates. Because value determines students’ engagement with tasks, concepts, and skill acquisition, it crucially structures writing growth. A student who believes he will never need to write analytically probably won’t invest the intellectual energy required to develop the relevant capacities.
V. The Story Begins: Our Challenges in Studying Dispositions
Here we describe our study of the five dispositions listed above. The Writing Transfer Project (WTP) examined which of many potential factors contributed to students’ long-term development and transfer of writing knowledge. While the WTP methods successfully generated relevant findings in other areas of the study (Gorzelsky et al.), when applied to dispositions, these methods failed. That is, despite our use of strong theoretical models and previous data-supported findings on dispositions, our research methods proved inadequate for investigating how dispositions shape writing transfer, even though these same methods produced useful findings concerning other transfer factors (such as the role of genre awareness, writing knowledge, or metacognition). Given the complex nature of dispositions, we think that our “study of a study” illuminates the complexities of coding dispositions and of adapting what we know qualitatively to mixed-methods, multi-institutional research.
Given the substantial role dispositions play in determining the quality and extent of writers’ growth, to understand such growth, we must investigate how dispositions operate. Because dispositions govern behaviors, it is crucial to identify any correlations between the dispositions defined above and students’ writing-related behaviors and evidence of writing growth. Identifying such correlations will offer insights into where instructors might usefully target interventions designed to address dispositions. Yet as we show in this section, investigating possible correlations is quite challenging. Pursuing such investigations requires quantitative measures. Producing these measures demands operationalized definitions precise enough that coders can achieve at least 80% agreement on what counts as evidence of a disposition. Without such agreement, an accurate count of evidence of specific dispositions is unattainable, making it impossible to search for correlations between specific dispositions and writing-related behaviors.
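To make this threshold concrete, the sketch below computes simple percent agreement between two coders over a set of text segments, the statistic behind an 80% agreement check like ours. This is a minimal illustration in Python; the coders’ decisions shown are hypothetical, not study data.

```python
# Minimal sketch: simple percent agreement between two coders.
# The segment decisions below are hypothetical illustrations, not study data.

def percent_agreement(coder_a, coder_b):
    """Share of segments on which two coders made the same coding decision."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# One entry per segment: the disposition code assigned, or None for "no code."
coder_a = ["generative value", None, "disruptive self-efficacy", "persistence", None]
coder_b = ["generative value", "generative value", "disruptive self-efficacy", "persistence", None]

print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 80% -- meets the threshold
```

As the discussion below shows, even this simple calculation presupposes that coders agree on what counts as a codable segment, which proved to be precisely the problem.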
Initially, we believed that coding for dispositions would be like coding for other writing growth constructs, e.g., genre knowledge, use of sources, or writing processes. However, as we illustrate, we found that while the other constructs we coded involve conceptual complexity, dispositions entail not only such conceptual complexity but also cultural, psychological, and temporal complexities. We recount our failed efforts to define the five dispositions listed above precisely enough that graduate student coders from across disciplines, with well-planned, focused training but unfamiliar with dispositions literature, could achieve at least 80% agreement on whether specific textual moments in students’ written reflections evidenced one of the five dispositions.
In seeking such agreement, we do not claim that any such single articulation, or even a series of such articulations across a semester, indicates that the student writer enacts a particular disposition consistently across time and contexts. As we explain below, dispositions change across contexts and, in some cases, across time. They are ambiguous, intersecting in complex and shifting ways, depending on local and larger contextual factors. We do not seek to define individual students’ characteristic dispositions, an endeavor far beyond the scope of our study. Instead, we seek to identify dispositional moments that occurred during writing courses, as articulated by students in their written reflections, and to tie those dispositional moments to students’ texts. Yet even this more modest goal proved difficult to attain. Thus we summarize our original study’s goals and methods to discuss how the study derailed.
Original Study Goals and Dataset
The broader WTP goal was investigating which transfer-related factors, identified by previous research, predicted writing development and transfer over a two-year period. In this investigation, we had hoped to learn how often dispositions appeared in student reflections during their initial course (year 1) and in interviews during the following year (year 2), and how they connected to written performance. The project spanned two years of data collection and analysis at four participating universities (public and private; urban and suburban; varying widely in student demographics) and included five different general education writing courses, with student work from 44 sections.
While the writing programs studied had different foci, faculty, and practices, all sections used a rhetorical genre studies approach (Bawarshi and Reiff) that emphasized building discourse community knowledge, genre awareness, rhetorical knowledge, and metacognitive awareness. From these 44 sections, we collected pre- and post-semester writing samples from 121 students (one source-based writing sample written prior to the start of the semester and one source-based writing sample written near the end of the term). We also collected reflective writing: early assignments where students examined themselves as writers, reflecting on their source-based writing sample from prior to the start of the course, and reflections submitted with each major assignment. In year 2, we conducted text-based interviews and collected 35 disciplinary writing samples from 27 year 1 students.
Original Code Development
The WTP had eight code sets: dispositions, metacognition, identity, transfer-focused thinking, writing knowledge, genre awareness, use of sources, and rhetorical knowledge. We followed the same code development, coding, and analysis procedures for all code sets. In the tradition of many other writing studies and transfer researchers, we imported and adapted theoretical concepts from other fields. Two co-authors (Driscoll and Wells) developed the initial dispositions codes and glossaries based on their 2012 article and prior research, on the scholarship cited in Section IV above, and on preliminary data not included in the study sample. Over several months, we tested and refined these codes with help from a co-author unfamiliar with the concepts, discussing the coding glossary, reading student work blind, coding independently, discussing any discrepancies, and refining the coding glossary.
Trouble Brewing: Original Coding and Initial Challenges
In both years, we hired a group of interdisciplinary graduate students from one study site to rate and code; we met with these graduate students for four days each year. One research team member led the coders assigned the dispositions codes. Both coders and leader differed from year 1 to year 2. To achieve consistency, we used a common set of training materials (although the materials were fine-tuned for Year 2 based on what we thought we had learned from Year 1). While we coded for dispositions, other co-authors led groups coding other constructs or rating student writing. Coder training introduced the coding glossaries, then led coders in examining sample coded texts, coding additional samples, and discussing discrepancies. After initial training, groups that agreed on 80% or more coding instances{1} tested for inter-coder reliability (ICR), then moved into actual coding. Groups that did not initially achieve 80% agreement completed additional training until they reached 80%. Researchers administered ICR tests after breaks and implemented refresher training at the start of each coding day.
At this point, several worrisome issues arose. First, the dispositions coding group took twice as long to reach initial agreement as other groups did. After the first training day (when other groups had started coding), the dispositions leader found it necessary to create an additional coding guide to help coders avoid “reading into,” or inferring implications from, student texts absent explicit evidence. The “reading into” problem had prevented agreement and, as we discovered later, had substantial broader implications. After another half day of training, the disposition coders finally reached 80% agreement and began to code. At several points during regular ICR tests, dispositions coders dropped below 80% agreement and had to retrain, while no other groups ever dropped below 80% agreement. At the end of our coding week, less than half of our dataset had been coded for dispositions, while all other constructs were completely coded.
On the matter of “reading into,” we had designed our coding glossary under the assumption that disposition coding would involve only conceptual complexity. Conceptually complex constructs, like writing knowledge or genre knowledge, are generally identifiable as either present in varying amounts or absent. Instead, dispositions proved to be contextually based, with layers of meaning combined with temporal complexity. A simple example, from a student’s end-of-term reflection, helps illustrate this: “My level of confidence in my writing has unfortunately gone down this semester. When I came into this class, I had thought that my writing and ideas had been one of my strongest points, but after working on these papers and projects, I have found that I need to do a lot more editing and fixing of the way I write. Although it was somewhat hard to hear, I know that it was for the better and that learning to accurately write will help me in the future.” In this statement, multiple periods of time and states of being are discussed: where the student was originally compared to where the student is at the end of the course. The student indicates taking steps to address perceived writing deficiencies (which would lead one to consider coding for self-efficacy), yet the student doesn’t see this as a positive experience but rather as a deficiency. We thus have a tension between the student’s view of self-efficacy (disruptive) and how a researcher or teacher might interpret the statement (generative). Further, the statement seems to suggest the student values his or her learning, although contradictions are present here (and in other segments of this same manuscript). In the end, the coder avoided the complex issues surrounding self-efficacy and coded only for generative value. This single segment leads one down a rabbit hole of “reading into” layers of meaning. Given that we were coding hour-long interviews and multiple reflections with these same layers of meaning present everywhere, the difficulty reaching 80% agreement is hardly surprising.
After Year 1, several co-authors met to refine the disposition codes and examine coded data. Agreeing that we were not confident in the coding, we decided to re-code all year 1 data during the second year of the study. A researcher with years of experience studying dispositions led the disposition coding but, despite training refinements, struggled to guide coders to 80% agreement. Coders eventually did reach this standard but did not finish coding year 1 data. Needing a completely coded dataset for our analyses, we decided to use the same training methods to train one co-author’s graduate assistant to finish the coding. Again, the graduate assistant and co-author struggled to reach 80% agreement, and the coding remained incomplete.
After two and a half years of failed attempts to code dispositions, we decided to run the analysis on the incompletely coded dataset to see what we would find. We were not surprised that the analysis (which entailed inferential correlations and regressions that examined the relationship of the disposition codes to student written performance over time) produced completely nonsensical results inconsistent with the broader findings from the study and the field.
Understanding What Happened and Examining our Coders
After much weeping and gnashing of teeth over hundreds of hours invested in this “failed” portion of the study, we decided to examine a subset of our codes to understand the nature of the problem. We knew that our process had worked for the other codes in the dataset and had led to illuminating results. We knew that we struggled with the disposition codes at each step. At this point, we investigated whether the problems resulted from the code definitions, the training methods, some unknown issue, or a combination thereof. We believed that we could learn something valuable about dispositions, even if not what we’d initially intended. We hoped that a systematic analysis of our coders’ work would offer clues. To investigate the findings on coders’ accuracy, we used Hammer and Berland’s contention that “authors should not treat coding results as data but rather as tabulations of claims about data” (1, emphasis added). Our analysis investigates our coders’ “claims about data.”
Two co-authors (Driscoll and Gorzelsky) re-coded some of the dataset to test our agreement rates and investigate the problem. We selected Smagorinsky’s collaborative coding strategy to examine our coders’ work; this was a method we had used successfully for another aspect of the project (Gorzelsky et al.), and we knew it would lend itself to examining dispositions. In this coding approach, all coders examine the same documents and discuss code applications, producing 100% agreement. Due to the time-intensive nature of collaboratively examining each document and the dataset’s size, we sampled from our larger dataset. Because some students in our study declined in their writing performance while others improved in their second year, we selected a group of eight students—one whose writing performance declined and one whose performance improved from each study site. This yielded a dataset of 28 documents: eight interviews and twenty reflections. We collaboratively examined these materials, reviewing 165 excerpts and 192 code applications over a period of three full days. For each code application, we discussed whether the code fit and, if so, whether it had been applied accurately; through this process, we achieved 100% agreement.
We also noted any missing codes. Missing codes are critically important because, as our earlier example illustrates, missing codes suggest that coders had difficulty identifying evidence of some dispositions or avoided coding complex excerpts entirely. Missed codes create an incomplete picture of dispositional moments in writing courses and undermine reliability and validity. If evidence of a phenomenon cannot be accurately identified and counted, meaningful study is impossible. As when training raters to score texts by using a rubric consistently, researchers training multiple coders must help them to achieve consistency. After reviewing data coded for dispositions, we counted codes missed and misapplied and calculated descriptive statistics.
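As a sketch of how such descriptive statistics can be tallied, the following computes, for each disposition, the proportion of documents with at least one missed code, the calculation underlying Table 2 below. The document IDs and missed-code records are hypothetical; the only figure drawn from our study is the 28-document sample size.

```python
# Minimal sketch of the missed-code tallies behind Table 2.
# Document IDs and missed-code records are hypothetical, not study data.
from collections import defaultdict

TOTAL_DOCS = 28  # documents in the re-coded sample

# (document_id, disposition) pairs where re-coding found a missed code.
missed = [
    ("doc01", "self-efficacy"), ("doc01", "attribution"),
    ("doc02", "self-efficacy"), ("doc03", "self-regulation"),
]

docs_missing = defaultdict(set)
for doc, disposition in missed:
    docs_missing[disposition].add(doc)

for disposition, docs in sorted(docs_missing.items()):
    print(f"{disposition}: {len(docs)}/{TOTAL_DOCS} documents "
          f"({len(docs) / TOTAL_DOCS:.1%}) with missed codes")
```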
VI. Analysis of Results, or, What Went Wrong
We now describe our findings, which we found illuminating not only for our specific study but for dispositions as a construct and subject of study more broadly.
Frequency of Codes
Table 1 describes how many codes our coders applied, which helps provide an overall picture. Not all codes appeared at the same frequency in the dataset; notably, generative codes appear more frequently than disruptive codes.
Table 1. Codes Originally Applied
Disposition Code | Total
Self-Efficacy (Total) | 56
→ Disruptive Self-Efficacy | 13
→ Generative Self-Efficacy | 46
Attribution (Total) | 20
→ External Locus of Control | 6
→ Internal Locus of Control | 18
Self-Regulation (Total) | 52
→ Disruptive Self-Regulation | 9
→ Generative Self-Regulation | 45
Value (Total) | 69
→ Disruptive Value | 16
→ Generative Value | 58
Proportion of Documents with Missing Codes
Our coders had not accurately identified all excerpts where dispositions should be coded, although what was missed and how often it was missed varied widely by code.{2} While persistence did not show up extensively in the dataset, coders almost always coded it successfully, missing persistence codes in only 7.1% of documents sampled. Value was also frequently coded when present, missed by coders in only 14.3% of documents. However, all of the other codes were missed a substantial number of times (see Table 2)—including self-efficacy, which was missed in 60.7% of the documents in our dataset (please note our example in Section V for why this may have been so difficult). The question of what constitutes an acceptable rate of missing codes for a study to be valid and reliable, especially for phenomena like dispositions, remains salient in light of these findings.
Table 2. Documents Missing Specific Codes
Disposition Code | Number of Documents with Missed Codes
Attribution (Locus of Control) | 14/28 (50%)
Persistence | 2/28 (7.1%)
Self-Efficacy | 17/28 (60.7%)
Self-Regulation | 11/28 (39.3%)
Value | 4/28 (14.3%)
Correctness of Disposition Codes
After reviewing 193 code applications, we agreed with coders in 126 coding instances (or 65.3% of the time) and disagreed in 67 instances (or 34.7% of the time). We discovered that coders were much more effective in coding Year 1 reflective writing (83.7% agreement, 31/37 segments) than Year 2 interview data (60.8% agreement, 96/156 instances). Interviews were longer and more nuanced; additionally, they contained more disruptive codes (see below).
Correctness of Generative vs. Disruptive Codes
In our dataset, generative dispositions were coded far more accurately than disruptive dispositions: coders applied generative codes correctly 71.3% of the time, compared with only 39.5% for disruptive codes (Table 3). The challenge with disruptive disposition coding becomes more nuanced when we examine specific dispositions (Table 4).
Table 3. Disruptive and Generative Disposition Codes (Note that Attribution is not included in these counts since we did not consider these categories inherently disruptive or generative.)
Disposition | Total | Disagree | Agree
Disruptive | 38 | 23 (60.5%) | 15 (39.5%)
Generative | 136 | 39 (28.7%) | 97 (71.3%)
Table 4. All codes based on disruptive or generative status
Disposition | Number/Percent Correct | Number/Percent Incorrect | Total Codes
Attribution – External Locus of Control | 3 (75%) | 1 (25%) | 4
Attribution – Internal Locus of Control | 11 (73.3%) | 4 (26.7%) | 15
Persistence – Disruptive | 1 (50%) | 1 (50%) | 2
Persistence – Generative | 5 (100%) | 0 (0%) | 5
Self-Efficacy – Disruptive | 7 (53.8%) | 6 (46.2%) | 13
Self-Efficacy – Generative | 22 (57.9%) | 16 (42.1%) | 38
Self-Regulation – Disruptive | 3 (27.3%) | 8 (72.7%) | 11
Self-Regulation – Generative | 14 (37.8%) | 23 (62.1%) | 37
Value – Disruptive | 4 (33.3%) | 8 (66.6%) | 12
Value – Generative | 47 (83.9%) | 9 (16.1%) | 56
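The per-code accuracy figures in Tables 3 and 4 reduce to a simple tabulation over our collaborative judgments. A minimal sketch, using hypothetical records rather than study data, shows the computation:

```python
# Minimal sketch of the per-code accuracy tabulation behind Tables 3 and 4.
# Each record pairs an originally applied code with our collaborative
# judgment of whether it was applied correctly; records are hypothetical.
from collections import Counter

applications = [
    ("Value - Generative", True), ("Value - Generative", True),
    ("Value - Disruptive", False), ("Self-Regulation - Disruptive", False),
    ("Self-Regulation - Disruptive", True),
]

totals, correct = Counter(), Counter()
for code, was_correct in applications:
    totals[code] += 1
    correct[code] += was_correct  # True counts as 1

for code in sorted(totals):
    n, k = totals[code], correct[code]
    print(f"{code}: {k}/{n} correct ({k / n:.1%}), {n - k}/{n} incorrect")
```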
Coders’ accuracy for attribution and persistence was higher. However, attribution was the second most missed code in our checked dataset (it was missed in 14 of 28 documents; see Table 2), so it remained problematic for coders to identify, though not necessarily to code correctly once identified. Self-efficacy was the most missed code; it was missed in 17 of 28 (60.7%) documents analyzed. When coders did code for it, they coded it incorrectly a little under half of the time (46.2% for disruptive, 42.1% for generative). Self-efficacy thus represents a very problematic code, both in identifying when it should be coded and in determining whether it is disruptive or generative. Additionally, disruptive self-regulation was particularly hard for coders to identify (72.7% inaccuracy), missed in 11 of the 16 documents. Value was often accurately coded in Year 1 reflective writing when generative (83.9%) but rarely coded accurately when disruptive (33.3%). Coders had much more difficulty identifying generative value in year 2 interviews (16.1% accurate).
VII. Discussion, or, The Slippery Nature of Dispositions: How and Why We’ve Persisted in Studying Them
As one anonymous Composition Forum reviewer pointed out, “dispositions are dynamic” and so may be quite difficult to study, particularly in short time frames or with limited data points. We agree. Although dispositions are generally seen as stable in the broader literature, a given disposition may manifest in one context but not another. For instance, a student may display high self-efficacy for a familiar or enjoyable writing task and low self-efficacy for an unfamiliar or disliked writing task. Because dispositions manifest differently in different contexts, to use research on them to improve writing instruction we must develop methods to accurately operationalize, identify, and count evidence of particular dispositions.
Even when we do feel we have an understanding of student dispositions, that understanding will be partial at best. As in any investigation of attitudes, perspectives, beliefs, and values, researchers examining dispositions cannot access their object of study directly. The only available evidence is indirect and takes the form of either self-report or behavior indicating the attitude, perspective, belief, or value in question. Such evidence cannot be taken as conclusive and must be treated carefully; it is suggestive, not definitive. Part of the care required, in our view, involves using conservative definitions and counts. Specifically, we coded only explicit statements indicating a given disposition, not statements that required coder inference—the “reading into” issue explored above. We recognize that this approach limits our analysis to dispositions evidenced through direct articulation, but we see such evidence as useful for identifying patterns, even though it does not track dispositions evidenced in behaviors students did not explicitly describe.
Progress, Limitations, and Next Steps
Because two researchers did agree 100% of the time on coding choices, typically in 60 seconds or less, we believe that our definitions of the five dispositions discussed here provide sufficient reliability to offer a solid basis for further research. This agreement indicates that the codes were sufficiently well defined that researchers familiar with dispositions literature could consistently identify and agree on them. We anticipate that future research will refine and revise our descriptions of dispositions; perhaps identify other dispositions relevant to writing development; and compare patterns in articulations of attitudes, perspectives, values, and behaviors (which we address) with patterns in actual behavior (which we don’t), illuminating how dispositions operate in writing development.
However, graduate student coders unfamiliar with the concepts rarely met Lombard, Snyder-Duch, and Bracken’s minimum agreement standards, despite extensive training, norming, and post-training agreement checks. In emphasizing this point, we do not intend to discount our graduate student coders’ perspectives. Rather, their views revealed complexities in the dispositions codes that we hadn’t addressed in our training materials and discussions. Lacking explicit discussion of these complexities, our astute coders apparently recognized them and addressed them in varying ways (avoidance, over-coding, etc.). As we discuss next, their work revealed issues spanning a range of areas (conceptual/definitional, cultural, psychological, and temporal) that shape the dispositional constructs charted by our codes.
Conceptual Issues: Self-Efficacy
As noted above, self-efficacy was the most missed code, and even when coders recognized it, codes were incorrectly applied almost half the time. Psychology researchers commonly understand self-efficacy as confidence in one’s capacity to perform a task (Bandura). However, this definition may need sharpening to better serve writing researchers. Of the few writing studies scholars studying self-efficacy, one, Metcalf Latawiec, argues persuasively that self-efficacy is task-specific, rather than generalizable. Her analysis suggests that effective writing development entails identifying specific task aspects, e.g., transitioning across paragraphs in ways that improve text coherence, and increasing efficacy for these specific tasks.
Cultural Complexities
Some codes, like attribution, may entail cultural complexities. Attribution involves assigning responsibility for outcomes to oneself or to external factors. When coders did recognize attribution, they achieved the most accuracy with this code, but it was nonetheless the second-most missed code. Coding attribution is inherently complex because virtually all outcomes result from multiple causes. For instance, a sophisticated writer may attribute aspects of a text drafted in a new genre to both her prior experience learning new genres and to assistance from a colleague. Such an assertion should be coded for attribution of both internal and external locus of control.
This complexity may be augmented by two cultural factors likely to affect coders. First, American culture (most coders’ home culture) emphasizes an individualist ethic. Second, poststructuralist theory (familiar to our social science and humanities graduate student coders) critiques this individualist ethic, emphasizing the shaping power of culture, institutions, and systems. Each view entails deep—and conflicting—investments. These investments may have complicated the coding process: the need to distinguish their own beliefs about the nature of responsibility from students’ attributions of responsibility may have made it difficult for coders to recognize instances of attribution. Below we offer an example in which coders failed to assign relevant codes in the face of similar complexities involving persistence. While that example illustrates a different code, we believe it reveals how the cultural complexities we describe here may have hindered coding.
Cultural and Psychological Complexities: Disruptive and Generative Dispositions
The possible interference of subjective evaluations, perhaps shaped by cultural complexities, also appears in coders’ struggle to recognize the difference between generative and disruptive dispositions with codes like persistence. Determining accurately whether an attitude or behavior is generative or disruptive requires focusing strictly on students’ writing development rather than on daily life, performance in a single course, or students’ overall personal development. No disposition is always generative or disruptive to a student’s development; rather, dispositions’ impact depends on the context and on short- and long-term outcomes. Sometimes a single moment or interview cannot adequately reveal these outcomes.
This is a shift away from our earlier line of thinking as a research team as well as a discipline. For instance, we initially assumed that the disposition of persistence was always positive, as suggested by the literature (Framework for Success). However, a student participating in one co-author’s (Driscoll’s) ongoing six-year study provides a counter-example. The student selected an ill-fitting major (pre-pharmacy) not consonant with her academic skills. Despite repeated failures in introductory major courses, she persisted, taking some courses three or more times; in this process, she lost three years and nearly $25,000 in tuition and fees. In her fourth year, she changed her major to nursing and is now progressing toward her degree, which she will likely finish in eight years. While both Driscoll and the student saw this persistence as positive for two years, four years later both agree that the student’s persistence ultimately caused her serious setbacks. We believe that such complexities—including the cultural value placed on persistence—may have exacerbated coders’ difficulties in accurately distinguishing between generative and disruptive dispositions.
This difficulty with disruptive dispositions particularly affected coding for disruptive self-regulation and disruptive value. Coders missed disruptive self-regulation in at least 11 of 16 documents and had a 72.7% error rate for excerpts they did code. We suspect that coders had trouble recognizing when students’ decisions functioned to disrupt writing development and trouble recognizing such decisions as choices. Examples include choices like failure to find a quiet place to work, failure to seek needed help, and failure to self-motivate when facing an assignment of low intrinsic interest. While students may not always make such decisions consciously, they are choices and impact students’ capacity to complete a task. However, coders may have interpreted the choice to remain in a location with too many distractions as a contextual factor entirely beyond one’s control. Like attribution, self-regulation is a construct that involves issues of such control, which is often partial control. Coders may have struggled to recognize situations of partial control as situations where choice was possible. For instance, in an interview in which a student reported repeated failures on individual papers and in entire courses due to avoidance of argumentative writing, the coder failed to recognize this avoidance as disruptive self-regulation. In some cases, the coder did not recognize self-regulation at all, as when the student reported substituting the exploratory writing s/he preferred for the argumentative writing prompted by the assignment. In other cases, the coder did not recognize that the self-regulation articulated was disruptive rather than generative, as when the student submitted an expressive rather than argumentative paper in response to an assignment requiring an argument, received a failing grade, and described planning to rewrite the paper at the end of the semester without contacting the instructor to seek approval, additional feedback, or help in revising. Apparently the coder did not recognize these choices as choices that, among other factors, influenced outcomes for the student. Perhaps disruptive self-regulation was particularly difficult to identify because it required coders to acknowledge where students might have made better choices yet took actions (or inactions) that undermined their success, perhaps without recognizing that they were doing so. Of course, all of this requires “reading into” the specific contexts in which students write.
Conceptual and Temporal Complexities: Generalized Dispositions and Subjective Interpretations
Because coders had a much higher accuracy rate when coding for dispositions in students’ short reflective texts, which offered simpler and more direct examples, than they did in coding interviews (83.7% coded accurately vs. 60.8% coded accurately), we believe that the longer, more complex interviews posed particular challenges. For instance, in interviews, coders were only 16.1% accurate in coding for generative value. While checking coders’ work, we realized that some of their errors seemed to result from a failure to consider shorter excerpts in the context of a series of relevant statements made throughout the interview. For example, one interview included many moments in which the student alluded to using material learned in GEW (general education writing) in subsequent courses but indicated s/he had not initially seen the usefulness of GEW. A coder marked the point about the student’s initial perception as disruptive value yet failed to code the generative value embedded in the student’s assertion of long-term usefulness. This focus may indicate the coder’s tendency to recognize disruptive (rather than generative) values about education, a tendency to privilege initial (rather than revised) perceptions, a tendency to view GEW courses as not useful, or some other perceptual tendency. Such errors were common in interview transcripts and make us consider issues not only of time but also of manifestation—how many times does a student need to allude to a disposition to make it “count”?
Cultural Considerations: Value and Writing Development
In reflective writing, coders were 83.9% accurate in coding generative value but only 33.3% accurate in coding disruptive value. However, they coded only 16.1% of interviews containing value, whether generative or disruptive. Coding for disruptive value, in particular, demands that coders acknowledge the potential negative impact on learning of personal characteristics often held sacred in American culture—one’s values. Much literacy research suggests that learners from marginalized communities hold values that differ markedly from the values embedded in mainstream literacy practices. For instance, marginalized groups sometimes view mainstream literacy practices as instruments of cultural domination, a view substantiated by a noteworthy body of research (Collins and Blot; Gee; Graff). Such perspectives sometimes lead GEW students to avoid language they perceive as “academic” in favor of language they view as more welcoming to members of their cultural communities, to see academic writing overall as irrelevant to their often more pragmatic career goals, or to use time or financial resources for family-related rather than academic goals, which are sometimes perceived as selfish or as individualistic rather than communal. A related set of research shows that, as a result, learners from marginalized groups do, in fact, often face substantial difficulties mastering mainstream literacy practices due to such value conflicts (Gee; Heath; Hicks; Mahiri). Thus disruptive value is a documented problem in literacy learning and one linked to histories of systemic socio-economic and political inequity.
In interviews, when students reported viewing GEW knowledge as irrelevant to their career goals, coders often failed to code for disruptive value. We see such student views as disruptive only in relation to writing development, not in relation to a larger ethical or educational ideal. We understand and respect that students may reach such conclusions about GEW knowledge based on a wide variety of factors, including communal values like those that privilege avoiding “frivolous” courses and instead using education to access material resources to be shared with immediate and extended family. We believe that coders unfamiliar with literacy research are typically unaware either that the role of disruptive value in literacy learning is a documented concern or that scholars studying this issue treat such values with respect, recognizing that values disruptive to literacy learning often serve important functions of maintaining communal identity in the face of systemic injustice. Without understanding the relevant research, coders may have seen the instruction to code for disruptive value as conflicting with the American emphasis on respect for different values. Such emphasis is particularly characteristic of graduate programs in the humanities and social sciences and so may have been an even stronger imperative for coders.
In sum, we believe that the conceptual, cultural, psychological, and temporal complexities associated with dispositions codes may explain the difficulties coders faced in applying these codes accurately. Further, low accuracy rates where all four forms of complexity intersect suggest that coders may have faced cognitive overload.
VIII. Recommendations for Coding and Future Study of Dispositions
Our story highlights disagreements on what counts as evidence of a particular disposition. These disagreements, among graduate student coders and between these coders and our research team members, resulted in incoherent coding that could not be analyzed for patterns or correlations. That is, when we ultimately decided to review coders’ work to learn why the whole process was fraught with setbacks, we learned that while researchers familiar with dispositions scholarship quickly agreed on whether and when excerpts from students’ reflections should be coded for a particular disposition, graduate student coders had, in many cases, not drawn the same conclusions. The fact that researchers agreed on which excerpts to code and how to code them shows that, despite differences in how dispositions are embodied and experienced, students’ written reflections demonstrated enough commonalities in describing attitudes, perspectives, or behaviors linked to the five dispositions listed above that researchers could consistently agree on which student descriptions constituted evidence of each. This researcher agreement suggests definitions sufficiently precise to enable quantitative study—if coding is done by such researchers. That outcome is promising for future research on the role of dispositions in writing growth.
However, in studies with enough participants to enable quantitative analysis, it is often impractical (or impossible) for researchers to do all coding. As we explain below, the disagreements between researchers and coders suggest that training coders effectively, so as to enable studies with enough participants to obtain statistically meaningful data, requires careful attention to the psychological, cultural, and temporal complexity of dispositions, as well as to their conceptual complexity. Such effort is worthwhile because the insights offered by quantitative analyses enable investigation of whether and how the insights from in-depth qualitative studies operate across larger groups and multiple demographics.
Inter-coder Reliability
Considered in light of scholarship on the complexities of doing quantitative analyses of “messy” qualitative data, our failed attempts at disposition coding are less surprising than they initially seemed. As Chi notes, investigating complex activities like learning behaviors in their natural contexts generates data that is inherently “messy,” such as interview, observational, and retrospective report data (271). Scholars propose various methodological solutions to this problem, from Chi’s eight-step method for examining such data to chart study participants’ representations of metacognition, to Hayes and Hatch’s argument that literacy researchers should report differences in coders’ judgments using correlations between these judgments rather than percentages. Lombard, Snyder-Duch, and Bracken, in their meta-analysis of 200 social science content analyses of communications, conclude that most such studies fail to provide adequate information about inter-coder reliability (599). Given the substantial variation between reliability rates for our large set of variables, we have several suggestions.
First, Smagorinsky’s collaborative coding may be better suited for disposition codes, at least until the field establishes sufficiently nuanced definitions of dispositions. For those engaging in coding in groups, we agree that inter-coder reliability should be reported for individual groups of variables. Further, we argue that in cases where inter-coder reliability is difficult to achieve, analysis of low reliability rates can generate important insights about the complexities of the constructs being coded and thus about challenges in training coders to recognize these constructs. This study of a study has also encouraged us to consider the role of coders’ identities themselves. With constructs as complex as dispositions, coders may need intimate familiarity with the construct; whether the coders are researchers or assistants, this question transcends inter-coder reliability.
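For researchers weighing the reporting options discussed above, the sketch below contrasts raw percent agreement with one chance-corrected index, Cohen’s kappa, since raw percentages are widely criticized for ignoring agreement expected by chance. The coder decisions are hypothetical, and kappa is offered as one illustrative index rather than the statistic any of the cited authors prescribe.

```python
# Minimal sketch: raw percent agreement vs. Cohen's kappa for two coders.
# Coder decisions are hypothetical illustrations, not study data.

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two coders over nominal categories."""
    n = len(a)
    observed = percent_agreement(a, b)
    # Expected agreement if each coder assigned categories at her own base rates.
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

coder_a = ["gen", "gen", "dis", "gen", "none", "gen", "dis", "gen", "gen", "none"]
coder_b = ["gen", "gen", "gen", "gen", "none", "gen", "dis", "dis", "gen", "none"]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 80%
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")       # 0.64
```

Here two coders who meet an 80% raw-agreement threshold show a kappa of only 0.64, illustrating why reporting a single percentage can mask unresolved disagreement.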
Selecting What to Study
Regardless of who is coding, because dispositions require more complex and nuanced reading than many other codes, we suggest studying a limited number of dispositions at a time (one or two). Alternatively, researchers can break larger sets of dispositions into much smaller sets coded at different times, allowing for mastery and making reading more manageable. This recommendation is useful for all researchers, whether working individually, with co-investigators, or with trained coders. If researchers intend larger coding sessions with groups of coders in a multi-institutional research setting, we recommend assigning more coders to dispositions codes than to other code sets and then breaking that larger number into small groups. Each small group should train on one subset or on a single disposition.
Individual researchers should consider their own cognitive capacity. Because each disposition is extremely nuanced and complex, coding for only one at a time or focusing a study on one is more appropriate than our multi-disposition study. Given the challenges in achieving acceptable accuracy rates in applying the larger code set, we believe that studying one or two dispositions may enable solo researchers to produce findings with more depth. Focusing on any disposition, like self-efficacy, could reasonably be expected to generate sets of subcodes that would offer a more specific understanding of how that construct operates in relation to writing development and of its key aspects.
The Four Dimensions of Dispositions
As we have demonstrated, we posit that dispositions as a construct have at least four dimensions: conceptual, cultural, temporal, and psychological. Researchers working individually or those training coders should work to understand the cultural, temporal, and psychological, as well as conceptual, complexities we’ve described. Further, in working with coders, we should situate these discussions explicitly in brief explanations of relevant literacy research. For example, trainers working with dispositions codes might use Gee’s example of how communal values appear to disrupt some African American students’ efforts to master individualistic, agonistic legal discourse. By presenting Gee’s argument for the importance of these communal values and of pedagogical efforts to help students negotiate such cultural conflicts, trainers can help coders understand that applying the code “disruptive value” to such communal norms—when they interfere with literacy learning—is not an ethical or intellectual categorization but rather a pragmatic one. That is, trainers can explain that many literacy researchers and instructors value highly the communal norms underlying such interference and, conversely, are sharply aware of the use of dominant literacies such as legal discourse to perpetuate systems of economic exploitation and cultural hegemony (Collins and Blot; Gee; Graff; Stuckey). Such explanations should emphasize the need for changes in dominant systems of literacy and, perhaps even more importantly, the use of literacy research to help equip students with strategies to address the cultural and psychological conflicts that may impede their mastery of dominant literacies when they seek such mastery.
Adapting Dispositions to Writing Studies
We also need to be careful about importing concepts from psychology and related fields without careful scrutiny, as such codes are rarely nuanced enough for writing studies without adaptation. For example, based on Metcalf Latawiec’s results, we believe future researchers should train coders to identify not generalized self-efficacy language, which Metcalf Latawiec finds only rarely, but rather task-specific self-efficacy language, which she finds more often and shows to correlate with a more specific understanding of the writing process.
Addressing Subjectivity
Future researchers should identify and correct for (or help coders identify and correct for) instances where subjective views (e.g., paying more attention to disruptive rather than generative value) may prompt them to miss or misread students’ articulations. First, we must learn to identify and set aside subjective evaluations so we can recognize complex, potentially contradictory articulations. For coders, as we explain below, this process requires more in-depth training and discussion designed to help them self-regulate by recognizing the potential complexities involved, not only in the categories of the taxonomy but also in how these categories may intersect problematically with day-to-day evaluations. Because our taxonomy codifies its distinctions in objective terms intended to constrain subjective interpretation, helping coders develop the ability to recognize where taxonomy categories may collide with their subjective evaluations is crucial to enabling valid, reliable coding.
Temporal Considerations and Trajectories
We may need to develop more sophisticated ways of coding for students’ trajectories and dispositional shifts over time.{3} Even in a single interview, a student may move from discussing past experiences to describing current dispositions to articulating a vision for the future, and all of these temporal frames require complex coding. We need to recognize and appropriately code a long document (such as an interview) in which a student’s articulations of dispositional issues develop a trajectory, particularly one punctuated by competing representations, like that of the student who initially saw GEW knowledge as not useful but later viewed it as relevant to subsequent courses. The demands of this process posed significant challenges for our coders not immersed in dispositions and writing studies research, as suggested in our discussion above of why coders may have so frequently missed examples of self-efficacy and attribution.
In addition, coding dispositions should involve discussion of whole documents. Because a single underlying disposition may span the entire interview or series of reflections a student produces, it is important to consider these temporal factors. Reading the entirety of a student’s dataset, for example, can help researchers and coders see a larger dispositional trajectory rather than viewing material in isolation.{4} In terms of training groups of coders, we suggest using whole documents whenever possible. For practical reasons, namely time constraints, training sessions may need to combine work with excerpts (which we used successfully to train coders in all categories) and work with full documents.
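For researchers who manage their coded data computationally, one minimal way to capture this temporal dimension is to tag each coded excerpt with its position in the document and its temporal orientation, so that a trajectory like the GEW example above becomes visible in the data rather than appearing as two isolated codes. The sketch below, in Python, is purely illustrative; the class, field names, and example data are hypothetical inventions for illustration, not instruments from this study.

```python
# Purely illustrative: tagging coded excerpts with position and temporal
# orientation so dispositional trajectories across a whole document are
# visible. Class, fields, and data are hypothetical, not study instruments.
from dataclasses import dataclass

@dataclass
class CodedExcerpt:
    position: int      # order of the excerpt within the interview
    disposition: str   # e.g., "value" or "self-efficacy"
    valence: str       # e.g., "disruptive" or "generative"
    time_frame: str    # "past", "present", or "future"

interview = [
    CodedExcerpt(1, "value", "disruptive", "past"),     # "GEW wasn't useful"
    CodedExcerpt(7, "value", "generative", "present"),  # "it matters in my major now"
]

# Sorting one disposition's excerpts by position exposes the trajectory
# (here, disruptive -> generative) rather than two isolated codes.
for excerpt in sorted((e for e in interview if e.disposition == "value"),
                      key=lambda e: e.position):
    print(excerpt.position, excerpt.valence, excerpt.time_frame)
```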
Respect for Culture and Self-Regulation
In studying dispositions, we should stress dispositions’ relationship to marginalized literacy practices and recognize that those practices shape student dispositions. Specifically, we can draw upon literacy researchers’ respect for all cultures and their emphasis on supporting students from marginalized groups in finding ways to harmonize the values of their home literacy practices with the values embedded in the dominant literacy practices they seek to learn. In terms of training coders, trainers should take a similar approach with potentially disruptive self-regulatory behaviors, stressing their potential value for other aspects of students’ lives and emphasizing that categorizing such behaviors as disruptive is a judgment with limited scope and a pragmatic rather than ethical or intellectual valence.
Dispositions and Student Identities
One final aspect to consider is how much of a student’s identity should be known to those coding for dispositions. In our study, materials were read blindly, with no indication (other than material presented in the interviews and reflections themselves) of who the student was. While this approach was effective for a number of other areas, it proved challenging for dispositional coding because dispositions are rooted in students’ identities. Knowing some features of a student’s identity could aid coders with the many issues described above (“reading into” the data, addressing cultural issues). Yet providing such information risks coders stereotyping students and reading into the data in inappropriate ways. We suggest that each researcher consider carefully the issues presented here and make their own determination about using demographic information as part of coding.
Patterns Over Motives
Identifying patterns in dispositions could helpfully inform instructional approaches, for instance by allowing a teacher who notices a trend toward low self-efficacy among some students to offer a new type of writing task and to explore with students their perceptions of the task’s requirements, of their capacities or preparation for it, and of what support they believe they would need to complete it successfully. Developing reliable methods to count evidence of dispositions, even with limitations, will help us understand how dispositions shape writing development and thus better tailor writing instruction.
Finally, a note of caution: because researchers may need to begin by investigating how dispositions work in the FYW context, researchers and teachers should identify patterns but not attribute motives or presume any fixed perspective. Instead, we should use curricular and pedagogical changes to engage students in activities and dialogue relevant to dispositional patterns. For instance, teachers might develop concrete demonstrations of the value of FYW for future contexts. Similarly, we might help students recognize that an internal locus of control doesn’t indicate weak intellect or character, might guide students in recognizing their opportunities to make choices that better facilitate learning, and might scaffold support designed to help students make such choices.
VIII. Conclusions and Future Work
Although our initial large-scale efforts to study dispositions proved frustrating, we believe going down the rabbit hole of dispositions has yielded fruit, including foregrounding an important intersection between literacy studies scholarship and writing transfer scholarship, with implications for transfer research more broadly. This intersection involves the way in which literacy learning, particularly academic literacy learning, is deeply entwined with systems of socio-economic, political, and cultural inequity and exploitation. As Driscoll and Wells argue in their work on dispositions, much research on writing and writing instruction has emphasized attention to the social at the expense of examining the role of the individual’s actions and choices in learning to write. As they explain, this focus has prevented writing researchers from examining the substantial impact of such individual factors. Investigating the role of dispositions in the process of learning to write provides an important corrective. However, the particular nature of the complexities we encountered reveals that neither side of a binary view of individual versus social factors can adequately explain how learners develop writing expertise. Instead, our work with dispositions coding suggests that researchers must focus on the intersections between the individual and the social.
More specifically, using literacy research to understand how cultural norms may have impacted dispositions coding highlights the fact that dispositions themselves are shaped by such norms. This interpretation thus suggests the need to consider how individual factors like dispositions are shaped by these norms in designing studies of writing transfer. That is, investigations of dispositions’ role in transfer must consider these tendencies both as individual characteristics and as socially and culturally defined patterns. For instance, many scholars studying African American literacy experiences emphasize how dispositions to suspect, rather than value, academic literacy and to define self-efficacy in relation to it as “acting white” can negatively impact literacy learning for African American students (Gee; Mahiri; Ogbu; Richardson). Other work suggests that working-class children are not motivated to learn literacy practices that appear to conflict with their community’s values (Heath) or that do not appear relevant to the roles they see their parents playing (Hicks; Purcell-Gates). Training coders—and ourselves—to understand this cultural complexity when working with dispositions is one important way of addressing this issue methodologically.
However, we suggest addressing three other aspects of study design as well. First, future research on dispositions should collect more extensive demographic information than is typically gathered (e.g., socio-economic status, first-generation college status, race, ethnicity, and the like) that is potentially relevant to dispositions. Collecting this information will allow researchers to identify any dispositional patterns based on social factors and to consider the specific writer’s identity while coding (which presents its own set of problems). Studies that include such data are needed to determine whether particular dispositional patterns correlate in any way with various demographic factors, which could help identify the teaching approaches most supportive of different student demographics. Second, data collection instruments such as surveys, interviews, and reflection prompts should seek information on the family and communal values, attitudes, and experiences informing students’ interaction with academic literacy, as well as on how their prior formal literacy learning experiences impacted them affectively. Finally, researchers should use both of these additional data types during analysis to investigate possible patterns of correlation between cultural factors and dispositions, as in the illustrative sketch below. For example, researchers might look for correlations between first-generation status and particular patterns in value, self-efficacy, or motivation. Further, researchers should study whether patterns in these dispositions, which prior research suggests are implicated in cultural difference, correlate with patterns in other dispositions less obviously linked to cultural factors. For instance, it would be useful to learn whether patterns in attitudes related to locus of control or in self-regulatory behavior correlate with patterns in value or self-efficacy, as well as with patterns in demographic and cultural factors. In short, much more investigation of the relationship between social and individual factors is needed to understand the role of dispositions in writing development. Clearly, we have a long journey ahead with regard to the study of dispositions, a journey doubtless fraught with perils but also with the promise of rich rewards.
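To make that final recommendation concrete, the following minimal sketch, in Python, shows one way such a correlational check might look. All variable names and numbers are hypothetical placeholders rather than data or methods from our study, and the statistical choice (a point-biserial correlation, suited to a binary factor and a count measure) is one reasonable option among several; researchers with categorical outcomes might use a chi-square test instead.

```python
# Illustrative sketch only: hypothetical variable names and data,
# not code or data from the study described in this article.
import pandas as pd
from scipy import stats

# Each row is one student: 'first_gen' is a 1/0 demographic indicator;
# 'self_efficacy_codes' counts the self-efficacy codes applied to that
# student's interviews and reflections.
df = pd.DataFrame({
    "first_gen":           [1, 0, 0, 1, 1, 0, 1, 0],
    "self_efficacy_codes": [2, 5, 4, 1, 2, 6, 3, 5],
})

# Point-biserial correlation: association between a binary demographic
# factor and a count measure of a coded disposition.
r, p = stats.pointbiserialr(df["first_gen"], df["self_efficacy_codes"])
print(f"r = {r:.2f}, p = {p:.3f}")
```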
Notes
1. This was the minimum percentage of agreement recommended by Lombard, Snyder-Duch, and Bracken. Many groups were well above the 80% mark for inter-coder reliability.
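To illustrate how simple percent agreement works (with hypothetical numbers, not figures from our study): if two coders make 50 coding decisions on the same transcript and agree on 42 of them, percent agreement is 42/50, or 84%, which would clear the 80% threshold.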
2. Although our methods did not produce a total count of all codes missed, we identified whether each code was missed at least once per document, which yields a very conservative estimate of the total codes missed. In other words, if we saw a code missed at least once in a document, we noted it as missing.
3. Our coders had difficulty reading one interview that discussed a year or more of writing experiences. The challenge would be even greater for more extensively longitudinal studies involving multiple interviews at multiple points.
4. Dana recognized the critical importance of this approach through her work on a subsequent six-year longitudinal study examining dispositions.
Works Cited
Adler-Kassner, Linda, and Elizabeth Wardle. Naming What We Know: Threshold Concepts of Writing Studies. Utah State UP, 2015.
Bandura, Albert. Regulation of Cognitive Processes through Perceived Self-Efficacy. Developmental Psychology, vol. 25, no. 5, 1989, pp. 729-35.
Bawarshi, Anis S., and Mary Jo Reiff. Genre: An Introduction to History, Theory, Research, and Pedagogy. Parlor Press, 2010.
Beaufort, Anne. Learning the Trade: A Social Apprenticeship Model for Gaining Writing Expertise. Written Communication, vol. 17, no. 2, 2000, pp. 185-223.
Bergmann, Linda, and Janet Zepernick. Disciplinarity and Transference: Students’ Perceptions of Learning to Write. WPA Journal, vol. 31, no. 1/2, 2007, pp. 124-49.
Bransford, John D., and Daniel L. Schwartz. Rethinking Transfer: A Simple Proposal with Multiple Implications. Review of Research in Education, vol. 24, 1999, pp. 61-100.
Bronfenbrenner, Urie, and Pamela A. Morris. The Bioecological Model of Human Development. Handbook of Child Psychology, edited by R. M. Lerner and W. Damon, vol. 1, Wiley, 2006, pp. 793-828.
Chi, Michelene T.H. Quantifying Qualitative Analyses of Verbal Data: A Practical Guide. Journal of the Learning Sciences, vol. 6, no. 3, 1997, pp. 271-315.
Collins, James, and Richard K. Blot. Literacy and Literacies: Texts, Power, and Identity. Cambridge UP, 2003.
Conard-Salvo, Tammie, and John Spartz. Listening to Revise: What a Study About Text-to-Speech Software Taught Us About Students’ Expectations for Technology Use in the Writing Center. Writing Center Journal, vol. 32, no. 2, 2012, pp. 40-59.
Council of Writing Program Administrators, National Council of Teachers of English, and National Writing Project. Framework for Success in Postsecondary Writing. CWPA, NCTE, NWP, 2011, http://wpacouncil.org/framework.
Cushman, Ellen, Eugene R. Kintgen, Barry M. Kroll, and Mike Rose, editors. Literacy: A Critical Sourcebook. Bedford, 2001.
Driscoll, Dana Lynn, and Jennifer Wells. Beyond Knowledge and Skills: Writing Transfer and the Role of Student Dispositions. Composition Forum, vol. 26, 2012, http://compositionforum.com/issue/26/beyond-knowledge-skills.php. Accessed 15 December 2016.
Gee, James Paul. Social Linguistics and Literacies: Ideology in Discourses. 3rd ed., Routledge, 2008.
Gorzelsky, Gwen, Carol Hayes, Ed Jones, and Dana Driscoll. Cueing and Adapting First-Year Writing Knowledge: Support for Transfer into Disciplinary Writing. Understanding Writing Transfer, edited by Jessie Moore and Randall Bass, Forthcoming.
Gorzelsky, Gwen, Dana Lynn Driscoll, Joe Paszek, Ed Jones, and Carol Hayes. Cultivating Constructive Metacognition: A New Taxonomy for Writing Studies. Critical Transitions: Writing and the Question of Transfer, edited by Chris Anson and Jessie Moore, WAC Clearinghouse, 2016, pp. 217-49, http://wac.colostate.edu/books/ansonmoore/. Accessed 4 January 2017.
Graff, Harvey J. The Nineteenth Century Origins of Our Times. Literacy: A Critical Sourcebook, edited by Ellen Cushman, Eugene R. Kintgen, Barry M. Kroll, and Mike Rose, Bedford, 2001, pp. 211-33.
Halpern, Diane F. Teaching Critical Thinking for Transfer across Domains: Dispositions, Skills, Structure Training, and Metacognitive Monitoring. American Psychologist, vol. 53, no. 4, 1998, pp. 449-55.
Hammer, David, and Leema K. Berland. Confusing Claims for Data: A Critique of Common Practices for Presenting Qualitative Research on Learning. Journal of the Learning Sciences, 2013, pp. 1-10, http://dx.doi.org/10.1080/10508406.2013.802652. Accessed 4 January 2017.
Haswell, Richard. NCTE/CCCC’s Recent War on Scholarship. Written Communication, vol. 22, no. 2, 2005, pp. 198-223.
Hayes, John R., and Jill A. Hatch. Issues in Measuring Reliability: Correlation Versus Percentage of Agreement. Written Communication, vol. 16, no. 3, 1999, pp. 354-367.
Heath, Shirley Brice. Ways with Words: Language, Life, and Work in Communities and Classrooms. Cambridge UP, 1983.
Hicks, Deborah. Reading Lives: Working-Class Children and Literacy Learning. Teachers College P, 2002.
Kjesrud, Roberta D., and Mary Wislocki. Learning and Leading through Conflicted Collaborations. Writing Center Journal, vol. 31, no. 2, 2011, pp. 89-116.
Lombard, Matthew, Jennifer Snyder-Duch, and Cheryl Campanella Bracken. Practical Resources for Assessing and Reporting Intercoder Reliability in Content Analysis Research Projects, 2010, http://matthewlombard.com/reliability/index_print.html. Accessed 4 January 2017.
Mahiri, Jabari. Shooting for Excellence: African American and Youth Culture in New Century Schools. NCTE, 1998.
Metcalf Latawiec, Amy. Fostering Self-Efficacy and Motivation in the Self-Directed Basic Writing Classroom. Unpublished dissertation (in draft), Wayne State University.
Ogbu, John U. Literacy and Schooling in Subordinate Cultures: The Case of Black Americans. Literacy: A Critical Sourcebook, edited by Cushman et al., Bedford, 2001, pp. 227-242.
Perkins, David, et al. Intelligence in the Wild: A Dispositional View of Intellectual Traits. Educational Psychology Review, vol. 12, no. 3, 2000, pp. 269-93.
Purcell-Gates, Victoria. A World without Print. Literacy: A Critical Sourcebook, edited by Cushman et al., Bedford, 2001, pp. 402-417.
Slomp, David H. Challenges in Assessing the Development of Writing Ability: Theories, Constructs and Methods. Assessing Writing, vol. 17, no. 2, 2012, pp. 81-91.
Smagorinsky, Peter. The Method Section as Conceptual Epicenter in Constructing Social Science Research Reports. Written Communication, vol. 25, no. 3, 2008, pp. 389-411.
Stuckey, J. Elspeth. The Violence of Literacy. Boynton/Cook, 1990.
Wardle, Elizabeth. Understanding ‘Transfer’ from FYC: Preliminary Results of a Longitudinal Study. WPA Journal, vol. 31, no. 1/2, 2007, pp. 65-85.
Wardle, Elizabeth. Creative Repurposing for Expansive Learning: Considering ‘Problem Solving’ and ‘Answer Getting’ Dispositions in Individuals and Fields. Composition Forum, vol. 26, 2012, http://compositionforum.com/issue/26/creative-repurposing.php. Accessed 15 December 2016.
Weiner, Bernard. An Attributional Theory of Achievement Motivation and Emotion. Psychological Review, vol. 92, no. 4, 1985, pp. 548-73.
Wigfield, Allan, and Jacquelynne S. Eccles. Expectancy-Value Theory of Achievement Motivation. Contemporary Educational Psychology, vol. 25, 2000, pp. 68-81.
Zimmerman, Barry J. Becoming a Self-Regulated Learner: An Overview. Theory into Practice, vol. 42, no. 2, 2002, pp. 64-72.
© Copyright 2017 Dana Lynn Driscoll, Gwen Gorzelsky, Jennifer Wells, Carol Hayes, Ed Jones, and Steve Salchak.
Licensed under a Creative Commons Attribution-Share Alike License.