Chapter 7: Tactics – Eight techniques for data collection

As data collection in designed action sampling is almost entirely done using the form in Figure 4.2, all eight data collection techniques below are linked to different parts of the form.

  • Experience sampling is an established scientific method for collecting data from participants in research studies.
  • The action–reflection link means that each reflection collected is based on a completed action.
  • Longitudinal data is what you get when the same people complete many forms, with different tasks, repeatedly over time.
  • Deep reflection is not easy; it requires thought, methodology and training, both for the peer learning leaders who design the tasks and for the teachers who then fill in the form.
  • Mixed methods involve collecting both text and numbers in a single study and facilitate deep yet time-efficient analysis.
  • Emotional rating provides a quick indication of how teachers perceive a task and is done quickly and easily on a five-point scale.
  • Effect coding significantly simplifies analysis and is based on ‘grounded theory’ – a scientific method for building theory that is well grounded in empirical data.
  • Feedback from peer learning leaders strengthens teachers’ learning and also builds a mutual relationship in the research process.

Collection of experiences – same reflection questions for everyone

The designed action sampling form is designed to capture participants’ experiences in the moment, as soon as possible after completing a task – preferably within the same hour or on the same day, so that the feeling from the completion still remains in the body. The form is also designed to be easy to complete in just a few minutes; it should not take more than three. Some participants will of course write slightly longer reflections, but the idea is that the texts should not be very long. The most important thing is that the feeling and the thoughtful reflection of the moment are captured on paper, in a way that works in teachers’ everyday lives.

The form also includes a brief description of the action-oriented task that is expected to be completed. Ideally, this should be the main information given to participants about what the task is about. This simplifies the work and ensures that all participants have received the same information prior to implementation. Any variation in reported outcomes can then be attributed mainly to how things went, rather than to variation in instructions or questions.

In terms of scientific theory, this design of the questionnaire is based on a psychological research method called the experience sampling method (ESM). This method was developed in the 1970s by the Hungarian-American researcher Mihály Csíkszentmihályi. By capturing subjective experiences with high precision, he and his colleagues achieved a much higher reliability in data collection than previously possible.[1] It was the combination of behavioural observations, diary entries and questionnaires with precise scales that made this possible. The designed action sampling form therefore combines four different perspectives: a task that has just been completed, a quick diary entry, an emotional assessment and a choice of some pre-defined tags that then allow for mathematical calculation of effects.

Linking action and reflection – in all data collection

As each completed form is linked to a specific action-oriented task, each data set collected is linked to a specific action of interest to the research. Reflections and estimations are thus not of a general nature, as is otherwise often the case with traditional questionnaires and interviews. Instead, all data collected are causal in nature – they always say something about the relationship between causes and effects. The action is the cause; the reflections and estimates show its effects.

Each data set collected is also linked to a specific point in time – just when a task has been completed, or at least shortly thereafter. This distinguishes designed action sampling from both surveys and interviews, where instead the timing of the survey or interview determines when the participant is expected to contribute with their thoughts. This reduces retrospective bias, i.e. bias due to trying to remember what happened some time ago.[2]

The link between action and reflection also applies in reverse, i.e. backwards in time. If a deadline is set for when everyone should have reflected upon a certain completed task, this triggers action among the participants. You cannot reflect upon an action you haven’t yet completed. As the deadline approaches, the participants will thus feel an increasing pressure to perform the task agreed upon and described on the distributed form. It is therefore advisable that a task deadline does not coincide with other demanding tasks.

A content package with many tasks needs to be spread out over time in a way that makes each deadline manageable. Many participants have found that a fortnightly deadline is reasonable. However, an appropriate time between deadlines depends on the complexity and time commitment of the tasks. It is not completing the form that takes time, it is rather the development work itself that takes time. Some tasks are also difficult to fit into a given week, which may require months of time for the possibility of implementation to materialise, see examples 5 and 9 in chapters 6 and 11.

Longitudinal data – collection over time and on a weekly basis

When the same form is completed many times over time by the same people, the data collected becomes longitudinal in nature. You could say that the data collected extends along the time axis. We then have a longitudinal study: data collected from the same individuals repeatedly over time.[3] This is different from cross-sectional studies, where data is collected on a single occasion, such as a student or employee survey. Longitudinal studies can be expensive to conduct but have some key advantages. For example, it is possible to track changes over time at the individual level. It is also possible to see in detail how different individuals are affected by different circumstances. While a cross-sectional study can only demonstrate the occurrence of a phenomenon, a longitudinal study can also describe how or why different phenomena occur and what effects they have. In research on dynamic processes, such as school development and learning, longitudinal studies are therefore always preferable if possible.[4] If you are also studying change processes, cross-sectional studies may even be inappropriate. It is only over time that change can be observed.[5]

The difference can be said to be similar to that between photo (cross-sectional) and video (longitudinal).[6] A photo taken at a time determined by the photographer certainly tells us something about what it looked like exactly at that moment, from a certain angle. But a video recording, at its best, can provide a dynamic and detailed account of crucial moments in people’s lives. Researchers in a longitudinal study can become fellow travellers or companions to the people being studied.[7] This makes the study form more intimate, personal and relational than a cross-sectional study. However, it requires mutual trust, good research ethics and some level of confidentiality. The close relationship also means that the participants often expect the research leaders to give something back, as a balance between giving and taking needs to be maintained in the relationship between them. This is known in sociology as reciprocity – relational equilibrium between giving and taking.[8]

Because designed action sampling is based on content packages with multiple tasks that are reflected upon preferably on a weekly basis, many of the unique advantages of longitudinal studies are realised. Each participant makes a new reflection approximately every two weeks. The research leaders then become trusted co-travellers on the teachers’ everyday journey and also gain a dynamic and detailed picture of when, how and why different phenomena occur and the resulting effects. The form and the concise task descriptions also mean that this is done in a time-efficient way. The school avoids the main disadvantages of longitudinal studies – time consumption and cost.

The participants themselves choose the approximate time when they reflect on a task they have just completed. This allows data to be collected in real time, close to when something interesting has been done, which makes the data more reliable, more detailed and more emotionally charged – especially in comparison with a cross-sectional study, where data is usually collected at a time determined by the researcher, who rarely manages to pinpoint dynamic processes or crucial emotional moments in the school. An exciting way of working longitudinally is given in Example 7.

Example 7: Values-based work in primary schools

Working with students’ values is an important part of promoting safety and well-being in schools. However, a difficult question is how this can be done in practice. In a development project at one school, around 70 students were asked to carry out concrete actions aimed at strengthening their sense of community. The goal was a healthy school with a good atmosphere, good relations and few cases of offence.

The tasks could be about everyday actions such as looking the school restaurant staff in the eye and saying thank you with a smile, saying hello to someone they don’t normally say hello to, being kind enough for someone else to say thank you, getting someone who rarely talks at lunch to talk a little more, supporting a friend who seems to be left out, or getting as many people as possible to participate in a joint activity. After each completed task, the students had to write a short reflection, report the feeling and estimate the effects. Students then received feedback from an adult. Tags captured effects such as “More people cheering for each other”, “More people doing things together” and “More people caring for each other”.

The project was so successful that it was both extended and spread more widely in the municipality. The responsible special educationalist Åsa Sourander and the author John Steinberg wrote a book about the project: Värdegrundsarbete i praktiken: metodbok för skolan (Steinberg & Sourander, 2019; in English: “Values-based work in practice: a method book for schools”). They called the approach ‘behaviour-based values development’, and it involves illustrating abstract concepts such as solidarity, equality, respect, consideration and acceptance with concrete behaviours that students themselves can try out in practice. A new task is tested every week for six weeks. Important elements are that all students are involved, that the students’ actions are systematically followed up via written reflection afterwards and that an adult provides feedback.

When all students had to complete the same task at the same time, the impact was greater. The actions led to many interesting conversations between students, both in class and during breaks. The atmosphere in the school became calmer and more pleasant. In the school restaurant, students talked to the staff in new ways.

This way of working requires perseverance. After a round of six tasks, you can take a break. Then a new round is needed. Some tasks were difficult to implement in practice, such as getting a shy friend to talk more.

Deep reflection – everyone thinks on a deeper level

The most important field on the designed action sampling form is the written reflection that teachers are asked to do after completing their assignment. Reflection plays a key role in all learning. This is especially true for action-based learning, where learning takes place actively and practically, inside or outside the classroom. Simply doing something is not enough for deeper learning to take place. Doing must also be linked to content knowledge and previous experiences through deep individual personal reflection. Only then is new useful knowledge created that the individual can apply in their future problem solving. Reflection on the form in Figure 4.2 is thus not only a way of collecting data, but also fosters teachers’ learning. Time set aside for reflection is a kind of ‘glue’ that helps teachers in the all-important task of combining theory and practice.[9]

For many people, reflection is an automatic and unconscious thought process. As early as the 1970s, researchers Argyris and Schön studied how managers, architects, teachers and others developed in their professional roles through individual critical reflection on their own actions; Schön later coined the term ‘reflective practitioner’ for such professionals.[10] Particularly competent practitioners often dared to challenge established ways of thinking in the workplace through so-called double-loop learning: a type of learning where norms and goals are critically scrutinised.

A frequently used way of explaining double-loop learning is based on an ordinary heating element in a room, with a thermostat set at 20 degrees. Single-loop learning is then about the thermostat constantly trying to maintain the target value of 20 degrees in the room. Double-loop learning is instead about the thermostat exercising a kind of self-criticism and questioning whether 20 degrees is even a suitable temperature for the room.
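The thermostat metaphor can be sketched in a few lines of code. This is a hypothetical illustration only – the function names and the rule for revising the setpoint are invented for the example, not part of designed action sampling:

```python
# Single-loop learning: adjust behaviour toward a fixed goal.
def single_loop(temperature: float, setpoint: float = 20.0) -> str:
    """Turn the heater on or off to reach the given setpoint."""
    return "heat on" if temperature < setpoint else "heat off"


# Double-loop learning: also question whether the goal itself is right.
def double_loop(temperature: float, setpoint: float, occupants: int) -> float:
    """Revise the setpoint itself, based on how the room is actually used.

    Example rule (invented for illustration): an empty room does not
    need to be kept at 20 degrees, so lower the target to 15.
    """
    return setpoint if occupants > 0 else 15.0
```

Single-loop learning only ever calls the first function; double-loop learning periodically runs the second one to re-examine the goal the first one is chasing.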

A high and increasing pace in schools risks crowding out the all-important time required for individual critical reflection on one’s own practice and its goals. Neo-liberal school policy has also led to increasingly strict target management through various steering documents, with ever more detailed formulations of objectives that must not be questioned.[11] Reflective practices can thus perhaps be said to be an endangered species in schools. Here, designed action sampling can contribute with an increased focus on critical reflection.

If school leaders ask all staff to engage in a few minutes of individual written reflection at least every two weeks or every month, over time the priorities in the workplace will change. In the long run, this leads to more self-aware and more critically reflective teachers. If individual written reflection becomes the norm in the workplace, employees will also spend more time developing their teaching, as they will be forced to stop and think more deeply about how they work and why they do so. Teachers’ efforts to verbalise their thoughts also contribute to stronger ownership and deeper insights into various attempts to develop their own practice.[12]

Reflection should be timed close to the actions to which it relates, preferably on the same day. This is called reflection-in-action.[13] This makes the reflection more emotionally alive. It also makes it easier to mentally recreate one’s own actions in order to review and reassess them critically.[14]

There is a risk that written reflection becomes superficial and uninteresting, both for the teachers who write down their thoughts and for the peer learning leaders who read and give feedback. It is therefore important, as a peer learning leader, to encourage the teachers, already in the task description, to:[15]

  • not only describe what they were doing, thinking and feeling, but also reflect on why they were doing, thinking and feeling that way
  • try to relate their practical experience to relevant theory and literature
  • try to formulate any new insights
  • try to write down what surprised them
  • try to relate to the feelings the task led to – what made them feel strongly positive or strongly negative, and why do they think they felt that way?

Good reflection often requires taking a step back and considering alternative ways of perceiving the situation, and thinking about how others might view the situation. The deepest level of reflection involves trying to articulate your changed beliefs and values at a deeply personal level.

Peer learning leaders need to guide teachers towards deeper reflection. This is best done by letting each task description end with instructions for how teachers should reflect after the action, using the form in Figure 4.2. If the peer learning leader is then not satisfied with the depth of the reflections, he or she may need to reconsider and revise the task description – more specifically, the part that instructs teachers how to reflect. Peer learning leaders may need to experiment and continuously ask themselves:

What wording in the task description makes teachers’ written reflections interesting and deep?

Another way to get deeper and more interesting reflections from teachers is to give them regular and thoughtful feedback on their reflections.

Not everyone is initially able to reflect in depth in writing. It is also quite common for some individuals to resist or even refuse to reflect.[16] Over time, however, these problems pass, especially if the school management insists on the importance of written individual reflection. Teachers develop a good ability to reflect deeply in writing over time, through practice and with good support from their peer learning leaders. The peer learning leader feedback box on the form in Figure 4.2 could be used to provide teachers with different tips for deeper reflection.

Mixed method – collection of both text and numbers

Designed action sampling is based on mixed methods. Mixed methods in a research study means collecting both qualitative data (e.g. text) and quantitative data (mainly numbers) in the same study.[17] Some advantages of this are the ability to:

  • analyse through triangulation – studying an issue from several different angles
  • simultaneously see both the details (zoom in on the text) and an overview of the whole (zoom out via numbers)
  • flexibly switch between two different worlds of analysis – text and numbers.[18]

Despite many advantages, mixed methods are extremely rare in the social sciences. One reason is that the social science research community is divided into two camps – qualitative and quantitative research. According to this division, researchers should work either only with numbers and statistics from, for example, surveys, or only with text and other qualitative statements from people via, for example, interviews, video, audio, photos, observations, case descriptions, field notes or archival documents. This bitter struggle between the two camps has been called a paradigm war, where too much effort has been put into describing and emphasising the differences[19] – although there are actually more similarities. For example, both camps try to:[20]

  • reduce and describe the collected data in a summarised and clear manner
  • analyse collected data using different analysis methods
  • build explanatory models
  • draw meaningful conclusions.

With such common goals, co-operation should be a better research strategy than war.

The form in Figure 4.2 shows that designed action sampling is based on mixed methods. The form has several fields both for qualitative free text and for selections of emotions and tags that can then be quantified and analysed numerically. This achieves many of the benefits traditionally associated with mixed methods. The work of hundreds of teachers on multiple tasks can be visualised relatively easily in a single summary table. If you then want to know more about the details, the analysis turns to the qualitative data. Text analysis can then tell the story behind the numbers: why they are the way they are.

The designed action sampling form always has the same structure and thus always contains both free text and quantifications. This leads to an important advantage: all text from each participant is always linked to quantitative data. In the analysis phase, peer learning leaders can therefore always easily go back and forth between text and numbers. For each number, there is text from many participants explaining underlying cause-and-effect mechanisms. For each snippet of text, there are numbers (emotional states and tags) that link the text to other participants’ texts so that overall patterns can be searched for and visualised.
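The text–number linkage described above can be pictured as a simple record per completed form. The sketch below is hypothetical – the field names are invented for illustration and do not come from the book – but it shows how every piece of free text stays attached to its quantitative data:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class CompletedForm:
    """One completed designed action sampling form (hypothetical fields).

    Each qualitative reflection is stored together with its quantitative
    data, so analysis can move back and forth between text and numbers.
    """
    participant: str
    task: str
    reflection: str   # qualitative: free-text reflection
    emotion: int      # quantitative: five-point scale, -2 .. +2
    tags: List[str] = field(default_factory=list)  # quantitative: effect tags


# Example record for one teacher and one task (contents invented).
form = CompletedForm(
    participant="Teacher A",
    task="Task 3: peer observation",
    reflection="The students surprised me by...",
    emotion=1,
    tags=["More students participating"],
)
```

Because every record carries both kinds of data, one can filter the texts by emotion score or tag, or explain a numeric pattern by reading the reflections behind it.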

Emotional rating – all reflections are emotionally rated

Each time the form in Figure 4.2 is completed, teachers are also expected to rate their experience emotionally on a five-point scale. The choice is made by selecting a suitable image from five different emotional states. The teachers’ choice of image is then converted into numbers, with each image representing a number between -2 and +2. The negative numbers represent negative emotional states, and the positive numbers represent positive emotional states. Zero represents a neutral emotional state. The numbers can then be used to quickly sort all the data and calculate averages. This makes the data easier to analyse; see the next chapter.
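The conversion from image choice to number, and the averaging across teachers, can be sketched as follows. The labels in the mapping are invented stand-ins for the five images; only the -2 to +2 scale comes from the method itself:

```python
from statistics import mean

# Hypothetical mapping from the five image choices to the -2 .. +2 scale.
IMAGE_SCORES = {
    "very negative": -2,
    "negative": -1,
    "neutral": 0,
    "positive": 1,
    "very positive": 2,
}


def mean_emotion(image_choices):
    """Average emotional rating for one task across many teachers."""
    return mean(IMAGE_SCORES[choice] for choice in image_choices)


# Example: five teachers' ratings of the same task (invented data).
ratings = ["positive", "very positive", "neutral", "positive", "negative"]
print(round(mean_emotion(ratings), 2))  # 0.6
```

A mean close to +2 suggests a task that teachers are likely to keep working with; a negative mean flags a task worth redesigning or discussing.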

The five-image scale is derived from psychological research on emotions.[21] The original version dates from 1980 and is called the Self-Assessment Manikin (SAM). The researchers who developed SAM wanted to find a simpler way to collect people’s feelings in different situations than the relatively complicated text-based scales that were dominant at the time.

A picture-based scale makes it easier and faster for participants in research studies to rate their emotional state, without the researchers losing reliability compared to text-based scales. SAM has been used to allow participants to rate their experience mainly in medical, psychological and marketing research.[22]

People’s feelings about a phenomenon say a lot about how they will choose to act in relation to it in the future – often even more than their thoughts.[23] The calculated mean of many teachers’ emotional ratings on the form in Figure 4.2 can therefore provide a relatively reliable indication of how likely they are to continue working on the task’s action in the future. When comparing different completed tasks, it is thus always interesting to look at the mean of all participants’ emotional ratings of each task.

Effect coding – all data is tagged according to effects

An important part of the form for designed action sampling in Figure 4.2 is the tagging of different effects. A tag is a short phrase of a few words that summarises an effect, experience or behaviour of interest to the research being conducted. Tags can therefore be both positive and negative. All eleven examples in this book contain concrete examples of tags used in each study.

After writing a reflection and estimating the emotional state in the form, the participant chooses among different tags that represent interesting effects and experiences of different kinds. It is possible to choose more than one of the tags, if there are several that fit. The main purpose of this tagging is to collect quantitative data on what effects the participants could see after a completed task. Thus, tagging is an important part of enabling and facilitating the analysis of causal cause-and-effect relationships.
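Because a form may carry several tags, quantifying the tagging amounts to counting how often each tag was chosen across all completed forms for a task. A minimal sketch, using invented example data (the tags are borrowed from Example 7):

```python
from collections import Counter

# Hypothetical tag selections from three completed forms for one task.
forms = [
    ["More people cheering for each other", "More people caring for each other"],
    ["More people doing things together"],
    ["More people cheering for each other"],
]

# Count how often each effect tag was chosen; a form may carry several tags.
tag_counts = Counter(tag for tags in forms for tag in tags)

print(tag_counts.most_common(1))
# [('More people cheering for each other', 2)]
```

Such frequency counts per task make it easy to see which effects participants reported most often, before turning to the reflections for the story behind the numbers.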

The tags that participants will be able to choose from are decided by the peer learning leaders at the start of a study. Participants can also be invited into the process of deciding which tags to choose from. However, it is not easy to come up with a good set of tags. It requires creativity, experience in the area to be studied, a lot of thought and many attempts. Therefore, the first time a content package is used, peer learning leaders may have to start with a merely good-enough set of tags. Each time a content package is used again, the set of tags can be developed further based on how it worked last time. Which tags worked well? Which tags were missing? Which tags captured interesting phenomena and relationships well?

A tag should not be longer than four to five words and can be even shorter. This makes it easier for participants. Tags should be tested on a smaller group of participants before a study starts, to ensure that many participants understand roughly what each tag means.

The idea of tags is derived from a well-established qualitative research method called grounded theory. It was developed in the 1960s by sociologists Barney Glaser and Anselm Strauss. Their approach was mainly inductive, which means trying to build theories from empirical data, mainly qualitative interview studies. This is why it is called ‘grounded theory’ – it refers to theories that are well grounded in empirical data.

In grounded theory, coding of text-based interview data is an important step. The researcher looks for patterns and relationships by analysing interviews that have been recorded and then transcribed, i.e. written down word by word. The analysis is done through a creative generation of different conceptual words, called ‘codes’. At best, the codes describe the studied practice well, and at the same time form a conceptual basis for a theory of how a certain phenomenon works more generally. These codes are then linked to relevant passages in the interview text using software specifically built for this purpose.[24]

In grounded theory, coding of interview data is an important part of linking from practice to theory.[25] In line with this, tags are used in designed action sampling to facilitate linkages from practice to theory. However, it is the participants themselves who code their own data, unlike in grounded theory, where the researcher has to code all the text afterwards with the help of software. Having the participants code their data themselves saves a lot of time, but may involve a small risk of misleading coding. Especially if participants do not understand, or misunderstand, what some tags mean.

The methodological literature on grounded theory contains many good tips that can be used in the process of designing tags for the designed action sampling form. Ideas for suitable tags can, for example, be drawn from perspectives and effects that are perceived as particularly interesting or important, from what has emerged in previously collected data and from literature in the field.[26]

There are also some creative tricks. Corbin and Strauss (1990, pp. 75-95) write about the importance of analysing collected data by:

  • asking many questions – what, who, when, where, how, how much, why?
  • analysing key words and key phrases that appear in the collected data – what does the word/phrase really mean?
  • trying to turn situations around and see opposites – what would be the opposite of what we see in our data?
  • systematically comparing two or more instances of the same phenomenon – what similarities and differences do we see?
  • comparing with something completely different – what can we learn about this from some completely different phenomena?
  • never taking anything for granted – is it really always/never a certain way around this phenomenon?

Such questions generate ideas for appropriate codes, or tags as we call them in designed action sampling.

Feedback – everyone gets quick feedback from peer learning leaders

Once the peer learning leader has received a completed form, it is important to promptly provide feedback on its content to the teacher who completed it – preferably within a few days, or by a specified deadline. For this purpose, there is a feedback box at the bottom of the designed action sampling form, see Figure 4.2. The peer learning leader first fills in the feedback box with their own thoughts and tips and then makes a photocopy of the form. The copy is returned to the teacher. Through this procedure, both the teacher and the researcher have access to all the information. The teacher can save their own reflections and go back to them later. The researcher saves all completed forms for the subsequent analysis phase. Of course, if the work is done with some form of digital support, all the paperwork is avoided.

Feedback is like rocket fuel for learning[27] and is just as important for teachers’ learning as it is for students’ learning.[28] Teachers who receive feedback on their reflections feel seen, get clues and suggestions for their own learning, and get important information about how they are doing in relation to various expectations. These can be expectations from peer learning leaders, school leaders or teacher trainers. Feedback from those leading the work is also particularly powerful in organisational development work.[29] Getting quick feedback from an expert can be particularly motivating. Here is what one teacher said:

The feedback on our task reflections came immediately and I felt that wow, this is so much fun! Getting feedback from an expert is a huge development for everyone!

Teachers who do not receive feedback risk losing both their own motivation and trust in management. If no one in a leadership position seems to care enough to read and give feedback on their reflections, one might wonder what the point is of reflecting and working with development at all. Teachers who do not receive feedback also miss out on important clues in their learning process and thus risk losing their focus on development.

Lack of feedback may even be one of the main causes of poorly functioning organisations, as it leads to a harmful imbalance between practical action and theoretical reflection.[30] The intimate and relational nature of designed action sampling also makes it particularly important for peer learning leaders to provide feedback to each participant, as it is an important part of perceived reciprocity: that there is a relational balance between giving and taking. All in all, as a peer learning leader, you should definitely not forget to fill in the feedback box at the bottom of the form in Figure 4.2. In the worst case, neglecting it can render the work you have done in vain.

Some peer learning leaders may find it difficult to give good feedback. It can be reassuring to know that it is often enough to acknowledge the participant with a short sentence or two, and perhaps include (at most) one in-depth question for further reflection. It is also more important to give quick feedback to everyone than to give perfect feedback. Therefore, peer learning leaders should not be more ambitious with their feedback than the time available to give everyone quick feedback allows.

Research on formative assessment can provide many concrete tips on feedback. Hattie has suggested that feedback providers start with the following three questions, asked from the recipient’s perspective:[31]

  • Where am I going?
  • How am I doing?
  • What is the next step?

It is also important not to focus too much on pleasant but trivial praise such as “Good job!”, as this is the least effective form of feedback.[32] It is better to give personal feedback on a specific detail in what the teacher has written in his or her reflection – for example, something that aroused the peer learning leader’s genuine interest.


[1] ESM as a method has been shown to have very high ecological validity, meaning that the information collected is very much in line with the reality experienced by the participants. See further details in Hektner et al. (2007, p. 7) and Stone et al. (2003, p. 28).

[2] For a longer discussion on how the ESM reduces such bias, see Stone et al. (2003, p. 28).

[3] See Yin (2009).

[4] For a detailed discussion, see Wunsch et al. (2010).

[5] See Neale (2018, p. 15).

[6] Read more about this difference in Neale (2018).

[7] Chapter 4 of Neale (2018) is devoted entirely to this.

[8] Read more about reciprocity in qualitative longitudinal research in Neale (2018, pp. 82-85).

[9] Read more about the glue metaphor in Hägg (2018).

[10] See for example Argyris and Schön (1974) and Schön (1983).

[11] See Karlsson (2017), Bornemark (2018) and Crocco and Costigan (2007).

[12] For a detailed review of the value of reflection, see Moon (2004, pp. 79-94).

[13] See Schön (1983).

[14] A model for such reflection has been proposed by Boud et al. (1985). See also Boud and Walker (1993).

[15] These tips are taken from Hatton and Smith (1995) and from Kember (1999).

[16] This is described by Moon (2004, p. 89).

[17] See definition in Venkatesh et al. (2013, p. 22).

[18] For a review of the various advantages of mixed methods, see Onwuegbuzie and Leech (2005).

[19] Read more about this in Bergman (2008) and in Onwuegbuzie and Leech (2005).

[20] These similarities are described by Onwuegbuzie and Leech (2005).

[21] See Bradley and Lang (1994). See also Lackéus (2014).

[22] See for example Morris et al. (2002).

[23] This is described by Morris et al. (2002).

[24] See also the use of software in coding in Hutchison et al. (2010).

[25] See Corbin and Strauss (1990, p. 74).

[26] See recommendations by Corbin and Strauss (1990).

[27] For a comparison between feedback and other learning strategies, see Hattie (2011).

[28] For an overview of the value of feedback, see Gamlem and Smith (2013).

[29] See Huisman (2006, p. 14).

[30] Kolb (1984, pp. 21-22) writes about this in his book on experiential learning.

[31] See Håkansson and Sundberg (2012, p. 213).

[32] See Håkansson and Sundberg (2012, p. 214).