What is research?
If you have ever tried to find something out or asked ‘how do they know that?’, then you have dabbled in the research process. At a basic level, it is about being inquisitive and learning to develop an ‘evidence base’ to support your opinions, views and theories about the world. Research can be defined as an inquiry or investigation, or a process of investigation. Academic research is also described as systematic. This means that it relates to or consists of a system. So research is a process of investigation which needs to be systematic.
Brown & Dowling (1998: 1) reflect on what research is not, stating that ‘research is properly conceived, not, primarily, as a sequence of stages, not as a collection of skills and techniques, nor as a set of rules, although it entails all of these. Rather, it should be understood …, as the continuous application of a particularly coherent and systematic and reflexive way of questioning, a mode of interrogation’.
The definition of research as systematic is important as this separates it from other forms of enquiry or investigation. Research needs to be rigorous, thorough and methodical.
The purpose of research
Discovering facts is the most obvious and simplest reason for research, but an investigation needs to be systematic, methodical, and to contribute to an existing body of knowledge in order to be called research. For example, looking up the bus timetable does not do this: that body of knowledge already exists, and investigating when the next bus to Edinburgh is running does not add anything new.
In contrast, the purpose of academic or scientific research is to revise an existing theory or theories, to shape academic practice or to suggest new interpretations of data. Research is also used to test hypotheses, or to identify new questions (for instance, knowledge gaps) that need to be answered.
Primary, secondary and empirical research
Our methods of ‘finding out’ will vary depending on the topic, how much we want to know and the availability of the information. Similarly, in academic research there are different methods for gathering evidence (data) to answer a question. These generally fall into two categories: primary methods and secondary methods.
Primary research includes experiments or investigations that are carried out to collect data first-hand, rather than making use of data from other (published) sources. This is the first-hand account of a study written by the person(s) who undertook the research. This can be described as original research.
Secondary research refers to the analysis and interpretation of primary research. The researcher is not involved in the initial studies but collects published articles of primary studies on a particular topic and explains what the findings of these show. Secondary research may also use available data from a study where they did not collect the data themselves.
Empirical research is research that is based on observation or experiment, where knowledge is gained from experience and not from theory or belief (see Week Three on a posteriori knowledge). It is essentially primary research; however, you may come across both terms in your reading.
History of research
Alasuutari et al. (2008) comment that ‘methodology is not some super-ordained set of logical procedures that can be applied haphazardly to any empirical problem’. Methods are therefore only one part of research methodology. Methodology is made up of different strategies and procedures which are about finding the best means to answer a particular research question. The term methodologies is usually employed to indicate the sets of conceptual and philosophical assumptions that justify the use of particular methods (Payne and Payne 2011: 148). Research involves ‘…choices about methods and the data to be sought, the development and use of concepts, and the interpretation of findings’ (Blumer 1969: 23).
Different research methods have been developed over time, and there is debate and critique over the relative values of each. Different methods also have different levels of regard depending on the subject discipline. For example, within medicine the randomised controlled trial (RCT) is considered to be the gold standard, although this is not suitable for many other areas of research, particularly in the social sciences. Although interest in qualitative methods is more recent (Alasuutari et al. 2008), compared with the dominance of statistical methods since the 1930s, there is now an overwhelming use of qualitative methods in the social sciences in Britain. For instance, research by Payne et al (2004) showed only 1 in 20 published papers in British journals used quantitative analysis. This demonstrates how particular methods may gain/lose popularity.
An awareness of the methodological debate is helpful when considering the design of your research project.
Methodologies and philosophical paradigms
Research designs have three components to them, as outlined by Creswell (2013).
- The philosophical paradigm is the starting point for any design (for a definition of paradigm, please see Miller and Brewer 2003).
- Next is the chosen strategy for the inquiry.
- Finally the research methods or instruments.
To elaborate on this: methodology is made up of the different strategies and procedures which are about finding the best means (or methods) to answer a particular research question, together with the sets of conceptual and philosophical assumptions that justify the use of particular methods (Payne and Payne 2011: 148).
To summarise, a researcher's philosophical approach or worldview will shape their research design. Creswell (2013) identifies four main worldviews, each with particular strategies and methods associated with it.
Positivism is often considered to be the traditional research paradigm. It is associated with reductionism, determinism, empirical observation and measurement, and verification of theory. Positivists apply the basic methods of the natural sciences to studying social actions, such as the testing of hypotheses, the search for explanations and causes which connect observations, and the identification of laws or regular patterns from those observations (Payne and Payne 2011: 172-173). Positivism assumes the absolute truth of knowledge (based on the work of Auguste Comte, 1798-1857), while postpositivism challenges that idea (Phillips & Burbules 2000).
Positivism sees the world as consisting of phenomena that can be observed. The observer therefore develops theories that describe the phenomena, in particular describing the order in which events take place and making testable predictions about how that order will display itself in the future (Payne and Payne 2011: 171). Positivism is often associated with quantitative research methods. Postpositivism challenged the notion of the absolute truth of knowledge. A postpositivist research approach results in knowledge gained from careful observation and measurement that is tested and verified against different theories or laws (Creswell 2009).
There has been longstanding debate within the social sciences regarding how useful the application of natural science methods to the study of the social world is, in particular the positivist approach to studying social phenomena (Matthews and Ross 2010: 28). Interpretivist approaches argue that social research must include understandings and explanations of social phenomena that are not directly observable by the senses but can be interpreted. For example, social constructionism argues that people make their own reality and develop subjective meanings of their experiences. These meanings can be varied and multi-dimensional; the researcher therefore looks for the complexity of views rather than trying to narrow this into a few categories or ideas. The intent of the researcher is to make sense of or interpret these meanings, so rather than starting with a theory (as with positivism), researchers generate or inductively develop a theory or pattern of meaning (Creswell 2009: 8). Researchers with this philosophical worldview argue that their interaction with participants is a key part of the research. This interaction provides a deeper understanding and interpretation of social life (Miller and Brewer 2003: 41). Interpretivism and social constructionism are generally associated with qualitative research methods.
The pragmatic approach is concerned with application and solutions to problems, including what works. Therefore, this worldview is not based on a particular theoretical perspective or epistemological stance.
It does not accept a single form of reality but rather accepts that truth is what works best at the time (Cherryholmes 1992). Pragmatists see research as situated in social, historical, political and other contexts. For pragmatists, it is important to focus on the research problem/question and then use pluralistic approaches to derive knowledge about the problem (Creswell 2009).
Methodological pluralism treats all methods as equal and assesses the merits of any given method in terms of how appropriately it can answer the research question (Payne and Payne 2011: 149). Therefore, methodological pluralism means that readers need to know why we chose a particular research method e.g. why we thought it was the best method to select and how well we used this method (ibid). Pragmatist researchers choose from a range of research methods that best suit the needs of the research question being asked and can select from all the approaches available to understand the problem. The pragmatic approach is most commonly associated with mixed methods research.
Participatory research draws on the work of people like Marx and Freire (Neuman 2000) and in particular ‘identifies with Habermas’ (1972) articulation for the need for a critical science which serves emancipatory interests’ (Miller and Brewer 2003: 225). Habermas (1972) argues that knowledge, methodology, and human interests are linked and therefore this approach rejects positivism’s suggestion that only ‘neutral’ experience can provide a foundation for valid knowledge (Miller and Brewer 2003: 225).
This approach has a particular political agenda around social justice and the needs of powerless, marginalised peoples. This type of research is used by people who reject the postpositivist approach but also feel the constructionist approach is inadequate (Kemmis and Wilkinson 1998). Advocacy/Participatory research has an agenda for change to improve the lives of the people being researched, and the researcher is usually part of this reform. Advocacy/Participatory research addresses issues such as ‘empowerment, inequalities, oppression, domination, suppression and alienation’ (Creswell 2009: 9) and it provides a voice for the participants who, because of its collaborative nature (to avoid further marginalisation), may be engaged in the design and implementation of the research.
This worldview may combine theoretical perspectives with philosophical assumptions, such as feminist perspectives, racialised discourses, critical theory, queer theory, and disability theory (Creswell 2009: 9). This approach tends to use qualitative methods, although it may also use quantitative research, usually to provide descriptive statistics to underpin the discussion of a particular issue or to highlight inequalities (Creswell 2009).
There are other approaches; see, for example, Feminist Research (Payne and Payne 2011: 89-93). If a researcher indicates that they have adopted a feminist methodology, or an interpretivist methodology, this indicates to the reader the sets of values that were brought to the study, why the study used a particular research design, and how the interpretation and analysis were developed (Payne and Payne 2011: 151). It is important to consider your own epistemological stance or philosophical worldview when designing your study.
Research ethics

As Payne and Payne (2011: 66) comment, ‘Ethical practice is not an add-on to social research but lies at its very heart.’ Ethical conduct is central to the research project: it provides the basis which legitimates the project, and it is part of the research design, organisation and conduct of the project. Ethical responsibility is essential at all stages of the research process, from the design of the study, to how participants are recruited and treated throughout the research, to the consequences of their participation (Miller and Brewer 2003: 95).
An assumption is made by academic researchers that other researchers are behaving honestly, that their data is not invented, and that they are not suppressing findings or only selectively reporting on parts that support their own theoretical position. This is important so that we can rely on the knowledge produced by research. Following on from this, one breach of ethical conduct is plagiarism. Under no circumstances should a researcher use another author’s work without rightful or appropriate acknowledgement (Miller and Brewer 2003: 98). However, the requirement for proper conduct does not mean that what is published should be regarded as absolute ‘truth’ (Payne and Payne 2011: 67) – data collection or analysis mistakes can happen and there may be differing interpretations of the findings.
Ethical conduct towards the participants and informants within the research project involves three key elements: informed consent, the protection of participants’/informants’ identities and ensuring that no harm is done to participants/informants.
These may sound simple; however, implementing these principles rigorously can be complex. For instance, participants/informants may not fully comprehend what the research entails, for example ‘research’ may not be a term that exists in some languages. Some research entails not disclosing to the participants/informants that the research is taking place, for example ‘covert’ research into groups such as fascists, criminals or religious extremists. However, this has been critiqued by advocates of feminist methods, who have argued that this is an abuse of power by the researcher. They argue that research should be a collaboration between researcher(s) and participants/informants in which participants/informants are enabled to participate in and expand the terms of the project (Payne and Payne 2011: 69). Anything less would be unethical. No harm to participants may sound straightforward, however research on sensitive issues may cause anxiety or distress to the informants. For example, on carrying out research with particularly vulnerable groups of children, such as children who have suffered abuse or exploitation, Mudaly and Goddard (2009) note:
'Is it ethical for children to experience pain or sadness when talking about their experiences of abuse for purposes of research? Can they be re-traumatised by this experience? How can confidentiality be guaranteed if there are concerns about current abuse? These are some of the ethical questions that arise when children who have been abused are involved in research. Yet it is also recognised that children have a right to be involved in research. The critical dilemma is how to balance the welfare rights of children to be protected from any possible exploitation, trauma and harm with their right to be consulted and heard about matters that affect them.'
The complexity of ethical practice and the diversity of research styles and settings mean that universities and other institutions (e.g. the NHS) have Ethics Committees to vet new research proposals. UHI has its own Ethics Committee, and all undergraduate, postgraduate and staff research projects must gain ethical approval/clearance before undertaking empirical research.
In conclusion, ethics are established and utilised to protect the researcher(s), their work and the subjects of research. The application of ethical principles should be the central consideration in any research you undertake.
Research strategies

The second component of a research design, as suggested by Creswell (2013), is the research strategy (or strategy of inquiry). This can also be referred to as research methodology (as opposed to research methods).
There are three main categories of research strategy: quantitative, qualitative and mixed methods. We will discuss each of these broadly here, and in the following sections will look specifically at some of the research methods or instruments that are used within these strategies. The ‘category’ and ‘type’ of any research strategy will influence the research ‘method’ chosen and used, so it is important to understand this in order to design your study.
Quantitative research has been predominantly associated with a positivist/postpositivist worldview (see earlier discussion regarding positivism). A simplistic explanation of quantitative research is that it is related to numbers and measurements, particularly through the use of statistics. Payne and Payne (2011:181-182) state that ‘almost all forms of quantitative research share certain features:
- The core concern is to describe and account for regularities in social behaviour, rather than seeking out and interpreting the meanings that people bring to their own actions.
- Patterns of behaviour can be separated out into variables, and represented by numbers (rather than treating actions as part of a holistic social process and context).
- Explanations are expressed as associations (usually statistical) between variables, ideally in a form that enables prediction of outcomes from known regularities.
- They explore social phenomena not just as they naturally occur, but by introducing stimuli like survey questions, collecting data by systematic, repeated and controlled measurements.
- They are based on the assumption that social processes exist outside of individual actors’ comprehension, constraining individual actions, and accessible to researchers by virtue of their prior theoretical and empirical knowledge.’
Quantitative research often tests theoretical hypotheses using deductive rather than inductive logic. Most quantitative research has less detail than qualitative methods, but with a wider scope and a more generalised level of explanation.
Qualitative methods are ‘especially interested in how ordinary people observe and describe their lives’ (Silverman 1993: 170). Qualitative research is strongly associated with a social constructivist worldview (Creswell 2009) and is simplistically described as being about words, although it would be more accurately described as being about ideas, opinions, understandings and so on. Payne and Payne (2011: 176) comment that almost all qualitative research shares these features:
- ‘The core concern is to seek out and interpret the meanings that people bring to their own actions, rather than describing any regularities or statistical associations between ‘variables’.
- They treat actions as part of a holistic social process and context, rather than as something that can be extracted and studied in isolation.
- They set out to encounter social phenomena as they naturally occur (observing what happens, rather than making it happen).
- They operate at a less abstract and generalised level of explanation.
- They utilise non-representative, small samples of people, rather than working from large representative samples to identify the broad sweep of national patterns.
- They focus on the detail of human life.
Rather than starting with a theoretical hypothesis, and trying to test it, they explore the data they encounter and allow ideas to emerge from them (i.e. using inductive, not deductive, logic).’
Although it is conventional to divide social research into two types, quantitative and qualitative, it is not uncommon for researchers to use both types of methodology in their research. This is known as the mixed-methods approach: researchers recognised that the limitations of a particular approach or method could be minimised or cancelled out by triangulating data sources. This originally meant finding ways to bring together different methods, but it soon became the means to integrate or connect different kinds of data (Tashakkori & Teddlie 2003). So, for example, a questionnaire might identify issues that can be explored further in interviews, or a focus group could establish questions for use in a survey. Data from one method can be used to reinforce the data from another, for instance using quotes from interviews to support statistical data (Creswell & Plano Clark 2007).
Three main mixed-methods strategies are presented by Creswell (2009): sequential, concurrent and transformative.
Research methods

This is the third and final component of Creswell’s (2009) research design model. The research method(s) chosen are influenced by the chosen research strategy, which is itself formed in response to the researcher’s worldview (philosophical or epistemological stance). Research methods refers to the specific techniques or instruments used to collect data, and also to the analysis and interpretation applied to the data. Methods contrast with methodology, which ‘deals with the characteristics of methods, the principles on which methods operate and the standards governing their selection and application’ (Payne and Payne 2011: 150). As Payne and Payne (2011: 151) comment, understanding a researcher’s methodology helps to show how methods are constructed from prior orientations and knowledge. There are many different instruments or methods that can be used within each of the strategies identified earlier.
Questionnaires and structured interviews
Questionnaires are commonly used in survey research. These involve all participants in the research (the sample) being asked the same questions, in the same order. Questionnaires can be quantitative, qualitative or both.
There are two main types of questions, ‘open-ended’ or ‘closed’. Open-ended (or qualitative) questions leave the answer entirely to the respondent, this allows for more detailed responses and depth to the survey. Closed (or quantitative) questions are phrased in a closed format, offering a number of fixed answers from which respondents must choose. The advantage of closed questions is that they can be easily classified at the coding stage or even pre-coded on the questionnaire.
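To illustrate what coding or pre-coding means in practice, the sketch below maps the fixed answers of a closed question to numeric codes ready for analysis. This is a minimal, hypothetical example: the question scale, answer wording and code numbers are invented for illustration, not taken from any particular survey instrument.

```python
# Hypothetical pre-coded answer options for a closed (Likert-style) question.
# Each fixed answer maps to a numeric code used at the analysis stage.
ANSWER_CODES = {
    "strongly agree": 1,
    "agree": 2,
    "neither agree nor disagree": 3,
    "disagree": 4,
    "strongly disagree": 5,
}

def code_response(answer: str) -> int:
    """Return the numeric code for a closed-question response."""
    return ANSWER_CODES[answer.strip().lower()]

# Example: coding a small batch of responses to the same question.
responses = ["Agree", "strongly disagree", "Neither agree nor disagree"]
coded = [code_response(r) for r in responses]
print(coded)  # [2, 5, 3]
```

Because every respondent chooses from the same fixed list, the coded values are directly comparable across the whole sample, which is what makes closed questions easy to classify, in contrast to open-ended answers, which must be interpreted before they can be categorised.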
Questionnaires can be administered by an interviewer (this is also known as a structured interview) or as a self-completion questionnaire. For questionnaires administered by an interviewer, interviewers must follow the questionnaire exactly, in order, and not make any changes to the question wording. They may ‘prompt’ for more information, but only at the points marked in the questionnaire. They must not deviate from the questionnaire, as this could introduce additional extraneous factors and therefore distort the findings (Payne and Payne 2011: 130).
For self-completion questionnaires, such as postal questionnaires or online surveys, the instructions for completing the questionnaire must be very clear and the structure must not be complicated.
The order in which questions are asked is also important, as this can have an influence on the answers. Questions should flow into each other similar to a normal conversation, however it is sometimes possible to ‘hide’ a question among other topics in order to ‘check’ previous responses (Payne and Payne 2011: 188).
The questions asked need to be clear and easily understandable to all respondents. For instance, each question should mean the same to everyone involved so that it is possible to compare answers. Questionnaire design should also avoid leading questions – these are questions that appear to expect a certain answer. For example, Payne and Payne (2011: 187) give the example ‘Youth crime is a problem in this area, isn’t it?’ This is better phrased as, ‘In this area, is youth crime a problem?’ or ‘In this area, which of the following do you think are the main problems?’ followed by a list of possible problems.
Experiments and randomised control trials
Experiments involve trying to work out whether changes in one thing have an effect on something else. For example, if we feed a plant more plant food, will its growth be stimulated? In experiments we call these things variables. In the plant growth study we have two variables:
The first is the one we are testing the effect of – the Plant Food
The second is the one we are measuring the effect on – The Growth Rate.
These have special names – the one that we apply to the plant is known as the Independent Variable and the one that we are measuring is known as the Dependent Variable.
Amount of plant food given = Independent Variable (IV)
Amount of growth measured = Dependent Variable (DV)
Experiments involve manipulating the independent variable to see whether it has an effect on the dependent variable. Manipulation involves controlling or influencing a variable: for example, teachers may have expectations about pupil performance (independent variable), and the behaviour or attainment of the pupil can be measured, so we can try to work out whether expectation has an effect on attainment (dependent variable). Crucially, all other variables which might influence the result (the dependent variable) are controlled as much as possible in order to minimise their effect, so that we can be more certain it was our independent variable that actually had the effect. So the age, ability, gender, etc. of the students will be controlled. This is the basic principle of experimental design: to control as much as possible so that it is clear that only the IV creates the change in the DV. This is why the classic experiment often takes place in a controlled environment or a laboratory. If all the other variables are controlled, we can show cause and effect, i.e. that one variable causes change in another.
However, when dealing with people and social activities we cannot control variables in the same way as other ‘units’ of research. The clinical research method of randomised controlled trials was developed in order to minimise the influence of extraneous factors (Payne and Payne 2011: 85). This is done by matching people on certain characteristics (age, gender, education, occupation, etc.), which should be representative of the general population. Matched pairs are then randomly allocated, one of each pair to an experimental group and the other to a control group. This is intended to make the two groups as similar as possible by removing any differences that might distort the outcome of the experiment. Tests are then carried out on both groups, the treatment/intervention is given to the experimental group, and further tests are carried out on both groups in order to analyse and evaluate the treatment/intervention. If any differences are found between the two groups, it is assumed that the treatment/intervention is the cause, because the control group did not receive it.
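The matching-and-random-allocation step described above can be sketched in code. This is an illustrative sketch only, not a clinical randomisation procedure: the participant labels are invented, and the use of Python's random module to split each matched pair is an assumption made for the example.

```python
import random

def allocate_pairs(matched_pairs, seed=None):
    """Randomly allocate one member of each matched pair to the
    experimental group and the other to the control group."""
    rng = random.Random(seed)  # seed allows a reproducible allocation
    experimental, control = [], []
    for a, b in matched_pairs:
        if rng.random() < 0.5:  # a fair coin flip per pair
            experimental.append(a)
            control.append(b)
        else:
            experimental.append(b)
            control.append(a)
    return experimental, control

# Hypothetical participants already matched on age, gender and occupation.
pairs = [("P1", "P2"), ("P3", "P4"), ("P5", "P6")]
exp_group, ctrl_group = allocate_pairs(pairs, seed=42)
print(exp_group, ctrl_group)
```

Because each pair is split at random, neither the researcher nor the participants decide who receives the treatment/intervention, which is what removes systematic differences between the two groups.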
Sampling and generalisation

Quantitative methods attach importance to generalising, i.e. the idea that the specific results we get from our sample tell us something about everyone else (the population). In contrast, qualitative methods focus on the specific and its meanings, rather than trying to explain wider processes (Payne and Payne 2011: 200-210). Indeed, some qualitative researchers deny the possibility of generalisation (Guba and Lincoln 1994) and others have raised serious doubts (Hammersley 1992). However, Williams (2000) has suggested that limited, careful moderatum generalisations are possible (Payne and Payne 2011: 210).
A question asked, particularly for Social Surveys, is ‘How big a sample do we need?’ or ‘How large a sample do we need to be confident about our findings?’ This depends on the type of sample, the resources (for instance the number of researchers, budget etc.) and the quality of information from the sample (Payne and Payne 2011: 205). The method of analysis is also important, for instance with a very specific research question a small sample would be sufficient. Variability of the population is also a factor, for instance if all people in the population are very like one another, the population is said to be ‘homogenous’.
Participant observation

Participant observation involves collecting data over a sustained period of time by watching, listening to, and asking questions of, people as they go about their everyday activities. The researcher often adopts a role within the group in question (i.e. becoming a participant) in order to become a member of the group. The aim of this is to avoid disrupting these everyday activities and to avoid drawing attention to themselves as an observer. So, for example, to conduct participant observation in a school, the researcher might adopt a role such as classroom assistant.
Participant observation can be overt or covert. Overt observation means that the group are aware that the research is taking place and have given permission in advance. Covert observation is used when studying groups who do not want to be studied (such as fascists, criminals or religious fundamentalists). Covert observation may raise ethical issues, the majority of research is overt and the role-playing is for the purpose of enabling the researcher to be unobtrusive (Payne and Payne 2011: 167).
An important part of participant observation is the interpersonal communication and social interactions between the researcher and members of the group, and the researcher’s reflection on these interactions. Therefore, taking detailed fieldnotes of these observations, including the researcher’s own personal reactions, is a key aspect of participant observation. As Payne and Payne (2011: 169) comment, ‘Fieldwork is a reflexive experience, researchers bringing themselves into contact with real-life social situations’.
In-depth interviews: semi-structured and unstructured
The aim of in-depth interviews is to obtain an in-depth account of particular topics. It is still important that questions are phrased so that they are not leading, so that the account is the informant’s and not a projection of the researcher’s preconceptions (Payne and Payne 2011: 131). These types of interview are usually on a one-to-one basis between the interviewer and the respondent.
Establishing trust and familiarity, appearing non-judgemental and developing a rapport with the interviewee are all crucial aspects of interviewing skill. The interviewer needs to remember what the respondent has said and to know when and when not to interrupt.
The majority of interviews are audio or video recorded. Prior permission or consent needs to be obtained from the respondent. This allows the interview to flow without the interruption of copious note-taking. Interviews are generally transcribed verbatim, which then allows this to be coded for data analysis.
Focus groups

Focus groups involve obtaining information from a group of people via group discussion, normally with between six and ten people. Focus groups are often used to explore possible lines of enquiry and to inform participants about what is being planned. Focus groups are sometimes thought of as a form of group interviewing; however, Miller and Brewer (2003: 120) argue that with group interviewing the emphasis is placed on the questions and the interaction between interviewer and respondents, whereas the focus group relies on the interaction within the group itself.
The value of focus groups is that they are less time-consuming and costly than conducting several one-to-one interviews. The disadvantage is that less detail or depth is achieved for each informant; however, we also see how an individual’s comments are received by other people in the group (Payne and Payne 2011: 104). Therefore, this type of research is useful for seeing the group dynamic and also how individual members may adapt when faced with alternative views. Another benefit of focus groups is the sharing of views, experiences and stories between the participants, and the rich data that this produces (Miller and Brewer 2003: 12).
Focus groups tend to focus on particular issues that are introduced in a predetermined order. Group members are chosen because they have some commonality, e.g. education, social status, occupation, income, age or gender. The interviewer is often called the facilitator, and there may also be a second interviewer or scribe who records the focus group and acts as a note-taker.
Focus groups can be used as a self-contained method and also in conjunction with others.
Visual methods can include the study of visual records produced by those under study and of visual records produced by the researcher. This can include making visual representations (studying society by producing images), examining pre-existing visual representations (studying images for information about society) and collaborating with social actors in the production of visual representations (Miller and Brewer 2003: 340). Payne and Payne (2011: 239) categorise images under four main research categories:
- Images used by other people for their own purposes that can be interpreted as the topic of research (semiotics – the study of signs and symbols and their use or interpretation).
- Working collaboratively with informants, using image-making and images as a way of eliciting information.
- Using images to record what is taking place during our research.
- Using images as an addition to words in communicating our findings (dissemination).
Visual methods are particularly appropriate for research involving children and young people, for example photography, video and art (Christensen & James 2008, Johnson, Pfister, & Vindrola‐Padros 2012, Punch 2002, Thomson 2009).
The researcher needs to be aware that all visual representations are not only produced but also consumed in a social context. Images are selected, constructed and interpreted – they cannot be taken at face value. For instance, members of the audience will have certain expectations of narrative form, ‘plot’ development, ‘good’ and ‘bad’ composition and so on (Miller and Brewer 2003: 342). Nevertheless, Payne and Payne (2011: 241) argue that this ‘should not incapacitate us, but rather empower us to greater self-awareness in the practice of our craft’.
Objectivity and bias
Objectivity in social research means that, as far as possible, researchers should remain distanced from what they study, so that the findings produced depend on the nature of what was studied rather than on the personality, beliefs and values of the researcher (Payne and Payne 2011: 152). This concept developed from a positivist perspective, particularly the notion of the ‘scientific’ nature of sociological knowledge (Payne and Payne 2011: 153). The task of social science is to discover what is, not what ought to be; therefore a researcher should be a neutral observer who does not make value judgements in their research practice (Payne and Payne 2011: 153).
However, the positivist perspective on objectivity has been critiqued: researchers are human beings, with feelings and evaluations that cannot be neatly compartmentalised, and they are also members of society, interacting within it, so complete objectivity is unobtainable. Payne and Payne (2011: 154) also argue that without values it is impossible to define what is socially problematic; values therefore help to show what should be researched, rather than preventing rigour in research. Although a completely value-free orientation is not possible, this does not mean that objectivity should not be a target of research. One approach, whilst acknowledging that objectivity is not an absolute, holds that research should implement methods that are themselves basically neutral, while a visible value position is maintained and discussed in order to display any personal prejudices and preferences to the reader (Payne and Payne 2011: 155). A second approach, found in feminist research, is that both the topic selection and the methods used are explicitly determined by the political stance of advancing the status of women (Payne and Payne 2011: 155); in contrast to the positivist stance on objectivity, for some feminist researchers personal feelings are essential resources to draw on in the research (Payne and Payne 2011: 155). A third approach argues that a researcher cannot stay detached and aloof, and that there are no set protocols that can control for subjectivity. What is important here is not neutrality but credibility: other researchers and those researched should be content with the interpretations advanced (Payne and Payne 2011: 155). The key is not that good research should be neutral or distanced from its subjects, but that it should be reliable/dependable and valid/trustworthy (these terms will be discussed further in the following sections).
Lack of objectivity has often been linked to the concept of bias. Bias suggests that personal judgements have been involved, favouritism displayed and distortions introduced in the research evidence (Payne and Payne 2011: 27). However, Payne and Payne (2011) use the term bias to refer to errors of procedure. For example, they discuss sampling bias, where some people are not included in a sample due to errors or difficulties in participant recruitment. Interviewer bias refers to how an interview is carried out – this emphasises the importance of interview skills. Question bias refers to badly constructed questionnaires, for example the use of loaded or leading questions.
The difficulty of replication within qualitative research (see the following section) is one reason for the criticism that qualitative research is ‘biased’. What is usually meant by ‘bias’ in this context is ‘lack of objectivity’ (Payne and Payne 2011: 30); however, see the earlier discussion regarding objectivity.
Reliability
When a piece of research has been carried out, the researcher needs to be sure that it is reliable.
Reliability refers to whether a measure or effect is consistent (Coolican 2006). In simple terms, if you know that your car will, without a shadow of a doubt, break down on a long journey, then you can say that it is reliable: ironically, because you can rely on it to break down.
Similarly, if you know that a particularly disorganised friend is always late for your meetings, then she too is reliable (though perhaps not in the way you would want her to be!).
In research what is important in terms of reliability is whether it is likely that the same study could be carried out again with the same results.
This is important because the researcher needs to know whether the measure used previously is stable. For example, if an IQ test is found to measure intelligence, then it is important that when the test is used again, either at a different time with the same people or at the same time with different people, it measures the same thing, i.e. that it is consistent.
There can be different types of reliability: for example, test-retest reliability (does the measure produce consistent results on different occasions?), inter-rater reliability (do different researchers using the measure agree?) and internal consistency (do the different items of a measure produce consistent results?).
The term reliability tends to be used when discussing quantitative research. These checks for reliability make sense in a quantitative framework, where the emphasis is on standardisation and control in data collection (Payne and Payne 2011: 198). For qualitative researchers, however, the term is problematic because of their different philosophical starting points (Shipman 1997). Social action is seen as far more complex, so ‘re-studying’ is less possible and is likely to discover new features (Payne and Payne 2011: 198). Lincoln and Guba (1985) suggest that a better term for qualitative research is dependability. This parallels reliability, i.e. are the findings likely to apply at other times? LeCompte and Goetz (1982) also propose the test of internal reliability (see above), e.g. do all researchers on the same project agree on the interpretations of the data?
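Agreement between researchers of this kind (inter-rater reliability) is often quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A small sketch, with invented code labels:

```python
# Sketch of an inter-rater reliability check: Cohen's kappa measures how far
# two coders' agreement exceeds what chance alone would produce.
# The code labels below are invented for the example.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters who coded the same items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

coder_1 = ["support", "voice", "support", "education", "voice", "support"]
coder_2 = ["support", "voice", "education", "education", "voice", "support"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.75
```

A kappa of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance.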
Validity
When research has been completed, a researcher should question whether the findings are accurate or, more specifically, valid. We need rational grounds for arguing that the research accurately reflects the nature of what we have studied (Payne and Payne 2011: 233).
This is concerned with the integrity of the conclusions: does the test measure what it is supposed to measure? For example, does the intelligence test accurately measure intelligence, and do the scales in your bathroom accurately measure weight (I am at the age where I hope my scales lack validity)?
There are many different types of validity: for example, face, content, criterion, construct, internal and external validity.
Again, validity is a term used more commonly in conjunction with quantitative research. This is not to say that qualitative researchers are unconcerned with most aspects of validity, but their vocabulary differs. For instance, two other aspects of validity are proposed:
The first compares one study’s conclusions with those of other studies. The second refers to internal consistency, e.g. the plausibility of the way evidence and conclusions are presented.
Lincoln and Guba (1985) propose the use of the term trustworthiness as an alternative to validity in qualitative research. They identify four aspects of trustworthiness: credibility, transferability, dependability and confirmability.
Replication
Replication means that the study can be repeated in exactly the same way: the researcher has to give sufficient detail of the method so that anyone can replicate the study and produce similar results.
This is very important if you want to be confident in the results produced by the original research and to show that the conclusions can be generalised to other people, not just the participants of the original study.
In the search for ‘proof’ the notion of replication is very important.
If a study can be replicated consistently then we may have strong evidence that it is a stable effect and that the theory is very sound.
However, again replication is more applicable in a quantitative framework. As commented earlier, re-studying is less possible in qualitative research. As Payne and Payne (2011: 198) comment, ‘social life is not repetitive or stable, and so our research perceptions of it cannot be entirely consistent’. Therefore, instead of seeking uniformity, qualitative research produces plausibility and coherence from dialogue and experience by reflecting on the researcher(s)’ own reactions and shortcomings and comparing what different techniques have produced (Payne and Payne 2011: 198).
Data analysis
Analysis of the findings/data is an integral part of research design and needs to be taken into consideration when developing a particular research strategy. Methods of analysis take account of whether the intention is to answer a particular, pre-determined type of question, for example a hypothesis within experimental research, or to allow themes to emerge, for example within ethnography. Data may be numeric, and analysed and interpreted statistically, or may consist of words/text that will be analysed for the themes and patterns that emerge (Creswell 2009). Some examples of quantitative analysis methods include descriptive and inferential statistics, attitude scales and contingency tables. Examples of qualitative analysis methods include coding (see also Grounded Theory), content analysis (which can also be used within quantitative methods), conversation analysis and discourse analysis.
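As a minimal illustration of the first of these, descriptive statistics can be computed with Python's standard library alone; the survey scores below are invented for the example:

```python
# Descriptive statistics for a small, invented set of Likert-scale responses.
from statistics import mean, median, stdev

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 1]  # responses on a 1-5 scale

print("mean:", mean(scores))            # central tendency -> 3.5
print("median:", median(scores))        # middle value -> 4.0
print("sd:", round(stdev(scores), 2))   # sample standard deviation -> 1.27
```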
For specific data analysis, depending on your research design, I would highly recommend Part IV of Alasuutari, P., Bickman, L. & J. Brannen (eds) (2008) The Sage Handbook of Social Research Methods. London, Sage (available as an ebook), and Part D of Matthews, B. & L. Ross (2014) Research Methods: A Practical Guide for the Social Sciences. Essex, Pearson Education Limited (also available as an ebook).
References
Alasuutari, P., Bickman, L. & J. Brannen (eds) (2008) The Sage Handbook of Social Research Methods. London, Sage. Available as an ebook.
Bell, J. (2019) Doing Your Research Project: A Guide for First Time Researchers in Education and Social Science. Maidenhead, McGraw-Hill Education.
Blumer, H. (1969) Symbolic Interactionism: Perspective and Method. Berkeley, University of California Press.
Brown, A. & P. Dowling (1998) Doing Research/Reading Research: A Mode of Interrogation for Education. London and New York, Falmer Press.
Cherryholmes, C. H. (1992) Notes on pragmatism and scientific realism. Educational Researcher, 13-17.
Creswell, J., Plano Clark, V., Gutmann, M. & W. Hanson (2003) Advanced Mixed Methods Designs. In Tashakkori, A. & C. Teddlie (Eds) Handbook of Mixed Method Research in the Social and Behavioural Sciences. Thousand Oaks, CA, Sage.
Creswell, J.W. (2007) Qualitative Inquiry & Research Design: Choosing Among Five Approaches. 2nd edition. London, Sage.
Creswell, J. & Plano Clark, V. (2007) Designing and Conducting Mixed Methods Research. Thousand Oaks, CA, Sage
Creswell, J. (2009) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 2nd Edn. California, Sage.
Christensen, P., & James, A. (Eds.). (2008). Research with children: Perspectives and practices. Routledge.
Coolican, H. (2003) Research Methods and Statistics. Tonbridge, Hodder & Stoughton.
Denscombe, M. (1998) The Good Research Guide. Buckingham, Open University Press.
Gray, J. (1998) Narrative inquiry. Unpublished paper, Edith Cowan University, Joondalup, W.A.
Guba, E. and Lincoln, Y. (1994) Competing Paradigms in Qualitative Research. In Denzin, N. and Lincoln, Y. (eds). Handbook of Qualitative Methods. London, Sage.
Habermas, J. (1972) Knowledge and Human Interests: Theory and Practice; Communication and the Evolution of Society. J.J. Shapiro, trans. London, Heinemann.
Hammersley, M. (1992) What’s Wrong with Ethnography? London, Routledge.
Johnson, G. A., Pfister, A. E., & Vindrola‐Padros, C. (2012). Drawings, photos, and performances: Using visual methods with children. Visual Anthropology Review, 28(2), 164-178.
Kemmis S & Wilkinson M (1998) Participatory Action Research and the Study of Practice, In Atweh B, Kemmis S & Weeks P (Eds) Action Research in Practice: Partnerships for Social Justice in Education, New York, Routledge.
LeCompte, M. and Goetz, J. (1982) ‘Problems of Reliability and Validity in Ethnographic Research’. Review of Educational Research, 52(1): 31-60.
LeCompte, M. & Schensul, J. (1999) Designing and Conducting Ethnographic Research. Walnut Creek, CA, Sage. Cited in Creswell, J. (2009) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 2nd Edn. California, Sage.
Lincoln, Y. & Guba, E. (1985) Naturalistic Inquiry. Beverly Hills, CA, Sage.
Matthews, B. & L. Ross (2014) Research Methods: A Practical Guide for the Social Sciences. Essex: Pearson Education Limited. This is available as an ebook.
Miller, R.L. & Brewer, J.D. (2003) The A-Z of social research: a dictionary of key social science research concepts. London, Sage.
Mudaly, N. & Goddard, C. (2009) The ethics of involving children who have been abused in child abuse research. International Journal of Children’s Rights, 17: 261-281.
Neuman W (2000) Social Research Methods: Qualitative and Quantitative Approaches, 4th Ed, Boston, Allyn & Bacon.
Payne, G., Williams, M. & S. Chamberlain (2004) ‘Methodological Pluralism in British Sociology’. Sociology, 38(1): 153-64.
Payne, G., & J. Payne (2011) Key Concepts in Social Research. London, Sage.
Philips, D. & Burbules, N. (2000) Postpositivism and Educational Research. New York, Rowman & Littlefield.
Punch, S. (2002) Research with children: the same or different from research with adults? Childhood, 9(3): 321-341.
Shipman, M. (1997) The Limitations of Social Research (4th Edn.) Harlow, Longman.
Slife, B. & R. Williams (1995) What’s Behind the Research? Discovering Hidden Assumptions in the Behavioural Sciences. Thousand Oaks, CA, Sage.
Stake, R. (1995) The Art of Case Study Research. Thousand Oaks, CA, Sage.
Thomson, P. (Ed.). (2009). Doing visual research with children and young people. Routledge.
Williams, M. (2000) Interpretivism and Generalisation. Sociology 34(2): 209-24.
Yin, R. (1993) Applications of Case Study Research. Newbury Park, CA, Sage.
Yin, R. (1991) Case Study Research: Design and Methods. London: Sage.