Paper 7
Syllabus
· Variables: Meaning of Concepts, Constructs and Variables. Types of Variables. Delineating and Operationalizing Variables.
· Framing of research questions and Formulation of objectives.
· Assumptions and Hypotheses: Concept, Types and Formulation Process.
· Review of Related Literature and Referencing.
CONSTRUCTS
To summarize their observations
and to provide explanations of behavior, scientists create constructs. Constructs
are abstractions that cannot be observed directly but are useful in
interpreting empirical data and in theory building. For example, people can
observe that individuals differ in what they can learn and how quickly they can
learn it. To account for this observation, scientists invented the construct
called intelligence. They hypothesized that intelligence influences
learning and that individuals differ in the extent to which they possess this
trait. Other examples of constructs in educational research are motivation,
reading readiness, anxiety, underachievement, creativity, and self-concept.
Defining constructs is a major
concern for researchers. The further removed constructs are from the empirical
facts or phenomena they are intended to represent, the greater the possibility
for misunderstanding and the greater the need for precise definitions.
Constructs may be defined in a way that gives their general meaning, or they
may be defined in terms of the operations by which they will be measured or
manipulated in a particular study. The former type of definition is called a constitutive
definition; the latter is known as an operational definition.
Constitutive Definition
A constitutive definition is
a formal definition in which a term is defined by using other terms. It is the
dictionary type of definition. For example, intelligence may be defined as the
ability to think abstractly or the capacity to acquire knowledge. This type of
definition helps convey the general meaning of a construct, but it is not
precise enough for research purposes. The researcher needs to define constructs
so that readers know exactly what is meant by the term and so that other
investigators can replicate the research. An operational definition serves this
purpose.
Operational Definition
An operational definition ascribes
meaning to a construct by specifying operations that researchers must perform
to measure or manipulate the construct. Operational definitions may not be as
rich as constitutive definitions but are essential in research because
investigators must collect data in terms of observable events. Scientists may
deal on a theoretical level with such constructs as learning, motivation,
anxiety, or achievement, but before studying them empirically, scientists must
specify observable events to represent those constructs and the operations that
will supply relevant data. Operational definitions help the researcher bridge
the gap between the theoretical and the observable.
Although investigators are
guided by their own experience and knowledge and the reports of other
investigators, the operational definition of a concept is to some extent
arbitrary. Often, investigators choose from a variety of possible operational
definitions those that best represent their own approach to the problem.
Certainly an operational
definition does not exhaust the full scientific meaning of any concept. It is
very specific in meaning; its purpose is to delimit a term, to ensure that
everyone concerned understands the particular way a term is being used. For
example, a researcher might state, “For this study, intelligence is defined as
the subjects’ scores on the Wechsler Intelligence Scale for Children.”
Operational definitions are considered adequate if their procedures gather data
that constitute acceptable indicators of the constructs they are intended to
represent. Often, it is a matter of opinion whether they have achieved this
result.
Operational definitions are
essential to research because they permit investigators to measure abstract
constructs and permit scientists to move from the level of constructs and
theory to the level of observation, on which science is based. By using
operational definitions, researchers can proceed with investigations that might
not otherwise be possible. It is important to remember that although
researchers report their findings in terms of abstract constructs and relate
these to other research and to theory, what they have actually found is a
relationship between two sets of observable and measurable data that they selected
to represent the constructs. In practice, an investigation of the relation
between the construct creativity and the construct intelligence relates scores
on an intelligence test to scores on a measure of creativity.
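The closing example can be made concrete. The sketch below (all scores are invented for illustration) treats each construct as the set of scores its operational definition produces, so "the relation between intelligence and creativity" becomes, in practice, a correlation between two lists of numbers:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Operational definitions: intelligence = score on a chosen IQ test;
# creativity = score on a chosen creativity measure. The study relates
# these observable scores, not the abstract constructs themselves.
iq_scores = [95, 102, 110, 88, 120, 105, 99, 115]
creativity_scores = [40, 47, 52, 35, 60, 50, 44, 58]

r = pearson_r(iq_scores, creativity_scores)
print(round(r, 3))
```

Whatever the researcher concludes about "intelligence" and "creativity," what was actually computed is a relationship between the two sets of measured scores.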
VARIABLES
Researchers, especially quantitative
researchers, find it useful to think in terms of variables. A variable is
a construct or a characteristic that can take on different values or scores.
Researchers study variables and the relationships that exist among variables.
Height is one example of a variable; it can vary in an individual from one time
to another, among individuals at the same time, among the averages for
groups, and so on. Social class, gender, vocabulary level, intelligence, and
spelling test scores are other examples of variables. In a study concerned with
the relation of vocabulary level to science achievement among eighth-graders,
the variables of interest are the measures of vocabulary and the measures of
science achievement. There are different ways to measure science achievement.
The researcher could use a standardized achievement test, a teacher-made test,
grades in science class, or evaluations of completed science projects. Any of
these measures could represent the variable “science achievement.”
Types of Variables
There are several ways to
classify variables. Variables can be categorical, or they can be continuous.
When researchers classify subjects by sorting them into mutually exclusive
groups, the attribute on which they base the classification is termed a categorical
variable. Home language, county of residence, father’s principal occupation,
and school in which enrolled are examples of categorical variables. The
simplest type of categorical variable has only two mutually exclusive classes
and is called a dichotomous variable. Male–female, citizen–alien, and
pass–fail are dichotomous variables. Some categorical variables have more than
two classes; examples are educational level, religious affiliation, and state
of birth.
When an attribute has an
infinite number of values within a range, it is a continuous variable. As
a child grows from 40 to 41 inches, he or she passes through an infinite number
of heights. Height, weight, age, and achievement test scores are examples of
continuous variables.
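The categorical/continuous distinction can be illustrated with a small sketch (the subject record and class labels below are invented):

```python
# One subject's values on several variables. Categorical variables sort
# subjects into mutually exclusive classes; continuous variables can
# take any value within a range.
subject = {
    "home_language": "Spanish",   # categorical
    "pass_fail": "pass",          # dichotomous (exactly two classes)
    "height_inches": 40.7,        # continuous
    "age_years": 13.25,           # continuous
}

def is_dichotomous(observed_classes):
    """A categorical variable is dichotomous when exactly two classes occur."""
    return len(set(observed_classes)) == 2

print(is_dichotomous(["pass", "fail", "pass", "pass"]))
```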
The most important
classification of variables is based on their use within the
research under consideration: independent variables
and dependent variables. Independent variables are antecedent to dependent
variables and are known or are hypothesized to influence the dependent
variable, which is the outcome. In experimental studies, the treatment is the
independent variable and the outcome is the dependent variable. In an
experiment in which freshmen are randomly assigned to a “hands-on” unit on
weather forecasting or to a textbook-centered unit and are then given a common
exam at the end of the study, the method of instruction (hands-on versus
textbook) antecedes the exam scores and is the independent variable in this
study. The exam scores follow and are the dependent variable. The experimenter
is hypothesizing that the exam scores will partially depend on how the students
were taught weather forecasting. In this case, freshman status is a constant.
However, if you wish to
determine the effect of testing procedures, classroom grouping arrangements, or
grading procedures on students’ motivation, then motivation becomes the
dependent variable. Intelligence is generally treated as an independent
variable because educators are interested in its effect on learning, the
dependent variable. However, in studies investigating the effect of pre-school
experience on the intellectual development of children, intelligence is the
dependent variable.
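The variable roles in the weather-forecasting experiment can be sketched as follows (exam scores invented; the point is which variable plays which role):

```python
GRADE_LEVEL = "freshman"  # constant: the same for every subject

# Independent variable: method of instruction (two levels).
# Dependent variable: scores on the common end-of-study exam.
exam_scores = {
    "hands_on": [78, 85, 91, 74],
    "textbook": [72, 80, 88, 70],
}

def group_mean(scores):
    return sum(scores) / len(scores)

# The hypothesis is that exam scores partially depend on the method,
# so the comparison of interest is between the group means.
for method, scores in exam_scores.items():
    print(method, group_mean(scores))
```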
CONSTANTS
The opposite of variable is constant.
A constant is a fixed value within a study. If all subjects in a study are
eighth-graders, then grade level is a constant. In a study comparing the
attitudes toward school of high school girls who plan professional careers with
those who do not plan professional careers, high school girls constitute a
constant; whether they plan professional careers is the independent variable,
and their attitudes constitute the dependent variable. Figure 2.3 illustrates a
process for classifying variables and constants.
QUESTIONS THAT EDUCATIONAL RESEARCHERS ASK
The specific question chosen for
research, of course, depends on the area that interests the researchers, their
background, and the particular problem they confront. However, we may classify
questions in educational research as theoretical (having to do with fundamental
principles) or as practical (designed to solve immediate problems of the
everyday situation).
THEORETICAL QUESTIONS
Questions of a theoretical
nature are those asking “What is it?” or “How does it occur?” or “Why does it
occur?” Educational researchers formulate “what” questions more specifically as
“What is intelligence?” or “What is creativity?” Typical “how” questions are
“How does the child learn?” or “How does personality develop?” “Why” questions
might ask “Why does one forget?” or “Why are some children more
achievement-oriented than other children?”
Research with a theoretical
orientation may focus on either developing new theories or testing existing
theories. The former involves a type of study in which researchers seek to
discover generalizations about behavior, with the goal of clarifying the nature
of relationships among variables. They may believe that certain variables are
related and thus conduct research to describe the nature of
the relationship. From the
findings, they may begin to formulate a theory about the phenomenon. Theories
of learning have thus been developed because investigators have shown the
relationships among certain methods, individual and environmental variables,
and the efficiency of the learning process.
Probably more common in
quantitative educational research are studies that aim to test already existing
theories. It may be overly ambitious, especially for beginning researchers in
education, to take as a goal the development of a theory. It is usually more
realistic to seek to deduce hypotheses from existing theories of learning,
personality, motivation, and so forth, and to test these hypotheses. If the
hypotheses are logical deductions from the theory, and the empirical tests
provide evidence that supports the hypotheses, then this evidence also provides
support for the theory.
PRACTICAL QUESTIONS
Many questions in educational
research are direct and practical, aimed at solving specific problems that
educators may encounter in everyday activities. These questions are relevant
for educational research because they deal with actual problems at the level of
practice and lead to an improvement in the teaching–learning process. Slavin
(2004) writes that “enlightened educators look to education research for
well-founded evidence to help them do a better job with the children they
serve” (p. 27). Some academic researchers, however, criticize practitioner
research as not being sufficiently rigorous. But Anderson (2002) also argues
for a research continuum for doctoral students in education that includes
practitioner research. Such practical questions are, for example, “How
effective is peer tutoring in the elementary school classroom?” “How does
teaching children cognitive strategies affect their reading comprehension?”
“What is the relative effectiveness of the problem discussion method as
compared with the lecture method in teaching high school social studies?” or
“What are the most effective means of providing remediation to children who are
falling behind?” The answers to such questions may be quite valuable in helping
teachers make practical decisions in the classroom.
These practical questions can be
investigated just as scientifically as the theoretical problems. The two types
of questions differ primarily on the basis of the goals they hope to achieve
rather than on the study’s level of sophistication.
LANGUAGE OF RESEARCH
Any scientific discipline needs
a specific language for describing and summarizing observations in that area.
Scientists need terms at the empirical level to describe particular
observations; they also need terms at the theoretical level for referring to
hypothetical processes that may not be subject to direct observation.
Scientists may use words taken from everyday language, but they often ascribe
to them new and specific meanings not commonly found in ordinary usage. Or
perhaps they introduce new terms that are not a part of everyday language but
are created to meet special needs. One of these terms is construct.
THE ROLE OF RELATED LITERATURE IN QUANTITATIVE RESEARCH
Quantitative researchers are urged not to rush
headlong into conducting their study. The search for related literature should
be completed before the actual conduct of the study begins in order to provide
a context and background that support the conduct of the study. This literature
review stage serves several important functions:
- Knowledge
of related research enables investigators to define the frontiers of their
field. To use an analogy, an explorer might say,
“We know that beyond this river there are plains for 2000 miles west, and
beyond those plains a range of mountains, but we do not know what lies
beyond the mountains. I propose to cross the plains, go over the mountains,
and proceed from there in a westerly direction.” Likewise, the researcher
in a sense says, “The work of A, B, and C has discovered this much about
my question; the investigations of D have added this much to our
knowledge. I propose to go beyond D’s work in the following manner.”
- A
thorough review of related theory and research enables researchers to
place their questions in perspective.
You should determine whether your endeavours are likely to add to
knowledge in a meaningful way. Knowledge in any given area consists of the
accumulated outcomes of numerous studies that generations of researchers
have conducted and of the theories designed to integrate this knowledge
and to explain the observed phenomena. You should review the literature to
find links between your study and the accumulated knowledge in your field
of interest. Studies with no link to the existing knowledge seldom make
significant contributions to the field. Such studies tend to produce
isolated bits of information that are of limited usefulness.
- Reviewing
related literature helps researchers to limit their research question and
to clarify and define the concepts of the study. A
research question may be too broad to be carried out or too vague to be
put into concrete operation; for example, “What do parenting practices
have to do with mental health?” A careful review of the literature can
help researchers revise their initial questions so that the final
questions can be investigated. The literature review also helps in
clarifying the constructs involved in the study and in translating these
constructs into operational definitions. Many educational and behavioral
constructs—such as stress, creativity, frustration, aggression,
achievement, motivation, and adjustment—need to be clarified and
operationally defined. These, as well as many other educational and
behavioral constructs, do not lend themselves to research until they can
be quantified. In reviewing literature, you become familiar with previous
efforts to clarify these constructs and to define them operationally.
Successful reviews often result in the formation of hypotheses regarding
the relationships among variables in a study. The hypotheses can provide
direction and focus for the study.
- Through
studying related research, investigators learn which methodologies have
proven useful and which seem less promising.
The investigator develops increasing sophistication after digging through
the layers of research that the related literature represents. As you
delve into your topic, you soon see that the quality of research varies
greatly. Eventually, you should begin to
notice that not all studies in any one field are necessarily equal. You
will soon be critiquing studies and noticing ways in which they could be
improved. For example, early studies in any one particular field may seem
crude and ineffective because research methodology and design are
constantly being refined with each new study. Even so, many research
projects fail because they use inappropriate procedures, instruments, research
designs, or statistical analyses. Becoming proficient at evaluating
research to determine its worth helps the investigator discover the most
useful research path.
- A thorough search through
related research avoids unintentional replication of previous studies. Frequently, a researcher develops a worthwhile idea only to
discover that a very similar study has already been made. In such a case, the
researcher must decide whether to deliberately replicate the previous work or
to change the proposed plans and investigate a different aspect of the problem.
- The study of related
literature places researchers in a better position to interpret the
significance of their own results. Becoming
familiar with theory in the field and with previous research prepares
researchers for fitting the findings of their research into the body of
knowledge in the field.
As this discussion shows,
quantitative research is built on a study of earlier work in the field, which
helps the researcher refine his or her problem and place it in context. For
qualitative researchers, the approach is very different. They are advised not
to read in their area of interest because it is important that they approach
their study without any preconceived ideas that might influence their work.
INDEXING AND ABSTRACTING DATABASES
Indexing and abstracting periodicals are vital
for locating primary sources in your field. These publications subscribe to
professional journals in a given discipline. Their staff then identifies the
key terms for each article, indexes them, and typically provides an abstract
for each article.
Databases that combine several of these indexing
and abstracting periodicals are very useful because you can ask for your key
terms of interest and the database will identify the journal articles by
journal, date, volume number, and pages that include your key terms.
ERIC (Educational Resources Information Center)
There are several reasons for
beginning with the ERIC database:
1. ERIC indexes and abstracts more
education-related primary sources than any other database. It covers more than
800 journals and more than 1 million other documents.
2. It includes useful primary
sources that were never published. In fact, ERIC was established in 1966 to
collect, store, index, and abstract unpublished (fugitive) information. Such
documents include reports submitted to the U.S. Department of Education by its
contractors and grantees, reports submitted to state and local departments of
education, papers presented at professional conferences, and so forth. The IDs
of these documents begin with ED. Only later were professional journals added
to the ERIC database. The IDs of journal articles begin with EJ. You can
download the full text of ED materials. With EJ articles, only key terms (which
ERIC calls descriptors) and abstracts can be downloaded.
3. It can be accessed for free
from your home or office terminal at www.eric.ed.gov. The U.S. Department of
Education contracts with a private contractor to maintain the ERIC system and
provide its services free to the public.
The ERIC system formerly
produced hard copy (print) periodicals of ED materials in Resources in
Education and EJ documents in Current Index to Journals in Education.
Today, it exists only in electronic form. Submissions to ERIC are now evaluated
on four criteria to determine what is included and what is not:
1. Relevance of the submission to education
2. Quality of the submission (completeness, integrity, objectivity, substantive merit, and utility/importance)
3. Sponsorship by professional societies, organizations, and government agencies
4. Editorial and peer review criteria
TYPES OF HYPOTHESES
There are three categories of hypotheses: research, null, and alternative.
THE RESEARCH HYPOTHESIS
The hypotheses we have discussed thus far are
called research hypotheses. They are the hypotheses developed from
observation, the related literature, and/or the theory described in the study.
A research hypothesis states the relationship one expects to find as a result
of the research. It may be a statement about the expected relationship or the
expected difference between the variables in the study. A hypothesis
about children’s IQs and anxiety in the classroom could be stated “There is a
positive relationship between IQ and anxiety in elementary schoolchildren” or
“Children classified as having high IQs will exhibit more anxiety in the
classroom than children classified as having low IQs.” Research hypotheses may
be stated in a directional or nondirectional form. A directional
hypothesis states the direction of the predicted relationship or difference
between the variables. The preceding two hypotheses about IQ and anxiety are
directional. A directional hypothesis is stated when one has some basis for
predicting a change in the stated direction. A nondirectional hypothesis, in
contrast, states that a relationship or difference exists but without
specifying the direction or nature of the expected finding—for example, “There
is a relationship between IQ and anxiety in children.” The literature review
generally provides the basis for stating a research hypothesis as directional
or nondirectional.
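The practical consequence of directionality is the choice between a one-tailed and a two-tailed test. A minimal stdlib sketch (the z statistic is invented, and a large-sample normal approximation is assumed):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, directional):
    if directional:
        # Directional hypothesis, e.g. "IQ and anxiety are positively
        # related": only results in the predicted direction count.
        return 1.0 - normal_cdf(z)
    # Nondirectional hypothesis: extreme results in either direction count.
    return 2.0 * (1.0 - normal_cdf(abs(z)))

z = 1.80  # invented test statistic
print(round(p_value(z, directional=True), 3))
print(round(p_value(z, directional=False), 3))
```

The same data can thus fall short of significance under a nondirectional hypothesis yet reach it under a directional one, which is why the direction should be justified by the literature review before the data are collected.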
THE NULL HYPOTHESIS
It is impossible to test research hypotheses
directly. You must first state a null hypothesis (symbolized H0) and assess
the probability that this null hypothesis is true. The null hypothesis is a
statistical hypothesis. It is called the null hypothesis because it states that
there is no relationship between the variables in the population. A null
hypothesis states a negation (not the reverse) of what the experimenter expects
or predicts. A researcher may hope to show that after an experimental
treatment, two populations will have different means, but the null hypothesis
would state that after the treatment the populations’ means will not be
different.
What is the point of the null hypothesis? A null
hypothesis lets researchers assess whether apparent relationships are genuine
or are likely to be a function of chance alone. It states, “The results of this
study could easily have happened by chance.” Statistical tests are used to
determine the probability that the null hypothesis is true. If the tests
indicate that observed relationships had only a slight probability of occurring
by chance, the null hypothesis becomes an unlikely explanation and the
researcher rejects it. Researchers aim to reject the null hypothesis as they
try to show there is a relationship between the variables of the study.
Testing a null hypothesis is analogous to the prosecutor’s work in a criminal
trial. To establish guilt, the prosecutor (in the U.S. legal system) must
provide sufficient evidence to enable a jury to reject the presumption of
innocence beyond reasonable doubt. It is not possible for a prosecutor to
prove guilt conclusively, nor can a researcher obtain unequivocal support for a
research hypothesis. The defendant is presumed innocent until sufficient
evidence indicates that he or she is not, and the null hypothesis is presumed
true until sufficient evidence indicates otherwise.
For example, you might start with the expectation
that children will exhibit greater mastery of mathematical concepts through
individual instruction than through group instruction. In other words, you are
positing a relationship between the independent variable (method of
instruction) and the dependent variable (mastery of mathematical concepts). The
research hypothesis is “Students taught through individual instruction will
exhibit greater mastery of mathematical concepts than students taught through
group instruction.” The null hypothesis, the statement of no relationship
between variables, will read “The mean mastery scores (population mean μi) of all
students taught by individual instruction will equal the mean mastery scores
(population mean μg) of all those taught by group instruction”; in symbols, H0: μi = μg.
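The question "could these results easily have happened by chance?" can be sketched with a stdlib-only permutation test. This is one illustrative way to assess H0: μi = μg, not the only procedure, and the mastery scores below are invented:

```python
import random

random.seed(42)

individual = [84, 90, 77, 95, 88, 82, 91, 86]
group      = [79, 73, 85, 70, 80, 76, 83, 75]

def mean_diff(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

observed = mean_diff(individual, group)

# Under the null hypothesis, the "individual"/"group" labels are
# arbitrary, so relabeling the scores at random shows how large a mean
# difference chance alone tends to produce.
pooled = individual + group
n = len(individual)
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean_diff(pooled[:n], pooled[n:]) >= observed:
        count += 1

p = count / trials  # estimated probability of a difference this large by chance
print(observed, p)
```

A small p makes the null hypothesis an unlikely explanation, and the researcher rejects it; a large p means the null hypothesis is retained for lack of evidence, not proven true.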
THE ALTERNATIVE HYPOTHESIS
Note that the hypothesis “Children taught by
individual instruction will exhibit less mastery of mathematical concepts than
those taught by group instruction” posits a relationship between variables and
therefore is not a null hypothesis. It is an example of an alternative
hypothesis.
In the example, if the sample mean of the measure
of mastery of mathematical concepts is higher for the individual instruction
students than for the group instruction students, and inferential statistics
indicate that the null hypothesis is unlikely to be true, you reject the null
hypothesis and tentatively conclude that individual instruction results in
greater mastery of mathematical concepts than does group instruction. If, in
contrast, the mean for the group instruction students is higher than the mean
for the individual instruction students, and inferential statistics indicate
that this difference is not likely to be a function of chance, then you
tentatively conclude that group instruction is superior.
If inferential statistics indicate that observed
differences between the means of the two instructional groups could easily be a
function of chance, the null hypothesis is retained, and you decide that
insufficient evidence exists for concluding there is a relationship between the
dependent and independent variables. The retention of a null hypothesis is not
positive evidence that the null hypothesis is true. It indicates that the
evidence is insufficient and that the null hypothesis, the research hypothesis,
and the alternative hypothesis are all possible.
SAMPLING
An
important characteristic of inferential statistics is the process of going from
the part to the whole. For example, you might study a randomly selected group
of 500 students attending a university in order to make generalizations about
the entire student body of that university.
The
small group that is observed is called a sample, and the larger group about
which the generalization is made is called a population. A population is
defined as all members of any well-defined class of people, events, or objects.
For example, in a study in which students in American high schools constitute
the population of interest, you could define this population as all boys and
girls attending high school in the United States. A sample is a portion of a
population. For example, the students of Washington High School in Indianapolis
constitute a sample of American high school students.
Statistical
inference is a procedure by means of which you estimate parameters
(characteristics of populations) from statistics (characteristics of samples).
Such estimations are based on the laws of probability and are best estimates
rather than absolute facts. In making any such inferences, a certain degree of
error is involved. Inferential statistics can be used to test hypotheses about
populations on the basis of observations of a sample drawn from the population.
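The part-to-whole logic can be sketched in code. The population below is simulated so that the sample estimate can be compared with the parameter it targets (all numbers invented):

```python
import random
import statistics

random.seed(0)

# Simulated population: 20,000 student test scores. In a real study the
# population parameters would be unknown.
population = [random.gauss(500, 100) for _ in range(20_000)]

sample = random.sample(population, 500)               # the observed part
estimate = statistics.mean(sample)                    # statistic
se = statistics.stdev(sample) / (len(sample) ** 0.5)  # standard error

print(round(estimate, 1), round(se, 1))
```

The statistic is a best estimate of the parameter, not an absolute fact: it should fall within a few standard errors of the population mean, and that margin of error is exactly what inferential statistics quantifies.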
CHARACTERISTICS OF EXPERIMENTAL RESEARCH
The
essential requirements for experimental research are control, manipulation of
the independent variable, and observation and measurement.
CONTROL
Control
of variables is the essence of the experimental method. When a
study is completed, researchers want to attribute the outcome to the
experimental treatment. To do this, they must eliminate all other possible
explanations by controlling the influence of irrelevant variables. Without
control it is impossible to evaluate unambiguously the effects of an
independent variable or to make inferences about causality.
Basically,
the experimental method of science rests on two assumptions regarding
variables (Mill, 1846/1986):
1. If
two situations are equal in every respect except for a variable that is added
to or deleted from one of the situations, any difference appearing between the
two situations can be attributed to that variable. This statement is called the
law of the single independent variable.
2. If
two situations are not equal, but it can be demonstrated that none of the
variables except the independent variable is significant in producing the
phenomenon under investigation, or if significant variables other than the
independent variable are made equal, then any difference occurring between the
two situations after introducing a new variable (independent variable) to one
of the systems can be attributed to the new variable. This statement is called
the law of the single significant variable.
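In practice, researchers approximate the "equal in every respect except one variable" condition through random assignment, which makes groups equivalent on average across all extraneous variables. A minimal sketch (subject IDs invented):

```python
import random

random.seed(7)

subjects = list(range(40))
random.shuffle(subjects)      # chance, not choice, forms the groups
half = len(subjects) // 2
treatment = subjects[:half]   # receives the independent variable
control = subjects[half:]     # identical handling, minus the treatment

print(len(treatment), len(control))
assert set(treatment).isdisjoint(control)  # mutually exclusive groups
```

Because only the treatment distinguishes the two groups, any systematic difference that later appears between them can be attributed to the independent variable, which is the point of both of Mill's laws.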