**Quantitative Methods**

We turn now from the introduction, the purpose, and the questions and hypotheses to the methods section of a proposal. This chapter presents essential steps in designing quantitative methods for a research proposal or study, with specific focus on survey and experimental designs. These designs reflect postpositivist philosophical assumptions, as discussed in Chapter 1. For example, determinism suggests that examining the relationships between and among variables is central to answering questions and hypotheses through surveys and experiments. The reduction to a parsimonious set of variables, tightly controlled through design or statistical analysis, provides measures or observations for testing a theory. Objective data result from empirical observations and measures. Validity and reliability of scores on instruments lead to meaningful interpretations of data.

In relating these assumptions and the procedures that implement them, this discussion does not exhaustively treat quantitative research methods such as correlational and causal-comparative approaches, so that the focus can remain on surveys and experiments. Excellent, detailed texts provide information about survey research (e.g., see Babbie, 2007; Creswell, 2012; Fink, 2002; Salant & Dillman, 1994). For experimental procedures, some traditional books (e.g., Campbell & Stanley, 1963; Cook & Campbell, 1979), as well as some newer texts, extend the ideas presented here (e.g., Boruch, 1998; Field & Hole, 2003; Keppel & Wickens, 2003; Lipsey, 1990; Thompson, 2006). In this chapter, the focus is on the essential components of a method section in proposals for a survey and an experiment.

**DEFINING SURVEYS AND EXPERIMENTS**

A **survey design** provides a quantitative or numeric description of trends, attitudes, or opinions of a population by studying a sample of that population. From sample results, the researcher generalizes or draws inferences to the population. In an *experiment*, investigators may also identify a sample and generalize to a population; however, the basic intent of an **experimental design** is to test the impact of a treatment (or an intervention) on an outcome, controlling for all other factors that might influence that outcome. As one form of control, researchers randomly assign individuals to groups. When one group receives a treatment and the other group does not, the experimenter can isolate whether it is the treatment and not other factors that influence the outcome.

**COMPONENTS OF A SURVEY METHOD PLAN**

The design of a survey method section follows a standard format. Numerous examples of this format appear in scholarly journals, and these examples provide useful models. The following sections detail typical components. In preparing to design these components into a proposal, consider the questions on the checklist shown in Table 8.1 as a general guide.

**Table 8.1** A Checklist of Questions for Designing a Survey Method

_____________ | Is the purpose of a survey design stated? |

_____________ | Are the reasons for choosing the design mentioned? |

_____________ | Is the nature of the survey (cross-sectional vs. longitudinal) identified? |

_____________ | Are the population and its size mentioned? |

_____________ | Will the population be stratified? If so, how? |

_____________ | How many people will be in the sample? On what basis was this size chosen? |

_____________ | What will be the procedure for sampling these individuals (e.g., random, nonrandom)? |

_____________ | What instrument will be used in the survey? Who developed the instrument? |

_____________ | What are the content areas addressed in the survey? The scales? |

_____________ | What procedure will be used to pilot or field-test the survey? |

_____________ | What is the timeline for administering the survey? |

_____________ | What are the variables in the study? |

_____________ | How do these variables cross-reference with the research questions and items on the survey? |

What specific steps will be taken in data analysis to do the following:

(a)______ | Analyze returns? |

(b)______ | Check for response bias? |

(c)______ | Conduct a descriptive analysis? |

(d)______ | Collapse items into scales? |

(e)______ | Check for reliability of scales? |

(f)______ | Run inferential statistics to answer the research questions or assess practical implications of the results? |

_____________ | How will the results be interpreted? |


**The Survey Design**

In a proposal or plan, the first parts of the method section can introduce readers to the basic purpose and rationale for survey research. Begin the discussion by reviewing the purpose of a survey and the rationale for its selection for the proposed study. This discussion can do the following:

• Identify the purpose of survey research. This purpose is to generalize from a sample to a population so that inferences can be made about some characteristic, attitude, or behavior of this population. Provide a reference to this purpose from one of the survey method texts (several are identified in this chapter).

• Indicate why a survey is the preferred type of data collection procedure for the study. In this rationale, consider the advantages of survey designs, such as the economy of the design and the rapid turnaround in data collection. Discuss the advantage of identifying attributes of a large population from a small group of individuals (Fowler, 2009).

• Indicate whether the survey will be cross-sectional—with the data collected at one point in time—or whether it will be longitudinal—with data collected over time.

• Specify the form of data collection. Fowler (2009) identified the following types: mail, telephone, the Internet, personal interviews, or group administration (see also Fink, 2012; Krueger & Casey, 2009). Using an Internet survey and administering it online has been discussed extensively in the literature (Nesbary, 2000; Sue & Ritter, 2012). Regardless of the form of data collection, provide a rationale for the procedure, using arguments based on its strengths and weaknesses, costs, data availability, and convenience.

**The Population and Sample**

In the methods section, follow the type of design with characteristics of the population and the sampling procedure. Methodologists have written excellent discussions about the underlying logic of sampling theory (e.g., Babbie, 2007; Fowler, 2009). Here are essential aspects of the population and sample to describe in a research plan:

• Identify the population in the study. Also state the size of this population, if size can be determined, and the means of identifying individuals in the population. Questions of access arise here, and the researcher might refer to availability of sampling frames—mail or published lists—of potential respondents in the population.

• Identify whether the sampling design for this population is single stage or multistage (called clustering). Cluster sampling is ideal when it is impossible or impractical to compile a list of the elements composing the population (Babbie, 2007). A single-stage sampling procedure is one in which the researcher has access to names in the population and can sample the people (or other elements) directly. In a multistage or clustering procedure, the researcher first identifies clusters (groups or organizations), obtains names of individuals within those clusters, and then samples within them.

• Identify the selection process for individuals. I recommend selecting a *random sample*, in which each individual in the population has an equal probability of being selected (a systematic or probabilistic sample). With randomization, a representative sample from a population provides the ability to generalize to a population. If the list of individuals is long, drawing a random sample may be difficult. Alternatively, a *systematic sample* can have precision equivalent to **random sampling** (Fowler, 2009). In this approach, the researcher chooses a random start on a list and then selects every *X*th person on the list. The interval *X* is determined by dividing the number of people on the list by the number to be selected (e.g., every 80th person). Finally, less desirable is a nonprobability sample (or *convenience sample*), in which respondents are chosen based on their convenience and availability.
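The interval arithmetic just described can be sketched in a few lines of Python (a hypothetical `systematic_sample` helper, not from any survey text; it assumes a simple list-based sampling frame):

```python
import random

def systematic_sample(population, n):
    """Systematic sample: pick a random start, then take every k-th element."""
    k = len(population) // n        # sampling interval (the "X" in the text)
    start = random.randrange(k)     # random start within the first interval
    return [population[i] for i in range(start, start + k * n, k)]

# Example: a frame of 400 names sampled down to 50 (k = 8, every 8th person)
roster = [f"person_{i}" for i in range(400)]
chosen = systematic_sample(roster, 50)
```

Because the start is random and the interval fixed, each individual has the same chance of selection, which is why a systematic sample can match the precision of a simple random sample when the list is not ordered in a biased way.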

• Identify whether the study will involve *stratification* of the population before selecting the sample. This requires that characteristics of the population members be known so that the population can be stratified first before selecting the sample (Fowler, 2009). Stratification means that specific characteristics of individuals (e.g., gender—females and males) are represented in the sample and the sample reflects the true proportion in the population of individuals with certain characteristics. When randomly selecting people from a population, these characteristics may or may not be present in the sample in the same proportions as in the population; stratification ensures their representation. Also identify the characteristics used in stratifying the population (e.g., gender, income levels, education). Within each stratum, identify whether the sample contains individuals with the characteristic in the same proportion as the characteristic appears in the entire population.
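Proportionate stratified selection can be sketched as follows (a hypothetical helper; it assumes, as the text requires, that the stratifying characteristic is known for every member of the frame, and that rounding may make the total differ slightly from the target in general):

```python
import random
from collections import defaultdict

def stratified_sample(people, stratum_of, n):
    """Proportionate stratified sample: each stratum contributes members
    in proportion to its share of the population."""
    strata = defaultdict(list)
    for p in people:
        strata[stratum_of(p)].append(p)
    sample = []
    for members in strata.values():
        # proportional allocation for this stratum (rounded)
        share = round(n * len(members) / len(people))
        sample.extend(random.sample(members, share))
    return sample

# Example: 100 men and 400 women; a sample of 50 keeps the 1:4 proportion
population = [{"id": i, "gender": "F" if i % 5 else "M"} for i in range(500)]
sample = stratified_sample(population, lambda p: p["gender"], 50)
```

This guarantees the 20/80 gender split appears in the sample, rather than leaving it to chance as simple random selection would.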

• Discuss the procedures for selecting the sample from available lists. The most rigorous method for selecting the sample is to choose individuals using random sampling, a topic discussed in many introductory statistics texts (e.g., Gravetter & Wallnau, 2009).

• Indicate the number of people in the sample and the procedures used to compute this number. In survey research, investigators often choose a sample size based on selecting a fraction of the population (say, 10%), selecting the size that is usual or typical based on past studies, or basing the sample size simply on the margin of error they are willing to tolerate. Fowler (2009), however, suggested that these approaches are all misguided. Instead, he recommended that sample size determination relate to the analysis plan for a study. One needs to first determine the subgroups to be analyzed in the study. Then, he suggested going to a table found in many survey books (see Fowler, 2009) to look up the appropriate sample size. These tables require three elements. First, determine the margin of error you are willing to tolerate (say, +/–4%, the **confidence interval**). This is a plus-or-minus figure that represents how closely the answers given by your sample correspond to the answers given by the entire population. Second, determine the confidence level for this margin of error (say, 95 out of 100 times, or a 5% chance of error). Third, estimate the percentage of your sample that will respond in a given way (50%, with 50/50 being the most conservative estimate because people could respond either way). From here, you can then determine the sample size needed for each group. Using Fowler’s (2009) table, for example, with a margin of error of +/–4%, a confidence level of 95%, and a 50/50 chance that the sample contains the characteristic, we arrive at a sample size of 500.

**Instrumentation**

As part of rigorous data collection, the proposal developer also provides detailed information about the actual survey instrument to be used in the proposed study. Consider the following:

• Name the survey instrument used to collect data. Discuss whether it is an instrument designed for this research, a modified instrument, or an intact instrument developed by someone else. If it is a modified instrument, indicate whether the developer has provided appropriate permission to use it. In some survey projects, the researcher assembles an instrument from components of several instruments. Again, permission to use any part of other instruments needs to be obtained. In addition, instruments are increasingly being designed through online survey products (see Sue & Ritter, 2012, for a discussion of products such as Survey Monkey and Zoomerang and important criteria to consider when choosing software and a survey host). Using products such as these, researchers can create their own surveys quickly using custom templates and post them on websites or e-mail them for participants to complete. The software program can then generate results and report them back to the researcher as descriptive statistics or as graphed information. The results can be downloaded into a spreadsheet or a database for further analysis.

• To use an existing instrument, describe the established validity of scores obtained from past use of the instrument. This means reporting efforts by authors to establish **validity in quantitative research**—whether one can draw meaningful and useful inferences from scores on the instruments. The three traditional forms of validity to look for are (a) content validity (do the items measure the content they were intended to measure?), (b) predictive or concurrent validity (do scores predict a criterion measure? Do results correlate with other results?), and (c) **construct validity** (do items measure hypothetical constructs or concepts?). In more recent studies, construct validity has become the overriding objective in validity, and it has focused on whether the scores serve a useful purpose and have positive consequences when they are used in practice (Humbley & Zumbo, 1996). Establishing the validity of the scores in a survey helps to identify whether an instrument might be a good one to use in survey research. This form of validity is different from identifying the threats to validity in experimental research, as discussed later in this chapter.

• Also mention whether scores resulting from past use of the instrument demonstrate **reliability**. Look for whether authors report measures of internal consistency (Are the items’ responses consistent across constructs?) and test-retest correlations (Are scores stable over time when the instrument is administered a second time?). Also determine whether there was consistency in test administration and scoring (Were errors caused by carelessness in administration or scoring? See Borg & Gall, 2006).

• When one modifies an instrument or combines instruments in a study, the original validity and reliability may not hold for the new instrument, and it becomes important to reestablish validity and reliability during data analysis.

• Include sample items from the instrument so that readers can see the actual items used. In an appendix to the proposal, attach sample items or the entire instrument.

• Indicate the major content sections in the instrument, such as the cover letter (Dillman, 2007, provides a useful list of items to include in cover letters), the items (e.g., demographics, attitudinal items, behavioral items, factual items), and the closing instructions. Also mention the type of scales used to measure the items on the instrument, such as continuous scales (e.g., *strongly agree* to *strongly disagree*) and categorical scales (e.g., yes/no, rank from highest to lowest importance).

• Discuss plans for pilot testing or field-testing the survey and provide a rationale for these plans. This testing is important to establish the content validity of scores on an instrument and to improve questions, format, and scales. Indicate the number of people who will test the instrument and the plans to incorporate their comments into final instrument revisions.

• For a mailed survey, identify steps for administering the survey and for following up to ensure a high response rate. Salant and Dillman (1994) suggested a four-phase administration process (see Dillman, 2007, for a similar three-phase process). The first mail-out is a short advance-notice letter to all members of the sample, and the second mail-out is the actual mail survey, distributed about 1 week after the advance-notice letter. The third mail-out consists of a postcard follow-up sent to all members of the sample 4 to 8 days after the initial questionnaire. The fourth mail-out, sent to all nonrespondents, consists of a personalized cover letter with a handwritten signature, the questionnaire, and a preaddressed return envelope with postage. Researchers send this fourth mail-out 3 weeks after the second mail-out. Thus, in total, the researcher concludes the administration period 4 weeks after its start, provided the returns meet project objectives.

**Variables in the Study**

Although readers of a proposal learn about the variables in purpose statements and research questions/hypotheses sections, it is useful in the method section to relate the variables to the specific questions or hypotheses on the instrument. One technique is to relate the variables, the research questions or hypotheses, and sample items on the survey instrument so that a reader can easily determine how the data collection connects to the variables and questions/hypotheses. Plan to include a table and a discussion that cross-reference the variables, the questions or hypotheses, and specific survey items. This procedure is especially helpful in dissertations in which investigators test large-scale models. Table 8.2 illustrates such a table using hypothetical data.

**Table 8.2** Variables, Research Questions, and Items on a Survey

Variable Name | Research Question | Item on Survey |

Independent Variable 1: Prior publications | Descriptive research Question 1: How many publications did the faculty member produce prior to receipt of the doctorate? | See Questions 11, 12, 13, 14, and 15: publication counts for journal articles, books, conference papers, book chapters published before receiving the doctorate |

Dependent Variable 1: Grants funded | Descriptive research Question 2: How many grants has the faculty member received in the past 3 years? | See Questions 16, 17, and 18: grants from foundations, federal grants, state grants |

Control Variable 1: Tenure status | Descriptive research Question 3: Is the faculty member tenured? | See Question 19: tenured (yes/no) |

Relating the Independent Variable 1: Prior publications to the Dependent Variable 1: Grants funded | Inferential Question 4: Does prior productivity influence the number of grants received? | See Questions 11, 12, 13, 14, and 15 in relation to Questions 16, 17, and 18 |

**Data Analysis and Interpretation**

In the proposal, present information about the steps involved in analyzing the data. I recommend presenting the data analysis procedures as a series of steps so that a reader can see how one step leads to another.

Step 1. Report information about the number of members of the sample who did and did not return the survey. A table with numbers and percentages describing respondents and nonrespondents is a useful tool to present this information.

Step 2. Discuss the method by which **response bias** will be determined. Response bias is the effect of nonresponses on survey estimates (Fowler, 2009). *Bias* means that if nonrespondents had responded, their responses would have substantially changed the overall results. Mention the procedures used to check for response bias, such as wave analysis or a respondent/nonrespondent analysis. In wave analysis, the researcher examines returns on select items week by week to determine if average responses change (Leslie, 1972). Based on the assumption that those who return surveys in the final weeks of the response period are nearly all nonrespondents, if the responses begin to change, a potential exists for response bias. An alternative check for response bias is to contact a few nonrespondents by phone and determine if their responses differ substantially from those of respondents. This constitutes a respondent-nonrespondent check for response bias.
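Wave analysis amounts to comparing average responses by the week of return; a minimal sketch (illustrative data and a hypothetical `weekly_means` helper, not from any published study):

```python
from statistics import mean

def weekly_means(responses):
    """Average score on a key item, grouped by the week the survey was returned."""
    weeks = {}
    for week, score in responses:
        weeks.setdefault(week, []).append(score)
    return {w: mean(scores) for w, scores in sorted(weeks.items())}

# (week returned, score on a 1-5 attitude item) -- illustrative data only
returns = [(1, 4), (1, 5), (2, 4), (2, 4), (3, 3), (3, 2)]
trend = weekly_means(returns)
# A drift in the later weeks (here, from 4.5 toward 2.5) would hint that
# nonrespondents might have answered differently: potential response bias.
```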

Step 3. Discuss a plan to provide a **descriptive analysis** of data for all independent and dependent variables in the study. This analysis should indicate the means, standard deviations, and range of scores for these variables. In some quantitative projects, the analysis stops here with descriptive analysis, especially if the number of participants is too small for more advanced, inferential analysis.
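A descriptive analysis of this kind reduces each variable to a few summary numbers; for example (hypothetical scores and a hypothetical `describe` helper):

```python
from statistics import mean, stdev

def describe(scores):
    """Mean, standard deviation, and range for one variable."""
    return {
        "mean": mean(scores),
        "sd": stdev(scores),                  # sample standard deviation
        "range": (min(scores), max(scores)),  # low and high score
    }

# Illustrative responses to one 1-5 satisfaction item
satisfaction = [3, 4, 4, 5, 2, 4, 3, 5]
summary = describe(satisfaction)
```

A study would report such a line (mean, SD, range) for every independent and dependent variable before moving to inferential tests.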

Step 4. Assuming that you proceed beyond descriptive approaches, if the proposal contains an instrument with scales or a plan to develop scales (combining items into scales), identify the statistical procedure (i.e., factor analysis) for accomplishing this. Also mention reliability checks for the internal consistency of the scales (i.e., the Cronbach alpha statistic).
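The Cronbach alpha statistic can be computed directly from item responses; a minimal sketch with illustrative data (a real study would use a statistics package, and the variable names here are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.
    item_scores: one list of respondent scores per item (equal lengths).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(respondent totals))
    """
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three Likert items answered by four respondents (illustrative data)
alpha = cronbach_alpha([[4, 5, 3, 4],
                        [4, 4, 3, 5],
                        [5, 5, 2, 4]])
```

Values near 1 indicate that the items move together and can reasonably be collapsed into a single scale; a common rule of thumb treats alpha of about .7 or higher as acceptable internal consistency.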

Step 5. Identify the statistics and the statistical computer program for testing the major inferential research questions or hypotheses in the proposed study. The **inferential questions or hypotheses** relate variables or compare groups in terms of variables so that inferences can be drawn from the sample to a population. Provide a rationale for the choice of statistical test and mention the assumptions associated with the statistic. As shown in Table 8.3, base this choice on the nature of the research question (e.g., relating variables or comparing groups as the most popular), the number of independent and dependent variables, and the number of variables controlled (e.g., see Rudestam & Newton, 2007). Further, consider whether the variables will be measured on an instrument as a continuous score (e.g., age from 18 to 36) or as a categorical score (e.g., women = 1, men = 2). Finally, consider whether the scores from the sample might be normally distributed in a bell-shaped curve if plotted out on a graph or non-normally distributed. There are additional ways to determine if the scores are normally distributed (see Creswell, 2012). These factors, in combination, enable a researcher to determine what statistical test is suited to answering the research question or hypothesis. In Table 8.3, I show how the factors, in combination, lead to the selection of a number of common statistical tests. For further types of statistical tests, readers are referred to statistics methods books, such as Gravetter and Wallnau (2009).
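The decision logic of this step (the nature of the question plus the measurement level of the variables) can be sketched as a simple lookup. This is only a rough illustration of common conventions, not a reproduction of Table 8.3:

```python
def suggest_test(question, iv_type, dv_type):
    """Very rough test-selection sketch: question is 'relate' or 'compare';
    variable types are 'continuous' or 'categorical'."""
    if question == "relate" and iv_type == dv_type == "continuous":
        return "Pearson correlation"
    if question == "compare" and iv_type == "categorical" and dv_type == "continuous":
        return "t test / ANOVA"
    if question == "relate" and iv_type == dv_type == "categorical":
        return "chi-square"
    # Non-normal distributions, covariates, and multiple DVs all change the answer
    return "consult a statistics text"

choice = suggest_test("compare", "categorical", "continuous")
```

Real selection also weighs the number of variables controlled and whether scores are normally distributed, as the text notes, so a lookup like this is only a starting point.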

Step 6. A final step in the data analysis is to present the results in tables or figures and interpret the results from the statistical test. An **interpretation in quantitative research** means that the researcher draws conclusions from the results for the research questions, hypotheses, and the larger meaning of the results. This interpretation involves several steps.

**Table 8.3** Criteria for Choosing Select Statistical Tests

• Report how the results answered the research question or hypothesis. The *Publication Manual of the American Psychological Association* (American Psychological Association [APA], 2010) suggests that the most complete meaning of the results comes from reporting extensive description, **statistical significance testing**, confidence intervals, and effect sizes. Thus, it is important to clarify the meaning of these last three reports of the results. Statistical significance testing reports an assessment as to whether the observed scores reflect a pattern other than chance. A statistical test is considered significant if the results are unlikely to have occurred by chance, and the null hypothesis of “no effect” can be rejected. The researcher sets a rejection level of “no effect,” such as *p* = 0.001, and then assesses whether the test statistic falls into this level of rejection. Typically, results will be summarized as “the analysis of variance revealed a statistically significant difference between men and women in terms of attitudes toward banning smoking in restaurants, *F*(2, 6) = 8.55, *p* = 0.001.” Two forms of *practical evidence* of the results should also be reported: (a) the effect size and (b) the confidence interval. A confidence interval is a range of values (an interval) that describes a level of uncertainty around an estimated observed score. A confidence interval shows how good an estimated score might be. A confidence interval of 95%, for example, indicates that 95 out of 100 times the observed score will fall in the range of values. An **effect size** identifies the strength of the conclusions about group differences or the relationships among variables in quantitative studies. It is a descriptive statistic that is not dependent on whether the relationship in the data represents the true population. The calculation of effect size varies for different statistical tests: it can be used to explain the variance between two or more variables or the differences among means for groups. It shows the practical significance of the results apart from inferences being applied to the population.
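For two independent groups, an effect size and a confidence interval of the kind described above might be computed as follows (a sketch using Cohen's d and a normal-approximation interval; the data and helper names are illustrative):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: mean difference in pooled standard-deviation units."""
    pooled = sqrt(((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
                  / (len(a) + len(b) - 2))
    return (mean(a) - mean(b)) / pooled

def ci95_mean_diff(a, b):
    """Approximate 95% CI for the mean difference.
    Uses the normal z = 1.96; small samples would use t critical values."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    diff = mean(a) - mean(b)
    return (diff - 1.96 * se, diff + 1.96 * se)

# Illustrative attitude scores for two groups
women = [5, 5, 6, 6]
men = [3, 4, 5, 4]
d = cohens_d(women, men)            # strength of the group difference
low, high = ci95_mean_diff(women, men)  # uncertainty around that difference
```

A write-up would then report both: the effect size conveys practical magnitude, while the interval conveys how precisely the difference was estimated.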

• Discuss the implications of the results for practice or for future research on the topic. This will require drawing inferences and conclusions from the results. It may involve discussing theoretical and practical consequences of the results. Focus should also be on whether or not the research questions/hypotheses were supported.

**Example 8.1** *A Survey Method Section*

An example follows of a survey method section that illustrates many of the steps just mentioned. This excerpt (used with permission) comes from a journal article reporting a study of factors affecting student attrition in one small liberal arts college (Bean & Creswell, 1980, pp. 321–322).

**Methodology**

The site of this study was a small (enrollment 1,000), religious, coeducational, liberal arts college in a Midwestern city with a population of 175,000 people.*[Authors identified the research site and population.]*

The dropout rate the previous year was 25%. Dropout rates tend to be highest among freshmen and sophomores, so an attempt was made to reach as many freshmen and sophomores as possible by distribution of the questionnaire through classes. Research on attrition indicates that males and females drop out of college for different reasons (Bean, 1978, in press; Spady, 1971). Therefore, only women were analyzed in this study.

During April 1979, 169 women returned questionnaires. A homogeneous sample of 135 women who were 25 years old or younger, unmarried, full-time U.S. citizens, and Caucasian was selected for this analysis to exclude some possible confounding variables (Kerlinger, 1973).

Of these women, 71 were freshmen, 55 were sophomores, and 9 were juniors. Of the students, 95% were between the ages of 18 and 21. This sample is biased toward higher-ability students as indicated by scores on the ACT test.*[Authors presented descriptive information about the sample.]*

Data were collected by means of a questionnaire containing 116 items. The majority of these were Likert-like items based on a scale from “a very small extent” to “a very great extent.” Other questions asked for factual information, such as ACT scores, high school grades, and parents’ educational level. All information used in this analysis was derived from questionnaire data. This questionnaire had been developed and tested at three other institutions before its use at this college.*[Authors discussed the instrument.]*

Concurrent and convergent validity (Campbell & Fiske, 1959) of these measures was established through factor analysis, and was found to be at an adequate level. Reliability of the factors was established through the coefficient alpha. The constructs were represented by 25 measures—multiple items combined on the basis of factor analysis to make indices—and 27 measures were single item indicators.*[Validity and reliability were addressed.]*

Multiple regression and path analysis (Heise, 1969; Kerlinger & Pedhazur, 1973) were used to analyze the data. In the causal model …, intent to leave was regressed on all variables which preceded it in the causal sequence. Intervening variables significantly related to intent to leave were then regressed on organizational variables, personal variables, environmental variables, and background variables. *[Data analysis steps were presented.]*

**COMPONENTS OF AN EXPERIMENTAL METHOD PLAN**

An experimental method discussion follows a standard form: (a) participants, (b) materials, (c) procedures, and (d) measures. These four topics generally are sufficient. In this section of the chapter, I review these components as well as information about the experimental design and statistical analysis. As with the section on surveys, the intent here is to highlight key topics to be addressed in an experimental methods section of a proposal. An overall guide to these topics is found by answering the questions on the checklist shown in Table 8.4.

**Table 8.4** A Checklist of Questions for Designing an Experimental Procedure

_____________ | Who are the participants in the study? |

_____________ | What is the population to which the results of the participants will be generalized? |

_____________ | How were the participants selected? Was a random selection method used? |

_____________ | How will the participants be randomly assigned? Will they be matched? How? |

_____________ | How many participants will be in the experimental and control group(s)? |

_____________ | What is the dependent variable or variables (i.e., outcome variable) in the study? How will it be measured? Will it be measured before and after the experiment? |

_____________ | What is the treatment condition(s)? How was it operationalized? |

_____________ | Will variables be covaried in the experiment? How will they be measured? |

_____________ | What experimental research design will be used? What would a visual model of this design look like? |

_____________ | What instrument(s) will be used to measure the outcome in the study? Why was it chosen? Who developed it? Does it have established validity and reliability? Has permission been sought to use it? |

_____________ | What are the steps in the procedure (e.g., random assignment of participants to groups, collection of demographic information, administration of pretest, administration of treatment(s), administration of posttest)? |

_____________ | What are potential threats to internal and external validity for the experimental design and procedure? How will they be addressed? |

_____________ | Will a pilot test of the experiment be conducted? |

_____________ | What statistics will be used to analyze the data (e.g., descriptive and inferential)? |

_____________ | How will the results be interpreted? |

**Participants**

Readers need to know about the selection, assignment, and number of participants who will take part in the experiment. Consider the following suggestions when writing the method section for an experiment:

• Describe the selection process for participants as either random or nonrandom (e.g., conveniently selected). Researchers can select participants by random selection or random sampling. With random selection or random sampling, each individual has an equal probability of being selected from the population, ensuring that the sample will be representative of the population (Keppel & Wickens, 2003). In many experiments, however, only a convenience sample is possible because the investigator must use naturally formed groups (e.g., a classroom, an organization, a family unit) or volunteers. When individuals are not randomly assigned, the procedure is called a **quasi-experiment**.

• When individuals can be randomly assigned to groups, the procedure is called a **true experiment**. If a random assignment is made, discuss how the project will *randomly assign* individuals to the treatment groups. This means that of the pool of participants, Individual 1 goes to Group 1, Individual 2 to Group 2, and so forth so that there is no systematic bias in assigning the individuals. This procedure eliminates the possibility of systematic differences among characteristics of the participants that could affect the outcomes so that any differences in outcomes can be attributed to the experimental treatment (Keppel & Wickens, 2003).
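Random assignment itself is mechanically simple; a sketch (the helper name is hypothetical):

```python
import random

def randomly_assign(participants, n_groups=2):
    """Shuffle the pool, then deal participants round-robin into groups,
    so group membership carries no systematic bias."""
    pool = list(participants)
    random.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# Example: 40 volunteers split into treatment and control groups of 20
treatment, control = randomly_assign(range(40), 2)
```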

• Identify other features in the experimental design that will systematically control the variables that might influence the outcome. One approach is *equating* the groups at the outset of the experiment so that participation in one group or the other does not influence the outcome. For example, researchers **match participants** in terms of a certain trait or characteristic and then assign one individual from each matched set to each group. For example, scores on a pretest might be obtained. Individuals might then be assigned to groups, with each group having the same numbers of high, medium, and low scorers on the pretest. Alternatively, the criteria for matching might be ability levels or demographic variables. A researcher may decide not to match, however, because it is expensive, takes time (Salkind, 1990), and leads to incomparable groups if participants leave the experiment (Rosenthal & Rosnow, 1991). Other procedures to place control into experiments involve using covariates (e.g., pretest scores) as moderating variables and controlling for their effects statistically, selecting homogeneous samples, or blocking the participants into subgroups or categories and analyzing the impact of each subgroup on the outcome (Creswell, 2012).
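Matching on a pretest can be sketched by ranking participants and splitting each adjacent pair at random (a hypothetical helper; an odd leftover participant is simply left unassigned in this sketch):

```python
import random

def matched_assign(participants, pretest):
    """Equate two groups on a pretest: sort by score, then randomly split
    each adjacent (matched) pair between treatment and control."""
    ranked = sorted(participants, key=pretest)
    treatment, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)          # coin flip within the matched pair
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Example: ten participants with illustrative pretest scores
scores = {i: i * 3 % 7 for i in range(10)}
treatment, control = matched_assign(list(range(10)), lambda p: scores[p])
```

Because each pair contributes one member to each group, the groups start out with nearly identical pretest distributions, which is the point of equating.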

• Tell the reader about the number of participants in each group and the systematic procedures for determining the size of each group. For experimental research, investigators use a power analysis (Lipsey, 1990) to identify the appropriate sample size for groups. This calculation involves the following:

• A consideration of the level of statistical significance for the experiment, or alpha;

• The amount of power desired in the study; and

• The effect size, the expected difference in outcomes between the groups.

