A collection of articles on the definition and basic concepts of experimental design, the assumptions of analysis of variance, comparison of means, and several commonly used experimental designs: the Completely Randomized Design, Randomized Block Design, Latin Square Design, Factorial Experiment, Split-Plot Design, Split-Split-Plot Design, and Strip-Plot Design.
Tukey's test, introduced by Tukey (1953), is often called the honestly significant difference (HSD) test. The procedure resembles the LSD test in that it uses a single critical difference value for all comparisons, and it serves as an alternative to LSD when we want to test all pairs of treatment means without a prior plan. Tukey's test is used to compare every pair of treatment means after the analysis of variance has been carried out.
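As a minimal sketch of the single comparison value described above, the HSD can be computed as w = q(α, t, df_error) · √(MSE/r), where q is the upper critical value of the studentized range distribution. The numbers below (4 treatments, 5 replications, MSE = 2.5) are hypothetical, not from the text:

```python
from scipy.stats import studentized_range

def tukey_hsd_value(alpha, n_treatments, df_error, mse, n_reps):
    """Critical difference w for Tukey's HSD test.

    Any pair of treatment means differing by more than w is
    declared significantly different at level alpha.
    """
    # q: upper critical value of the studentized range distribution
    q = studentized_range.ppf(1 - alpha, n_treatments, df_error)
    return q * (mse / n_reps) ** 0.5

# Hypothetical example: 4 treatments, 5 reps each, MSE taken from the ANOVA table
w = tukey_hsd_value(alpha=0.05, n_treatments=4, df_error=16, mse=2.5, n_reps=5)
```

Because the same w is used for every pair, the test controls the experimentwise error rate across all pairwise comparisons.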
In certain experiments we may only be interested in comparisons between a control and the other treatments, for example comparing a local variety or a standard chemical with a new one. For this case we can use Dunnett's test, which Dunnett developed and popularized in 1955. Dunnett's test keeps the maximum experimentwise error rate (MEER) at or below the specified significance level, e.g., α = 0.05. In this method, only one comparison value is needed to compare the control with the other treatments. The formula is similar to LSD, but the t-value used is not the Student's t of the LSD test: Dunnett's test uses its own t table, which is usually included as an appendix in experimental design books.
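The single comparison value can be sketched as d = t_Dunnett · √(2·MSE/r), with the t-value read from Dunnett's table rather than computed. The table value and ANOVA quantities below are illustrative assumptions, not taken from the text:

```python
def dunnett_critical_difference(t_dunnett, mse, n_reps):
    """Critical difference d for comparing each treatment with the control.

    t_dunnett must be read from Dunnett's table (it is NOT the Student's t
    used in the LSD test). A treatment whose mean differs from the control
    mean by more than d is declared significantly different.
    """
    # The difference of two means has standard error sqrt(2*MSE/r)
    return t_dunnett * (2 * mse / n_reps) ** 0.5

# Hypothetical: two-sided table value t = 2.59, MSE = 2.5, r = 5 replications
d = dunnett_critical_difference(t_dunnett=2.59, mse=2.5, n_reps=5)
```

Only this one value d is needed, no matter how many treatments are compared against the control.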
Duncan's test is based on a set of significant difference values that increase in size with the distance in rank between the two means being compared. It can be used to test differences among all possible treatment pairs regardless of the number of treatments.
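A minimal sketch of how those rank-dependent critical ranges are formed: each range is R_p = r_p · √(MSE/r), where r_p is the significant studentized range from Duncan's table for rank distance p. The r_p values below are illustrative placeholders for table entries, and MSE and r are assumed numbers:

```python
def duncan_ranges(r_p_values, mse, n_reps):
    """Critical ranges R_p for Duncan's multiple range test.

    r_p_values: significant studentized ranges from Duncan's table,
    supplied in order for p = 2, 3, ... where p is the distance in
    rank between the two (sorted) means being compared.
    """
    se = (mse / n_reps) ** 0.5  # standard error of a treatment mean
    return {p: r * se for p, r in enumerate(r_p_values, start=2)}

# Hypothetical table values for alpha = 0.05, error df = 16, p = 2..4
R = duncan_ranges([2.998, 3.144, 3.235], mse=2.5, n_reps=5)
```

Means are first sorted; two means whose rank distance is p are declared different when their difference exceeds R_p, so more widely separated means face a larger critical range.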
Scheffe's test, developed by Henry Scheffe (1959), is compatible with the analysis of variance: it never declares a contrast significant if the F test is not significant. The test is used for comparisons that do not need to be orthogonal, and it controls the MEER for every contrast, including pairwise comparisons. Because the procedure allows many different types of comparisons, it is less sensitive in detecting significant differences than the other comparison procedures.
The steps for comparing treatment means are similar to those of Duncan's test; the difference lies only in the comparison value used.
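For a simple pairwise contrast, the comparison value can be sketched as S = √((t−1) · F(α, t−1, df_error)) · √(2·MSE/r), which ties the test directly to the ANOVA F critical value. The ANOVA quantities below are assumed for illustration:

```python
from scipy.stats import f

def scheffe_critical_difference(alpha, n_treatments, df_error, mse, n_reps):
    """Scheffe's critical difference S for a pairwise contrast (c = +1, -1).

    Built from the same F critical value as the ANOVA, which is why the
    test never finds a significant contrast when the F test is not
    significant.
    """
    # F critical value with t-1 and error degrees of freedom
    f_crit = f.ppf(1 - alpha, n_treatments - 1, df_error)
    return ((n_treatments - 1) * f_crit) ** 0.5 * (2 * mse / n_reps) ** 0.5

# Hypothetical example: 4 treatments, 5 reps, MSE = 2.5, error df = 16
s = scheffe_critical_difference(alpha=0.05, n_treatments=4, df_error=16,
                                mse=2.5, n_reps=5)
```

For non-pairwise contrasts the same multiplier √((t−1)·F) applies, with the standard error of the contrast replacing √(2·MSE/r); this generality is what makes the test conservative.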
Analysis of variance is a useful and reliable tool for comparing treatment means. In comparing t treatments, the null hypothesis states that all treatment means are equal (H0: μ1 = μ2 = ... = μt). If the F test is significant, the alternative hypothesis HA is accepted: not all treatment means are the same, i.e., at least one treatment mean differs from the others. A comparison is then carried out to determine which treatments differ, by partitioning the Treatment Sum of Squares into additional F tests that answer questions planned in advance. Contrast, or orthogonal, methods for separating the means require a priori knowledge, based either on scientific considerations or on the results of previous research.
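The overall F test described above can be sketched with a one-way ANOVA; the yield data for three treatments below are fabricated for illustration only:

```python
from scipy.stats import f_oneway

# Hypothetical yields for three treatments, 4 replications each
t1 = [20.1, 21.3, 19.8, 20.6]
t2 = [23.4, 24.1, 22.9, 23.7]
t3 = [20.4, 20.9, 19.9, 21.0]

# H0: mu1 = mu2 = mu3; a significant F leads us on to mean comparisons
stat, p = f_oneway(t1, t2, t3)
if p < 0.05:
    print("Reject H0: at least one treatment mean differs")
```

A significant F only tells us that some difference exists; the pairwise procedures above (Tukey, Dunnett, Duncan, Scheffe) are then needed to locate which treatments differ.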