*How To Interpret Test Plot Data*

The cropping season has come to an end, and data entry and analysis are complete. So what can we look at on our printouts to help us interpret the data and determine whether there were any real differences among the treatments we tested? First, let's review a few of the steps we took throughout the season to enable us to produce reliable data:

- We chose uniform, well-drained field sites with minimal soil variability.
- We used proper experimental designs, randomized all treated plots and check (control) plots, and used several (3 to 6) replications of each treatment to help us determine whether differences were due to chance or to treatment effects.
- We chose an optimal plot size and carefully applied all treatments at the necessary times to the pre-determined plots.
- We used proper weed and pest control management as needed to prevent variation from non-treatment sources.
- We collected data throughout the growing season on whatever factors were being studied, such as early growth, stand, plant height, weed control, grain yield, grain moisture, test weight, and damage to specific plot areas.
- We measured our harvested plot areas.
- We then entered our data into a statistical software program that performs an "Analysis of Variance" (ANOVA or AOV). The ANOVA helps us statistically analyze all the information to determine whether any differences found were treatment related or due to random chance.

Now we have our data printouts, and the ANOVA lists several items at the end of the printout. Three of these items are particularly important in evaluating our data: the LSD, the Standard Deviation, and the CV. What do they mean, and how can we tell whether or not our treatments had any effect? Let's briefly discuss what each of these signifies.

LSD, or LSD P=.05: Stands for Least Significant Difference associated with a probability (P) at the 5% level. The LSD is used when comparing the averages (means) of treatments to each other.
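As a concrete illustration, here is a minimal sketch of how an LSD is computed and used to compare two treatment means. All of the numbers (error mean square, replications, t value, and yields) are hypothetical stand-ins for values that would come from your own ANOVA printout and a t-table:

```python
import math

# Hypothetical values from an ANOVA printout (illustrative only):
mse = 18.0      # error mean square for yield, (bu/ac)^2
reps = 4        # replications per treatment
t_crit = 2.262  # two-sided t value at P=.05 for 9 error df (from a t-table)

# LSD = t * sqrt(2 * MSE / r), for two means with equal replication
lsd = t_crit * math.sqrt(2 * mse / reps)
print(f"LSD (P=.05) = {lsd:.2f} bu/ac")

# Compare two treatment means: a difference larger than the LSD is significant.
mean_a, mean_b = 185.2, 178.9  # hypothetical treatment means, bu/ac
diff = abs(mean_a - mean_b)
print(f"Difference = {diff:.2f} bu/ac ->",
      "significant" if diff > lsd else "not significant")
```

The error degrees of freedom (and therefore the t value) depend on the design; for a randomized complete block with t treatments and r replications they are (t-1)(r-1).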
Any difference between two treatments that is greater than the LSD value indicates a significant difference, with a 95% degree of certainty that the difference is real and not purely due to random chance. Often we will use a P value of .10 in our analyses; this simply means we would have a 90% degree of certainty that treatment differences are real and did not happen by chance. In the summary table of the ANOVA, the treatment means are followed by a letter (a, b, c, d, e, etc.); any means followed by the same letter are NOT significantly different from each other, whereas those followed by different letters ARE significantly different from each other.

Now let's discuss Standard Deviation. The Standard Deviation is a value that represents the "experimental error" in a set of data and indicates variability that was not controlled, or was poorly controlled. Examples of this type of variability include soil differences, natural plant variability, inaccuracy of measurements (weighing, moisture, plot lengths, etc.), and so on. The Standard Deviation is expressed in the same units as the analyzed variable, such as bushels/acre, % weed control, % moisture, or % vigor.

Finally, we will talk about the CV, or Coefficient of Variation. This number is the Standard Deviation divided by the Grand Mean, multiplied by 100 to give a percent. The CV is an overall measure of the control, or quality, of the data collected throughout the experiment. Ideally, a CV should be less than 10%. A CV above 20% indicates that the experiment was very variable and that any conclusions drawn may not be reliable or robust. CVs can be affected by soil variability, poor data-collection measurements, weather events affecting certain areas or plots in the trial, and anything else that introduces variability into the research study area or the data collection.

Submitted by Brad Farber, ABG AG Services, Toronto SD
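The Standard Deviation and CV calculations can be sketched in a few lines. The yields below are made-up plot values; in practice the Standard Deviation would come from the ANOVA error term (the square root of the error mean square) rather than from the raw data as shown here:

```python
import statistics

# Hypothetical plot yields (bu/ac) for one trial (illustrative only):
yields = [182.1, 175.4, 190.3, 178.8, 185.0, 172.6, 188.2, 180.9]

grand_mean = statistics.mean(yields)
# Stand-in for the ANOVA's experimental-error standard deviation,
# expressed in the same units as the variable (bu/ac here).
sd = statistics.stdev(yields)
cv = sd / grand_mean * 100  # Coefficient of Variation, as a percent

print(f"Grand mean = {grand_mean:.1f} bu/ac")
print(f"Standard deviation = {sd:.1f} bu/ac")
print(f"CV = {cv:.1f}% ->", "acceptable (<10%)" if cv < 10 else "high")
```

Because the CV is unitless, it lets you compare the quality of trials measured in different units, such as yield in one trial and percent weed control in another.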