By Rick Kleyn, SPESFEED (Pty) Ltd
The poultry industry has made enormous strides over the past few decades. Much of this progress has occurred because of a willingness to embrace new ideas and technology. Deciding which technologies should be adopted, whether in terms of technical efficiency or profitability, has never been easy. To complicate the issue, new technologies are introduced all the time, frequently with considerable public relations effort and some very persuasive marketing.
Producers are bombarded by people selling advice, feed additives, medications or other products. Popular magazines and the internet are full of adverts, advertorials or articles written by the technical staff of the companies marketing the technologies concerned. We need to appreciate that most of this information is provided to us with a specific agenda in mind – to make us buy the technology. All too often products are developed and marketed on the basis of a hypothesis, without any experimentation to verify the underlying theory. Much misinformation and unfounded speculation about nutrition is disseminated through the popular press, with the worst culprits probably being advertisers. This often clouds the issue. Fortunately for those of us involved in animal agriculture, this happens far less than it does in human nutrition, where nothing prevents people from bringing ideas (hypotheses) to market.
Over time, we tend to become a bit cynical, having seen so many “new” ideas come and go. However, we also know that it would be foolish to blind ourselves to the possibility that a new and innovative product might well be of significant importance in our production systems. As a result, we need clearly defined processes and procedures for evaluating any new technology – before it is implemented.
We need to rely on science, and what is known as scientific methodology, to determine whether a proposed technology does indeed perform as expected and is worth adopting.
Science is a tool. It is a logical, objective process for testing ideas and thereby reaching a conclusion. The people who have been trained to use this tool are called scientists. The scientific process emerged out of the Dark Ages about 600 years ago, when people began to challenge the authority of the church and questioned the idea that the only truth was that which could be revealed through prayer. Early scientists, including Copernicus and Galileo, brought about the Age of Enlightenment, sometimes referred to as the Age of Reason. However, we have now entered what is referred to as the Age of Post-modernism. Post-modernism is characterised by a philosophy of political correctness which espouses that all systems of thought, all cultures and all beliefs are of equal value. To assume otherwise would be politically incorrect. Political correctness has led to science and the scientific process becoming devalued. It suggests that there are multiple valid “truths”, and that if you do not agree with the scientific truth, you should find another subjective and possibly irrational truth to explain any outcome. Facts do not always matter to the Post-modernist: whether it feels good to you, or whether you are in “touch” with yourself, may be all that is important. Political correctness and the “feel good” factor make sorting fact from fiction even more difficult (Roche and Edmeades, 2007). Rigorous debate and openly challenging research findings are often deemed politically incorrect, thus depreciating the value of science.
It is perhaps more important now than ever that lay people develop an understanding of the scientific process. This paper will deal with the topic in simple terms.
The scientific process is broadly defined as research. The entire process is best illustrated by Figure 1. The research process begins with astute observation. We all “observe” all the time. For example, we observe how an animal grows, that feed intake increases in cold weather, or that certain ingredients lead to better performance. In addition, many observations come from reading the scientific and popular press. Over time these accumulate, leading us to make assumptions, have ideas or even develop theories. These theories need to be tested, and a hypothesis is then formed to test the “logical or empirical consequences” of our theory. It is at this point that experimentation begins. For example, we could feed different levels of an ingredient in the diet, use different lighting programs, or use a range of different feed additives.
In order to generate valid trial data, an experiment needs to be properly designed and then managed. Experimental design is a complex topic and beyond the scope of this article. However, there are a number of steps that commercial practitioners need to be aware of.
- A clearly defined protocol, with a complete explanation of the methods used and a description of how and where measurements are to be made, is required.
- A hypothesis (the “truth” which is to be tested) is required. In the case of a new feed additive, it should be compared to a control and/or standard treatment. Normally this control would be a “negative” control in which the product to be tested has been omitted. The inclusion of a “positive” control in the form of an alternative product may also be very useful.
- Accurate experimentation can only be carried out if sufficient replications (the number of pens or animals) of each treatment are used. This allows us to both measure and overcome the impact of normal biological variation. The number of replications required differs depending on the experimental layout, but I do not like to see a trial with fewer than four replicates per treatment, and prefer six or more.
- The environment needs to be as uniform as possible. This means that a single house or single farm should always be used for a trial. Animals of the same size, age and genotype should be used, and if possible a single person should be responsible for the day-to-day management of the experiment.
- The treatments (pens) should be randomised across the experiment so as to eliminate experimental bias. For example, it makes little sense to place all of the animals on one treatment on the side of the house that has the better ventilation.

Results

It should go without saying that accurate data collection is a cornerstone of scientific research. Yet in my experience this is where things often go wrong, particularly in the case of trials that are run on commercial farms. In broad terms, we need to ask ourselves the following questions about a data set:
- Has all of the data been presented, and if not, is it available?
- Are there appropriate measurements of the biological variability? If variability is too high, question the results. Equally, if the variability is too low, the results may well not be a true reflection of the experiment.
- Are the data honestly presented? Have the necessary statistics been included in the tables, for example?

It is not enough to hypothesize, experiment or collect data if the results of the experiment are not analysed correctly. Once a valid data set has been generated, it should be submitted to statistical analysis. This will always lead to a more valid interpretation of the results, regardless of the experimental design. The science of statistics has evolved as a means of measuring and quantifying variability and probability. Variability, both between individual animals in an experiment and between the pens or houses in which they are grown, leads to problems in interpreting experimental results. This variability comes as no surprise to poultry producers, who have to deal with it in every flock of layer pullets or broiler breeders that they rear.

Statistical analysis allows us to calculate the probability that any differences measured are the result of the treatment that was applied rather than of chance. We need to be sure that the differences claimed are “real” and not simply numerical. A simple comparison of two houses or sites tells us little, as there is no way of knowing whether any difference occurred as a result of chance or of the treatment applied. In short, by chance alone there is a 50% probability that any treatment applied will appear to have a positive result.
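To make the idea of “the probability that a difference arose by chance” concrete, here is a minimal sketch using a permutation test on replicated pen means. The pen weights are hypothetical, illustrative numbers, not real trial data, and a permutation test is only one simple way to estimate this probability; a formal trial would normally use an analysis of variance.

```python
import random

# Hypothetical final pen-mean body weights (kg) for two treatments,
# six replicate pens each. These numbers are illustrative only.
control  = [1.80, 1.78, 1.83, 1.79, 1.81, 1.77]
additive = [1.84, 1.86, 1.82, 1.88, 1.85, 1.83]

observed = sum(additive) / len(additive) - sum(control) / len(control)

# If the additive did nothing, the treatment labels are arbitrary:
# shuffle the pens many times and see how often a gap at least as
# large as the observed one appears by chance alone.
random.seed(1)
pooled = control + additive
n_perm, n_extreme = 10_000, 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[6:]) / 6 - sum(pooled[:6]) / 6
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / n_perm  # small value: unlikely to be chance
```

A small p-value (conventionally below 0.05) tells us the measured difference is unlikely to be the result of chance; with only one house per treatment, no such calculation is possible.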
A comparison of averages of large numbers of birds or animals (in a single pen) under commercial conditions is generally unlikely to be meaningful. If this comparison is between birds kept in different geographic regions, it is probably not valid at all. It is also questionable to compare results between two different time periods (cycles), because apart from climate differences there may also have been a number of other management and/or disease status changes.
From a practical point of view, it is important to determine whether the experiments were designed and conducted under conditions similar to those experienced in practice. For example, is the stocking density used in the experimental design the same as is used on your farm, and if not, does it matter? Sometimes research is conducted in other countries. It is important to assess whether the conditions under which the experiment was conducted and managed were similar to those in your own area of operation. For example, much European nutritional work is conducted using wheat-based diets, and this may or may not impact on the results.
Another major distinction that needs to be made is to determine whether data was generated using an in vitro (in a test tube) or an in vivo (in the animal’s body) method. In vitro work can be very useful, but the method used must be proven to be a reasonable model of what occurs in vivo before it can generally be accepted.
There is a final point about any scientific research. Science has moved forward in an iterative manner by means of peer review. This means that before an article is published in a recognised journal, fellow scientists review all aspects of the scientific process. The reviewers vet the hypothesis, the methods and materials used, the results that were attained, the way in which the results were submitted to statistical analysis, and the interpretation and/or discussion of the results that were achieved. If the reviewers are satisfied that all of the criteria have been met, the paper will be accepted for publication.
At this stage, fellow scientists read, scrutinise and criticise the work. They often repeat the experiment building upon previous studies. In this manner science proceeds cautiously and tentatively forward, reworking past observations, making new observations, and all along refining its hypothesis (Roche and Edmeades, 2007).
In practical terms, the review process means that all articles published in journals such as Poultry Science and British Poultry Science are as sound as is humanly possible. Articles in the popular press and on the internet, and even papers presented at conferences, often are not peer reviewed. This is not to say that these articles are inferior, but it does mean that the practitioner needs to be a little more cautious when using data from these sources.
Thus far we have discussed the manner in which experiments should be conducted, and the way the results of any research should be presented. In the absence of solid scientific data, the “effective” salesman makes use of other tools of which you should be aware (Roche and Edmeades, 2007):
- When anecdotal evidence is presented it is mostly not possible to check the source of the information or to separate the effects of the treatment from the background variation. How many practices were changed at the same time?
- Experimental evidence from abroad, where the conditions under which the technology was tested are different, may not be appropriate.
- Many companies will claim to have done their own experiments and these may even have been published in local farming magazines. Often the experiments are only comparisons rather than scientific experiments and they are little more than anecdotal evidence.
- Sometimes the data is presented in an inappropriate manner. Using graphs it is easy to make a small difference look large through manipulating the shape, scale or axis of a graph.
- Facts can be manufactured through inductive reasoning. That is to say, by stringing facts together it is possible to create another “fact”. Companies ask scientists who have a knowledge of biochemistry and biological systems to present a logical reason why their product will work.

Response

The response observed during experimentation is related to the performance changes the user could expect when a new technology is applied to the production system, for example higher peak production in layers, or an improved feed conversion ratio in broilers.

It is important that we recognize that the response to any treatment will vary and, by extension, that pen studies carried out in experimental facilities may not necessarily give an indication of the full potential benefit of the additive. In many cases the maximum benefit from an additive is only gained when on-farm conditions are poor. The AGP zinc bacitracin is just such an example: it is now well known that the best response to it is achieved under poor farming conditions.

The method by which response data are analysed has a direct bearing not only on the measurement of the “response” but also on the manner in which the response is interpreted. This is a highly technical topic and goes beyond the realm of this article. However, it is important that nutritionists be aware that there are a number of different ways of doing this, and that they use the correct approach.
Finally, it is sometimes difficult to measure a response at all. It has been calculated that in order to measure a 1% response in broilers with any degree of statistical confidence, a trial with some 400 replications of each treatment needs to be carried out. This may be of little consequence for aspects such as feed conversion or growth, but any researcher will tell you that it is very rare to measure any statistical difference in mortality – simply because not enough experimental replications are used.
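A rough normal-approximation sample-size formula shows where a figure of that order comes from. The inputs here are assumptions chosen for illustration: a between-replicate coefficient of variation of 5%, a 1% response to detect, 5% two-sided significance and 80% power; the source does not state the assumptions behind its 400 figure.

```python
import math

# n per treatment ≈ 2 * ((z_alpha + z_beta) * CV / delta)^2
z_alpha = 1.96   # two-sided 5% significance level
z_beta  = 0.84   # 80% power
cv      = 5.0    # assumed between-replicate coefficient of variation, %
delta   = 1.0    # response we want to detect, %

n_per_treatment = math.ceil(2 * ((z_alpha + z_beta) * cv / delta) ** 2)
print(n_per_treatment)  # 392 – the same order as the ~400 cited above
```

Doubling the response to 2% cuts the requirement to about a quarter, which is why small responses are so hard to prove and why trials with four to six replicates can only detect fairly large effects.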
Having satisfied yourself that the response is real, the next question is the return.
The return reflects the profitability of using a selected additive. For example, if an improvement in growth is measurable, then a break-even point can be calculated. An additive may cost R 50.00 per ton (5 cents per kg) of feed. Assuming that a live broiler is valued at R 10.00 per kg and that it consumes 3.2 kg of feed in its lifetime, the additive would cost 16 cents per bird. In order to break even, the bird would need to weigh 16 grams more at 40 days of age. From a slightly different perspective, if a medication regime were to cost R 200 per ton, then the cost is 64 cents per bird. In order to break even, the mortality rate would need to drop by 3.5%.
A guideline often used is that an additive should return two Rand or more for each Rand invested. This builds a safety margin into the calculations to cover non-responsive flocks and field conditions that could reduce the anticipated response.
In complex integrated operations the expertise of accounting personnel may well be needed for any calculation of cost/benefit of using an additive. A modest benefit in growth or feed conversion may not be enough to justify use of the product. In addition to product cost, there are hidden costs such as purchasing, warehousing, accidental spillage, and possible confusion by mill personnel.
As a matter of procedure, one should take the statistical analysis into consideration when evaluating an additive from an economic perspective. Often there is a numerical difference between two treatments but no statistically significant difference between them. Any financial calculation should then be based on the average of the two figures and not on the numerical difference, as no proven difference exists and we cannot be sure whether the differences measured arose through chance or through the treatment effect. Where no significant difference is shown, and the repeatability of the results has not been demonstrated, extra attention should be paid to these calculations.
Regardless of what the research benefits are, or how cost-effective an additive’s use is in theory, it is of little benefit if we do not see measurable results on farm. Nick Dale contends that feed can become like an attic, full of forgotten odds and ends. Once we leave the relative comfort of essential nutrients behind, vigilance is essential. Many additives will have a greater or lesser impact in different seasons of the year and at different ages in the life of the bird.
In short, there must be a continued economic payoff on individual farms. As such, managers and nutritionists need accurate records (a database) in order to compare and measure responses on farm. Weight for age, FCR, hen housed production and PER are the measures that need to be considered, and then these figures should be used in order to carry out critical economic evaluation of a selected additive.
The regulations that have covered the use of non-medicated feed additives have possibly not been as stringent as they could have been. For example, the safety and efficacy data required for an AGP is simply not required for many of the other classes of feed additive. This situation is fast changing in Europe and will in all likelihood change in other countries as well. In short, though, it is the nutritionist’s responsibility to ensure that the additives being recommended are safe for the birds being fed, for the farm workers, and for the consumers who will finally consume the product. In addition, the product must have fulfilled the legal requirements in each country where it is to be used.
You may well ask why I have gone into this detail about something that would appear to be very scientific. The reality is that each one of us has to make decisions, which often have large financial implications, based on data derived from the literature, the suppliers of the various additives and most importantly from our own farms.
Using some of the points made above, it can be shown just how difficult it is to make the right decision. I have shown that if an additive costs R 50.00 per ton of feed, the birds need to weigh 16 grams more to break even. Applying the “two times” rule, we would expect to measure a 32 gram improvement per bird if we were to use the product. This represents a 1.8% increase in the weight of a 1.8 kg broiler. As pointed out, this type of difference is difficult to measure in a test house and almost impossible to measure on farm. Remember too, that holding an additional ingredient in the feed mill ties up capital, and there is always the risk of shrinkage.
We should all be wary, possibly even cynical, of the results of product trials that have not been properly designed or submitted to proper (valid) statistical analysis. Yet having said this, we need to be equally careful not to discount an additive that may have significant financial advantages, even if the data on it is scant.