If you dig through patents and literature you will find a lot of claims pertaining to various methods of radioactive decay acceleration or radioactive waste decontamination. However, if you read those patents or articles closely you will see that in all cases, without exception, those claims are not supported by a solid experimental procedure. When we measure radioactive samples we are usually dealing with very low levels of activity, which are easily affected by systematic errors. Here is what we need to consider.
Detector Stability
Scintillation detector performance is strongly affected by temperature and by ‘voltage drift’ in the power supply and amplifier electronics. We must make sure that we are getting stable counts by simply counting a calibration source for an extended period of time. I suggest 24 hours. If you count a source for 24 hours you should get a clean Gaussian distribution of the counts with no linear drift and no quasiperiodic oscillations. Regardless of what manufacturers claim, all detectors drift. And at the very least one has to run the detector long enough to evaluate the magnitude of the drift and thus put bounds on the magnitude of the detectable signal.
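To make the drift check concrete, here is a minimal sketch (the function name and all the numbers are my own illustrative choices) of how one might test a 24-hour series of hourly counts for a linear trend: fit a straight line and compare the fitted slope to its statistical uncertainty.

```python
import numpy as np

def drift_check(counts):
    """Check a series of equal-interval counts for linear drift.

    Fits a straight line to counts vs. time and returns
    (slope, slope_err). A |slope| of several slope_err
    suggests real drift rather than counting noise.
    """
    t = np.arange(len(counts), dtype=float)
    # least-squares fit: counts ~ slope * t + offset, with covariance
    coeffs, cov = np.polyfit(t, counts, deg=1, cov=True)
    slope = coeffs[0]
    slope_err = np.sqrt(cov[0, 0])
    return slope, slope_err

# Example: 24 hourly counts from a hypothetical stable detector
rng = np.random.default_rng(0)
stable = rng.poisson(10_000, size=24)
slope, err = drift_check(stable)
print(f"slope = {slope:.1f} ± {err:.1f} counts/hour")
```

If the detector is stable, the slope should be consistent with zero; a slope several times its uncertainty is the drift this section warns about, and its magnitude sets a floor on what signal you can claim to detect.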
Background Stability
The environment we live in is full of natural radioactivity. Solar flares, cosmic rays and radon in the air are here to play tricks on us and provide unexpected spikes of activity. Therefore I always recommend capturing background for at least 24 hours just to get an idea about what is going on at your location. Once again, the background counts should form a nice normal distribution and should not exhibit a systematic increase, decrease or quasiperiodic oscillations. But if you do see any of these effects, you must account for them in your systematic error analysis.
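A quick way to put a number on background stability is to compare the scatter of repeated background counts to what pure Poisson statistics would predict. A sketch, with invented counts (the radon-like spikes are made up for illustration):

```python
import numpy as np

def excess_variance(counts):
    """Compare the spread of repeated counts to pure Poisson noise.

    For a Poisson process the variance equals the mean, so the
    ratio var/mean should be close to 1. A ratio well above 1
    indicates an unstable background (drift, spikes, oscillations).
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
quiet = rng.poisson(400, size=24)            # stable hourly background
spiky = np.concatenate([quiet, [900, 950]])  # two radon-like spikes
print(excess_variance(quiet))
print(excess_variance(spiky))
```

For the stable series the ratio hovers around 1; the two spikes push it far above 1, which is exactly the kind of deviation that must go into the systematic error budget.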
Sample Positioning
When we count radioactive samples we must position them relative to the detector in a consistent manner, because small changes in sample positioning or detector orientation will lead to huge variations in counts. Obviously, for low-activity samples we want to position the detector as close to the sample as possible to maximize the counts and thus increase the signal-to-noise ratio. But how do we ensure consistent positioning? Simple: by making a sample holder that will attach to the detector and thus position the sample consistently.
I must note that making a good sample holder may turn out to be a significant engineering challenge. The quality of machining must be good, the attachment must be free of play, the orientation must be fixed, the sample must be held securely without slack, etc. But most importantly, one has to test the sample holder by repeatedly removing and reinserting the sample to get a sense of the systematic errors arising from sample positioning. This could be the most significant source of error, far greater than detector or background drift. Therefore one has to remove and replace the sample and take 20-50 measurements. Once again, the resulting counts must form a nice normal distribution without linear drift or quasiperiodic oscillations. Any scatter beyond counting statistics will contribute to the systematic error.
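The repositioning test can be quantified: any scatter beyond Poisson counting noise is attributable to positioning. A rough sketch, with made-up numbers (a 2% jitter is assumed purely for illustration):

```python
import numpy as np

def positioning_error(counts):
    """Estimate the extra scatter introduced by sample repositioning.

    Each entry in `counts` is the total count from one remove/replace
    cycle of the same sample. Pure counting noise would give a
    variance equal to the mean; anything beyond that is attributed
    to positioning. Returns the excess standard deviation in counts
    (0.0 if the scatter is consistent with counting noise alone).
    """
    counts = np.asarray(counts, dtype=float)
    observed_var = counts.var(ddof=1)
    poisson_var = counts.mean()   # Poisson: variance = mean
    return np.sqrt(max(observed_var - poisson_var, 0.0))

rng = np.random.default_rng(2)
# 30 remove/replace cycles: Poisson noise plus a 2% positioning jitter
true_rate = 50_000
jitter = rng.normal(1.0, 0.02, size=30)
measured = rng.poisson(true_rate * jitter)
print(positioning_error(measured))
```

Here the positioning scatter (on the order of 2% of 50,000 counts) dwarfs the roughly 220-count Poisson noise, illustrating why this can be the dominant error source.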
Sample Ablation
If our sample is not sealed then surface contamination and sample ablation may be a concern. We do not know what happens to the sample when we manipulate it. If the sample is subjected to high voltage then ion wind or corona discharge may erode the sample, attract airborne radioactive particles (and thus contaminate the sample with a ‘concentrate’ of natural radioactivity), or create deposits (such as dirt or dust) that would screen some of the ‘active’ sample material and thus influence the count. The least we can do is weigh our sample using high-precision digital scales (0.001 g resolution or better) and note any weight changes: a weight decrease would indicate ablation whereas a weight increase would indicate contamination.
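To decide whether a weight change is real rather than scale noise, one can combine the scatter of repeated weighings with the quantization step of the display. A hedged sketch (the function, the 3-sigma criterion and the readings are my own illustrative choices):

```python
import statistics as st

def weight_change_significant(before_g, after_g, resolution_g=0.001):
    """Decide whether a before/after weight difference exceeds
    what the scale can resolve. `before_g`/`after_g` are lists of
    repeated readings in grams. Returns (delta, err, significant).
    """

    def mean_err(xs):
        m = st.mean(xs)
        # standard error of the mean from repeat-to-repeat scatter
        scatter = st.stdev(xs) / len(xs) ** 0.5 if len(xs) > 1 else 0.0
        # rms error of uniform quantization at the display resolution
        quant = resolution_g / 12 ** 0.5
        return m, (scatter ** 2 + quant ** 2) ** 0.5

    m1, e1 = mean_err(before_g)
    m2, e2 = mean_err(after_g)
    delta = m2 - m1
    err = (e1 ** 2 + e2 ** 2) ** 0.5
    return delta, err, abs(delta) > 3 * err   # 3-sigma criterion

# Five weighings before and after a (hypothetical) treatment
before = [5.102, 5.103, 5.102, 5.103, 5.102]
after = [5.099, 5.100, 5.099, 5.099, 5.100]
delta, err, significant = weight_change_significant(before, after)
print(f"delta = {delta * 1000:+.1f} mg ± {err * 1000:.1f} mg, "
      f"significant: {significant}")
```

In this invented example a 3 mg loss stands well clear of the combined uncertainty, so it would count as evidence of ablation; a change within the error band would not support any claim either way.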
Sampling
If we are working with liquid, gaseous or dust-like samples then a whole other issue arises – the issue of representative sampling and sorption. This is such a complicated issue that I will not even dwell on it in this post. Suffice it to say that representative sampling, even in the case of chemical compounds, is a very challenging process, especially when we look for small changes in material properties. Adequate mixing and homogeneity are difficult to ensure. There is also the possibility of contamination: the larger the system, the more contaminants it will harbor, often unusual ones in unexpected places. Then there is also the issue of sorption. All materials – and especially steel – are porous, porous enough for radioactive samples to be absorbed by the surface of the apparatus. The greater the surface area, the greater the sorption. Therefore an ideal apparatus must be small and ideally electropolished on the inside to minimize the surface area. In other words, if we introduce a small radioactive sample into an apparatus and then intend to sample the gas or the fluid, our results are likely to be meaningless, as it would be extremely hard to estimate just how much of the material we would lose due to sorption on the internal surfaces of the apparatus.
Statistical Analysis
Repeatability & Control
Last but not least, we must always run a control experiment. Suppose we are working with a sealed sample, we are checking the weight so we know that we are not losing or gaining mass, our sample holder provides consistent results, our detector does not drift, the background is stable, and we detect a statistically significant difference (P < 0.05) between the ‘before’ and ‘after’ counts. Then we should run a control experiment, where we treat the sample almost the same way as during the experiment, but omit some critical condition that in our mind is responsible for the effect. Maybe it is a certain gas that we do not admit, or maybe we set the current and voltage to levels that we deem insufficient to accomplish anything useful. In other words, we find a way to ‘kill’ the process without radically changing the experimental conditions. This is our control. The control experiment must yield no change in counts. If it does, then our results are invalid and we cannot make any claims.
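For equal counting times, the significance of a before/after difference can be estimated directly from Poisson statistics, since the variance of a count equals the count itself. A minimal sketch (the counts are invented for illustration):

```python
from math import sqrt, erfc

def poisson_z(n_before, n_after):
    """Significance of a change between two total counts taken over
    equal live times. For Poisson counts the variance of each count
    is the count itself, so the difference has
    sigma = sqrt(n_before + n_after).
    Returns (z, two_sided_p).
    """
    z = (n_after - n_before) / sqrt(n_before + n_after)
    p = erfc(abs(z) / sqrt(2))   # two-sided p-value for a normal z
    return z, p

# Hypothetical treated sample: counts drop noticeably
z1, p1 = poisson_z(100_000, 98_000)
# Hypothetical control run: counts stay put within noise
z2, p2 = poisson_z(100_000, 100_300)
print(f"experiment: z = {z1:.1f}, p = {p1:.2g}")
print(f"control:    z = {z2:.1f}, p = {p2:.2g}")
```

The same test applied to the control run must come out insignificant; and remember that this z-value only accounts for counting noise, so the positioning and drift errors estimated earlier must be folded in before claiming anything.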
Moreover, our experiment must work every time and our control must fail every time. That is, every time we run the actual experiment the counts must change, and every time we run the control the counts must not change. And we should repeat this many times and get consistent results. This is what we need to prove the effect.
Conclusion
The above list is incomplete. Carl Sagan famously said that “extraordinary claims require extraordinary evidence”. Extraordinary in this context means scrutinized like hell. People make mistakes (there is such a thing as experimenter bias). Measurement tools malfunction. The environment plays tricks on us. We are dealing with complex systems, which behave in unexpected ways, so it is natural that we may miss something. This is where ‘peer review’ comes into play. If we have spent enough time scrutinizing our own results and cannot find a mistake in them – it is time to share our results with our colleagues and see if they can spot a problem or suggest that we try this or that. When we follow through with their suggestions we may eventually meet the publication standard and have a paper accepted by a major journal.
The bottom line is – conclusive measurements require a lot of work! The process only looks simple but, like most experimental physics, is anything but. It will take countless measurements, countless instrument calibrations and recalibrations, and many days or months to arrive at a conclusion. And if you do not see this level of diligence in a publication – it is not worth the paper it is written on. The natural world is complex and at times confusing, and doing science is hard. But we must do the work in order to gain the understanding.