Today I finally got to run a very crude version of the infamous ‘bubble fusion’ experiment. This was by no means a careful replication of previous work, but rather a foolhardy yet hopeful attempt to detect an interesting signal without any of the complexity of a ‘proper’ scientific experiment. I figured that my He-3 NEUTRON BANK 10 x 12″ and GAMMA-PRO 5″ systems are so insanely sensitive that if there is any gamma or neutron emission, I will detect it.

Experiment Setup

The experiment (Fig. 1) was designed as follows: to initiate acoustic cavitation I procured a Branson SFX550 sonifier, which is extremely powerful; it is more than capable of producing the acoustic pressures that were associated with bubble fusion in published (although controversial) works. For the working fluid I chose transformer oil: this oil is thoroughly degassed and has very low vapor pressure, and is therefore a decent medium for cavitation experiments (high vapor pressure prevents strong cavitation because the bubbles fill up with vapor, which arrests their collapse). For the working gas I chose 99.9% pure D2 from Sigma-Aldrich. I used a stainless steel diffuser (Fig. 2) with 1-micron pore size to produce tiny deuterium bubbles in the oil, which I collapsed by firing up the Branson sonifier. The sonifier in action was scary: I could clearly see the cavitation cloud under the horn, and the screech was ear-piercing. The fluid also mixed violently, and droplets occasionally spilled over. The intensity setting was at 70%, continuous waveform. I also tried lowering the intensity to 20%, which resulted in much less screeching, less oil movement, and less spilling, but I could see the cavitation cloud just as clearly. At high intensity the oil warmed up rather quickly: in less than 10 minutes the cup was hot to the touch, so I had to let it cool (high temperature increases vapor pressure, which is detrimental).

Bubble Fusion Experiment Setup
Fig. 1. Bubble Fusion Experiment Setup; He-3 NEUTRON BANK 10 x 12″ is on the far left, GAMMA-PRO 5″ is in the middle, Branson SFX550 sonifier is on the right. The sonifier horn is immersed in transformer oil in a plastic cup.
Fig. 2. Stainless steel diffuser with 1-micron pore size. The pores are so fine that they are not visible in the photo.

Measurement Process

I organized my measurements as follows: first I captured the ‘background’ neutron and gamma counts (Fig. 3-4), then I started the deuterium flow to produce bubbles, fired up the Branson sonifier, and captured the ‘experiment’ counts. I interleaved the ‘background’ and ‘experiment’ measurements and recorded about two dozen measurements to get good statistics. I also occasionally cavitated without injecting deuterium bubbles (with the valve on the lecture bottle off). I also played with the deuterium flow, sometimes producing a lot of bubbles and other times very little. When there were a lot of bubbles, cavitation did not produce as much noise: foam was forming on top of the oil surface, which must have dampened the sound coming from the oil.

Fig. 3. Neutron Bank counting background.
Fig. 4. Gamma detector counting background.
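Interleaving helps because slowly varying conditions (temperature, electronics drift, people moving around the lab) affect adjacent ‘background’ and ‘experiment’ runs almost equally. A minimal Python sketch with made-up numbers (none of this is the actual logged data) shows how paired differences between interleaved runs cancel a slow drift:

```python
import random
import statistics

# Hypothetical count rates (CPS) for illustration only -- not the logged data.
# Interleaving 'background' (B) and 'experiment' (E) runs, B E B E ...,
# means any slow drift hits both series almost equally.
random.seed(1)
drift = [0.02 * i for i in range(24)]  # slow upward drift in the ambient rate
background = [10 + d + random.gauss(0, 0.3) for d in drift[0::2]]
experiment = [10 + d + random.gauss(0, 0.3) for d in drift[1::2]]

# Paired differences between adjacent interleaved runs cancel most of the drift.
diffs = [e - b for b, e in zip(background, experiment)]
print(f"mean paired difference: {statistics.mean(diffs):+.3f} CPS")
```

With no real signal present, the mean paired difference hovers near zero even though both raw series drift upward; comparing a block of early ‘background’ runs against a block of late ‘experiment’ runs would instead have shown a spurious offset.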

Observations

I did not observe any excess neutron or gamma counts correlated with cavitation. But I did observe curious systematic errors which, to an untrained or biased eye, would have appeared as a genuine signal: mechanical (i.e. thoughtless) application of statistical analysis was producing P-values that indicated unquestionable statistical significance (P = 0.000)! Yet critical analysis revealed that the apparently statistically significant signal was caused entirely by systematic errors.

Neutron Counts

Using my PulseCounter software I produced a partial neutron count summary limited to the first 26 measurements, which is shown in Fig. 5 (each measurement was 60 seconds long, with counts logged in CPS).

Fig. 5. Partial summary of neutron counts indicating that the ‘experiment’ counts are significantly lower than ‘background’ counts (P = 0.009).

The first two dozen measurements followed a curious pattern where the ‘background’ counts were consistently higher than the ‘experiment’ counts. Of course, this behavior is impossible, because nothing that we know of can make background neutrons magically disappear. The detector signal was clean, without EM interference, and the thermal neutron spectrum appeared normal. So what did this mean? It meant that either there was a curious periodicity in the background neutron flux that coincided with the frequency of my measurements, or my measurement equipment was somehow affected by the action of the Branson sonifier (which produced a LOT of high-frequency, ~20 kHz, acoustic noise).
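The ‘mechanical’ significance test for two counting runs is easy to reproduce. Here is a minimal sketch assuming simple Poisson statistics, with hypothetical summed counts (these numbers are made up for illustration and are not my logged data):

```python
import math

# Hypothetical summed neutron counts over the interleaved runs --
# illustrative values only, not the actual measurements.
n_bkg, n_exp = 1300, 1180

# For Poisson counts, the variance of (n_bkg - n_exp) is n_bkg + n_exp,
# so a simple two-sided z-test of 'background vs experiment' is:
z = (n_bkg - n_exp) / math.sqrt(n_bkg + n_exp)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal tail
print(f"z = {z:.2f}, p = {p:.3f}")
```

A consistent few-percent offset between the two series, whatever its cause, is enough to push the p-value below 0.05; this is exactly how a systematic error masquerades as a statistically significant signal.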

After I waited for the oil to cool and restarted my butterfly-style measurements (interleaving ‘background’ and ‘experiment’ counts), the pattern apparently broke down and the difference between the ‘experiment’ and ‘background’ counts became less pronounced, although it did not disappear completely. Fig. 6 displays a summary of all neutron counts. The difference between the ‘experiment’ and the ‘background’ is not statistically significant, but only barely so.

In a way I was lucky that the higher counts were associated with the ‘background’ measurements. Because this result is nonsensical, I knew that it had to be caused by a systematic error. The situation would have been very different if I had been getting higher counts when I expected them, i.e. during the cavitation runs. That would have prompted a (perhaps premature) conclusion that I was indeed seeing a neutron flux.

Fig. 6. Complete summary of neutron counts indicating that the ‘experiment’ counts are not significantly different from the ‘background’ counts (P = 0.051).

Gamma Counts

Gamma counts were interesting as well: I was getting consistently higher counts during cavitation than during the background runs (Fig. 7).

Fig. 7. Summary of the first 9 gamma counts. There is an unquestionable difference (P = 0.000) between ‘experiment’ and ‘background’.

P = 0.000! How could this be? Well, the 5″ diameter by 5″ long NaI(Tl) detector is a humongously sensitive beast: when I looked at the spectrum I could clearly see the Cs-137 peak from a check source that was stored elsewhere in the lab in a lead pig. What I was really seeing was a very small (less than 1%) difference between the ‘experiment’ and ‘background’ counts. I realized that I was moving around as I conducted the experiment, and my body was therefore blocking some of the background gamma radiation from reaching the detector. To test this hypothesis I moved the detector closer to the cup with the cavitating bubbles. As a result, both the ‘background’ and the ‘experiment’ counts decreased, because now the cup was blocking some of the background gammas. This time I also managed to stand back so that my body was not blocking as much of the background flux as before. Consequently, for the next batch of measurements (Fig. 8) the new ‘background’ and ‘experiment’ counts remained consistent. There was also no increase in the ‘experiment’ counts when I moved the detector point-blank to the cavitation vessel. This close positioning was a necessary test of the hypothesis that the excess gamma counts were originating from the vessel: since the counts actually decreased when the detector moved closer, and the difference between ‘background’ and ‘experiment’ disappeared, I effectively disproved the hypothesis that the excess ‘experiment’ counts originated from the cavitation vessel.

Fig. 8. Comparison of the ‘experiment’ and the ‘background’ gamma counts with the GAMMA-PRO 5″ detector positioned at point blank to the cavitation vessel. This time there is no difference whatsoever between the ‘experiment’ and the ‘background’ counts.
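The ‘P = 0.000’ in the first gamma batch is a consequence of sheer count volume: with thousands of counts per second, even a sub-percent systematic shift becomes overwhelmingly ‘significant’. A sketch with hypothetical rates (not the actual data) makes the point:

```python
import math

# Hypothetical rates for illustration only -- not the logged data. A big
# 5" x 5" NaI(Tl) detector sees thousands of background counts per second,
# so a sub-1% systematic shift (e.g. the experimenter's body shadowing the
# detector) produces a vanishingly small p-value.
rate_bkg, rate_exp = 3000.0, 3020.0   # CPS, about 0.7% apart
t_total = 9 * 60.0                    # nine 60-second runs, as in Fig. 7
n_bkg = rate_bkg * t_total
n_exp = rate_exp * t_total

# For Poisson counts, the variance of the difference is the sum of the counts.
z = (n_exp - n_bkg) / math.sqrt(n_exp + n_bkg)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
print(f"z = {z:.1f}, p = {p:.1e}")    # software rounds this to 'P = 0.000'
```

The tiny p-value is mathematically correct; what it cannot tell you is whether the 0.7% offset came from the cavitation vessel or from where you happened to be standing.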

Scientific Honesty & Integrity

It is hard to eliminate experimenter’s bias from an experiment. Regardless of what we are trying to accomplish, and whether we want it or not, we almost always lean one way or the other when interpreting experimental results. Sometimes we are too hopeful that we have made a discovery, and other times we are too critical and dismissive of positive results. About the only way to remain objective is to report our results truthfully, without discarding measurements that did not match our expectations, even if we have determined the sources of systematic errors. And we should certainly report observations that we could not explain, even if they appear preposterous. Doubt kills science. For example, Erwin Schrödinger discovered the relativistic Klein-Gordon equation but did not publish it because it did not predict the correct hydrogen spectrum. Klein and Gordon later discovered the same equation and published it, and we now know that it correctly describes spin-less particles but does not apply to electrons. When measuring the electron charge, Robert Millikan discarded measurements that yielded 1/3 of an electron charge as outliers, because fractional charges were not known to exist back then; and now we wonder whether he was somehow able to detect free quarks…

So, I must admit that I did not fully resolve the issue of why my ‘experiment’ neutron counts were systematically lower than ‘background’. I suspect that the neutron bank was affected by high-amplitude / high-frequency acoustic noise. I did not bother to investigate. However, in a ‘proper’ scientific experiment one absolutely has to drill down and eliminate the discovered systematic errors prior to proceeding with new measurements. Otherwise one will be acquiring faulty data.

The Point

My point is that one has to be relentlessly critical of one’s own results and undertake countless sanity checks in order to prove that a signal is real, regardless of how convincing the measurement values are and how good the statistics look. Sensitive instruments will find a way to trick us: everything is connected to everything, and one cannot remove human interference from the equation.

So question your own results. But report all of your data and all of your observations truthfully.