I asked the mods here to let me replace the old ranting salt study thread with this new, more refined post. Given how heated the argument has become, I think the debate should stay as "intellectual" as possible. Like I said before, I'm not trying to attack Mr. Borneman or Ms. Lowe at all. I'm ONLY questioning the merits of this study.
Ok so here it goes again...
Basics first:
First, let's talk a little about experimental design so you all can understand what's going on here:
The most basic kind of experimental design looks at one dependent variable responding to (or correlating with) one independent variable. For example, let's pretend you're doing a study that looks at muscle growth (dependent variable) and steroid use (independent variable). You could take 100 mice, inject half with steroids and half with saline solution, give them all the same exercise routine and diet for 3 months, then measure their muscle mass at the end. You have to use a lot of mice in both the experimental group getting the steroids and the control group getting saline in order to minimize error due to differences among individual mice. To see this more clearly... suppose you had used only two mice, one control and one getting steroids. At the end of the 3 months, you wouldn't be able to "trust" the results because you couldn't be sure the mouse that got the steroids wasn't at a genetic advantage for muscle growth. I think everyone gets this basic idea, right?
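To put a rough number on that intuition, here's a tiny simulation sketch (in Python, with numbers I made up purely for illustration, not anything from the study): with 50 mice per group, a simple t-test can pick out a modest steroid effect despite mouse-to-mouse variation, while with one mouse per group there is nothing to average over.

```python
# Toy simulation (made-up numbers) of the two-group mouse experiment:
# with many mice per group, individual genetic variation averages out
# and a t-test can detect the steroid effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_experiment(n_per_group, true_effect=5.0, individual_sd=8.0):
    """Simulate final muscle mass (arbitrary units) for control vs. steroid mice."""
    control = rng.normal(50.0, individual_sd, n_per_group)
    steroid = rng.normal(50.0 + true_effect, individual_sd, n_per_group)
    return stats.ttest_ind(steroid, control).pvalue

print("p-value with 50 mice per group:", run_experiment(50))
# With 1 mouse per group there is no within-group variance for the test to
# work with, which is exactly the "can't trust the result" problem above.
```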
Moving on...
So, what do you do when you don't have 100 mice? What if you only have two mice? Can you still do the study? Perhaps. You might be able to do a repeated measures study. What the heck is a repeated measures study? Glad you asked...
One of the most popular and well-known repeated measures designs is the pretest/posttest experimental design. For example, you can take the two mice and measure their muscle mass at the very start of the study, then weekly for 4 weeks. Then you inject both with saline solution and continue the weekly measurements for another 4 weeks. Next you inject both with steroids and repeat the weekly measurements for yet another 4 weeks. Because you're not comparing results between two different mice, but results over time on the same mice, you gain statistical power. Get it? Think about it for a sec and you will.
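Here's a minimal sketch of that idea with only two mice (again, numbers invented just for illustration): each mouse serves as its own control, so the big baseline difference between the mice cancels out and only the within-mouse change from saline to steroids matters.

```python
# Pretest/posttest sketch with two mice (invented numbers): a paired
# comparison of each mouse against itself ignores how different the two
# mice are from each other to begin with.
import numpy as np
from scipy import stats

# weekly muscle mass for each mouse: 4 weeks on saline, then 4 weeks on steroids
saline_phase  = np.array([[50, 51, 50, 52],    # mouse A (genetically "big")
                          [30, 31, 31, 32]])   # mouse B (genetically "small")
steroid_phase = np.array([[55, 57, 58, 60],
                          [35, 36, 38, 39]])

# compare each mouse's own average under saline vs. under steroids (paired test)
result = stats.ttest_rel(steroid_phase.mean(axis=1), saline_phase.mean(axis=1))
print(result.pvalue)  # the ~20-unit gap between the two mice never enters the comparison
```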
This, of course, is not the only example of a repeated measures design; there are all kinds of them. But the basic idea is the same: test the same individuals under different "treatments" over time. You tend to do this when you don't have enough subjects to separate into study groups as you would in a "normal" experiment.
Now, finally, about the salt study:
We have 10-gallon tanks, one for each salt plus a control of natural sea water. That's one independent variable (salt brand) and multiple dependent variables measured over time. Now, first off, what kind of study does this look like? Does it look more like my first example with 100 mice or my second with 2 mice? It kinda looks like a mix of both, right? Let's take a deeper look...
Statistically and conceptually, it looks very much like a classic experimental design flawed by having only one subject per level of the independent variable (i.e. one tank per salt).
Note: "We show that there can be extreme variation among identical tanks, even without any live animals" - Toonen and Wee (http://www.advancedaquarist.com/2005/7/aafeature)
Mr. Borneman, however, would like us to think of this as more like a kind of repeated measures study to be analyzed with ANOVA (analysis of variance, the statistical model used for this kind of data). Even being most generous with the boundaries of logic and reason, I could only accept this claim if the salt brands were consistent. But they are not. Again, as Mr. Borneman himself concedes, the salt brands are often inconsistent even between batches. So, even with all the power and forgiveness one can gain from a repeated measures design, it doesn't apply here, because the batches of the salt brands weren't consistent and the experimenters only made this inconsistency more pronounced by doing 100% water changes with each new batch of salt.
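And here's a quick sketch (my own toy numbers, not the study's data) of why repeated readings from a single tank don't rescue the design: if tanks differ on their own, as Toonen and Wee showed, an ANOVA run on those readings will happily report a "salt effect" even when no salt effect exists.

```python
# Toy illustration (all numbers invented) of the "one tank per salt" problem:
# simulate ZERO salt effect but give every tank its own random quirk (the
# Toonen & Wee tank-to-tank variation), then run a one-way ANOVA that treats
# the weekly readings from each tank as if they were independent replicates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_tanks, n_weeks = 5, 12
tank_quirk = rng.normal(0.0, 0.5, n_tanks)      # each tank drifts to its own baseline
readings = [8.0 + tank_quirk[i] + rng.normal(0.0, 0.1, n_weeks)  # weekly alkalinity-like values
            for i in range(n_tanks)]             # no salt effect is simulated at all

f_stat, p = stats.f_oneway(*readings)
print(p)  # tiny p-value: the ANOVA "finds" a salt difference that, by construction, isn't there
```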
Now for how this study could have been done (in light of the statistical power afforded by some repeated measures study designs):
Instead of studying one salt in one tank, they should have studied all the salts in all the tanks... over time. For example, the experimenters could have started with natural sea water until the tanks were "cycled." Then, every 3-4 months, they could have changed the salt brand in all the tanks until every tank had seen every salt for a 3-4 month period (taking measurements of the dependent variables at regular intervals all along the way and with each change of salt brand). Granted, there are a lot of salt brands to test, so this could take a long time. However, they could also have split the tanks into two groups of 5 and tested half the salts on one group and the other half on the other, roughly halving the time this kind of study would take.
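A rough sketch of the rotation I'm describing (hypothetical salt and tank labels, nothing from the actual study): stagger the starting salt so every tank sees every salt for one period, and at any moment each salt is running in exactly one tank. Salt effects can then be separated from the quirks of individual tanks.

```python
# Latin-square-style rotation: each tank runs each salt for one 3-4 month
# period, offset so no two tanks run the same salt at the same time.
salts = ["salt_A", "salt_B", "salt_C", "salt_D", "salt_E"]
tanks = [f"tank_{i+1}" for i in range(len(salts))]

schedule = {tank: [salts[(i + period) % len(salts)] for period in range(len(salts))]
            for i, tank in enumerate(tanks)}

for tank, order in schedule.items():
    print(tank, "->", ", then ".join(order))
# tank_1 -> salt_A, then salt_B, then salt_C, then salt_D, then salt_E
# tank_2 -> salt_B, then salt_C, then salt_D, then salt_E, then salt_A
# ...
```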
The downfall of this proposed idea, and the problem with many repeated measures studies, is that subjects can "fatigue" or "learn" over time. In the mouse example, the mice may have bulked up by the time they got the steroids, thereby perhaps limiting the additional effect the steroids could have. In this case, the tanks would be experiencing the salts at different ages... and that would be a problem. However, it would be a statistically manageable problem, since all the tanks would be aging in step with one another.
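One way to see why it's manageable (a sketch on synthetic data with column names I made up, not the authors' analysis): because all the tanks age together in the rotation above, the measurement period can simply be entered as its own factor next to salt and tank, and the salt comparison is then adjusted for tank age automatically.

```python
# Synthetic example: fit salt, tank, and period (tank age) as factors, so the
# salt effect is estimated after accounting for both tank quirks and aging.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
salts = ["A", "B", "C", "D", "E"]
rows = []
for i in range(5):                       # 5 tanks
    for period in range(5):              # 5 rotation periods
        rows.append({"tank": f"tank_{i+1}",
                     "period": period,
                     "salt": salts[(i + period) % 5],
                     "reading": 8.0 + 0.1 * period + rng.normal(0.0, 0.2)})
df = pd.DataFrame(rows)

model = smf.ols("reading ~ C(salt) + C(tank) + C(period)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # salt effect, adjusted for tank and age
```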
Ok, I have more I could say, but I'm getting tired, and I think I've made my point. I'm not being "closed-minded" and my objections are not "nonsensical"... nor am I trying to embarrass or offend the experimenters. I'm simply looking at this study with a critical eye, and right now it looks worthless.