
Research Studies


shimmerz

I find that there are so many conflicting conclusions among studies. It hurts my head at times. One study firmly says 'this'; another firmly states 'not a chance'. Each seems to have valid data to back it up.

So the question is, how do you actually determine the good from the bad? If a study makes sense to you, do you apply it to see if it works for you? Do you latch onto a study's conclusions as gospel truth? When you research, do you look at many angles of studies relating to the topic?

How do you deal with this information age, where virtually any conclusion can be backed up by one study while another negates it?
 
First thing I do is look at their sample size & timeframe.

American studies tend to be small & short. A "big" study may have 5,000 participants over 5 years. Average is only a few hundred for a few months. Meanwhile, hop over into socialized med countries? A "big" study may have 200,000 - 2,000,000 participants, over 30/60/90 years. Now, not everything needs to be big... But if you have 2 studies with opposite results? One tiny and short, the other huge & comprehensive? Guess which one is more likely to be an accurate representation?
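
To put rough numbers on that intuition: the margin of error on a survey-style estimate shrinks with the square root of the sample size. A quick sketch (Python; the 30% rate and the study sizes are made-up figures for illustration, not from any real study):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% confidence half-width for an observed proportion p in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: the same observed 30% rate, in studies of different sizes.
for n in (300, 5_000, 200_000, 2_000_000):
    moe = margin_of_error(0.30, n)
    print(f"n = {n:>9,}: 30% ± {moe * 100:.2f} percentage points")
```

With 300 participants, that observed 30% is really "30%, give or take about 5 points"; with 2,000,000 it is pinned down to within about 0.06 points, roughly 80 times tighter. Size isn't everything, but it's the first sanity check.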

Second thing I do is look at their methods.

The whole autism vaccine thing? Never passed Peer Review. Not the more famous "study" with falsified results, but the original one... which, in the half dozen schools it polled (a ridiculously small sample to begin with)? Included a school for autistic kids. That's like going to Catholic mass, asking how many folks there are Christians, and extrapolating how many Christians there are in the world from that unrepresentative sample. It was laughed out of the scientific community. No one was more surprised than the scientists when the news grabbed onto it with both hands, and other "studies" (like the famous debunked one) started cropping up all over.

Those are my baselines for even bothering to read further. But here's a great cheat sheet:

[Attached image: a cheat sheet on spotting bad science, including 'cherry-picked results']
 
Ah, thank you both. Anthony, I will read your link shortly.

The thing that sticks out in your embedded info, Friday, is the 'Cherry Picked' results. I am asking, with all due respect, for opinions on this problem I seem to be stuck on.

Your example is good. Taking the autism/vaccine study as an example, I find two opposing links that each seem to cite equally credible studies.

Link Removed
vs
http://thinktwice.com/HET_study2.pdf

If the VAERS database exists (referenced in link 2), clearly there is an issue with link 1, which makes no reference to the issues found in link 2, or even to the fact that the database exists. That seems to me to be 'Cherry Picking', since VAERS is an FDA-organized data repository. Shouldn't link 1 somehow reference the data in link 2 to give a full picture? Am I interpreting this correctly?

I am neither for nor against vaccinations, not because I don't care, but because I haven't researched, so please don't be misled into thinking that I have a bias here. I am just attempting to pick up tricks that others may have to help me cut through the massive amounts of conflicting data I am finding out there. Thanks so much. I am more than open to comments explaining the errors of my ways!
 
Lol... I probably should have picked a less volatile example. It was just the first that came to mind.

A great example of cherry picking would be a PTSD example we're all familiar with: Fight vs Flight.

If a study's conclusion was that PTSD makes one violent? You're familiar enough with PTSD to know that they cherry picked their results to match what they wanted their conclusion to look like: by ignoring all the people with flight response, and ignoring all the people with controlled fight response, and only using respondents who match their preconceived notion. (That, or they conducted the study in a maximum security prison for violent offenders, which also biases the results and is not a representative sample... also known as cherry picking in advance, or staging the results.) Bad science.
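
A toy simulation of that kind of cherry picking (Python; the response proportions are invented for illustration, not real PTSD statistics):

```python
import random

random.seed(42)

# Hypothetical population: most trauma responses are flight or controlled
# fight; only a small minority show uncontrolled aggression. Made-up weights.
RESPONSES = ["flight", "controlled_fight", "uncontrolled_fight"]
WEIGHTS = [0.55, 0.35, 0.10]

population = random.choices(RESPONSES, weights=WEIGHTS, k=10_000)

# Honest analysis: report the whole sample.
violent = population.count("uncontrolled_fight")
print(f"Full sample: {violent / len(population):.1%} show uncontrolled fight")

# Cherry-picked "analysis": quietly drop everyone who doesn't fit the thesis.
kept = [r for r in population if r == "uncontrolled_fight"]
print(f"Cherry-picked sample: {len(kept)} of {len(kept)} 'violent' (100% by construction)")
```

Same underlying population, opposite headline: drop the inconvenient respondents and the "violence rate" jumps from roughly 10% to 100%.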
 
familiar enough with PTSD to know that they cherry picked their results to match what they wanted their conclusion to look like: by ignoring all the people with flight response, and ignoring all the people with controlled fight response
So then it is important to know something about the topic itself prior to looking at studies. One would not recognize cherry picking if the topic was not known, at least to some extent, to the researcher, yes?
 
That's the importance of the Peer Review Process.

If a study has passed Peer Review? Then it was made public to all the professionals (and students) in that field to go over with a fine-tooth comb. Anyone with accreditation can chime in with questions, comments, etc., and then the whole mess goes to a committee/panel/board to make sure it's really been thrashed out / put through its paces, so that experts in the field can look for tell-tales, bad procedure, bad design, etc. that are particular to their field. An astrophysicist being best qualified to look at astrophysics, a medical doctor medicine, a geneticist genes, a chemist chemistry... et cetera.

It used to be that no study was published without first passing Peer Review. These days? Companies pay for non-academic studies to promote their products. Student studies are published in student mags, but taken by press & public as if they were professional journals, not a 3-month project for a class the student is already failing. And the press can (and does) grab anything exciting-looking and splash it about... whether it's passed peer review, is still in process, doesn't qualify for Peer Review, or was even flat-out rejected as bad science... as if it's all gospel.

I read a lot of peer-reviewed journals... so my knee-jerk isn't to first check for Peer Review status, but really, that should probably be the first thing one looks into: whether it's passed, or not, or is in process, or isn't even being submitted to peer review (any serious study will be; otherwise it's just advertising or student work... and good student work is picked up by researchers, fleshed out, and sent through peer review).

A way to look for cherry picking if you're not familiar with the subject, though, is numbers: if they have 700 participants, 60 of whom demonstrated XYZ, and they're holding those 60 up as proof positive? Waaaaaait a second! What happened to the other 640??? More often, though, you won't find such blatant abuse. The researchers are cherry picking before they even start. Which does require familiarity to be able to spot... and we're back to why Peer Review is so dang important.
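
That numbers check is just base-rate arithmetic. A tiny sketch using the figures above (Python; "XYZ" stands in for whatever effect the study is selling):

```python
participants = 700
showed_xyz = 60

rate = showed_xyz / participants
print(f"{showed_xyz} of {participants} participants = {rate:.1%} showed XYZ")
print(f"Unaccounted for: {participants - showed_xyz} participants ({1 - rate:.1%})")
# 60/700 is about 8.6%: a headline built on those 60 alone quietly
# ignores the 91.4% of the sample that didn't show the effect.
```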
 
All of the above, plus (backing up two steps)

Ok, so we read studies because we want to know what is true. We want to know the point at which "the world pushes back."

Studies are just individual attempts to get some aspect of the world to push back.

Truth about the world, generally speaking, comes in three flavors: correspondence (do our ideas match up with observed reality?), coherence (does this bit of knowledge or idea fit in with, or contradict, other things we believe to be true?), and pragmatic (yeah, yeah, but does it WORK and get me the results I want?).

Studies gather data, just information, bits and pieces about the world. The data is the data. If we monkey with the data (include some, exclude others), no matter how well-intentioned or justified we think we are, we are "cherry picking." Most cherry picking goes on under the radar, before anything is ever written up. So we must look at study design. Is this a good investigation? Are we setting things up in such a way that the bit of reality we are interested in will actually "push back"? Or are we just going to get what we expect to see? This is the most crucial (and boring) part: the study design. One has to think like a detective mapping out an investigation before the fact.

A study is only as good as the theory that informs it. Here is a good study about a pretty straightforward question: at the level of individual human interaction, are there basic emotions that have the same facial expression associated with them across cultures? Paul Ekman set out to demonstrate that there were no such basic emotions. He got together some funding, a list of emotions (with prototypical situations that evoked them, to test the words from culture to culture), a camera, and some travel money. He told people stories and asked them to compose their faces into the expressions of the emotions the story evoked. Damn. Everyone had the same expressions. WRONG. So he demonstrated that his own hypothesis was wrong. But maybe the effect was subtle? So he and his crew put together a super complex coding system to identify and record facial muscle activation in expressions. Then they took photos of people in real-life situations (and gathered photos of people in situations) that captured the emotions. The muscle activation mapping proved it: universal emotional expression for the basic emotions. The world pushed back. Lots of interesting stuff then grows out of this research.

That's pretty simple. Other things are not so simple, and in those cases the more you know, the better your study is likely to be. The thing is, a good study controls for/eliminates confounding variables: those things that will make it look like the thing you are looking for is pushing back, even when it's not. But if you don't know what those things are... well, the thing is tricky.
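
A toy illustration of a confounder "pushing back" falsely (Python; every probability here is invented): a treatment with zero real effect looks effective in a naive comparison, because severity drives both who gets treated and who recovers.

```python
import random

random.seed(0)

# Hypothetical setup: the treatment has NO real effect. Severity is the
# confounder: severe cases recover less often AND are less likely to be
# given the treatment (say, doctors reserve it for mild cases).
def simulate_patient():
    severe = random.random() < 0.5
    treated = random.random() < (0.2 if severe else 0.8)    # assignment depends on severity
    recovered = random.random() < (0.3 if severe else 0.8)  # outcome depends on severity only
    return severe, treated, recovered

patients = [simulate_patient() for _ in range(100_000)]

def recovery_rate(rows):
    return sum(recovered for _, _, recovered in rows) / len(rows)

treated = [p for p in patients if p[1]]
untreated = [p for p in patients if not p[1]]
print(f"Naive: treated {recovery_rate(treated):.0%} vs untreated {recovery_rate(untreated):.0%}")

# Controlling for the confounder: compare within each severity stratum.
for severe in (False, True):
    t = [p for p in patients if p[1] and p[0] == severe]
    u = [p for p in patients if not p[1] and p[0] == severe]
    label = "severe" if severe else "mild"
    print(f"{label:>6}: treated {recovery_rate(t):.0%} vs untreated {recovery_rate(u):.0%}")
```

The naive comparison shows roughly 70% vs 40% recovery, a "strong effect", while within each severity group the treated and untreated rates match. If the study's design never measured severity, nobody could run that second comparison.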

Peer reviewers are valuable if they are good at experimental design, don't have an ax to grind, and have good theories of knowledge. Not all do, unfortunately. I always wish the peer reviewers would write up the strengths and weaknesses of the study in question...

I always want to look at the raw data - which people don't publish nearly as often as they could - because I am always suspicious of people's interpretations....

You have to ask the right questions to get real information out of stuff.
 
Bouncing off what has been said: when digging through research/studies, you - the reader - have to be aware of your own confirmation bias. It is easy for laypeople to accidentally skip what is not relevant to them - but it actually matters.

I got interested in the MTHFR gene and its link to depression. Physical evidence! Whooot! And the first study I read eventually got down to the data, which was about the gene showing potential as a marker in post-menopausal women and (I think) type 2 diabetes. Also, the numbers were very speculative; the research is still in its infancy. Yet I really wanted to ignore those two things - I'm not post-menopausal or diabetic - just because I wanted the dang thing to be true. Sigh.

The boring parts - the charts, tables, graphs - are much more important than the narrative, in my opinion.

Finally - if you really want to know about a research subject, find at least 3 studies, if not more. Never stop at 2.
 
So then it is important to know something about the topic itself prior to looking at studies.
For me the answer is yes. The less informed you are about the topic, the worse your interpretation, understanding, and critical analysis will be.

I use the "Dunning-Kruger effect" to help me judge the value of the speaker. Basically the way it works is the less someone knows about a topic, the more they think they know...and the opposite is ... the more someone learns about a topic, the more they realise how much they have yet to learn. I was so excited when I found out ther is a name for this effect...and it's usually quite easy to spot which side someone is on this particular fence.

Below is a more formal explanation copied from the web... not sure how reputable the source is, but I like the quote below in particular.

“I am wiser than this man, for neither of us appears to know anything great and good; but he fancies he knows something, although he knows nothing; whereas I, as I do not know anything, so I do not fancy I do. In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know.”
—attributed to Socrates, from Plato, Apology

The Dunning-Kruger effect, named after David Dunning and Justin Kruger of Cornell University, occurs where people fail to adequately assess their level of competence — or specifically, their incompetence — at a task and thus consider themselves much more competent than everyone else. This lack of awareness is attributed to their lower level of competence robbing them of the ability to critically analyse their performance, leading to a significant overestimate of themselves.
 
Thanks so much everyone. I am looking through each of your posts carefully. I do tend to have a very large bias... I recognize this more so now than ever. My bias is that many studies have underlying biases. Aaaarrrggghhh! If I believe that studies are done with underlying ulterior motives, my bias will negate all studies. There is a basic mistrust underlying any study I look into.

So for instance, if I believe that big pharma is concerned only with monetary gain, without a good set of ethics, and I can't really determine who else gains from the monetary benefits that pharma takes in, how exactly does one determine a reliable source? My bias, and potentially their bias, makes researching this issue counterproductive. lol. Honestly, I have not been deemed paranoid, even though I may sound it!
 
Hence the need to look directly at the data. And, in some sense, trust the process, which is the peer review process. The idea is that the truth is one, and so if you know a broad sweep of things, they will, somehow or other, fit together. Drug studies are tough in one way, and easy in another. Usually there is more than one. There are studies on things that are closely related, there is understanding (sometimes) of the basic mechanisms. So... it is possible to "triangulate" between them to see if they are making sense.
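
That "triangulating" has a simple quantitative version: inverse-variance weighting, where each study's estimate counts in proportion to its precision. A minimal sketch (Python; the three effect sizes and standard errors are invented for illustration, not from any real trials):

```python
# Fixed-effect inverse-variance pooling of several hypothetical studies.
# Each entry: (estimated effect, standard error). Invented numbers only.
studies = [
    (0.42, 0.20),  # small study, wide uncertainty
    (0.15, 0.05),  # large study, tight uncertainty
    (0.22, 0.08),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled estimate: {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
# A study that lands far outside the pooled interval is the one to
# scrutinize for design problems, cherry picking, or conflicts of interest.
```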

Ulterior motives are inescapable. That doesn't mean science as a process doesn't eventually get to the bottom of things.
 