Problem-solving research

(Response to “Hypothesis-Overdrive”)

While I agree with the critique of “hypothesis” by Glass and by Glass & Hall, I believe that basing publicly funded research on “questions” misses the point. What intrigues a researcher in the form of a question may be very hard to explain to a layperson, or even to a colleague in the same field. I prefer to explain to my students that research proposals must be “problem-solving”. Research problems are typically mechanistic, design/engineering, or targeted description (essentially the same as Freedman’s taxonomy of research problems: Freedman, P. (1960). The Principles of Scientific Research. Oxford: Pergamon Press), but inevitably the solution to scientific problems involves all three modes of research. Problems are solved by proposed designs, hypotheses, or search strategies, all three of which can be collected under the rubric of “testable solutions”. Hypotheses are tools for solving problems, and we should dispense with any hypothesis as soon as it is no longer helping us solve the problem. I would suggest that “questions” and “models” (more or less as described by Glass and Hall) are also tools for problem solving. Scientists should love hypotheses or questions no more than a journeyman loves her hammer.
Now, I suggest that anyone can come to understand a problem, even if they never understand the hypothesis. Problems can be posed as questions, but not all questions identify problems. “How does Toxoplasma infection cause mice to lose their fear of cats?” refers to a problem, but “Does Toxoplasma damage fear-processing neurons in the amygdala?” refers to a hypothesis that MIGHT solve the problem. Similarly, we can identify a design/engineering problem as a question (“How can we send humans to Mars and bring them back alive?”) or as a statement (“The problem is to send humans to Mars and bring them back alive”). Likewise, we can express the targeted description of the human genome as a question (“What is the sequence of the human genome?”), as a mechanistic question (“How can we explain the genetic basis of human disease?”), or as a search strategy (“Sequencing the human genome should reveal details critical to explaining the genetic basis of human disease”).

Stephen Stigler’s “The History of Statistics: The Measurement of Uncertainty before 1900”

I finished reading Stephen M. Stigler’s The History of Statistics: The Measurement of Uncertainty before 1900 (Harvard University Press, 1990). I confess I did not understand a lot of the math, but this book left me thirsting for what I hope will be a sequel. I was impressed with the quality of his writing and storytelling, attempting to get inside the intentions of the historical figures of whom he writes. Most of my lack of understanding, I think, is due to my own weak grasp of mathematics and statistics, not to Stigler, although there are occasional lapses I think on his part; these weren’t fatal, just minor annoyances.
This book gave me what I think is a much better insight into the stakes and problems of my own research in the sociology of scientist development, the success of which will depend heavily on the statistical techniques of social science, which is largely Stigler’s thread.
The problem he wanted to address was “why did it take so long for statistics to be adopted by the social sciences?” (my framing). One of his conclusions is that in astronomy and geodesy there was a relatively simple mathematical theory of gravitation and motion in which there were assumed constants (g, etc.). It was necessary to develop a theory of observational error and a theory of statistical analysis to allow the combination of multiple and diverse observations in order to determine the values of these constants, but there was little disagreement as to whether there were such constants. (A notable exception, which Stigler does not mention, was the question of whether c, the speed of light, was a constant. This wasn’t settled theoretically until Einstein, and experimentally not really until Michelson’s measurements.) Nonetheless, the assumption of gravitational constants and laws such as Kepler’s laws of planetary motion meant that the utility of statistical analysis was readily appreciated. Moreover, in the physical sciences it was possible to use controlled experiments to verify the constants independently.
In contrast, it wasn’t clear in the social sciences that there was anything like a “constant”. The mean values of Quetelet’s “homme moyen” were statistical constructs, not universal constants. Part of the struggle that took 200 years to complete was to re-envision the problem as well as to advance the math. In particular, it took quite a long time for the notion to become established that variability itself was an object of interest. The “analysis of variance” (ANOVA) is a technique useful in the physical sciences as well, but in social science variability is also a field of interest in its own right. A good bit of this came from advances in genetics: Francis Galton and Karl Pearson, and in the 20th century Ronald Fisher, were driven by problems of biology, genetics, and evolution, in which variability rather than constants is of prime interest.

This tension is potent today when some of us struggle with whether to represent our data as mean +/- standard deviation (SD) or mean +/- standard error of the mean (SEM). The latter presentation always “looks better” in a graph, but it emphasizes the value of the mean itself: how well do we know the average value, as if the average value itself were what mattered. The former presentation emphasizes the variability of the data. SEM can be made arbitrarily small simply by making more measurements. If we are measuring a constant such as g, this makes great sense: SEM is high when our technical methods of measuring the constant are weak, so the SEM is inversely related to our technical prowess. SD reflects the “natural” spread in the population, which makes sense in biological and social situations where we think there is natural variability. We might even be more interested in the SD than in the mean value, so that we wish to know the error in the SD! Nevertheless, technical prowess still plays a role, since crummy measurements will inflate the apparent spread. This is a persistent conundrum.
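The SD/SEM contrast is easy to see numerically. Here is a minimal sketch (in Python, with simulated data; all numbers are hypothetical) showing that SEM shrinks as we pile up measurements while SD does not:

```python
import math
import random

# Minimal sketch with simulated data (all numbers hypothetical):
# draw samples from a population with mean 50 and SD 5, then watch
# what happens to SD and SEM as the sample size grows.

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # Sample standard deviation (n - 1 denominator).
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def sem(xs):
    # Standard error of the mean: the SD shrunk by the square root of n.
    return sample_sd(xs) / math.sqrt(len(xs))

random.seed(0)
for n in (10, 100, 10_000):
    data = [random.gauss(50, 5) for _ in range(n)]
    print(f"n={n:6d}  SD={sample_sd(data):5.2f}  SEM={sem(data):6.3f}")
# SD hovers near 5 no matter how many measurements we take;
# SEM falls roughly as 1/sqrt(n).
```

The sketch makes the conundrum concrete: more measurements make the mean look ever more precise, but they do nothing to the natural spread itself.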

NIH promotes concept of Individual Development Plans for all Trainees

Sally Rockey, Deputy Director for Extramural Research at NIH, explains the recent history of this concept at NIH. NIH

“encourages grantees to develop an institutional policy requiring an Individual Development Plan (IDP) for every graduate student and postdoc supported by any NIH grant, regardless of the type of NIH grant that is used for support.”
  • Currently, it appears that the NIH has no plans to require such a plan, but it should be clear that training programs such as T32s are on notice that they should implement them.
  • It appears there is no provision in this notice for dealing with the fact that “postdoc” is a loose concept: unless you are listed as a postdoc on a T32, there is no (useful) operational meaning of the word “postdoc”.

“Buddy-mentoring” or “Invitational Near-Peer Dyadic” Mentoring

  • In many school or professional settings, slightly more senior “buddies” are assigned, or volunteer, to mentor newbies. In one case, the senior mentor wrote:

Dear X,
I will be your  “Senior Student Mentor” to assist you in your first year. Please feel free to talk to me about anything that you are not feeling comfortable with … it can be issues in the …, classes, …presentations, exams and courses or any other thing. In short, talk to me about anything that you need to know (finding things/ resources or even classrooms, …) or are worried about. I will try my best to help you.
Good luck with your classes and exams.”

  • Comment: the intent is to be commended, but the technique has a potential weakness. Framing the interactions as problem-based might be disinviting to some protégés. Instead, framing the relationship as “invitational” might make it easier for the protégé to accept the interactions as low-cost.
  • There are many forms of mentoring.  Broadly, mentoring is classified as sponsorship/instrumental or developmental /psychosocial.  Sponsorship or instrumental mentoring refers to mentors who provide instrumental means:  salary,  equipment, lab space, etc.   Developmental or psychosocial mentoring provides most of the rest: “friendship” aspects (though mentors do not need to be friends), advice, counseling, and, most important, acceptance, among other functions.
  • Mentoring interactions can be described in these broad terms, which describe stereotypical “dyadic” relationships between a “mentor” and a “protégé”, but we also see “peer mentoring” between near-equals. Note that the same person can simultaneously be the protégé of a senior person, the mentor of a junior person, and a peer mentor. Especially when we examine psychosocial mentoring, we see that even between senior and junior persons (or between more and less powerful or skilled persons) mentoring “goes both ways”.
  • That makes you near-peers. We are asking each of you to “take on” a relationship with a single protégé (also called a “mentee”): that makes it dyadic. You don’t provide much in the way of instrumental support (though you can offer to buy your newbie a “cup of coffee”, but that’s really a psychosocial gesture rather than instrumental support).

Finally, “invitational” mentoring can be contrasted with “needs-based” mentoring. In needs-based mentoring, the mentor responds to specific needs of a protégé, whether identified by the mentor or by the protégé. This might correspond to the kind of mentoring you receive from the faculty around here, including me, for the most part. Needs-based mentoring can be intimidating, however, because it seems to require that both parties agree there are “needs”. Some protégés are quite content to admit their problems and needs; others will be too shy, too proud, or unable to recognize their own “needs”.

In invitational  (“buddy”) mentoring, the  mentor deliberately takes the lead, thus (in principle) overcoming the shyness or reluctance of the protégé. The invitation is to share a cup of something, or a meal, or music in the park,  or a visit to the pub with other friends. It’s what a “big brother” or “big sister” might do, or what a friend might do for a friend from out of town.  It helps the protégé feel accepted and THEN, IF there is a need, the protégé might feel comfortable in bringing it up, or accepting the (so-called) “free” advice.

Probability: “Common sense” vs. context-dependent views of “likely” and “not likely”

In a student’s write-up for a course in medical ethics, I noticed the following interpretation of the words “not likely”.

The provided case stated that the

“neurologist believes that this patient has a good chance of recovery with little functional deficit; ….[she] might have right-sided paralysis but will likely regain cognitive function”.

A student’s summary noted:

“she has a good prognosis but is not likely to regain function in her right side”.

Note the shift of probability assessment. This kind of shift was quite common and, when made, always tended toward increased likelihood. Thus I never saw a student suggest that “it is unlikely that she will have right-sided paralysis”.

Comment: Probability in decision making is complicated, and few Americans are educated to appreciate the complexities of probability statements or the nuances of rhetoric dealing with probabilities.  Such modifiers as “can”, “may”, and “might” are often used indiscriminately.  The public is notorious for handling proportions poorly, and common usage in law plays off the enormous ambiguity of probability statements.

For example, according to the Texas Civil Commitment-Outpatient Sexually Violent Predator Treatment Program (OSVPTP) Health & Safety Code, Chapter 841, civil commitment requires that a jury answer this question in the affirmative:

“Does the person suffer from a behavioral abnormality that makes him/her likely to engage in a predatory act of sexual violence?”

Now the typical citizen (based on an unscientific sampling) would answer that the word “likely”, as in the phrases “it is likely to rain” or “I am likely to get hit by a car crossing that road”, means roughly “more likely than not” or, in mathematical terms, “occurrence is expected in more than 50% of the opportunities.” In this view, affirmation of predator status might mean “Of one hundred convicts for whom we find in the affirmative, we expect at least 51 to engage in a predatory act of sexual violence.”

In contrast, the common usage meaning of the term “not likely” seems to be NOT  “less than likely” but “rare”.  Thus, if I say that I think it “not likely” that I will be killed while crossing the road, I mean that I would be very much surprised indeed to hear I had been killed while crossing the road.

Available data indicate the rate of recidivism of the kind covered by Chapter 841 is less, perhaps much less, than 50%. [The present analysis doesn’t take into account the meaning of the word “that” in the legal criterion.] Expert witnesses for the defense cite the common-usage meaning of “likely”, but prosecutors and juries use a looser meaning, well short of “more likely than not”. This observation suggests that citizens use a flexible definition of “likely”.

If a neurologist finds that a patient “might experience paralysis” and “will likely regain cognitive function”, one “might” interpret this to mean that paralysis is something less than “likely” but more than “unlikely”, and that she is “more likely than not” to regain cognitive function. However, it is clear that students reading this scenario are, like prosecutors and juries [based on this reporter’s personal experience, and the very high rate of commitment in the Texas courts], likely to interpret this using a non-standard definition.

The non-standard definition might be seen as ad hoc, but it might also be seen as “context-dependent”: if the risk is very high, the mathematical value of the informal probability assessment behind the term “likely” might decrease. Thus, we might deny that one is likely to be struck by lightning in general (the lifetime chance of being hit is ~1/10,000), but we might agree that one is “likely” to be hit by lightning when standing outside in a thunderstorm, even if the chances of being struck in a storm are judged less than 50%.
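A few lines of arithmetic make the point concrete (Python; only the ~1/10,000 lifetime figure comes from the text — the in-storm risk is an assumed, purely illustrative number):

```python
# Base rate from the text; the conditional risk is a hypothetical illustration.
lifetime_strike_risk = 1 / 10_000   # ~ lifetime chance of being struck by lightning
storm_strike_risk = 0.02            # assumed chance while standing outside in a storm

relative_risk = storm_strike_risk / lifetime_strike_risk
print(f"relative risk: {relative_risk:.0f}x")   # a 200-fold increase over the base rate

# Yet even this elevated risk stays far below "more likely than not":
print(storm_strike_risk > 0.5)   # False
```

A risk can thus be enormously elevated by context, inviting the word “likely”, while still falling far short of the 50% threshold that the common-usage definition would demand.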

I am unaware of any study testing the hypothesis that the average person adjusts their sense of “likely” according to perceived risk.

Resistance to Mentoring…is futile? or too easy?

  • Faculty often complain that graduate students are unwilling to take their advice, are passive-aggressive, or otherwise exhibit “resistance to mentoring”. This is often interpreted in the sense of “resistance to being mentored”, as meaning the student “doesn’t care”, and is a source of disappointment and resentment on the part of the faculty member.
  • Resistance to being mentored might have many explanations, including genuine lack of interest.  Stereotype threat is a well-documented process that might make students distrustful of their research advisors, but there could well be other psychosocial mechanisms that induce resistance to being mentored.
  • But the other sense is “resistance to mentoring others”, in which the would-be mentor actually resists mentoring the protégé, or sees mentoring in a very narrow way that inhibits or precludes effective mentoring.

The commentative or reflective footnote: lost art and lost science.

Don’t know where to put that thought? Put it in a commentative footnote!
This is a writing tool invented long ago (see Anthony Grafton’s EXCELLENT The Footnote: A Curious History) that is way underused in scientific writing! You can comment on

  • methods
  • personalities
  • ideas
  • theories
  • problems
  • words
  • history
  • future directions
  • references
  • other parts of your thesis (endophoric citations)

Some students report that their thesis advisors frown on this technique.  This might mean they are merely naive about writing.  But even if you can’t keep your footnotes in your final version, the commentative footnote is a way for YOU to write reflectively about your own area of expertise and to develop your own voice.

Stephen Stearns, PhD, Professor of Ecology and Evolutionary Biology at Yale University, writes Modest Advice for Graduate Students, including these headings:

    [proceed as if] nobody cares for you

    Know why YOUR work is important to you

    watch out for psychological traps

    avoid lecture courses

    write a research or fellowship proposal ASAP

    learn to manage your Research Advisors

    publish early

    don’t look down on the master’s thesis

    publish regularly, but not too often

    Comments: Some of Stearns’s comments will seem a bit off the mark to students working in laboratories funded by high-pressure grants, such as those from the NIH, especially the notion that a student can develop their own thesis project.
    Thanks to JL for pointing Stearns’s essay out to me! -JR


Real-world accessibility, valuation and learning

Douglas Galbi’s post at purplemotes about a recent study published in the American Economic Review (Benjamin Bushong, Lindsay M. King, Colin F. Camerer, and Antonio Rangel, “Pavlovian Processes in Consumer Choice: The Physical Presence of a Good Increases Willingness-to-Pay,” American Economic Review 100 (September 2010): 1–18) sparks my interest. The conclusion of the paper is that real-world accessibility of an object increases its value to consumers. Implications: more desserts sold from a dessert tray than from a printed menu; widgets you can hold are valued more than widgets you see on-line.

The consumer choice model of this paper might speak more to impulse buying rather than some longer-term valuation. In any event, this paper might have implications for learning.
This paper makes me wonder if teachers of graduate and medical students shouldn’t reconsider using physical objects to teach instead of relying so much on text and visuals.

It reminds me of the method we used to teach one of my daughters spelling. In second grade she was getting 20’s and 30’s on her spelling tests; the psychologist’s evaluation indicated an extreme deficit in short-term visual memory (bottom 2nd percentile) and recommended a kinaesthetic approach involving tracing letters on a whiteboard. Indeed, within a few weeks she was nearly perfect on her spelling; we could shortly thereafter drop the whiteboarding, and she is today an excellent speller.

Perhaps this is a “learning styles” issue. Or perhaps it is a valuation issue, which might drive a “learning style” issue. Perhaps we learn best what we value most (one of the ideas of andragogy) and that kinaesthesis drives valuation better than text. An object is worth a thousand pictures?

Likewise, we might consider the ‘value’ of books. Is a book read on-line or on a Kindle valued as highly as one that one holds? Is a book that one marks up and dog-ears a better teacher than a book that is kept in pristine condition (so that it can be re-sold)?

This model might also apply to certain political decisions. For example, a citizen might value a tangible good, such as a “gas-guzzling truck”, more highly than an intangible good, such as a “sustainable future”.

Moreover, there might be individual, developmental and possibly heritable differences in the role of kinaesthesis in valuation.
