Significantly Unexpected: How to Deal with Data That You Did Not Anticipate by Jacob Nelson


Image credit: Jacob Nelson

Some people may still believe that researchers shout “Eureka!” when they reach a breakthrough. In reality, most of us probably say something quite different: “That’s weird.” For all our knowledge and the resources available to us, we sometimes find ourselves profoundly stumped as to how to interpret our own data. If we are really lucky, our supervisors are stumped too. Then the fun begins.


This is the situation I found myself in a few months ago. I am a human movement scientist studying sensorimotor control. My research concerns rhythm and timing; specifically, I want to understand what sensory information people use when they synchronize a movement with an external rhythm. As a participant in my experiments, your job is simple: pay attention to a rhythmic stimulus, and try to tap your finger along with it. The rhythmic stimulus may be auditory (a beeping sound) or visual (a flashing light), and similarly, whenever you tap your finger, another stimulus goes off (either a beep or a flash) to indicate that you have tapped. These possible combinations of rhythm and feedback stimuli (visual-visual, visual-auditory, auditory-visual, and auditory-auditory) comprise the four conditions of my experiment. Your hand is hidden from view, so aside from what you feel, these lights and sounds are all you have.


My initial question was simple: do people rely more on the beeps or the flashes? One might imagine that either vision or audition is simply the better option, but it is also possible that we switch between them, or that we use both. Using some subtle tricks in the timing of the stimuli, we began to tease apart the importance of each sense by throwing off participants’ performance and seeing how they reacted. Several test participants and many painful hours of MATLAB coding later, I had an answer: in three of my four conditions, the stimulus type didn’t seem to matter. Participants were doing a decent job of tapping along and were unaffected by those small timing shifts in stimulus presentation. There was one exception, however: when the rhythm was visual and the feedback was auditory, my data showed something very different. I could reliably alter people’s performance, but instead of unconsciously correcting their own timing errors, participants seemed to amplify them. If the stimuli told them they were early on a given tap, they tapped even earlier on the next; when they were told they were late, their following tap came even later.
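The sign of that correction is easy to see in a toy linear phase-correction model, a standard sketch in the tapping literature (this is purely illustrative and not my actual analysis; the gain values, starting asynchrony, and tap counts are all assumptions). Each tap’s timing error is reduced by a correction gain, and a negative gain produces exactly the runaway pattern described above:

```python
# Toy linear phase-correction model of tapping to a rhythm (noise-free).
# Asynchrony update: e[n+1] = (1 - alpha) * e[n]
#   alpha > 0: errors shrink tap by tap (ordinary error correction)
#   alpha < 0: errors grow tap by tap (the surprising pattern in the
#              visual-rhythm / auditory-feedback condition)

def simulate_asynchronies(alpha, e0=30.0, n_taps=10):
    """Return tap-to-stimulus asynchronies (in ms) over n_taps cycles."""
    asyncs = [e0]
    for _ in range(n_taps):
        asyncs.append((1.0 - alpha) * asyncs[-1])
    return asyncs

corrected = simulate_asynchronies(alpha=0.5)    # decays toward zero
amplified = simulate_asynchronies(alpha=-0.5)   # blows up tap by tap
```

With a positive gain the initial 30 ms error all but vanishes within ten taps; with a negative gain the same error grows on every tap, which is the signature we saw in that one condition.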


That’s weird.


After combing through my analysis script to make sure I hadn’t misplaced a pesky minus sign somewhere, I brought my data to my supervisor to see what he thought. Much to my relief, his immediate reaction was to begin discussing the next steps toward untangling this mystery. He could not explain the results either, but he was determined to find out just what we had stumbled upon.


Our new approach was threefold: recruit more participants to rule out statistical noise, run a new experiment to characterize timing delays in vision and audition, and build a computational model to fit our data. The first task is complete, and as the effect only grew and the error bars only shrank, we became convinced that we had found something real. The rest is ongoing, and we are beginning to suspect that we asked the wrong question to begin with. Instead of asking, “How do people know whether they’re tapping along accurately?” perhaps we should ask, “What are people actually doing when they tap along with our stimulus?” Maybe the worsening of people’s timing performance is a sign that they are attending to the interval between their own taps rather than planning each tap to coincide with the rhythm. If we are right, this may have new and exciting implications for the course of my PhD. Even if it does turn out to be something of a rabbit hole, however, I am sure there will be more than a few “Eureka!” moments along the way.
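The difference between those two questions can be sketched in code. Imagine two hypothetical tappers: a “phase corrector,” who plans each tap relative to the stimulus, and an “interval tracker,” who only reproduces their own inter-tap interval. If the stimulus timing shifts partway through, the first realigns while the second never does. Everything below (the 500 ms period, the 50 ms shift, the correction gain) is an illustrative assumption, not our actual model:

```python
# Two toy tapping strategies, compared after the stimulus shifts by 50 ms.
# "Phase corrector": adjusts each tap by a fraction of the last asynchrony.
# "Interval tracker": keeps a fixed inter-tap interval, ignoring the stimulus.

PERIOD, SHIFT, SHIFT_AT, N_TAPS = 500.0, 50.0, 10, 30  # ms, ms, tap index, taps

def stimulus_onset(k):
    """Stimulus onset time (ms) for beat k, shifted late from beat SHIFT_AT on."""
    return k * PERIOD + (SHIFT if k >= SHIFT_AT else 0.0)

def final_asynchrony(alpha):
    """Simulate N_TAPS taps with correction gain alpha; alpha = 0 means the
    tapper only tracks its own inter-tap interval. Returns the last tap's
    asynchrony (ms) relative to the stimulus."""
    taps = [0.0]
    for k in range(1, N_TAPS):
        last_async = taps[-1] - stimulus_onset(k - 1)
        taps.append(taps[-1] + PERIOD - alpha * last_async)
    return taps[-1] - stimulus_onset(N_TAPS - 1)

phase_final = final_asynchrony(alpha=0.5)      # realigns with the shifted rhythm
interval_final = final_asynchrony(alpha=0.0)   # stays 50 ms early indefinitely
```

The phase corrector absorbs the 50 ms shift within a handful of taps, while the interval tracker keeps tapping 50 ms early forever: two strategies that look identical on an unperturbed rhythm but come apart the moment the timing is manipulated, which is exactly what our perturbations are designed to expose.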