There is a small programming assignment I like to give my beginning grad students or upper-level undergrads who want to do research in my group. The assignment is a reasonably simple but quite accurate simulation of a system they all encountered during undergraduate studies. Most students never really ask themselves what the approximations are that result in the textbook results. The simulation, which is perhaps several hundred lines of code, solves several coupled partial differential equations in one spatial dimension; the students learn about the numerics as well as about the theory that describes the behavior of the physical system beyond the textbook approximations.
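The post never says which physical system the assignment simulates, so purely as an illustrative stand-in, here is the kind of minimal 1D PDE solver such an assignment builds on — the 1D heat equation solved by explicit finite differences. Every name and parameter below is my own invention, not the actual assignment:

```python
import numpy as np

# Hypothetical sketch: solve u_t = D * u_xx on a 1D grid with fixed
# (Dirichlet) boundaries, using the explicit FTCS finite-difference scheme.

def step_heat_1d(u, D, dx, dt):
    """Advance the field u by one time step; endpoints stay fixed."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new

# The explicit scheme is stable only when D*dt/dx**2 <= 1/2 -- exactly the
# kind of numerical detail a student discovers the hard way.
nx, D, dx = 101, 1.0, 0.01
dt = 0.4 * dx**2 / D
u = np.zeros(nx)
u[nx // 2] = 1.0  # initial spike in the middle of the domain
for _ in range(1000):
    u = step_heat_1d(u, D, dx, dt)
# The spike spreads into a near-Gaussian profile, symmetric about the center.
```

A real assignment would couple several such equations and add the consistency checks mentioned above (conservation laws, comparison against known limits), but the numerical core looks much like this.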
I have a new undergrad who is great, smart and motivated, and who fits in the group very well. If I am lucky, he will stay here to get his master’s and then will likely go someplace with better brand-name recognition for a PhD; I understand that’s what the student must do, as it’s the reasonable thing to do, but it’s always somewhat infuriating. When some colleagues at Über Unis look down on us from State Schools, I wonder if they ever realize that those awesome kids with research experience who get into their labs did not sprout from the ground, somebody who knows how to do science has actually trained them. The best American students in the physical sciences have plenty of options to go to the most prestigious universities and, if they are well advised (by whom, I wonder?) about what’s out there for them to apply for, they can do it on prestigious fellowships. Luckily for me, there are plenty of very smart international graduate students with whom I get to work, in part because their options are more limited for a number of reasons. But that’s a topic for another post…
The undergrad did a great job: he wrote the code, and the code performs the required checks accurately. Then I said, “Great! Now you can use it to teach us something new about the model system.” The student was puzzled; we talked a bit, and he came back a week later with what he felt were pretty boring results, not knowing whether there would be anything interesting in such a simple system. I said, “These are all the things, off the top of my head, that you could ask: how realistic are all the textbook approximations, to what extent do they hold up in a more realistic simulation, how important are these different details in the simulation for the physics, what happens if you completely disregard this or that and how would it translate to reality…” You get the point. He thought there was nothing there, while to me there were 15 interesting things to ask. I gave him my little speech about how code is like a piece of experimental equipment — once you are done lovingly building it, the science part is deciding which questions are both worth asking and possible to answer with the tool that you have.
This exchange reminded me of a very nice blog post on Dynamic Ecology, in which Brian McGill discussed the pretty famous paper by William Shockley, a Nobel Prize winner (with Bardeen and Brattain) for the transistor, and thought by many to be one of the most brilliant and nastiest people they had ever met. Shockley was able to identify and recruit smart people for Shockley Semiconductor Labs, whom he then drove away (the Traitorous Eight) into Fairchild Semiconductor, a company that became the incubator for Silicon Valley, having spun off a number of companies, the “Fairchildren,” such as Intel and AMD. Anyway, Shockley’s paper is worth reading for a number of reasons; it is actually pretty famous for its discussion of the log-normal distribution of productivity among professional scientists. What Dynamic Ecology pulled forward, and what I find interesting here, is Shockley’s hypothesis that productivity depends on the ability to clear multiple hurdles, of which he names eight. Being good at all of them is key; you cannot be exceptional at one thing and inadequate at another, as success depends on the product of functions that measure one’s:
- ability to think of a good problem
- ability to work on it
- ability to recognize a worthwhile result
- ability to make a decision as to when to stop and write up the results
- ability to write adequately
- ability to profit constructively from criticism
- determination to submit the paper to a journal
- persistence in making changes (if necessary as a result of journal action).
Everything here depends in part on talent, personality/temperament, and training (much of the latter by osmosis).
For instance, there are many students who have #2, i.e., they are smart enough to work on a good problem, provided that someone else formulates it (#1). It takes talent as well as experience to learn what constitutes a good problem: the right combination of interesting and doable in a reasonable time with available resources. Similarly with #3 and #4 — it takes experience to know when something has become a publishable nugget, when the data are enough to support a compelling, convincing insight. Once you realize that #5 and #7 are important (and they really, really are: all the nice work you might have done is as good as nonexistent until you publish it), you need a good PhD or postdoc advisor from whom you can learn how to write well. If you are a talented person, you can become really good at many of these aspects early in your career with good, focused training. Otherwise, it can take you much longer to realize their importance and then teach yourself the skills, and your early career can be impeded.
#6 and #8 essentially mean grit, and they are extremely important; probably even more so for grants than for papers these days. Most of my grad students get discouraged when we get a revise-and-resubmit with potentially lengthy revisions, because they feel we had already submitted a great product, so why this silliness now. And they may or may not have a point, but the key is to go on. I have had to do it many times already, and I am simply desensitized to it. We have to do it, so we do it. But I can see students wondering whether all the effort is worth it — at the end, the result is a paper. You gotta love getting papers published. And having a thick skin does not hurt.
Anyway, this post was written in fits and starts, so I think I totally lost my train of thought and with it my point. But it’s fun to think about what success entails and exciting to see a young person starting to learn about the moving parts of the enterprise of science — what it means to formulate a problem, execute a project, and finally disseminate new knowledge. I guess my point is that I really love advising students.
Using code is as close as we theorists get to being experimentalists. Fundamentally, it is still theory, because we are ultimately solving equations. Still, when I start playing with issues of precision, fine-tuning error bars, etc., I feel very much like an experimentalist must feel.
“When some colleagues at Über Unis look down on us from State Schools, I wonder if they ever realize that those awesome kids with research experience who get into their labs did not sprout from the ground, somebody who knows how to do science has actually trained them.”
Of course we do, but we still eagerly snarf them up.
My favorite space was the interplay between the code/computational experiments and the wet-lab experiments: the computations would predict something, and then we’d design experiments to see if the predictions were true. We could try out really wild ideas in simulation and pick one or two of the most promising to pursue in the lab.
That list makes a lot of sense to me, and I can definitely pinpoint the times during my training when I finally got a handle on several of them. For example, it wasn’t until I was writing up my thesis that I was like… wow, actually, I had enough data to publish some of these things three years ago. I just didn’t know it at the time.