# Bernd Schroeder, in response to “excessive science”

*Below is a post written by Bernd Schroeder in response to Helena Dodziuk’s post “Excessive Science.” He shares his thoughts on the topic and his experiences as an author, reviewer, and university colleague.*

**+ + +**

When I was asked to write a response to Helena Dodziuk’s post “Excessive Science,” I wondered what I could say, because I agree with her. However, because the overpublication of results is a challenge for all branches of academia, the subject merits discussion from a variety of angles and in the context of different research areas. I will discuss aspects of the problem in mathematics, a likely cause, and a (probably too naïve) way in which we can start addressing the problem. To keep the post short, I will not discuss academic fraud here. Dodziuk does a good job of naming incidents, and I think we can all agree that fraud is simply not acceptable. *(Would it not be nice if declaring certain behaviors unacceptable were the solution?)*

Simplistically speaking, the problem of overpublication would be significantly reduced if there were no pressure to publish. Where does the pressure to publish come from? We all know the answer: your publications define you as a scientist/mathematician. Although every paper should be judged according to its quality, a large number can look impressive on a grant proposal, a tenure application, or a job application.

Regarding grants, I will always remember a National Science Foundation officer’s very helpful statement: “NSF is not interested in funding incremental research.” Many of the works Dodziuk describes as never cited and never read may well fall into a category best labeled “less than incremental,” and they may well lower the author’s funding potential. So, with NSF being the major funding source for mathematics in the USA, the pressure to publish probably originates (in mathematics in the USA) not with grants, but with a mathematician’s desire to have a career at a university. To have such a career, first a tenure-track position must be acquired, and then the requirements for tenure must be satisfied.

That means the pressure to publish originates with **all of us**: as we progress along our individual career paths, it is likely that at some point we will be asked to judge a colleague’s career, for example, as members of a tenure and promotion committee at the department, college, or even university level. At such times, it is important to have realistic expectations of the candidate, which means it is important to understand the publication culture of the candidate’s discipline. If the candidate is in your own discipline, it helps to be able to explain the special features of your discipline’s culture. Having, and being able to communicate, realistic expectations is especially important in interdisciplinary and administrative settings, where non-experts supervise experts in other areas. Deans are by default non-specialists in all but one of the areas they supervise. In collaborations, each collaborator is an expert in their personal area of specialization, but not necessarily in the area(s) of the other collaborators. *(Why collaborate with a group in which everyone has the same background?)*

Some insights regarding another discipline’s culture can be quick, such as learning that, in computer science, there are quite a few conferences for which a publication in the proceedings ranks higher than a journal publication. Other insights can lead to good fun between colleagues: when a colleague in physics told me, tongue-in-cheek, that he had more papers than Einstein, I asked him to take, for each of his papers, the reciprocal of the number of authors, add up all the fractions, and notify me when the total reached 1. I am still waiting, but we enjoyed the banter.
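The tally behind the joke is easy to make precise: each paper contributes the reciprocal of its author count, so a solo paper counts as 1 and a five-author paper as 0.2. A minimal sketch in Python, with entirely made-up author counts for illustration:

```python
# Fractional publication count: each paper contributes 1/(number of authors).
# The author counts below are hypothetical, purely for illustration.
author_counts = [3, 2, 5, 1, 4]

fractional_total = sum(1 / n for n in author_counts)
print(f"Fractional paper count: {fractional_total:.4f}")  # prints 2.2833
```

Five papers collapse to barely more than two "full" papers, which is the point of the banter: a long list of heavily coauthored papers can represent far less individual output than it appears to.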

So what are realistic expectations in mathematics? According to Jerrold W. Grossman (*Patterns of Collaboration in Mathematical Research*, SIAM News, Volume 35, Number 9, November 2002), 57% of all mathematicians who publish at all publish a grand total of 1 or 2 papers *in their lives*. Moreover, even for the top 10% (in terms of number of publications) it is hard to maintain a rate of two papers per year, as fewer than 2.5% of all mathematicians ever reach 50 publications. This paper was quite eye-opening to me. It is a tremendous help when I need to explain why publication counts are rather low for mathematicians compared with colleagues in other areas.

Such data notwithstanding, mathematics needs to safeguard against excessive publication, just like any other discipline. Dodziuk mentions certain papers whose results could not be replicated. The first problem with overpublication in mathematics may well be the opposite: there are certain very natural results that are (with pretty much the same proof) periodically rediscovered. However, replicating a proof is not research; it is homework. (This is in contrast to some advances in the experimental sciences, which often can, and need to, be further validated by replication.) In my area, the Abian-Brown Theorem may be the result that is rediscovered most often, and I have rejected multiple papers by enthusiastic young authors who were unaware that the result and its proof have long been known.

We could argue that such duplication should not occur in the age of electronic databases, but that would be too hasty. Although databases of mathematical papers, such as *Mathematical Reviews* and *Zentralblatt*, do a good job, they are only useful if you know the words that you are looking for. So far, even a description of a theorem with slightly altered terminology is not likely to be detected. We could argue that that is why people should stay with their areas of specialization and why students should only work on topics that are well-represented by experts at their home institutions … but I strongly disagree: A lot of non-incremental research happens when researchers step outside their comfort zone into another area. If, in that area, there is no mentor available, then some initial duplication will occur.

So how do we handle refereeing a paper that only rediscovers something we consider old news? Personally, I write a review that clearly explains that the result is known, which is why the paper is not acceptable. If possible, I give suggestions for how the research could be expanded. Typically, it does not take long to write such a review, and being courteous costs nothing, neither me nor my institution.

Maybe I have that attitude because of what happened to my first paper in 1991. It was a beautiful characterization of the fixed point property in infinite ordered sets, a result that, though imperfect, has not been improved upon to this day … and Aleksander Rutkowski had proved it in the mid-1980s. I had checked the Science Citation Index (volumes of bound books at the time) for papers citing articles that were available to me, but the journal in which Rutkowski’s paper was published was not included in the SCI. I was unfamiliar with *Mathematical Reviews* at the time (probably my fault, but short of reading every volume, I may not have found the reference either), so I submitted the paper and also sent it to the author of one of the papers I referenced. Shortly thereafter, I received a very nice note from this author, explaining where to find Rutkowski’s paper. Of course I was unhappy, mainly because I was not as thorough as I thought I was. Yet, when you hold a mirror to my face and I don’t like what I see, whose fault is it – yours, mine, or the mirror’s? The note opened up something close to a treasure trove of references, and a little more than a year later, I published my first (original) paper on the fixed point property of ordered sets. (I have done some more work in that area since.)

Aside from duplication, overpublication can occur in mathematics through the publication of results that are perceived to be too simple. My attitude has always been that, if a result is sufficiently novel and the proof is correct, there should be a place to publish it. “Sufficiently novel” is a term for which there are probably as many definitions as there are referees. Let’s just say that if I could predict a proof using standard methods, I would not consider it “sufficiently novel.”

So far, I have talked about overpublication of results that are correct. Certainly, it can also happen that an incorrect result sees publication. Primarily, the onus of ensuring that a paper is correct lies with the author. However, as referees, part of our job is to make sure we can understand every argument in the paper. This is a distinct advantage of proofs over experiments: usually, we do not need a specialized lab to double-check results.

Along these lines, a final story for this post: A colleague once gave me a paper and asked me to tell him what I thought. I read through the paper, thought it was nice, but there was one part that I did not understand – one of these typical places in a mathematics paper where it is written that “we obviously conclude …” followed by an inequality. The colleague told me that he, too, could not figure out this line and, because he was refereeing the paper, he would send it back asking that this line be explained. A few weeks later, the paper was resubmitted. My colleague and I looked at it together and immediately went to the line that we did not understand. The one line had turned into two lines … and then it was obvious.

So, overall, be careful, be patient, and don’t be afraid to ask for clarifications. Guard against the worst, but do not, by default, assume the worst. That’s about all we can do on an individual level.

# Link: New York Times special section on maths & science education

During our survey, we found that many of you teach in some capacity, whether at a college, university, or another educational institution, and that there is a lot of interest in reading about the state of STEM education.

This past week, the **New York Times** published a special section entitled *Learning What Works*. It features a variety of articles rooted in math and science education.

Here are some of the articles that may be of interest to you as authors and educators:

- Young Students Against Bad Science: Profiles of students who took a stand against teaching “bad science” like creationism and climate-change denial.
- Standard-Bearer in Evolution Fight: A profile of Eugenie C. Scott, who fights against the teaching of creationism in schools.
- Field-Testing the Math Apps: Software developers are trying to create educational apps that teach preschoolers math and other important subjects. This article outlines some of the apps being developed and tested.
- Guesses and Hype Give Way to Data in Study of Education: The Institute of Education Sciences, a little-known group at the Department of Education, is now collecting rigorous data from experimental curriculum and other pilot programs. The expanding research from a variety of sources and angles may lead to more definitive conclusions about what education techniques work and which ones do not.
- Milestones in Science Education: A timeline of various developments in science education, all the way back to the 1800s.

Do you have any thoughts about the state of STEM education? If so, reply in our comments section.

# Guest Post: Bernd Schroeder on teaching non-experts expert material

This is the second in our series of posts by Bernd Schroeder, Wiley author and academic director and program chair of Mathematics and Statistics at Louisiana Tech University.

In this post, he talks about how expert material can be taught to non-expert students in a manageable way.

Click here to read his previous post about preparing STEM and non-STEM students for a workplace that demands mathematical skill sets.

**+ + +**

**How to Teach Analysis?**

As with my other post, the question mark should indicate that this is not a “How To” manual but food for thought instead. I will not claim to be right, but your reaction to what follows can tell you a bit about your own comfort level with change.

At my institution, the sooner students are ready for work in Numerical Partial Differential Equations or Physics, the better. This situation should be common, because by the time most people know *all* the mathematical details they need for those fields, they’re old.

We can make the case that much of the requisite theory builds on pretty deep functional analysis, which builds on measure theory and linear algebra, which is best taken after a first proof class in analysis. So, assuming that these concepts also need time to settle in your mind, a time of 3 years between the first analysis proof and the end of a functional analysis class may not be unrealistic, possibly even too fast.

A mathematics graduate student who invests 3 years in these fundamentals will typically have had some of them as an undergraduate. But even if only 2 years of graduate school are spent on fundamentals, graduating in a total of 4 years becomes a challenge.

For non-mathematics students, investing 3 years into the mathematical background for the work they do seems unreasonable. Unsurprisingly, many of them do not take “our” (*that is mathematics departments’*) classes.

In summer 2013, I taught the spectral theorem for unbounded self-adjoint operators on dense subspaces of infinite-dimensional Hilbert spaces to a group of 5 students, most of whom had started their first proof class in analysis in December 2012. (*Disclaimer:* As I recall it, students asked about the mathematical background for quantum mechanics, and I decided to provide it to those who would volunteer for the ride.) The net exposure to analysis for most of my students was two 10-week quarters in which we had a semester’s worth of instructional time, plus the 5-week summer session (another semester’s worth of instructional time). The pace and density of material were quite murderous, as there was nary a result in the early part of the development that was not quoted later.
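For readers outside the area, one standard textbook formulation of the course’s target theorem (quoted here from the standard literature, not from the course notes) is:

```latex
\textbf{Spectral Theorem (self-adjoint case).}
Let $A$ be a self-adjoint operator with dense domain $D(A)$ in a Hilbert
space $H$. Then there is a unique projection-valued measure $E$ on the
Borel subsets of $\mathbb{R}$ such that
\[
  \langle Ax, y \rangle
  \;=\; \int_{\mathbb{R}} \lambda \, d\langle E(\lambda)x, y \rangle
  \qquad \text{for all } x \in D(A),\ y \in H .
\]
```

Reaching this statement, with its supporting measure theory and functional analysis, is what the compressed sequence below was built around.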

At the same time, we left only a small number of logical gaps in the presentation: Fubini’s theorem and products of measure spaces, the density of the compactly supported infinitely differentiable functions in L^{p}, and the Stone-Weierstrass Theorem were discussed, but not proved. We also spent the last day discussing how the powerful functional calculus leads to the Spectral Theorems for Unitary and for Self-Adjoint Operators, and did not go through all the technical parts of the proofs, which would have taken two days. Overall, given another 3 weeks, maybe less, we could have done it all without gaps.

Given the short time of exposure, we cannot expect the students to have the same deep connection to the content that an expert has. However, I feel that these students can construct a decent proof in analysis and elementary functional analysis on a regular basis. That is not a bad outcome for being 8 months removed from being first exposed to analysis proofs. Moreover, these students have seen a lot of content that will be useful in their applied classes (spectral theorems, the elements of complex and functional analysis needed to prove them, L^{p} spaces, convergence of Fourier series in L^{2} and plenty of in-class remarks attempting to make connections to numerical analysis, physics, etc.). To me, this is preferable to spending a lot of time in training exercises, which leads to students not even seeing the Lebesgue integral in their first year.

Are there gaps? Certainly. Anything that did not directly contribute to progress towards the spectral theorem for self-adjoint operators was omitted. The students have not proved, using ε and N, that the limit of the n^{th} root of n is 1 as n goes to infinity, they have not proved the limit comparison test for series, etc. Is that acceptable? Personally, I find L^{p} spaces much more important than lots of details on series (just about everything that we needed went back to facility with the geometric series). Similarly, ε-N type training can be provided by analyzing the Dirichlet kernel rather than with training exercises. So, overall, I feel reasonably good about the job we did. Further iterations of this sequence can always be improved by picking the right exercises (and by slowing down a little).
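For completeness, the omitted exercise itself is short; a standard ε-N argument for it (a textbook sketch, not material from the course) runs as follows:

```latex
Write $\sqrt[n]{n} = 1 + h_n$ with $h_n \geq 0$. For $n \geq 2$, the
binomial theorem gives
\[
  n = (1 + h_n)^n \;\geq\; \binom{n}{2} h_n^2
    \;=\; \frac{n(n-1)}{2}\, h_n^2 ,
\]
so $h_n \leq \sqrt{2/(n-1)}$. Given $\varepsilon > 0$, choose
$N > 1 + 2/\varepsilon^2$; then $n \geq N$ implies
$\bigl|\sqrt[n]{n} - 1\bigr| = h_n < \varepsilon$.
```

Arguments of exactly this flavor are what the Dirichlet kernel analysis mentioned above exercises, which is why omitting the stand-alone drill need not leave the skill untrained.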

How do you define success? How do you assess it? Here is where judgment calls are needed. It is a virtual certainty that there are problems from a first analysis proof class that would be a lot harder for my students than for students who went the usual route. I also noticed that, although I did not need specifics from the topology classes I took, I was a lot more comfortable with “continuity means inverse images of open sets are open” than my students were: For them, that was one theorem among many with the importance slowly emerging in the course this summer. If that is considered to be a problem, then my experiment (if you will) failed. On the other hand, these students are developing a feel for Hilbert spaces and L^{2} at a time when other students just learn the definition of an open set.

Is it hard to design such a new approach? Well, let’s say that I was surprised when I thought that I could avoid using the Hahn-Banach Theorem. So I designed the course without proving the Hahn-Banach Theorem, and the surprise lasted until I ran into a proof (the Cauchy Integral Theorem for Banach-space-valued functions) that is best done using a consequence of the Hahn-Banach Theorem. That result could also be proved by simply reworking the proof from complex analysis with the range being a Banach space instead of the complex numbers, but nonetheless …

Along the lines of creating new approaches, the challenge for this sequence of classes is the same as for any change to canonical approaches: first of all, because we are supposed to model logical thought, we have to create something that is logically consistent. After this first step in our student-centered approach, we then have to figure out whether it does what we intended it to do. For example, the mind can be overchallenged by an approach that is too dense or too fast. You may rightly say that the class was both and, if so, I will not argue against you. However, my experience shows that the mind can stand up to much stricter rigors than we may give it credit for.

Overall, I certainly recommend approaching change with care. The “race to the spectral theorem” above is the product of about 10 years of tinkering with the structure of fundamental analysis. Abject failure at any stage would likely have diverted the project from the result described above.

So be careful, and when something does not quite work, learn from it. As long as the bumps in the road can be navigated and you have at least half as much fun as I had with my spectral theory class, you’ll do fine. Just make sure your department head knows and supports what you’re doing. I had a slight advantage there, because I am the department head.

I do answer to a dean, though…