# Bernd Schroeder, in response to “excessive science”

*Below is a post written by Bernd Schroeder in response to Helena Dodziuk’s post on “too much publication.” He shares his thoughts on the topic and his experiences as an author, reviewer, and university colleague.*

**+ + +**

When I was asked to write a response to Helena Dodziuk’s post “Excessive Science,” I wondered what I could say, because I agree with her. However, because the overpublication of results is a challenge for all branches of academia, the subject merits discussion from a variety of angles and in the context of different research areas. I will discuss aspects of the problem in mathematics, a likely cause, and a (probably too naïve) way in which we can start addressing the problem. To keep the post short, I will not include a discussion of academic fraud here. Dodziuk does a good job of naming incidents, and I think we can all agree that fraud is simply not acceptable. *(Would it not be nice if declaring certain behaviors unacceptable were the solution?)*

Simplistically speaking, the problem of overpublication would be significantly reduced if there were no pressure to publish. Where does the pressure to publish come from? We all know the answer. Your publications define you as a scientist/mathematician. Although every paper should be judged according to its quality, a large number can look impressive on a grant proposal, a tenure application, or a job application.

Regarding grants, I will always remember a National Science Foundation officer’s very helpful statement: “NSF is not interested in funding incremental research.” Many of the works Dodziuk describes as not cited and not read at all may well fall into a category to be labeled “less than incremental,” and they may well have the effect of lowering the author’s funding potential. So, with NSF being the major funding source for mathematics in the USA, the pressure to publish probably originates (in mathematics in the USA) not with grants, but with a mathematician’s desire to have a career at a university. To have such a career, first a tenure-track position must be acquired, and then the requirements for tenure must be satisfied.

That means the pressure to publish originates with **all of us**: As we progress along our individual career paths, it is likely that at some point we will be asked to judge a colleague’s career, for example, as members of a tenure and promotion committee at the department, college, or even university level. At such times, it is important to have realistic expectations of the candidate, which means it is important to understand the candidate’s discipline’s culture of publication. If the candidate is in your own discipline, it helps to be able to explain the special features of that culture. Having, and being able to communicate, realistic expectations is especially important in interdisciplinary and administrative settings, when non-experts supervise experts in other areas. Deans are by default non-specialists in all but one of the areas they supervise. In collaborations, each collaborator is an expert in their personal area of specialization, but not necessarily in the area(s) of the other collaborators. *(Why collaborate with a group in which everyone has the same background?)*

Some insights into another discipline’s culture can be quick, such as learning that, in computer science, there are quite a few conferences for which a publication in the proceedings ranks higher than a journal publication. Other insights can lead to good fun between colleagues: When a colleague in physics told me, tongue-in-cheek, that he had more papers than Einstein, I asked him, for each of his papers, to take the reciprocal of the number of authors, add up all the fractions, and notify me when the total reached 1. I am still waiting, but we enjoyed the banter.
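The fractional count behind this banter is easy to make precise: credit each paper as 1 divided by its number of authors and add up the credits. A minimal sketch (the publication record below is invented for illustration):

```python
from fractions import Fraction

def fractional_paper_count(author_counts):
    """Credit each paper as 1/(number of authors) and sum exactly.

    author_counts lists, for each paper, how many authors it had.
    """
    return sum(Fraction(1, n) for n in author_counts)

# A hypothetical record: ten papers with five coauthors each plus
# two solo papers yield 10/5 + 2 = 4 "full" papers of credit.
print(fractional_paper_count([5] * 10 + [1, 1]))  # → 4
```

Using exact fractions avoids floating-point drift when adding many small reciprocals.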

So what are realistic expectations in mathematics? According to Jerrold W. Grossman (*Patterns of Collaboration in Mathematical Research*, SIAM News, Volume 35, Number 9, November 2002), 57% of all mathematicians who publish at all publish a grand total of 1 or 2 papers *in their lives*. Moreover, even for the top 10% (in terms of number of publications), it is hard to maintain a rate of two papers per year, as less than 2.5% of all mathematicians ever reach 50 publications. This article was quite eye-opening to me. It is a tremendous help when I need to communicate why the numbers of publications are rather low for mathematicians compared with colleagues in other areas.

Such data notwithstanding, mathematics needs to safeguard against excessive publication, just like any other discipline. Dodziuk mentions certain papers whose results could not be replicated. The first problem with overpublication in mathematics may well be the opposite: There are certain very natural results that are (with pretty much the same proof) periodically rediscovered. However, replicating a proof is not research; it is homework. (This is the opposite of the situation in some experimental sciences, where advances are, and often need to be, further validated by replication.) In my area, the Abian-Brown Theorem may be the result that is rediscovered most often, and I have rejected multiple papers by enthusiastic young authors who were unaware that the result and its proof have long been known.

We could argue that such duplication should not occur in the age of electronic databases, but that would be too hasty. Although databases of mathematical papers, such as *Mathematical Reviews* and *Zentralblatt*, do a good job, they are only useful if you know the words you are looking for. So far, even a description of a theorem with slightly altered terminology is not likely to be detected. We could argue that this is why people should stay within their areas of specialization and why students should only work on topics that are well represented by experts at their home institutions … but I strongly disagree: A lot of non-incremental research happens when researchers step outside their comfort zone into another area. If, in that area, no mentor is available, then some initial duplication will occur.

So how do we handle refereeing a paper that only rediscovers something we consider old news? Personally, I write a review that clearly explains that the result is known, which is why the paper is not acceptable. If possible, I give suggestions for how the research could be expanded. Typically, it does not take long to write such a review, and being courteous is, of course, of no cost to me or to my institution.

Maybe I have that attitude because of what happened to my first paper in 1991: It was a beautiful characterization of the fixed point property in infinite ordered sets, a result that, though imperfect, has not been improved upon to this day … and Aleksander Rutkowski had proved it in the mid-1980s. I had checked the Science Citation Index (volumes of bound books at the time) for papers citing articles that were available to me, but the journal in which Rutkowski’s paper was published was not included in the SCI. I was unfamiliar with *Mathematical Reviews* at the time (probably my fault, but short of reading every volume, I may not have found the reference either), so I submitted the paper and also sent it to the author of one of the papers that I referenced. Shortly thereafter, I received a very nice note from this author, explaining where to find Rutkowski’s paper. Of course I was unhappy, mainly because I had not been as thorough as I thought I was. Yet, when you hold a mirror to my face and I don’t like what I see, whose fault is it – yours, mine, or the mirror’s? The note opened up something close to a treasure trove of references, and a little more than a year later, I published my first (original) paper on the fixed point property of ordered sets. (I have done some more work in that area since.)

Aside from duplication, overpublication can occur in mathematics through the publication of results that are perceived to be too simple. My attitude has always been that, if a result is sufficiently novel and the proof is correct, there should be a place to publish it. “Sufficiently novel” is a term for which there are probably as many definitions as there are referees. Let’s just say that if I could predict a proof using standard methods, I would not consider it “sufficiently novel.”

So far, I have talked about overpublication of results that are correct. Certainly, it can also happen that an incorrect result sees publication. Primarily, the onus of assuring that a paper is correct lies with the author. However, one of our jobs as referees is to make sure we can understand every argument in the paper. This is a distinct advantage of proofs over experimental sciences: Usually, we do not need a specialized lab to double-check results.

Along these lines, a final story for this post: A colleague once gave me a paper and asked me to tell him what I thought. I read through the paper, thought it was nice, but there was one part that I did not understand – one of these typical places in a mathematics paper where it is written that “we obviously conclude …” followed by an inequality. The colleague told me that he, too, could not figure out this line and, because he was refereeing the paper, he would send it back asking that this line be explained. A few weeks later, the paper was resubmitted. My colleague and I looked at it together and immediately went to the line that we did not understand. The one line had turned into two lines … and then it was obvious.

So, overall, be careful, be patient and don’t be afraid to ask for clarifications. Safeguard for the worst, but do not, by default, assume the worst. That’s about all we can do on an individual level.

# Guest Post: Bernd Schroeder on teaching non-experts expert material

This is the second in our series of posts by Bernd Schroeder, Wiley author and academic director and program chair of Mathematics and Statistics at Louisiana Tech University.

In this post, he talks about how we can expertly teach non-expert students in a manageable way.

His previous post discusses preparing STEM and non-STEM students for a workplace that demands mathematical skill sets.

**+ + +**

**How to Teach Analysis?**

As with my other post, the question mark should indicate that this is not a “How To” manual but food for thought instead. I will not claim to be right, but your reaction to what follows can tell you a bit about your own comfort level with change.

At my institution, the sooner students are ready for work in Numerical Partial Differential Equations or Physics, the better. This should be a common situation, because by the time most people know *all* the mathematical details they need for Numerical Partial Differential Equations or Physics, they’re old.

We can make the case that much of the requisite theory builds on pretty deep functional analysis, which builds on measure theory and linear algebra, which is best taken after a first proof class in analysis. So, assuming that these concepts also need time to settle in your mind, a time of 3 years between the first analysis proof and the end of a functional analysis class may not be unrealistic, possibly even too fast.

A mathematics graduate student who invests 3 years in these fundamentals will typically have had some of them as an undergraduate. But even if only 2 years of graduate school are spent on fundamentals, graduating in a total of 4 years becomes a challenge.

For non-mathematics students, investing 3 years into the mathematical background for the work they do seems unreasonable. Unsurprisingly, many of them do not take “our” (*that is, mathematics departments’*) classes.

In summer 2013, I taught the spectral theorem for unbounded self-adjoint operators on dense subspaces of infinite dimensional Hilbert spaces to a group of 5 students, most of whom started their first proof class in analysis in December 2012. (*Disclaimer:* As I recall it, students asked about the mathematical background for quantum mechanics, and I decided to provide it to those who would volunteer for the ride.) The net exposure to analysis for most of my students was two 10-week quarters, in which we had a semester’s worth of instructional time, plus the 5-week summer session (another semester’s worth of instructional time). The pace and density of the material were quite murderous, as there was nary a result in the early part of the development that was not quoted later.

At the same time, we only left a small number of logical gaps in the presentation: Fubini’s theorem and products of measure spaces, the density of the compactly supported infinitely differentiable functions in L^{p}, and the Stone-Weierstrass Theorem were discussed, but not proved. We also spent the last day discussing how the powerful functional calculus leads to the Spectral Theorems for Unitary and for Self-Adjoint Operators and did not go through all the technical parts of the proofs, which would have taken two days. Overall, given another 3 weeks, maybe less, we could have done it all without gaps.

Given the short time of exposure, we cannot expect the students to have the same deep connection to the content that an expert has. However, I feel that these students can construct a decent proof in analysis and elementary functional analysis on a regular basis. That is not a bad outcome for being 8 months removed from being first exposed to analysis proofs. Moreover, these students have seen a lot of content that will be useful in their applied classes (spectral theorems, the elements of complex and functional analysis needed to prove them, L^{p} spaces, convergence of Fourier series in L^{2} and plenty of in-class remarks attempting to make connections to numerical analysis, physics, etc.). To me, this is preferable to spending a lot of time in training exercises, which leads to students not even seeing the Lebesgue integral in their first year.

Are there gaps? Certainly. Anything that did not directly contribute to progress towards the spectral theorem for self-adjoint operators was omitted. The students have not proved, using ε and N, that the limit of the n^{th} root of n is 1 as n goes to infinity, they have not proved the limit comparison test for series, etc. Is that acceptable? Personally, I find L^{p} spaces much more important than lots of details on series (just about everything that we needed went back to facility with the geometric series). Similarly, ε-N type training can be provided by analyzing the Dirichlet kernel rather than with training exercises. So, overall, I feel reasonably good about the job we did. Further iterations of this sequence can always be improved by picking the right exercises (and by slowing down a little).

How do you define success? How do you assess it? Here is where judgment calls are needed. It is a virtual certainty that there are problems from a first analysis proof class that would be a lot harder for my students than for students who went the usual route. I also noticed that, although I did not need specifics from the topology classes I took, I was a lot more comfortable with “continuity means inverse images of open sets are open” than my students were: For them, that was one theorem among many with the importance slowly emerging in the course this summer. If that is considered to be a problem, then my experiment (if you will) failed. On the other hand, these students are developing a feel for Hilbert spaces and L^{2} at a time when other students just learn the definition of an open set.

Is it hard to design such a new approach? Well, let’s say that I was surprised when I thought that I could avoid using the Hahn-Banach Theorem. So I designed the course without proving the Hahn-Banach Theorem, and the surprise lasted until I ran into a proof (the Cauchy Integral Theorem for Banach-space-valued functions) that is best done by using a consequence of the Hahn-Banach Theorem. That result could also be proved by simply reworking the proof from complex analysis with the range being a Banach space instead of the complex numbers, but nonetheless …

Along the lines of creating new approaches, the challenge for this sequence of classes is the same as for any change to canonical approaches: First of all, because we are supposed to model logical thought, we have to create something that is logically consistent. After this first step in our student-centered approach, we then have to figure out whether it does what we intended it to do. For example, the mind can be overwhelmed by an approach that is too dense or too fast. You may rightly say that the class was both and, if so, I will not argue against you. However, my experience shows that the mind can stand up to much stricter rigors than we may give it credit for.

Overall, I certainly recommend approaching change with care. The “race to the spectral theorem” above is the product of about 10 years of tinkering with the structure of fundamental analysis. Abject failure at any stage would likely have diverted the project from the result described above.

So be careful, and when something does not quite work, learn from it. As long as the bumps in the road can be navigated and you have at least half as much fun as I had with my spectral theory class, you’ll do fine. Just make sure your department head knows and supports what you’re doing. I had a slight advantage there, because I am the department head.

I do answer to a dean, though…

# Guest Post: Bernd Schroeder on preparing students, creating texts

The following is a guest post from Bernd Schroeder, the current academic director and program chair of Mathematics and Statistics at Louisiana Tech University. His specialties include discrete mathematics, harmonic analysis, and probability theory. He has about 20 years of teaching experience and has written several titles, including *Fundamentals of Mathematics: An Introduction to Proofs, Logic, Sets, and Numbers* (2010) and *A Workbook for Differential Equations* (2009).

Below, he talks about the difficult task of providing both STEM and non-STEM students with the skills they need to succeed in an increasingly analytic workplace. He says that he does not claim to have all of the answers but wanted to share some observations with fellow authors.

This is his first of two posts. Check back next week for a piece on how to teach analysis to the varied levels of those seeking this skill set.

If you have any comments or thoughts on this topic, feel free to reply to the post below.

**+ + +**

# How to Prepare Mathematics Majors and STEM Undergraduates for Jobs and for Work with Non-Mathematicians?

Labor statistics suggest that there is an impending shortage of STEM talent but not at the Ph.D. level. Therefore, the mathematical community needs to (re)focus on students whose final degree will be a bachelor’s or a master’s degree. Moreover, we need to focus on students whose degree will not be in mathematics. This is a substantial task even if your program’s focus is already wider than the stereotype in which “the courses get you ready for your Mathematics Ph.D. qualifying exams.”

*So how can mathematics and the job-relevant “mathematics abilities” (see page 57 of this document) be made “more accessible” without “watering down” the students’ preparation?*

For *some* student populations, changes in the first two years may well turn out to be minimal.

For engineering, physics, and mathematics majors, there does not seem to be a replacement for calculus. Multivariable calculus is the mathematical basis for the theories of fields and flows, which are central to subjects like Physics, Electrical Engineering, Mechanical Engineering, and more.

For mathematics majors, the case can be made that calculus is the “applied version” of mathematical analysis. When I surveyed the AMS subject classification, I found that, in my judgment, analysis is the indispensable foundation for 38 of the 62 branches of mathematics. In addition, another 9 branches are closely related to analysis. This means that less than one fourth (15/62) of all branches of mathematics *might* get by without analysis. If you go through the same exercise, I’m quite confident that your count will be similar to mine. For many disciplines, the question is not *whether* to teach calculus, but *how*. Similarly, at a more advanced level, the question is not *whether* to teach analysis, but *how*.

Although the populations above are sizable, for other students, changes are possible, maybe even desirable.

A calculus prerequisite for Discrete Mathematics helps assure that students have “mathematical maturity.” But questions arise like:

- What parts of an introductory Discrete Mathematics class truly need calculus *content*?
- Can these parts be replaced with content that is similarly beneficial?
- Would it be sacrilege to consider a hypothetical computer science graduate who has not had calculus?

My only constraint in this regard is that I would want this graduate to have developed/trained certain overall cognitive abilities (ability to concentrate, ability to think about a problem in different ways, ability to correctly follow a procedure, deductive reasoning, etc.) to the same level as would have been gained through calculus.

The situation gets muddier in disciplines that do not require the (whole) calculus sequence but do need STEM skills, such as business and biology. One reason STEM majors are considered useful is that they can analyze data. However, data analysis appears to be fundamentally different from proving theorems. For example, I proofread a proof of the Central Limit Theorem as a graduate student, but I only developed an understanding of the Central Limit Theorem when I wrote this simulation.
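The simulation linked above is not reproduced here, but a minimal sketch of the kind of experiment that builds this understanding might look as follows (the die-rolling setup and all parameter values are my own illustration, not the author’s code):

```python
import random
import statistics

def sample_mean_spread(n_samples=10_000, sample_size=50, seed=0):
    """Roll a fair die `sample_size` times, average the rolls, and
    repeat `n_samples` times; return the standard deviation of the
    resulting sample means."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_samples):
        rolls = [rng.randint(1, 6) for _ in range(sample_size)]
        means.append(sum(rolls) / sample_size)
    return statistics.stdev(means)

# The Central Limit Theorem predicts the sample means cluster
# approximately normally around 3.5, with standard deviation
# sigma / sqrt(sample_size), where sigma = sqrt(35/12) ≈ 1.708
# for a single die, so the value returned should be close to
# 1.708 / sqrt(50) ≈ 0.242.
print(sample_mean_spread())
```

Watching the spread of the averages shrink like 1/√n as `sample_size` grows conveys the theorem in a way that reading a proof may not.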

So how can we authors help with the changes that will either be made by the mathematical community or that someone else will make for us?

First of all, we face a paradox that affects something publishers care about: sales. Take a calculus book as an example. If you write a calculus book that is similar to the standard texts in the field (10 years ago, sequences and series was always Chapter 8), then why would people buy your book? On the other hand, if you write a text that is very different from the standard texts, will people dare adopt your book?

Personally, I have no interest in writing a book that’s already been written by someone else. The above remarks on future needs also indicate that there is little to be gained from incremental changes. That is, unless you write an incrementally changed text that sells millions of copies, in which case at least one person has gained substantially: you; and (without sarcasm or envy) congratulations. The market leaders are market leaders because their products are good. Specific users will always find certain things that they wish would be different, but the texts satisfy the needs of many quite well.

So here are the hard questions when you write something that radically departs from the canonical setup:

*What do you include? What do you drop, and what will be the effect?*

For the above examples, we immediately obtain some specific questions:

Can you design a reasonably deep Discrete Mathematics book that does not need to touch upon calculus (*definitely*), and would students who have not been vetted by calculus respond well to the presentation (*there is a lack of data available*)?

Can you teach the deep data analysis skills needed in the working world without touching upon the theory of continuous distributions (*probably, as long as you’re okay using tables and computers and simply quoting results*) and would students who have not been vetted by calculus respond well to the presentation (*There is a lack of data available to me but a colleague told me about positive experiences with graduate students in biology*)?

Finally, once you have answered these (and other) questions in a convincing fashion, *how do you get other people to agree that your answer is convincing?*

I have not tackled the questions above yet, but I would be interested in doing so. Discrete Mathematics and its connection to computer science should be well within my competence. However, I have never let a lack of education stop me from exploring other areas. Every book of mine has at least one chapter of which I knew little when I started writing. (Replies of the kind “All chapters read so poorly, which one is the one you had no clue about?” are not needed.) So, despite my shortcomings, data analysis would be interesting, too.