
In 1974, NSF's computing research programs were reorganized into the Division of Computer Research (DCR), part of the new Mathematics, Physical Sciences and Engineering (MPE) Directorate.

      DCR had two sections from 1974 to 1975: computer science and engineering, and computer applications in research. The former ran programs in theory, programming languages and systems, and systems design. The latter ran programs in techniques and systems, software quality research, and networking for science. The FY75 NSF Annual Report includes these comments:

      The discipline of computer science is barely 10 years old, only vaguely defined, and mushrooming. . . . In a field as new and as rich as computer science it is not surprising that new areas appear, create a flurry of activity, and then level off or stagnate; automata theory, mechanical translation, and theory of formal languages are a few such . . . researchers in computer science are anxious to follow new leads into uncharted regions. This kind of process of extension to new areas and pruning of less productive ones partly accounts for the lack of definition of the field.47

      The report goes on to suggest that the Arden COSERS initiative, described above, was a necessary disciplinary self-examination. By the time of my arrival at NSF,48 toward the end of Transition Quarter 1976,49 the Assistant Director for the Mathematical and Physical Sciences and Engineering (MPE) Directorate, Ed Creutz, had decided to merge computer research with mathematics.

      John Pasta became Division Director for the Division of Mathematical and Computer Sciences (MCS). The three sections in DCR (Computer Science and Engineering, Computer Applications, and Computer Impact on Society) became one section within MCS. Kent Curtis, who had been section head for Computer Science and Engineering (CS&E), became section head for the Computer Science Section (CSS). William H. Pell led the Mathematics Section. Don Aufenkamp, who had been section head for applications, took over the NSF US-USSR program. The program directors in the DCR Computer Science and Engineering Section—Bruce Barnes (Theory), Thomas Keenan (Programming Languages and Systems), and John Lehmann (Computer Architecture)—moved with their programs into CSS. Sally Sedelow from Techniques and Systems became the Intelligent Systems program director in CSS. Frederick Weingarten became Special Projects program director. Walter Sedelow came over from the applications section, where he had overseen computer networking-related grants, to join Weingarten in Special Projects. Although Kent had recruited me for the Software Engineering program, he decided to have Bruce Barnes head that program because of Barnes's experience and interest; I was assigned instead to the Theoretical Computer Science program. The Sedelows left in 1977; Sally was replaced by Eamon Barrett (from ESL Inc.)50 and Walter by Larry Oliver (from NSF Education).

      Engineering, also a division in MPE in 1976, had an Electrical Sciences and Analysis Section, which funded research on digital systems, communications, and information theory. Later, after the possibility arose that a separate National Engineering Foundation might be created, NSF merged applied research and engineering to create a new Engineering Directorate with an Electrical, Computer, and Systems Engineering (ECSE) Division. Steve Kahne, Thelma Estrin, and others served as ECSE division directors. The Division of Science Information in the Scientific, Technological, and International Affairs (STIA) Directorate supported fundamental research on information sciences and applied research on information access and user requirements. This division would later be renamed the Division of Information Science and Technology and moved to the Biological and Behavioral Sciences (BBS) Directorate.

      The new Computer Science Section had six programs—Theoretical Computer Science, Software Systems Science, Software Engineering, Intelligent Systems, Computer Systems Design, and Special Projects—each described in the NSF Guide to Programs as shown in Figure 2.1 (before Software Engineering was added). The programs had no deadlines, target dates, or solicitations; all proposals were essentially “unsolicited,” with no restrictions on page length, format, font size, and so on. Prospective principal investigators were encouraged to submit proposals in the fall if they wanted summer funding for the following year.

      William Aspray writes in Chapter 7, “Foundation staff did not generally set a research agenda for funding. They relied instead on the scientific community to set the agenda, both through the proposals individual scientists submitted and the reviews the scientific community gave to these proposals.” I would argue that, while we placed no constraints on what could be submitted and solicited no proposals, the program directors, Kent Curtis, and John Pasta were very proactive in encouraging people to submit and in publicizing the programs. The proposals the section funded and the people we encouraged, in effect, defined an agenda.

[Figure 2.1: Descriptions of the Computer Science Section programs from the NSF Guide to Programs.]

      Before FastLane51 made web-based submission possible, proposals were mailed to NSF, with approximately 25 copies of each arriving at the single office that processed all incoming proposals. After the CSS administrative officer picked up the proposals from central processing and distributed copies, the program officers would do a quick check on appropriateness and redistribute proposals if needed. Since the volume of applications was modest,52 program directors took time to read each proposal in detail and consult colleagues for suggested reviewers. One also could walk down the street to the George Washington University Library (or use the much smaller NSF library) to read related or cited papers to help in understanding a proposal and selecting reviewers.

      Typically, a program director needed three to four reviews to support a recommendation; given the low response rate, proposals were sent to six to eight reviewers. These reviews were carried out as “mail reviews”: copies of a proposal were mailed out along with a review form that included check boxes for an “adjective” rating (poor, fair, good, very good, excellent). Proposals were triaged: the clearly fundable proposals were recommended as soon as possible, the clearly non-fundable proposals were declined, and the remainder were held for discussion in weekly meetings with all six program directors, Kent Curtis, and often John Pasta. In these meetings, we discussed the status of our programs and the awards and declinations we were planning. These were often lively discussions about priorities and high-risk proposals.

      The primary issue delaying recommendations was the time it took to get solid reviews. “We read the comments very carefully, used our best judgement, and did not really put much weight on the adjective ratings.”53 The directorate, however, did consider the ratings and compared our recommendations against those of the other programs in MPE/MPS. The field was young and the “shooting inward”54 phenomenon was at its height. Our first strategy was to plead with researchers to evaluate proposals fairly and to recognize that risks could be overcome with good new approaches. Our second strategy, intended to address both response rate and review quality, was to employ John Lehmann’s skill at mining the NSF databases. We gathered data for every reviewer on time to review, number of reviews completed, and average rating given, and compared each reviewer’s performance with that of other reviewers of the same proposals. So, if Mary Smith seldom gave “excellent” ratings and typically rated proposals below other reviewers, we could take that into account in the recommendation. Our next strategy was to remove the adjective ratings from the review forms entirely. This had two good outcomes: it left interpretation to the program directors rather than depending on scoring, and removing the option to just check a box resulted in longer and more thoughtful reviews. In the long run, however, because of a desire for uniform measures across the Foundation, we were asked to return to using adjective ratings.
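      Lehmann’s co-reviewer comparison amounts to a simple calibration calculation. The Python sketch below is a modern reconstruction, not NSF code; the reviewer names, the numeric mapping of the adjective scale, and the data layout are all assumptions. It estimates each reviewer’s bias as the average gap between their rating and the mean rating given by the other reviewers of the same proposals, so a consistently negative value flags a reviewer, like the hypothetical Mary Smith, who tends to rate below peers.

```python
from collections import defaultdict

# Assumed numeric mapping for the adjective scale; the chapter does not
# say how (or whether) the ratings were quantified.
SCALE = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}

def reviewer_bias(reviews):
    """Estimate each reviewer's bias as the average gap between their
    rating and the mean rating of co-reviewers on the same proposals.

    `reviews` maps proposal_id -> {reviewer_name: adjective_rating}.
    A negative bias means the reviewer typically rates below peers.
    """
    gaps = defaultdict(list)
    for ratings in reviews.values():
        scored = {r: SCALE[a] for r, a in ratings.items()}
        for reviewer, score in scored.items():
            peers = [s for r, s in scored.items() if r != reviewer]
            if peers:  # only proposals with at least one co-reviewer
                gaps[reviewer].append(score - sum(peers) / len(peers))
    return {r: sum(g) / len(g) for r, g in gaps.items()}

# Hypothetical example: "Mary Smith" rates below her co-reviewers.
reviews = {
    "P-101": {"Mary Smith": "good", "R. Jones": "excellent", "A. Lee": "very good"},
    "P-102": {"Mary Smith": "fair", "R. Jones": "good"},
}
print(reviewer_bias(reviews))  # Mary Smith's bias comes out negative
```

      With an estimate like this in hand, a program director could mentally recalibrate a “good” from a habitually tough reviewer before weighing it against other reviews of the same proposal.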

      Computing research funding rose relatively slowly over the period 1974–1980 (see Figure 2.2) with the first significant increase coming with the establishment of the Coordinated Experimental Research Programs (described below) in 1980. There were several ways in which we managed our program portfolios.

      Once an adequate number of reviews arrived, we would seek out other program managers in computer science, mathematics, engineering, or information
