Current Comments®
EUGENE GARFIELD
Essays of an Information Scientist, Vol. 10, #38, September 21, 1987
Current Contents, #38, p.3, September 21, 1987
See Reprint on p.265: "Accrediting knowledge: Journal stature and citation impact in social science"
Current Comments®
EUGENE GARFIELD
INSTITUTE FOR SCIENTIFIC INFORMATION®
3501 MARKET ST., PHILADELPHIA, PA 19104
Prestige Versus Impact: Established Images of Journals, like Institutions, Are Resistant to Change

Number 38
September 21, 1987
How closely does stature (reputation) correspond to objective measures of actual performance? In the following article, reprinted from Social Science Quarterly,1 James A. Christenson, Department of Sociology, University of Kentucky, Lexington, and Lee Sigelman, now dean, Faculty of Social and Behavioral Sciences, University of Arizona, Tucson, compare impact-factor data from the Social Sciences Citation Index® (SSCI®) with prestige rankings of journals in sociology and political science. These journal hierarchies were obtained in previous studies in which sociologists and political scientists were asked to rank journals on the basis of their importance to their respective fields. Christenson and Sigelman, comparing these rankings with SSCI data, noted a nonlinear relationship between reputation and impact, particularly at the low and high ends of the prestige ranking, or "pecking order." In other words, the authors concluded that some journals have a reputation that is disproportionate to their impact, while other, newer journals are presumably not rewarded with the reputation that their impact warrants.
The reprinted paper is reminiscent of other studies that have attempted, through both subjective and unobtrusive "objective" means, to establish or examine hierarchies based on prestige. For example, in 1970 Kenneth D. Roose and Charles J. Andersen, American Council on Education, Washington, DC, published their report based on a survey of some 6,000 scholars. They measured the quality and effectiveness of graduate programs in more than 30 disciplines at over 100 American institutions.2 A 1971 study by Warren O. Hagstrom, University of Wisconsin, Madison, compared departmental prestige (based on a 1966 American Council on Education study that rated the quality of graduate faculty) with such measures as department size, research production, and faculty awards in 125 departments of mathematics, physics, chemistry, and biology.3 Other similar studies include a 1977 paper by Julie A. Virgo, vice president, Carroll Group, Chicago, Illinois, that compared the citation frequency of scientific papers with their judged "importance,"4 and a 1982 assessment of US research-doctorate programs edited by Lyle V. Jones, University of North Carolina, Chapel Hill, and colleagues.5
Prestige, in the words of Thomas Vocino and Robert H. Elliott, Auburn University, Montgomery, Alabama, is "an abstract psychological concept" that is "multidimensional." They discussed two aspects of journal prestige: "intensity," which they gauged by asking respondents to assign weights to journals using a single journal as a reference point (Public Administration Review, in this case); and "extensity," which refers to how widely a journal is known within a profession.6

In a 1978 paper, Thomas Roche, Pennsylvania State University, University Park, and David L. Smith, Virginia Polytechnic Institute and State University, Blacksburg, used the SSCI to establish methods of ranking sociology journals and departments on the basis of citation data.7 The authors compared their citation-based ranking with various other quantitative rankings. The Roche and Smith paper, like the study reprinted here, compared a subjective ranking of political-science and sociology journals, obtained through a questionnaire, with objective SSCI data on the impact of these journals.
As I've noted previously, a frequent, if unwarranted, criticism of citation indexing is that raw citation counts do not take into account the standing or prestige of the journal in which the cited work appears.8 However, that leaves open the question of what determines the meaning of prestige. The Christenson-Sigelman study suggests that the reputation of a journal is not always justified. Its actual, current influence may be more accurately demonstrated through such measures as citation impact. As the authors conclude, citation impact may provide a more meaningful indication of a journal's importance than the number of articles it has published or a total reliance on what may be an obsolete reputation. On the other hand, we must guard against simply equating citation impact with reputational standing. As David L. Tan, Palmer College of Chiropractic, Davenport, Iowa, noted in a review of these quality-assessment studies, "No one is certain what reputational studies are measuring—reputation or quality?"9
In a recent critique of the study reprinted here, Gillian Stevens, University of Illinois, Urbana-Champaign, points out flaws in part of the Christenson-Sigelman methodology, specifically in their use of residuals. These measures, Stevens notes, do not completely support the conclusion that "journal reputations survive past the time that they are warranted."10 In a reply, the authors, while defending their methods and conclusions, acknowledge Stevens's criticisms and express agreement that "better evidence is needed to document the assertion that journals rated high on the prestige scale are receiving too much credit and those on the low end too little."11 However, they also point to the larger issues discussed in their paper. These include the extent to which commonly accepted prestige scales reflect the utilization of scholarly works as documented in the SSCI. Such issues, according to the authors, are valid and worthy of further study.
REFERENCES

1. Christenson J A & Sigelman L. Accrediting knowledge: journal stature and citation impact in social science. Soc. Sci. Quart. 66:964-75, 1985.
2. Roose K D & Andersen C J. A rating of graduate programs. Washington, DC: American Council on Education, 1970. 111 p.
3. Hagstrom W O. Inputs, outputs, and the prestige of university science departments. Sociol. Educ. 44:375-97, 1971.
4. Virgo J A. A statistical procedure for evaluating the importance of scientific papers. Libr. Quart. 47:415-30, 1977.
5. Jones L V, Lindzey G & Coggeshall P E, eds. An assessment of research-doctorate programs in the United States. Washington, DC: National Academy Press, 1982. 5 vols.
6. Vocino T & Elliott R H. Journal prestige in public administration. Admin. Soc. 14:5-14, 1982.
7. Roche T & Smith D L. Frequency of citations as criterion for the ranking of departments, journals, and individuals. Sociol. Inq. 48:49-57, 1978.
8. Garfield E. Citation indexing: its theory and application in science, technology, and humanities. New York: Wiley, 1979. p. 247.
9. Tan D L. The assessment of quality in higher education: a critical review of the literature and research. Res. High. Educ. 24:223-65, 1986.
10. Stevens G. A note on journal stature and citation impact. Soc. Sci. Quart. 68(2):418-9, 1987.
11. Christenson J A & Sigelman L. Journal stature and citation impact: a rejoinder. Soc. Sci. Quart. 68(2):419-20, 1987.
Reprinted from Social Science Quarterly, Vol. 66, No. 4, December 1985. Copyright 1985, by permission of the University of Texas Press.
ACCREDITING KNOWLEDGE: JOURNAL STATURE AND CITATION IMPACT IN SOCIAL SCIENCE1

James A. CHRISTENSON, University of Kentucky
Lee SIGELMAN, University of Kentucky
The impact of the network of journals through which scholars disseminate and accredit their ideas is compared to the prestige hierarchy of journals in sociology and political science. A comparison of prestige rankings of journals with Social Sciences Citation Index® impact scores suggests a nonlinear relationship: many reputed "top" journals receive inordinate credit and many new and less prestigious journals receive less credit than their impact warrants.
The institutional goal of science is the extension of accredited knowledge (Merton, 1942). In order to achieve this goal, scholars need to communicate with one another through regular, open channels. This exchange of ideas is thus an inescapable aspect of scientific research and development. The question for scholars is how and where to disseminate and thus to accredit their work.

Accredited knowledge is grounded in collegial recognition of the individual and her/his work. Not all ideas win equal acceptance, and neither do all the scholars who generate these ideas or all the institutions that house these scholars. For this reason, considerable research has been done on the stratification of scholarly fields, research designed to pinpoint the best, or at least the most reputable, scholars and programs in various fields (e.g., Allison and Stewart, 1974; Bingham and Vertz, 1983; Cole, 1983; Cole and Cole, 1973; Crane, 1965; Long, 1978; Merton, 1968; Reskin, 1977). However, the network of communications undergirding this stratification system remains little understood (Garfield, 1972). Ideally, good ideas, insights, theories, and findings would achieve the impact they deserve on the basis of their merits; but, in accrediting knowledge, the medium of dissemination may be as important as the message.
Journals, along with books, are the prime medium for accrediting knowledge. Cole (1983) observed that "we read papers in journals only after they have been evaluated [accredited] by others. We give people more credit for publishing in prestigious journals" (p. 137). But what do we mean by "accredited"? And to what extent is prestige independent of quality or impact? A paper published in a refereed journal has met the standards of that journal. As Glenn (1971:298) has noted, it is widely recognized that there are status differentials among journals in any field. To publish a paper in certain journals may be a highly visible badge of success. If a paper appears in a "top" journal, the presumption is that it must be good. Journals, because they are refereed, provide accreditation. But some refereed journals provide much more accreditation than others.
A different form of accreditation is provided by one's peers when they make use of one's work. Seen from this perspective, good work is work that others find useful and consequently cite in their own work. Hargens and Felmlee (1984) summarized their literature review by asserting that "the number of citations to a scientist's work is often recommended as the best single indicator of scholarly recognition" (p. 686). So the accreditation of one's work can be measured in at least two ways: the prestige of the journal in which it is published and the frequency with which it is cited. Of course, work can be widely cited precisely because of where it was published, but these two aspects of accreditation are at least conceptually distinct. Our research question concerns the extent to which they are empirically distinct. How does the latter form of accreditation (citation impact) relate to the former (journal prestige)?
Questions concerning accredited knowledge have both theoretical and practical implications. A journal can achieve the status of a "top" publication outlet for reasons unrelated to the quality or impact of the articles it publishes, reasons that include, but are by no means restricted to, its sponsorship, age, the quantitative/qualitative, theoretical/empirical, and professional/practitioner orientations of its articles, the visibility of its editor and editorial boards, and its past reputation. It seems likely that journals, like departments and universities, establish images that are relatively resistant to change. Thus, journal X, a long-established, discipline-supported journal, may outrank new journal Y in terms of prestige even though Y is publishing more important articles than X is in terms of citation impact.

Many professionals are interested in the accreditation of knowledge for practical reasons. For example, the interests of librarians and information system designers stem from their assumption that the quality of a journal affects user demand for the journal. Science planners find journal ratings helpful in assessing the payoffs of various research programs and the productivity of various researchers and research teams. Journal editors and sponsors use ratings as performance indicators and

1Authors are listed alphabetically. Each has made an equal contribution. This research was partially funded by the Kentucky Agricultural Experiment Station.
planning guides. Researchers themselves want to pursue a sensible manuscript submission strategy, while department chairs and deans are faced with the need to document the quality of faculty publications in conjunction with tenure and promotion decisions, departmental reviews, and the like (Gordon, 1982).

TABLE 1
Prestige, Impact, and Related Measures for Sociology Journals

Journal | Glenn prestige score(a) | SSCI impact score (average, 1977-79)(b) | Prestige residual score(c)
American Sociological Review | 10.0 | 3.367 | 1.662
American Journal of Sociology | 9.6 | 2.034 | 2.310
Social Forces | 8.1 | 0.971 | 1.645
Social Psychology Quarterly (formerly Sociometry) | 7.8 | 0.944 | 1.367
British Journal of Sociology | 7.8 | 0.535 | 1.688
American Anthropologist | 7.7 | 1.815 | 0.582
Social Problems | 7.6 | 1.041 | 1.090
American Political Science Review | 7.5 | 1.973 | 0.258
Demography | 7.4 | 1.133 | 0.818
Annals of the American Academy of Political and Social Science | 7.2 | 0.425 | 1.174
Public Opinion Quarterly | 7.1 | 0.851 | 0.740
American Economic Review | 7.1 | 1.552 | 0.189
Journal of Personality and Social Psychology | 7.1 | 2.390 | -0.470
European Journal of Sociology | 6.9 | 0.435 | 0.867
Behavioral Science | 6.8 | 0.587 | 0.647
Rural Sociology | 6.7 | 0.798 | 0.381
Human Organization | 6.7 | 0.436 | 0.666
Journal of Social Psychology | 6.7 | 0.283 | 0.786
Administrative Science Quarterly | 6.7 | 2.293 | -0.794
Milbank Memorial Fund Quarterly | 6.7 | 1.192 | 0.072
International Journal of Comparative Sociology | 6.7 | 0.171 | 0.874
American Behavioral Scientist | 6.6 | 0.483 | 0.529
Journal of Social Issues | 6.6 | 1.031 | 0.098
Social Research | 6.6 | 0.395 | 0.598
Daedalus | 6.5 | 0.958 | 0.056
Human Relations | 6.5 | 0.519 | 0.401
Population Studies | 6.5 | 1.017 | 0.009
Harvard Educational Review | 6.4 | 2.816 | -1.505
Current Sociology | 6.4 | 0.095 | 0.634
Canadian Review of Sociology and Anthropology | 6.4 | 0.233 | 0.525
Sociological Review | 6.3 | 0.244 | 0.417
International Social Science Journal | 6.3 | 0.230 | 0.428
American Sociologist | 6.2 | 0.740 | -0.073
Journal of Marriage and Family | 6.2 | 0.988 | -0.268
Journal of Conflict Resolution | 6.2 | 0.638 | 0.007
Journal of Health and Social Research | 6.2 | 1.602 | -0.751
Sociology of Education | 6.1 | 0.403 | 0.092

The Links between Reputation and Performance

In the fields of sociology and political science fairly clear-cut journal prestige hierarchies have been documented. Glenn (1971) solicited evaluations of professional journals from a sample of sociologists at Ph.D.-granting programs in the
United States, and Giles and Wright (1975) undertook a similar survey of political scientists. Glenn asked his respondents to judge 63 journals in terms of "the average importance of their contributions to the field" of sociology, instructing them to use the American Sociological Review as an anchor for their evaluations. The American Sociological Review was given an arbitrary score of 10, and respondents were told to assign a score of 5 to a journal they considered only half as important as the American Sociological Review, 20 to a journal they considered twice as important as the American Sociological Review, and so on. Giles and Wright's respondents, who rated 63 journals commonly used by political scientists, also employed a 10-point rating system, but their scale was marked by verbal descriptors (0 = poor,
TABLE 1 (continued)
Prestige, Impact, and Related Measures for Sociology Journals

Journal | Glenn prestige score(a) | SSCI impact score (average, 1977-79)(b) | Prestige residual score(c)
Sociological Quarterly | 6.1 | 0.221 | 0.235
Acta Sociologica | 6.1 | 0.174 | 0.272
Social Science Quarterly | 6.0 | 0.479 | -0.068
Southwestern Journal of Anthropology | 6.0 | NA | NA
Sociology and Social Research | 5.9 | 0.103 | 0.127
Sociology | 5.9 | 0.694 | -0.337
Sociological Inquiry | 5.8 | 0.187 | -0.039
Society (formerly Transaction) | 5.7 | 0.198 | -0.147
Sociological Perspectives (formerly Pacific Sociological Review) | 5.7 | 0.222 | -0.166
Law and Society Review | 5.7 | 1.760 | -1.375
Sociological Analysis | 5.7 | 0.197 | -0.146
Journal of Gerontology | 5.4 | 1.316 | -1.326
Journal of Research in Crime and Delinquency | 5.4 | 0.735 | -0.869
American Journal of Economics and Sociology | 5.3 | 0.237 | -0.578
British Journal of Criminology | 5.3 | 0.394 | -0.701
Gerontologist | 5.3 | 0.877 | -1.081
Crime and Delinquency | 5.2 | 0.831 | -1.145
Science and Society | 5.2 | 0.309 | -0.734
Journal of Criminal Law, Criminology, and Police Science | 5.1 | 1.921 | -2.101
Phylon | 5.0 | 0.098 | -0.769
Social Biology | 5.0 | 0.571 | -1.140
Jewish Journal of Sociology | 4.9 | 0.288 | -1.018
American Journal of Correction | 4.8 | NA | NA
Eugenics Review | 4.7 | NA | NA
Journal of Negro Education | 4.5 | 0.076 | -1.251
New Society | 4.5 | 0.065 | -1.243
Federal Probation | 3.8 | 0.326 | -2.148

(a) Source: Glenn (1971).
(b) Source: Social Sciences Citation Index Annual, Vols. 1-3.
(c) This is the actual value of the Glenn prestige score, less the prestige score predicted from the regression of prestige scores on impact factor scores.
NA: Not available.
2 = fair, 4 = adequate, 6 = good, 8 = very good, and 10 = outstanding) rather than being anchored by a prominent journal.
Sociologists' and political scientists' ratings of their professional journals, as determined by the Glenn and Giles-Wright surveys, are summarized in the first column of Tables 1 and 2, respectively. Two of the top five journals on the political scientists' list (the American Sociological Review and the American Journal of Sociology) were the top-rated sociology journals. Sociologists, for their part, also gave high marks to the principal journals of their sister disciplines, ranking the American Anthropologist sixth and the American Political Science Review eighth.
How closely are these reputational ratings related to the actual influence or quality of these journals? "Extensive past research indicates that citations are a valid indicator of the relative quality of work" (Cole, 1983:116). Number of citations is also highly correlated with other measures of quality that sociologists of science have employed (e.g., access to resources, status of degree-granting institutions, initial appointments, mobility). However, quality in this context is defined as intellectual influence: the impact of one's ideas as accredited by others through use in their own work. Citations are a measure of quality, in that they suggest that other professionals working in the same area have found one's ideas valuable.

The Social Sciences Citation Index® (SSCI®) provides "impact factor" scores for more than 1,300 social science journals. Journals from the disciplines of psychology, followed by psychiatry, economics, and law, generally have higher impact scores. Sociology journals rank about 10th, with political science journals about 25th. Such differences among disciplines reflect, among other
things, the relative size and professional diversity of the disciplines.

TABLE 2
Prestige, Impact, and Related Measures for Political Science Journals

Journal | Giles-Wright prestige score(a) | SSCI impact score (average, 1977-79)(b) | Prestige residual score(c)
World Politics | 7.3 | 0.970 | 1.282
American Sociological Review | 7.1 | 3.367 | -0.322
American Journal of International Law | 7.0 | 1.323 | 0.775
American Journal of Sociology | 7.0 | 2.034 | 0.359
American Political Science Review | 7.0 | 1.973 | 0.394
Journal of Politics | 6.7 | 0.378 | 1.028
Comparative Politics | 6.6 | 0.708 | 0.735
American Journal of Political Science | 6.6 | 1.027 | 0.548
Administrative Science Quarterly | 6.5 | 2.293 | -0.293
Public Opinion Quarterly | 6.5 | 0.851 | 0.551
Daedalus | 6.4 | 0.958 | 0.489
Journal of Public Law | 6.4 | NA | NA
Public Administration Review | 6.3 | 0.195 | 0.735
British Journal of Political Science | 6.2 | 0.708 | 0.335
Public Interest | 6.2 | 2.093 | -0.476
Political Theory | 6.2 | 0.267 | 0.593
Law and Society Review | 6.2 | 1.760 | -0.281
International Organization | 6.2 | 0.961 | 0.187
Social Forces | 6.1 | 0.971 | 0.081
Political Studies | 6.0 | 0.348 | 0.346
Social Science Quarterly | 6.0 | 0.479 | 0.269
Sage Professional Papers | 6.0 | NA | NA
Government and Opposition | 6.0 | 0.357 | 0.341
Politics and Society | 6.0 | 0.412 | 0.308
Behavioral Science | 6.0 | 0.587 | 0.206
Public Choice | 6.0 | 0.374 | 0.331
Public Policy | 6.0 | 0.765 | 0.101
Polity | 5.9 | 0.175 | 0.347
Canadian Journal of Political Science | 5.9 | 0.465 | 0.177
Journal of Conflict Resolution | 5.9 | 0.638 | 0.076
International Affairs | 5.8 | 0.814 | -0.127
Comparative Political Studies | 5.8 | 0.523 | 0.043
Urban Affairs Quarterly | 5.8 | 0.544 | 0.031
Foreign Affairs | 5.8 | 2.050 | -0.851
Western Political Quarterly | 5.8 | 0.300 | 0.174
Administration and Society | 5.8 | 0.328 | 0.158
Administrative Law Review | 5.8 | 1.235 | -0.374

The earliest journal impact scores SSCI published are for 1977 and are based on citations from articles published during 1975-76. A journal's impact factor score for 1977 is defined as the number of citations during 1977 to articles that the journal published during 1975-76, divided by the total number of articles the journal published during 1975 and 1976 (i.e., the ratio of citations to "citable" items for a given journal). Dividing the number of citations by the number of citable items controls for the journal's size and the frequency with which it is published. Gordon (1982) found that impact scores were highly correlated over time. For example, the correlation of impact scores between 1977 and 1978 for 59 of Glenn's journals was .84. To mitigate the possibilities of yearly fluctuations, a three-year (1977-79) average is calculated in this research for each journal. If journal prestige influences submission decisions, the prestige ratings published in the early to mid 1970s would influence publications in the mid 1970s and citation counts in the latter 1970s, the time of our assessment.
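The impact-factor arithmetic described above can be sketched in a few lines; the citation and article counts below are hypothetical, for illustration only (they are not SSCI data):

```python
# Impact factor for year Y: citations during Y to articles the journal
# published in Y-1 and Y-2, divided by the number of "citable" items
# the journal published in Y-1 and Y-2.

def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    return citations_in_year / citable_items_prev_two_years

# 1977 score: citations during 1977 to 1975-76 articles, over the 1975-76 article count.
score_1977 = impact_factor(citations_in_year=150, citable_items_prev_two_years=100)
print(score_1977)  # 1.5

# The authors average three yearly scores (1977-79) to damp year-to-year fluctuation.
yearly_scores = [1.50, 1.35, 1.20]  # hypothetical 1977, 1978, 1979 scores
three_year_average = sum(yearly_scores) / len(yearly_scores)
print(round(three_year_average, 2))  # 1.35
```

Dividing by the count of citable items is what normalizes away journal size and publication frequency, as the text notes.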
What is the relationship between citation impact and journal reputation? The Glenn and Giles-Wright reputational ratings are related to the SSCI impact factor scores (which are shown in the second column of Tables 1 and 2): the correlation between the Glenn (sociology) and SSCI measures is .526, and the correlation between the Giles-Wright (political science) and SSCI measures is .572. This suggests that reputations are performance-based to some degree, for the journals that are perceived as most prestigious in each discipline tend to be the ones that have the greatest scholarly impact. On the other hand, these correlations are not nearly strong enough to permit us to conclude that a journal's reputation is a simple function of scholarly influence. Approximately two-thirds of the variance in the reputed quality of political science journals and three-quarters of the variance in the reputed importance of sociology journals remain unexplained by the SSCI impact scores.

TABLE 2 (continued)
Prestige, Impact, and Related Measures for Political Science Journals

Journal | Giles-Wright prestige score(a) | SSCI impact score (average, 1977-79)(b) | Prestige residual score(c)
American Politics Quarterly | 5.8 | 0.597 | 0.000
International Studies Quarterly | 5.7 | 0.581 | -0.091
Publius | 5.7 | 0.172 | 0.149
Asian Survey | 5.7 | 0.446 | -0.012
Political Methodology | 5.6 | NA | NA
Political Science | 5.6 | 0.388 | -0.078
Dissent | 5.6 | 0.205 | 0.030
American Behavioral Scientist | 5.6 | 0.483 | -0.133
Political Science Quarterly | 5.6 | 0.504 | -0.146
Political Quarterly | 5.6 | 0.134 | 0.071
Journal of Peace Research | 5.6 | 0.557 | -0.177
International Social Science Journal | 5.4 | 0.230 | -0.185
Journal of International Affairs | 5.4 | 0.312 | -0.233
Simulation and Games | 5.3 | 0.268 | -0.307
Annals of the American Academy of Political and Social Science | 5.3 | 0.425 | -0.399
Review of Politics | 5.3 | 0.217 | -0.277
International Interactions | 5.0 | NA | NA
Journal of Developing Areas | 5.0 | 0.146 | -0.536
Experimental Studies of Politics | 4.9 | NA | NA
Policy Studies Journal | 4.8 | 0.106 | -0.712
Orbis | 4.8 | 0.457 | -0.918
PS | 4.7 | 0.518 | -1.054
Midwest Review of Public Administration | 4.2 | NA | NA
National Civic Review | 4.1 | NA | NA
Journal of Inter-American Studies and World Affairs | 4.1 | 0.272 | -1.510
Social Science Journal | 3.8 | 0.196 | -1.765

(a) Source: Giles and Wright (1975).
(b) Source: Social Sciences Citation Index Annual, Vols. 1-3.
(c) This is the actual value of the Giles-Wright prestige score, less the prestige score predicted from the regression of prestige scores on impact factor scores.
NA: Not available.
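The "two-thirds" and "three-quarters" figures follow directly from those correlations, since the share of variance left unexplained by a linear fit is 1 - r²; a quick check using the r values reported in the text:

```python
# Share of variance in reputation not explained by impact: 1 - r^2.
def variance_unexplained(r: float) -> float:
    return 1.0 - r ** 2

print(round(variance_unexplained(0.572), 2))  # 0.67 (political science: about two-thirds)
print(round(variance_unexplained(0.526), 2))  # 0.72 (sociology: roughly three-quarters)
```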
The unexplained variance in journals' reputations might simply reflect the operation of random error in the reputational measures. Moreover, there is a lag of several years between the reputational measures and the impact measure. But we doubt that either random measurement error or a time lag tells the whole story. Rather, we think it quite likely that scholarly journals, like academic departments, tend to establish reputations that endure in spite of what they merit. Once a journal has been placed on a discipline's prestige ladder, it tends to retain its place because its reputation is accepted at face value and is not continuously reevaluated in light of changing circumstances.
We certainly do not claim to possess definitive proof of this interpretation, but some intriguing evidence is available. For the 56 journals for which both Giles-Wright reputational and SSCI impact data are available, the correlation between reputational scores and the residual in these reputational scores (the portion left over after regressing the Giles-Wright scores on the SSCI impact scores) is extremely high: r = .820. For the sociology journals, the correlation between the Glenn measure and the residual unexplained by the SSCI impact score is even higher: r = .851. (These residuals are shown in the third column of Tables 1 and 2.) These highly autocorrelated error terms suggest that in each field high-status journals tend to have better reputations than their influence would warrant, while lower-status journals tend to have poorer reputations than their influence would warrant. Subsequent examination of scatter plots supports this argument. This is not to say that none of the highly reputable journals deserves its reputation. For example, the Giles-Wright reputational score for the American Sociological Review, 7.1, is very close to what would be predicted from a regression of Giles-Wright scores on the SSCI impact scores. Generally, however, both very good and very bad reputations tend to be exaggerations of what the impact data suggest are merited. Especially noteworthy in this regard among the political science journals are World Politics and the Journal of Politics, both of whose reputational scores are far above what would be predicted from the citation data: World Politics, whose score of 7.3 places it first among all the political science journals, has a predicted reputational score of 6.07, close to the mean on the Giles-Wright scale, and the reputational rating of 6.7 for the sixth-ranked Journal of Politics is also much higher than the predicted score (5.7). Among the journals rated by sociologists, the American Sociological Review, the American Journal of Sociology, Social Forces, and the British Journal of Sociology display the largest positive residuals, i.e., the largest "unearned" reputations, though the first two would still be very highly rated even if their ratings were exactly consonant with their influence as measured by the SSCI citation data.
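The residual analysis itself is straightforward to reproduce: regress prestige on impact by ordinary least squares, then take residual = actual prestige minus predicted prestige. The sketch below uses only the impact and prestige values of the first six Table 2 journals, so the fitted coefficients are illustrative and are not the authors' exact ones:

```python
# Ordinary least squares fit of y = a + b*x, then residuals y - (a + b*x).
def ols_fit(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope  # (intercept, slope)

# First six journals of Table 2: World Politics, ASR, AJIL, AJS, APSR, J. of Politics.
impact   = [0.970, 3.367, 1.323, 2.034, 1.973, 0.378]
prestige = [7.3, 7.1, 7.0, 7.0, 7.0, 6.7]

intercept, slope = ols_fit(impact, prestige)
residuals = [p - (intercept + slope * i) for i, p in zip(impact, prestige)]

# World Politics: modest impact (0.970) but the top prestige rating (7.3),
# so its residual is positive: its reputation exceeds what impact predicts.
print(residuals[0] > 0)  # True
```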
In short, the residuals provide strong presumptive evidence that reputational measures of journal quality reflect persisting stereotypes rather than simply summarizing actual influence. This suggests at the very least that widely held stereotypes about some of the most prominent sociology and political science journals may need to be reconsidered. It also suggests that in thinking about the role various journals play in accrediting knowledge, it would be well to incorporate a behavioral as well as a reputational dimension.

Rating Sociology and Political Science Journals
Since the prestige rankings of sociology and political science journals were published in the early 1970s, many new journals have been established, the stature of journals may have changed, and citation information has become available. This recent citation information provides behavior-based comparative data for a wide range of journals in the social sciences.

The SSCI journal impact data do pose some problems, which we need to acknowledge. One problem is that of incomplete coverage. The SSCI data base does not include several journals that are increasingly important publication outlets in sociology and political science. In political science, the list of exclusions includes Political Behavior, Micropolitics, and Political Psychology, to name only three examples from one relatively small corner of the discipline. If journal ratings are to be based on the SSCI impact scores, then being excluded from the SSCI data base is tantamount to being excluded from consideration altogether.
Exclusion of journals from the SSCI data base is a problem, but it is a problem of limited scope: the journals that are not included in the SSCI data base are, for the most part, journals that would not score very high in terms of impact if they were included. The truly major problem stems from the difficulty of defining the boundaries of a scholarly discipline. If we wish to determine which are the best sociology or political science journals, we must first be certain what we mean by a sociology or political science journal. This is a very difficult problem, and it is by no means peculiar to the SSCI data base; indeed, it affects every attempt to evaluate journals in any field. For example, Glenn's list of 63 journals includes several top journals from other disciplines (e.g., the American Political Science Review, the American Economic Review, and the Harvard Educational Review) as well as numerous interdisciplinary journals (e.g., Public Opinion Quarterly, Behavioral Science, and Social Science Quarterly). The SSCI, for its part, categorizes journals according to their disciplinary affiliation, but its categories are hardly authoritative. To cite only three examples, should Current History, IPW Berichte, and the Journal of Canadian Studies really be considered three of the 77 journals subsumed under SSCI's political science category?
Despite these problems, the SSCI impact data
seem to us to provide a firmer foundation for assessing the quality of sociology and political science journals than rm~ other method devised to
this point. On the bas]s of the SSCI impact data,
we get a fresh picture of the quality of several
establishedjournals. For example, Sociology and
Social Research, which has been published for
almost three-quarters of a century, has an impact
score of only 0.103, which places it about 58th
of the 66 journals in the SSCI scciology category.
Similarly, the impact score of the venerable Political Science QuarTerly(0.504) places it well below the other established political science journals. More dramatically, Worki Pofirics, the most
prestigiousjournal according to the Giles-Wright
ratings, has an impact score of 0.970, which
would not place it among the top 10 in the GilesWright rating. Many regional journals also have
lower impact ratings than might have been expected (e.g., Sociological Quarrerly, Sociological Perspectives, and others not reported such as
Sociological Spectrum and Sociological Focus)
(data not presented). And some specializedjournals (e.g., AdministrativeScienceQuarrerly,Journal of Healrhond SocialResearch,PublicInterest)
have a greater impact than their reputationswordd
suggest. Of course, these comments should not
be taken out of the context of the SSCI impact
scores upon which they are based; any problems
associated with the SSC1data to measure journal
quality will have to be borne in mind in interpreting journrd ratings based on the SSC1 data.
The availability of behavior-based journal ratings should mitigate the common tendency simply
to count the number of articles published as a measure
of scientific productivity or to limit journal evaluations to outdated reputational hierarchies. It is easy
to count articles, but it is difficult to draw meaningful comparisons. We believe that impact should
be weighted much more heavily than simple number of articles or stereotypic journal reputations
in assessing accreditation of scholarly work.
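The impact scores discussed above are ISI's standard two-year impact factors: citations received in a given year to a journal's items from the two preceding years, divided by the number of items published in those years. A minimal sketch of that arithmetic, using hypothetical figures rather than any journal reported in the article:

```python
# Two-year impact factor, ISI-style: citations this year to items the
# journal published in the two preceding years, divided by the number
# of items published in those two years. All figures are hypothetical.

def impact_factor(citations_to_prior_two_years: int,
                  items_prior_two_years: int) -> float:
    """Return citations per citable item over the two-year window."""
    if items_prior_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_prior_two_years / items_prior_two_years

# Hypothetical journal: 120 articles in the prior two years,
# cited 62 times this year.
print(round(impact_factor(62, 120), 3))  # → 0.517
```

The normalization by article count is what distinguishes impact from raw productivity: a journal publishing many seldom-cited articles scores lower than a small journal whose few articles are heavily used.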
The SSCI citation data permit scholars to evaluate the importance of journals based not on opinion but on the frequency of citations. While such
assessments do not directly measure the quality
of journals, frequency of citation implies scholarly
acceptance, or at least acknowledgment of importance through utilization of others' work. However, the SSCI should not become the litmus test
for quality of social research. Journals have prestige, but their prestige is only derived from the
usefulness of the articles they publish. In the long
run, individual articles and books become the litmus test of quality. But practically, most of us
work within very limited time parameters. Thus,
in the short run journal citation data do provide
deans, tenure committees, and those studying
stratification in science a more defensible and less
stereotyped means of measuring "accredited"
knowledge than any other method now available.
Conclusion
If the medium accredits knowledge, assessment
of the impact of journals that constitute the medium for the exchange of scholarly ideas demands
more scrutiny than it has previously received. This
study indicates that the prestige accorded many
journals seems out of line with the impact these
journals have had in the social science research
community. The relationship between reputation
and citation impact is nonlinear, best described
as a sigmoid curve. A fairly clear-cut prestige
hierarchy is present, but many of the most prestigious journals have less impact than might be
expected, and many other journals have more impact than is attributed to them by the reputational
ratings.
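The sigmoid shape described here can be illustrated with a logistic curve: impact barely rises across the least prestigious journals, climbs steeply through the middle of the hierarchy, and flattens again at the top. The sketch below is purely illustrative; the parameters and the rank scale are hypothetical and are not fitted to the article's data.

```python
import math

# Illustrative logistic (sigmoid) relationship between prestige rank and
# impact. Parameters lo, hi, midpoint, and steepness are hypothetical,
# chosen only to show the flat-steep-flat shape the authors describe.

def predicted_impact(prestige_rank: float, lo: float = 0.1, hi: float = 2.0,
                     midpoint: float = 30.0, steepness: float = 0.15) -> float:
    """Impact rises slowly at the low end of the prestige hierarchy,
    steeply in the middle, and flattens near the top (rank 1 = least
    prestigious on this hypothetical scale)."""
    return lo + (hi - lo) / (1 + math.exp(-steepness * (prestige_rank - midpoint)))

for rank in (5, 30, 55):
    print(rank, round(predicted_impact(rank), 3))
```

Under such a curve, differences in reputation near the top or bottom of the hierarchy correspond to only small differences in citation impact, which is consistent with the authors' finding that prestige rankings over-reward some established journals and under-reward some newer ones.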
REFERENCES
Allison, Paul D., and John A. Stewart. 1974. "Productivity Differences among Scientists: Evidence for Accumulative Advantage," American Sociological Review, 39 (August):596-606.
Bingham, Richard D., and Laura L. Vertz. 1983. "The Social Structure of an Academic Discipline: Networks and Prestige in Political Science," Social Science Quarterly, 64 (June):275-87.
Cole, Jonathan R., and Stephen Cole. 1973. Social Stratification in Science (Chicago: University of Chicago Press).
Cole, Stephen. 1983. "The Hierarchy of the Sciences?" American Journal of Sociology, 89 (July):111-39.
Crane, Diana. 1965. "Scientists at Major and Minor Universities: A Study in Productivity and Recognition," American Sociological Review, 30 (October):699-714.
Garfield, Eugene. 1972. "Citation Analysis as a Tool in Journal Evaluation," Science, 178 (3 November):471-79.
Giles, Michael W., and Gerald C. Wright, Jr. 1975. "Political Scientists' Evaluations of Sixty-Three Journals," PS, 8 (Summer):254-57.
Glenn, Norval D. 1971. "American Sociologists' Evaluation of Sixty-Three Journals," American Sociologist, 6 (November):298-303.
Gordon, Michael D. 1982. "Citation Ranking versus Subjective Evaluation in the Determination of Journal Hierarchies in the Social Sciences," Journal of the American Society for Information Science, 33 (January):55-57.
Hargens, Lowell L., and Diane Felmlee. 1984. "Structural Determinants of Stratification in Science," American Sociological Review, 49 (October):685-97.
Long, J. Scott. 1978. "Productivity and Academic Positions in the Scientific Career," American Sociological Review, 43 (December):889-908.
Merton, Robert K. 1942. "Science and Technology in a Democratic Order," Journal of Legal and Political Sociology, 1 (October):115-26.
----------------. 1968. "The Matthew Effect in Science," Science, 159 (5 January):56-63.
Reskin, Barbara F. 1977. "Scientific Productivity and the Reward Structure of Science," American Sociological Review, 42 (June):491-504.